Everybody wants to sell AI. Nobody understands it.

Pictured: LG AI Robot at CES 2024
// An FYI on AI.
Fergus Halliday
Aug 21, 2024
11 min read

If you're tired of hearing people talking about AI, you should try being a tech journalist.

As someone who spends a lot of time thinking, reading and writing about consumer tech, this conversation has been happening since long before ChatGPT catapulted the technology into the mainstream. I'm not immune to the novel appeal of a nascent technology, but I am honestly and utterly exhausted by it in 2024.

As the volume of the AI conversation has risen, it has left little room for many of the things that made me want to write about tech in the first place. Worse still, the discussion and debate around AI is rife with misinformation and misunderstanding.

It's easy to get jaded, especially given how eager many of those involved are to encourage magical thinking regarding AI and how it works. Seemingly everybody is in the business of selling AI and very few seem to understand it. If you fall into the latter category, knowing where to start is hard. My take? The best place to begin is with the term itself.

What do we mean when we talk about AI?

Let's start with what you probably already know: AI stands for artificial intelligence.

As a field of scientific research, AI has existed for decades. It's not new. However, for most everyday consumers, a lot of what they know (and think they know) about AI probably doesn't come from the world of academia. It comes from popular culture.

From Frankenstein to The Terminator, humans have been fascinated by the concept for far longer than it has been the topic of scientific research, and that cultural baggage often fills in the gaps where individual knowledge falls short.

The first thing you need to know about AI in 2024 is that the visions of what AI looks like and what it can do in fiction are not what companies like Apple, Microsoft and Google are typically talking about when they use the term.

When a normal person thinks of the term AI, they probably think about a computer that thinks for itself. There's a fair bit of philosophical wiggle room here, from what is a computer to what it means to think. In contrast, when massive tech titans are talking about their respective investments in AI, they are talking about a handful of technologies enabled by recent breakthroughs in this one specific field of research and how they can be applied to achieve tasks that would otherwise require a human.

These technologies include the following:

  • Machine learning: Computer systems that adjust their behaviour based on data rather than fixed rules. Where a traditional system applies a rule blindly, one that incorporates machine learning can modify and amend its behaviour in the face of changing situations (see the sketch after this list). It's essentially a more nuanced algorithm.
  • Deep learning: A sub-field within machine learning that combines pattern recognition and neural networks to handle more intricate tasks.
  • Neural networks: Computer systems that are designed to mimic the way that the human brain works.
  • Natural language processing: When a computer is fed examples of human language and taught how to interpret and respond to them. It is the backbone of chatbots, translation services, and voice assistants.
  • Computer vision: Where a computer is fed examples of images and taught how to identify them.
  • Large language models: A computer system that has been fed a large amount of text-based data. That data is organised using neural networks and then the system is able to make statistical predictions based on patterns in that data.
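To make the machine learning entry above a little more concrete, here's a minimal sketch in plain Python. The spam-filter scenario and the numbers in it are invented purely for illustration; it simply contrasts a rule someone hard-coded with one whose cutoff is derived from example data.

```python
# A minimal sketch of the difference between a fixed rule and a "learned" one.
# The data and threshold here are invented for illustration.

# Example data: (number of links in an email, was it spam?)
examples = [(0, False), (1, False), (2, False), (5, True), (8, True), (12, True)]

# Traditional approach: a rule someone hard-coded, applied blindly forever.
def is_spam_fixed_rule(link_count):
    return link_count > 10

# Machine-learning-style approach: derive the cutoff from the examples,
# so the behaviour shifts whenever the data does.
def learn_threshold(data):
    spam_counts = [count for count, spam in data if spam]
    ham_counts = [count for count, spam in data if not spam]
    # Put the threshold halfway between the two groups.
    return (min(spam_counts) + max(ham_counts)) / 2

threshold = learn_threshold(examples)

def is_spam_learned(link_count):
    return link_count > threshold

print(is_spam_fixed_rule(6))   # False: the fixed rule misses it
print(is_spam_learned(6))      # True: the learned cutoff (3.5) catches it
```

Real machine learning systems do this at a vastly larger scale and with far more sophisticated maths, but the basic shift is the same: the behaviour comes from the data rather than from a hand-written rule.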

AI as it exists today is more about finding new ways to build information technology systems that can tackle the tasks existing ones struggle to solve than it is about making the computers we already have achieve a Skynet-like level of sentience and self-awareness.

How does today's tech use AI?

Pictured: MSI Prestige Series at Computex 2024

Part of the problem illustrated above is that AI encompasses a ton of highly specific, overlapping and complicated technologies. It is difficult to distill those many decades of nuanced research into a compact and consumer-friendly acronym without losing something in translation.

According to IBRS’ Dr Joseph Sweeney, the looseness with which the acronym AI gets thrown around is similar to the term digital transformation.

“It means everything to everybody. In its strictest sense, it is any software that mimics human-like behaviour, whether that be analytics or finding patterns in things or generating something that a human would be able to generate,” he said.

Dr Sweeney’s approach is to sort AI technologies into three buckets.

The first is machine learning, which covers software that finds patterns in datasets too large to be comprehensible to humans and then makes predictions accordingly. The second is generative AI, which takes a pattern that has been found and replicates it in a way that is just that little bit different. Finally, there’s Graph AI, which promises to find the context within complex networks of data.

Speaking to Reviews.org, RMIT's Dr Angel Zhong said that the term AI is often used as an umbrella term for various technologies like large language models (LLMs), machine learning, deep learning, and natural language processing.

"When brands lump these together under “AI,” it can obscure the specific capabilities and limitations of each technology. This can lead to misunderstandings about what a product can actually do," she said.

Dr Zhong pointed to a number of questions that consumers can ask when trying to evaluate the real-world value of AI technology in a given product. The first of these is to ask what specific AI technology is being used.

"Understanding the AI's primary function in the product—whether it’s for personalization, automation, or data analysis—can clarify its role," she said.

Once you know what you're dealing with, you can then drill deeper and ask how the AI enhances the product’s performance and what (if any) benefits it provides compared to non-AI alternatives. AI or not, it's also good to ask questions about the data involved: how it is collected, and whether it comes from the user, from sensors, or from publicly available sources.

"You should also ask about the measures in place to protect user data and ensure privacy and security. If the AI can make decisions autonomously, it’s important to know what kind of decisions it makes and how these are monitored or controlled," she said.

If you're looking at a consumer product and trying to determine the real-world value that its AI-powered features bring to the table, a passing mention that the hardware features AI doesn't really tell you a whole lot.

To try and make sense of the madness, here are five things you should consider before buying into any promises that AI will make a given piece of tech better.

  1. What kind of AI technologies are being utilised here?
  2. What role does AI have in the product: personalisation, automation or data analysis?
  3. What can the AI version of this product do that the version without it cannot?
  4. How is the data involved being handled?
  5. What do the user and professional reviews say about the impact and value of the AI features?

How does Nvidia fit into this?

Aside from the odd tech demo, Nvidia isn't really in the business of making AI-powered tech so much as it is in the business of selling the underlying hardware to those who do. Not all neural networks are born equal, and the type popularised by large language models like the one behind ChatGPT is transformer-based.

Rather than processing text strictly one word after the next, these systems weigh the relationships between every part of a given input at once, which is how they pick up on context and the meaning of a piece of text. This isn't the only type of model, but it has rapidly become the most popular approach thanks to the success of apps like ChatGPT.
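For the curious, here's a rough numpy sketch of the attention calculation that sits at the heart of transformer models. The three-word input and the random stand-in weights are invented purely for illustration; real models learn those weights from mountains of text.

```python
import numpy as np

np.random.seed(0)

# Pretend each word of a three-word input has already been turned into a small vector.
words = ["the", "cat", "sat"]
d = 4
embeddings = np.random.randn(len(words), d)

# In a real transformer these are learned weight matrices; random ones stand in here.
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))
Q, K, V = embeddings @ W_q, embeddings @ W_k, embeddings @ W_v

# Scaled dot-product attention: score how strongly each word relates to every other word.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over each row

# Each word's output becomes a weighted blend of every word's value vector,
# which is how the model takes the whole input's context into account at once.
output = weights @ V

print(np.round(weights, 2))  # each row sums to 1: how much each word "attends" to the others
```

Everything else in a transformer is, loosely speaking, stacked on top of many layers of this one operation.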

Of course, the usefulness of any answer an AI-powered system can provide is contingent on its ability to deliver that information in a timely manner. Because they can run enormous numbers of calculations in parallel, Nvidia's GPUs are well suited to the giant matrix multiplications these models rely on, both during training and when answering queries.
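To give a loose sense of why that matters, here's a rough sketch using PyTorch (assuming it is installed; the matrix sizes are arbitrary) that runs the same large matrix multiplication on the CPU and then, if one is available, on an Nvidia GPU.

```python
import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# On the CPU, the multiplication is shared across a handful of cores.
start = time.time()
_ = a @ b
print(f"CPU: {time.time() - start:.2f}s")

# On a GPU, the same work is spread across thousands of simpler cores at once.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()
    print(f"GPU: {time.time() - start:.2f}s")
else:
    print("No CUDA-capable GPU found, skipping the GPU timing.")
```

Neural networks, transformers included, are essentially towers of operations like this one, which is why hardware that can churn through them in parallel has become so valuable.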

As put by Zhong, Nvidia's relationship with AI is focused on hardware rather than software.

"While many companies are involved in developing AI software and algorithms, Nvidia provides the high-performance computing infrastructure necessary for these technologies to operate efficiently."

"This focus on hardware positions Nvidia as a key enabler of AI advancements, rather than a direct developer of AI applications or solutions."

In contrast, Zhong explained that the other players in the AI space, such as Google, Microsoft, and OpenAI, are more engaged in creating AI algorithms, developing AI-powered applications, or offering AI services."

"These companies focus on developing AI models and software platforms, while Nvidia supplies the underlying hardware that supports these innovations," she said.

As the demand for AI services has intensified, so too has demand for the company's hardware. Back in January 2024, Meta CEO Mark Zuckerberg told The Verge that Meta planned to field a total of 600,000 Nvidia GPUs to power its AI apps.

The hype around AI has boosted demand for Nvidia GPUs in much the same way that the cryptocurrency boom did, propelling the value of the company's stock to new heights. As of August 2024, the company has an estimated market cap of $2.8 trillion.

AI hasn't made Nvidia more valuable than Apple, but it's getting close.

Why does AI keep making mistakes?

The ghost in the machine

As more and more people have been able to go hands-on with AI, it has become increasingly clear that accuracy isn't the technology's greatest strength. Depending on who you're talking to, that weakness is either a temporary set of growing pains or a flaw inherent to the way that large language models work.

When you ask ChatGPT or any of its imitators and rivals a question, you are not getting a real answer. You are getting the system's best guess at what a correct answer looks like. This tendency makes certain applications of generative AI much more problem-prone than others.
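One way to build an intuition for that "best guess" behaviour is a toy next-word predictor. The sketch below is not a real model; it learns word-pair statistics from a made-up three-sentence training set and then answers a question it has never seen by stitching together the most statistically plausible continuation.

```python
import random
from collections import defaultdict, Counter

# A tiny, invented "training set".
corpus = [
    "the capital of france is paris",
    "the capital of italy is rome",
    "the capital of australia is canberra",
]

# Count which word tends to follow which (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def guess_continuation(prompt, length=4):
    word = prompt.split()[-1]
    out = []
    for _ in range(length):
        if word not in following:
            break
        # Pick the next word in proportion to how often it appeared in training.
        choices = following[word]
        word = random.choices(list(choices), weights=choices.values())[0]
        out.append(word)
    return " ".join(out)

random.seed(1)
# The model produces a fluent-sounding answer, but the city it lands on is a
# statistical guess drawn from its training data, not retrieved knowledge.
print("the capital of spain is", guess_continuation("the capital of spain is"))
```

Ask it about a country that isn't in its data and it will still produce a confident-sounding answer; it just won't necessarily be the right one. Real large language models are astronomically more sophisticated, but the underlying move is the same: predict a plausible continuation, whether or not it happens to be true.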

Dr Zhong's perspective is that the majority of AI hallucinations stem from the limitations of training data and the inherent complexity of language.

"LLMs are trained on vast amounts of text data from the internet, which can include inaccuracies and biases. Additionally, the complexity and ambiguity of language make it difficult for these models to consistently generate accurate responses," she explained.

"The generative nature of LLMs, which create text based on patterns in their training data, can sometimes produce plausible-sounding but incorrect information," she added.

Investment in generative AI might be at an all-time high, but betting that the underlying mechanism will change anytime soon doesn't seem like a smart gamble. ChatGPT does not and cannot learn or know anything. It can only get better at guessing and, funny thing, there's always going to be some room for error when estimates are involved.

On the other side of the aisle, AI advocates have made the case that a solution to the hallucination problem is just a matter of time. With enough data, perhaps the gap between 100% reliability and where the technology currently sits will shrink to the point where hallucinations are an anomaly rather than a regular occurrence. It might never reach perfection, but maybe 99.5% will be fine for most users and applications.

The problem here is that given that AI systems like ChatGPT have already scraped much of the web, it's not clear where that additional data will come from.

"While completely eliminating hallucinations might be challenging due to the nature of language and how LLMs operate, the goal is to reduce their frequency and impact as much as possible through ongoing improvements," Dr Zhong said.

AI-powered technologies don't know anything, aside from how to guess the answer to a question. That capability is a marvellous magic trick, but it's hard to rely on and even harder to ignore the possibility that the unreliability is baked in at a foundational level.

In some situations, it's even desirable. The ability of an AI-powered system to deviate from expectations might be critical to its application in creative industries.

That said, Dr Sweeney pushed back on the idea that generative AI is creative. He noted that the industry had yet to find any combination of AIs capable of generating or finding new concepts.

“Creativity is not taking different things together and synthesising them. It feels creative but it’s not. It’s actually synthesis and summarisation," he said.

According to Dr Zhong, the exact tolerance for AI hallucinations depends on the context in which the AI is used.

"In high-stakes fields like medical diagnoses or legal advice, accuracy is critical, and the tolerance for errors is very low due to the serious consequences inaccuracies can have. Conversely, in less critical applications, such as casual conversations with chatbots or entertainment, users might accept a higher level of inaccuracy as the potential impact is minimal."

She highlighted the importance of user expectations, calling for more transparency about the limits of AI.

"If users understand that AI might make occasional mistakes and are in a position to verify or disregard questionable information, they might be more tolerant of errors," she said.

What about the Turing test?

Named after Alan Turing, this test assesses the capability of an AI-based system to pass itself off as human. For a long time, the Turing Test was regarded as something of a benchmark for AI performance. However, the reality is that it's only one of several benchmarks and that just because something can pass it doesn't mean it is sentient or alive in the way that people are.

Where is all this going?

Pictured: HP's AI PC launch event

The long and winding path that led to this latest wave of AI hype stretches back to the 1950s. Advancements in machine and deep learning built on this foundation in the decades that followed, eventually leading to the likes of ChatGPT, Microsoft Copilot and Google's Gemini.

Of course, the key thing that this version of history leaves out is the many crashes in investment and interest between each round of AI breakthrough. Commonly referred to as "AI winters", these downturns saw a complete collapse in investment in the sector once a widespread disillusionment with its inability to deliver on its lofty promises set in.

Speaking to Reviews.org at TechLeaders earlier this month, Telsyte managing director Foad Fadaghi said that tech giants like Microsoft see AI as less of a season and more of an arms race.

"Whether it succeeds or not, they still have to invest in it because there's too much threat to them if they don't invest and they fall behind. The investment side of things is nothing to do with consumer demand. It's about having a stake in the future of computing going forward," he explained.

Fadaghi said that on the other side of this current wave of investment will come a secondary one, which will be more focused on the actual real-world applications and benefits of AI as a technology. His prediction? Paying for AI will eventually become as normalised as paying for digital goods is.

"We're still so early in this wave. I don't think we've hit the peak," he said.

Dr Zhong agreed that the risk of the next AI winter is relatively low, but not out of the question.

"Overhyped expectations could lead to disillusionment if AI fails to deliver as anticipated, possibly reducing interest and funding. Economic downturns or shifts in funding priorities might also impact investment levels, while unforeseen technical challenges could slow progress," she said.

It's hard to know for certain whether (or when) AI winter is coming, but it's very easy to find potential evidence if you're looking for it.

This current wave of AI hype is the spitting image of the cryptocurrency bubble that burst just a few years ago. While tools like ChatGPT have done a lot for organisations, businesses and individuals who want to bring AI-powered tools into the workplace, that democratisation has also made it easier to run up against the limits of what the technology can do.

More than that, it's highlighted the gap between the vision of the future that companies like Nvidia and Microsoft are selling and the version of it that exists today.

Dr Sweeney noted that “in its current state” generative AI is not reaping the productivity gains that have been promised. The technology may get better, but that may matter less than whether it does so fast enough.

That said, he noted that most of the hyperscale services are happy to lose money on AI “because they’re trying to buy market share and create new monopolies.”

“They’re anticipating that the cost for this type of computing performance will fall,” he said.

For these companies, what happens if that doesn't come to pass or if this behavior attracts the ire of regulators is less important than the imperative to keep up with the rest of the industry.

It's not entirely clear what will happen when the AI bubble bursts but short of a Butlerian Jihad there's no putting the genie back in the bottle. Even if it doesn't replace your entire industry, AI will likely hang around in the same way that innovations like voice assistants and virtual reality have.

For as many stories as there are about how the real-world applications of the tech fall short, there are always going to be operators who opt for the lowest-cost way to do business. The idea that all businesses are going to be entirely powered by AI is a fantasy built on misunderstandings and misbehaviour.

A disruption and a revolution are not the same thing. AI breakthroughs have allowed us to use computers in new ways and address problems that they previously could not.  Still, a tool is nothing if it's not held in a pair of human hands and for all that AI might change things, that fundamental truth isn't likely to change anytime soon.

Written by Fergus Halliday
Fergus Halliday is a journalist and editor for Reviews.org. He’s written about technology, telecommunications, gaming and more for over a decade. He got his start writing in high school and began his full-time career as the Editor of PC World Australia. Fergus has made the MCV 30 Under 30 list, been a finalist for seven categories at the IT Journalism Awards and won Most Controversial Writer at the 2022 Consensus Awards. He has been published in Gizmodo, Kotaku, GamesHub, Press Start, Screen Rant, Superjump, Nestegg and more.
