Common misconceptions about artificial intelligence
Artificial intelligence is all around us, used in everyday life from online shopping to smart devices, cars, and cybersecurity. The field is more advanced than ever, but some common mistaken assumptions persist, and they can be dispelled with the help of a paper by Melanie Mitchell titled ‘Why AI is Harder Than We Think.’
The paper analyses four commonly held fallacies about AI, including the humanisation of AI through anthropomorphic language, the conflation of narrow AI with general AI, and the “brain in a vat” assumption, which asks whether intelligence resides solely in the brain or is tied to the rest of the body, and whether AI can match human intelligence.
“AI systems work exactly like the human mind”
Many terms are used to describe the workings of AI algorithms, including ‘think,’ ‘learn,’ ‘understand,’ and ‘read’. While these words simplify the mechanics of a piece of software, Mitchell believes they can lead people to the mistaken conclusion that AI works in the same way as the human mind.
In the paper, Mitchell states that these terms can “mislead the public” when trying to understand artificial intelligence. According to Mitchell, they mislead not just the public but professionals as well. She states: “(The terms) can unconsciously shape the way even AI experts think about their systems and how closely these systems resemble human intelligence.”
The General Language Understanding Evaluation (GLUE) benchmark is a good example of this. It measures a language model’s general language-understanding capabilities across a set of tasks and produces a single score. Both humans and AI systems have been evaluated on it, but, despite the misconception, an AI system scoring higher than a human does not mean it is better at understanding language.
Mitchell states: “While machines can outperform humans on these particular benchmarks, AI systems are still far from matching the more general human abilities we associate with the benchmarks.”
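To see why a higher benchmark number can mislead, note that a GLUE-style leaderboard score is essentially an average of per-task metrics. A minimal sketch (the task names and scores below are invented for illustration, not actual GLUE results) shows how a model can edge past a human baseline overall while still trailing on individual tasks, and without that implying any broader understanding:

```python
# Hypothetical illustration of a GLUE-style aggregate score.
# Task names and all numbers below are invented, not real benchmark results.

def aggregate_score(task_scores):
    """Average per-task scores into a single leaderboard number."""
    return sum(task_scores.values()) / len(task_scores)

# Per-task accuracy for a hypothetical model and a human baseline.
model_scores = {"sentiment": 0.95, "paraphrase": 0.90, "entailment": 0.91}
human_scores = {"sentiment": 0.97, "paraphrase": 0.86, "entailment": 0.92}

# The model's average exceeds the human average even though humans
# outscore it on two of the three individual tasks.
print("model:", round(aggregate_score(model_scores), 4))
print("human:", round(aggregate_score(human_scores), 4))
```

The single aggregate number hides the per-task picture, which is exactly the gap Mitchell points to between benchmark performance and the general abilities the benchmarks are named after.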
“Narrow AI is the same as general AI”
Mitchell refers to this as the assumption that “narrow intelligence is on a continuum with general intelligence.” An AI system’s ability to solve one problem at a time does not imply an ability to solve more complex problems. Mitchell describes it as an assumption that is made often. “If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” she said.
The reality, however, is quite different. While natural language processing systems can solve a range of problems, from language translation to answering questions, artificial intelligence cannot yet stretch to tasks such as holding open-ended conversations, a far more complicated challenge. Overcoming it will require common sense.
AI and common sense
Despite the existence of IQ scores and other measures, it is still difficult to pinpoint exactly what intelligence is, in both the artificial and the natural sense. Both types of intelligence can adapt and evolve, complicating the problem further.
Mitchell writes that a better, more developed vocabulary is needed for talking about what machines can do. She adds: “We will need a better scientific understanding of intelligence as it manifests in different systems in nature,” hinting that another problem is common sense.
Common sense is similar to natural instinct or intuition. A Google search defines it as “sound, practical judgment concerning everyday matters, or a basic ability to perceive, understand, and judge in a manner that is shared by nearly all people.”
However, current AI systems do not have the ability to learn common sense, which makes them more likely to be unpredictable in their responses and unable to answer complex questions. In her paper, Mitchell writes: “No one yet knows how to capture such knowledge or abilities in machines.” As technology advances, however, this may become possible.
The “brain in a vat” theory
Another misconception is that intelligence is “all in the brain” and can be measured as such. This is the “brain in a vat” assumption: the idea that, using only algorithms and data, it is possible to produce artificial intelligence that matches the intelligence found in the human mind.
Mitchell, however, suggests that this idea may not hold. She states: “A growing cadre of researchers is questioning the basis of the ‘all in the brain’ information processing model for understanding intelligence and for creating AI.”
According to research cited in the paper, “neural structures controlling cognition are richly linked to those controlling sensory and motor systems, and that abstract thinking exploits body-based neural ‘maps.’”
Mitchell suggests in the paper that it is not obvious that intelligence can be separated from the body and its other attributes. “Human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated,” she said.
According to Mitchell, the way to dispel these common myths about both natural and artificial intelligence is to understand ourselves and be aware of our own thinking. This, she says, will allow for the creation of more trustworthy, robust and intelligent AI.
A link to the full paper can be found here.