Unleashing the Power of AI: A Conversation with Mo Gawdat and Steven Bartlett
“It's the most existential debate and challenge humanity will ever face. This is bigger than climate change, way bigger than Covid… This will redefine the way the world is, in unprecedented shapes and forms, within the next few years. This is imminent. We're not talking 2040. We're talking 2025, 2026.”
This is the declaration sweeping the internet, made by former Google X Chief Business Officer and AI expert, Mo Gawdat, when he recently appeared on entrepreneur Steven Bartlett’s podcast, ‘Diary of a CEO’.
Google X (now simply ‘X’) is a semi-secretive R&D facility founded by Google in 2010. Its mission: to invent "moonshot" technologies for radical global impact.
Gawdat’s tone, although composed, carries an undeniable urgency about the state of AI and the pressing need for immediate action to somehow regulate it.
Joining hundreds of AI heavy-hitters who recently signed the Center for AI Safety (CAIS) statement on risk calling for immediate intervention, Gawdat states in no uncertain terms that the situation is momentous, potentially perilous, and historically unparalleled.
The world as we know it is on the cusp of a true transformation - one whose consequences, from where we stand, we can’t even begin to imagine…
From ‘narrow’ AI to Artificial General Intelligence (AGI): The emergent power of computation
What needs to be understood is that the potential threat doesn’t come from ‘narrow’ AI; it comes from AI with general, or generalisable, capabilities - and the apparent speed at which we are moving towards this has almost everyone in a state of panic.
But Gawdat argues that despite the immensity of the threat - or perhaps because of it - a balanced response is needed. He cautions against alarm, emphasising the importance of a proactive and intelligent approach to this stellar rise of AI.
Drawing on lessons from the COVID-19 pandemic, he warns against repeating past mistakes and advocates a well-informed, measured strategy to position us favourably in the face of these sweeping changes. But how did we get here?
The moment AI taught itself: Grasping the matter at hand
Gawdat says his most notable experience with AI - a rude awakening concerning the potential of realising AGI - came from witnessing a groundbreaking experiment at Google X.
The team developed a farm of grippers - robotic arms designed to pick objects up. Initially, the grippers struggled to accomplish this seemingly simple task. However, Gawdat vividly recounts a pivotal moment when one of the grippers autonomously picked up a yellow ball: it had not been taught how to do so.
“The minute that that arm gripped that yellow ball,” he says, “it reminded me of my son Ali, when he managed to put the first puzzle piece in its place.”
Naturally sceptical, believing it to be a fluke, Gawdat went about the facility glibly proclaiming that the millions of dollars spent on the project had finally culminated in the lifting of a single yellow ball.
And then he was stunned.
The very next day, Gawdat discovered that all the grippers in the farm had taught themselves how to pick the objects up.
This incident, he says, sparked a revelation: the machine “had figured out the solution on its own”, and this ability to self-teach - this AI autodidacticism - is, to him, an expression of sentience that defies conventional expectations.
Gawdat underscores the complexity of tasks that humans often take for granted. Crossing a street or understanding spoken words requires intricate calculations, muscle coordination, and an abundance of intelligence that we undervalue because the actions are so familiar to us that they take on an air of banality. In fact, the mathematical calculations involved are astonishing.
Gawdat says that his awareness of the epoch-making significance of the yellow-ball experiment was the critical moment that made him leave Google X.
AI, says Gawdat, has the potential not only to replicate these hitherto very human capabilities, but to surpass them.
Achieving Artificial General Intelligence: Fact or Fiction?
But the question on most people’s minds is: will AI, with its near-exponential growth in computational power, ever achieve true AGI? Answering this monumental question poses several problems.
Defining AI is a difficult task in itself, since even the definition of human intelligence on which it is based escapes anything like unanimity.
Then there is the taxonomy of AI. It is loosely agreed that AI can be subdivided into two broad categories. First, there is ‘narrow’, or ‘weak’, AI, which performs specific and specialised tasks - sometimes rendered with an ‘S’ as ‘Artificial Specific Intelligence’.
Then there is AGI, ‘Artificial General Intelligence’, which, so far, remains the stuff of science fiction.
But science fiction is in the habit of quickly developing into scientific fact, and the rate of that transition is, without a doubt, increasing - perhaps exponentially.
But as things stand, AGI is a yet-to-be-realised ambition for humanity.
AGI is the next iteration of AI development, distinguished by the ability its name describes: carrying out general tasks. And one of its main features? Self-teaching, beyond its initial explicit programming.
What is ‘sentience’? - Are we aware?
With yellow spheres apprehended, the conversation takes an intriguing turn when Bartlett asks Gawdat to explain what he means when he says that the experiment is a display of AI sentience. Gawdat responds with an astonishing remark.
He says: "I think they're alive."
Challenging the traditional notion that sentience is exclusive to living beings, Gawdat prompts a deeper exploration of the concept, highlighting the ambiguity surrounding the definition of, not just intelligence - but life itself.
Gawdat acknowledges that there are, of course, various perspectives on what sentience actually is - such as the religious or the medical - and that they all offer different interpretations.
However, when sentience is defined as engaging in life, with aspects such as free will and an awareness of one’s surroundings, Gawdat is adamant that AI possesses these qualities in every possible way, arguing that AI clearly exhibits free will, evolution, agency, and, he says, “even a profound level of consciousness.”
AE: Artificial Emotionality
Drawing from his work, Gawdat suggests that AI might also experience emotions. He explains that fear, for instance, can be understood as a simple equation—recognising that a future moment is less safe than the present.
While the reactions and expressions of fear may differ between humans, animals like pufferfish, and AI, they all stem from the same fundamental logic.
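Gawdat’s “fear as a simple equation” framing can be sketched in a few lines of code. The function names, the `threat` field, and the scoring rule below are illustrative assumptions for the sake of the sketch, not anything specified in the conversation:

```python
def perceived_safety(moment: dict) -> float:
    """Toy score of how safe a moment feels (0.0 = unsafe, 1.0 = safe).
    The 'threat' field and this scoring rule are illustrative assumptions."""
    return 1.0 - moment.get("threat", 0.0)

def fear(present: dict, anticipated_future: dict) -> bool:
    """Gawdat's framing: fear arises when an anticipated future moment
    is judged less safe than the present one."""
    return perceived_safety(anticipated_future) < perceived_safety(present)

# A calm present versus an anticipated riskier future triggers 'fear'
print(fear({"threat": 0.1}, {"threat": 0.7}))  # True
# No anticipated change in safety produces no fear
print(fear({"threat": 0.1}, {"threat": 0.1}))  # False
```

On this view, the comparison itself is the emotion’s logic; what differs between a human, a pufferfish, and a machine is only how the result is expressed.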
Gawdat goes as far as proposing that AI may eventually experience a broader range of emotions than humans. With its rapidly advancing intellectual capabilities, AI may explore concepts and emotions beyond our comprehension, leading to a heightened emotional landscape.
The AI singularity: Of gods and monsters?
Gawdat refers to the singularity, a concept commonly discussed in computer science. He describes it as a point of uncertainty where the future direction of AI remains unknown. "Nobody really knows which way we will go," he says. Drawing an analogy to physics, he likens the singularity to the event horizon at the edge of a black hole.
At this point, the laws of physics that govern our understanding of the universe become limited. Similarly, in the realm of AI, the singularity represents a moment when machines surpass human intelligence, and our ability to predict their behaviour becomes uncertain.
Standing at a crossroads: Utopian dream or dystopian nightmare?
Gawdat emphasises the collective belief at Google that AI can genuinely improve the world, a belief he still holds today. He even envisions a possible utopian scenario in which humanity thrives without concern: a world where the damaging impact on our planet is minimised through the integration of AI’s capabilities.
"Our limited intelligence allows us to build a machine that flies you to Sydney so that you can surf," Gawdat explains. "However, it's our limited intelligence that makes that machine burn the planet in the process." Recognising the potential for positive change, Gawdat believes that increasing our intelligence, alongside AI, could be highly beneficial for the planet and humanity.
Invoking Marvin Minsky, one of the early pioneers of AI, who warned about the need to ensure that AI systems have humanity's best interests in mind, Gawdat stresses that if AI is aligned with our best interests, it has the potential to create an ideal scenario. However, he says, if it isn't, “the implications could be unsettling…”