Jun 8, 2021

Legend: Demis Hassabis

DemisHassabis
AI
CognitiveNeuroscience
DeepMind
2 min
What can AI learn from cognitive neuroscience and vice versa? Meet the man trying to find out, Demis Hassabis

Demis Hassabis, dubbed ‘the superhero of artificial intelligence’, is a man with a deep understanding of the mind. A classic overachiever, he started playing chess aged four, achieving master standard at 13. With his chess winnings he bought and modified his first computer, a ZX Spectrum. Meanwhile, he continued to study, completing his A-levels at 16 to gain a place at Cambridge University.

Demis Hassabis: Porsche

Before going up, Hassabis took a gap year, eschewing backpacking in favour of working for a computer games company, Bullfrog. At 17, he co-designed and led the programming of Theme Park, an award-winning and influential simulation game that sold more than 10 million copies. Having completed his gap year, he bought a Porsche and turned up at Freshers’ Week in it.

Demis Hassabis: cognitive neuroscience

Hassabis returned to the gaming industry after his studies and was instrumental in pushing the boundaries of in-game AI. But he later returned to academia, completing a PhD in cognitive neuroscience at University College London in 2009. His papers – notably one establishing a link between constructive and reconstructive processes in the brain – were considered major breakthroughs in the field.

Demis Hassabis: return to AI

Inevitably, Hassabis returned to the field of AI armed with a deeper understanding of human intelligence and of how it might be applied in technology. He co-founded DeepMind in 2010 with the intention of first “solving intelligence”, then using that intelligence “to solve everything else.”

Demis Hassabis: interview

In a 2017 interview with The Verge, Hassabis explained that while the link between AI and neuroscience is important, the neuroscience literature is too vast and complex to transfer directly into the kinds of problems AI researchers typically work on.

“If you’re an AI expert today and you have no neuroscience background at all and you try getting into it, it’s quite daunting,” he said. “I think there’s something like 50,000 papers a year — I can’t remember the exact number — published in neuroscience. So there’s a huge body of work to try and make sense of, most of which is not going to be relevant to AI, meaning you’re looking for nuggets of crucial information in a huge haystack.”

Demis Hassabis: DeepMind sale to Google

Hassabis’s company DeepMind was acquired by Google in 2014 for a reported $400 million. Its health division has since been folded into Google Health, and the company was behind the creation of AlphaGo, the program that defeated Go world champion Lee Sedol 4-1 at the famously complex game.
 


Jun 17, 2021

Chinese Firm Taigusys Launches Emotion-Recognition System

Taigusys
China
Huawei
AI
Elise Leise
3 min
Critics claim that new AI emotion-recognition platforms like Taigusys could infringe on Chinese citizens’ rights

In a detailed investigative report, the Guardian reported that Chinese tech company Taigusys can now monitor facial expressions. The company claims that it can track fake smiles, chart genuine emotions, and help police curtail security threats. ‘Ordinary people here in China aren’t happy about this technology, but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it’, said Chen Wei, company founder and chairman. ‘There’s always that demand, and we’re here to fulfil it’. 

 

Who Will Use the Data? 

The emotion-recognition market is projected to be worth US$36bn by 2023, which points to rapid global adoption. Taigusys counts Huawei, China Mobile, China Unicom, and PetroChina among its 36 clients, though none of them has revealed whether it has purchased the new AI. Taigusys is also likely to deploy the technology in Chinese prisons, schools, and nursing homes.

 

It’s not likely that emotion-recognition AI will stay within the realm of private enterprise. President Xi Jinping has promoted ‘positive energy’ among citizens and intimated that negative expressions are no good for a healthy society. If the Chinese central government continues to gain control over private companies’ tech data, national officials could use emotional data for ideological purposes—and target ‘unhappy’ or ‘suspicious’ citizens. 

 

How Does It Work? 

Taigusys’s AI tracks facial muscle movements, body motions, and other biometric data to infer how a person is feeling, collecting massive amounts of personal data for machine-learning purposes along the way. If an individual displays too much negative emotion, the platform can recommend him or her for what is termed ‘emotional support’, which may in practice mean something much worse.
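
To make that flagging logic concrete, below is a minimal, purely illustrative sketch in Python. It assumes an upstream facial-expression model (hypothetical here) that emits per-frame scores for a handful of basic emotions, and it applies a simple rolling-window rule to flag a subject whose negative-emotion scores stay high. Nothing in it reflects Taigusys’s actual code, labels, or thresholds.

```python
# Illustrative sketch only -- not Taigusys's actual system.
# Assumes an upstream facial-expression model that outputs, per video frame,
# a score in [0, 1] for each basic emotion category.
from collections import deque
from statistics import mean

NEGATIVE_EMOTIONS = {"anger", "fear", "sadness", "disgust"}  # assumed label set
WINDOW_FRAMES = 300    # hypothetical: roughly 10 seconds of video at 30 fps
FLAG_THRESHOLD = 0.6   # hypothetical cut-off for "too much negative emotion"


class NegativeEmotionMonitor:
    """Keeps a rolling window of per-frame scores and flags a subject
    when average negative-emotion intensity exceeds the threshold."""

    def __init__(self):
        self.window = deque(maxlen=WINDOW_FRAMES)

    def update(self, frame_scores):
        # Take the strongest negative-emotion score in this frame.
        negative = max(frame_scores.get(e, 0.0) for e in NEGATIVE_EMOTIONS)
        self.window.append(negative)
        # Flag only once the window is full, to avoid noisy early decisions.
        return len(self.window) == WINDOW_FRAMES and mean(self.window) > FLAG_THRESHOLD


# Usage: feed scores from the (hypothetical) expression model, frame by frame.
monitor = NegativeEmotionMonitor()
if monitor.update({"anger": 0.7, "fear": 0.2, "joy": 0.1}):  # toy frame
    print("Subject flagged for review")
```

Even in this toy form, the critics’ core objection is visible: the system only ever sees expression scores, never the person’s actual emotional state.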

 

Can We Really Detect Human Emotions? 

Many critics say no. Psychologists continue to debate whether human emotions can be separated into universal basic categories such as fear, joy, and surprise across cultures, or whether something more complex is at stake. Many argue that AI emotion-reading technology is not only unethical but also inaccurate, since facial expressions don’t necessarily indicate someone’s true emotional state.

 

In addition, Taigusys’s facial tracking system could promote racial bias. One of the company’s systems classes faces as ‘yellow, white, or black’; another distinguishes between Uyghur and Han Chinese; and sometimes, the technology picks up certain ethnic features better than others. 

 

Is China the Only One? 

Not a chance. Other countries have also tried to decode and use emotions. In 2007, the U.S. Transportation Security Administration (TSA) launched SPOT, a heavily contested training programme that taught airport personnel to monitor passengers for signs of stress, deception, and fear. But bias is rarely discussed openly in China, and as a result, AI-based discrimination there could be more dangerous.

 

‘That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is troubling, particularly since there isn’t the kind of critical discourse [about racism and ethnicity in China] that we’re having in the United States’, said Shazeda Ahmed, an AI researcher at New York University (NYU).

 

Taigusys’s founder, on the other hand, argues that its system can help prevent violent tragedies, citing a 2020 stabbing of 41 people in Guangxi Province. Yet top academics remain unconvinced. As Sandra Wachter, associate professor and senior research fellow at the Oxford Internet Institute, University of Oxford, said: ‘[If this continues], we will see a clash with fundamental human rights, such as free expression and the right to privacy’.

 
