Experts say humans could be wrestling AI for control by 2035

Tech leaders are split about how much control people will retain over essential decision-making as digital systems and AI spread, according to Pew research

Nothing less than the future of human agency is in question as more individuals embrace advanced technology to streamline their lives, according to a new study conducted by Pew Research Center and Elon University's Imagining the Internet Center.

The study asked 540 technology experts, developers, business and policy leaders, researchers, academics, and activists if smart machines, bots, and systems powered by artificial intelligence will be designed to allow humans to easily control most tech-aided decision-making relevant to their lives by 2035.

The study results were concerning, with fewer than half of respondents (44%) believing that smart machines will be designed to prioritise human control over tech-aided decision-making. Many experts expressed concerns that the increasing automation of business, government, and social systems could erode humans' ability to exercise judgment and make decisions independently of these systems.

However, some experts remain optimistic, asserting that humans have benefited from technological advances throughout history. They believe that new regulations, norms, and literacies will emerge to help ease the shortcomings of technology as automated digital systems become more deeply woven into daily life.

Pew’s nonscientific canvassing found:

  • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
  • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

The experts noted that these technologies will have both positive and negative consequences for human agency, and that people have historically either allowed other entities to make decisions for them or been compelled to do so by various authorities and tools.

The experts largely agree that digital technology tools will increasingly become vital to people's decision-making process, providing them with vast amounts of information to explore choices and access expertise as they navigate the world. However, both sides of the issue also acknowledge that this is a critical turning point that will determine the authority, autonomy, and agency of humans as digital technology spreads into more aspects of daily life.

“The future will clearly cut both ways. On the one hand, better information technologies and better data have improved and will continue to improve human decision-making,” says Alf Rehn, professor of innovation, design and management at the University of Southern Denmark. “On the other, black box systems and non-transparent AI can whittle away at human agency, doing so without us even knowing it is happening. The real challenge will lie in knowing which dynamic is playing out strongest in any given situation and what the longer-term impact might be.”

Barry Chudakov, founder and principal, Sertain Research, has a bold prediction: “By 2035, the relationship between humans and machines, bots and systems powered mostly by autonomous and artificial intelligence will look like an argument with one side shouting and the other side smiling smugly. 

“The relationship is effectively a struggle between the determined fantasy of humans to resist - ‘I’m independent and in charge and no, I won’t give up my agency!’ - and the seductive power of technology designed to undermine that fantasy - ‘I’m fast, convenient, entertaining! Pay attention to me!’”

Kathryn Bouskill, anthropologist and AI expert at the Rand Corporation, says that for many people, even the basic workings of everyday technology are already out of reach. “People have little idea how we build AI systems, control them and fix them. Many are grasping for control, but there is opaqueness in terms of how these technologies have been created and deployed by creators who oversell their promises.

“Right now, there is a huge chasm between the public and AI developers,” says Bouskill. “We need to ignite real public conversations to help people fully understand the stakes of these developments.”
