An AGI Revolution: Balancing Progress with Public Concern
AGI is a hypothetical form of intelligence which, if realised, could learn to accomplish any intellectual task that human beings or animals can perform.
Although fully achieving AGI will likely take several years, substantial progress is being made towards its development. However, recent polling suggests that people generally do not want an AI that is smarter than the average person.
A poll conducted by YouGov highlights that over two in five Britons (43%) believe that the onset of human-level AI would make society worse, with just a quarter (26%) saying it would make society better.
What do people think about AGI developments?
OpenAI CEO Sam Altman recently said that AGI will be “the most powerful technology yet invented” and could bring about a world that gets “more abundant and much better every year.”
In this vein, AI Magazine has previously reported that AGI would mark a significant advancement in the capability of technology. The intention is that it could perform any task the way a human brain can, from music composition to logistics.
A recurring concern about generative AI (Gen AI) tools such as ChatGPT is that they will take away people’s jobs. This is especially prevalent in 2024, as large companies continue to cut workforces in favour of AI development. With this in mind, YouGov surveyed people about how they feel concerning AGI’s impact on human workforces and companies.
Survey findings show that, whilst half of respondents say the development of an AGI would be positive for businesses (50%), most believe it would be “bad news” for workers (57%).
In addition, YouGov data highlights that those surveyed are most likely to be happy with AI that is less intelligent than the average person, with 47% comfortable with the idea. Tools with a narrower scope like ChatGPT or Midjourney would fall into this category, as they can hold conversations with users or create realistic images.
However, when it comes to AI with human-level intelligence, much like a hypothetical AGI, YouGov finds that people are more apprehensive: just 37% of those surveyed say they are comfortable with it. A hypothetical AI superintelligence creates even more unease, with just 26% saying they are comfortable with the idea of an AI that is more intelligent than the average human, while 60% say they are not.
Business development: Weighing up the anxieties
AGI holds the potential to revolutionise businesses in the future. Technology companies like Google are already developing AI systems that teach themselves how to complete tasks, a capability that could be monumental for business development, but only if harnessed safely and responsibly.
Ultimately, given that theoretical AGI systems could outperform human intelligence, it is understandable that people are concerned about the technology's impact on global society. If left unchecked, AGI's growth could pose risks that humanity cannot currently comprehend.
AI advisers have already implored governments to consider AGI regulation in the interest of humanity. Marc Warner, AI Council member and CEO of Faculty AI, has previously suggested that establishing a regulatory framework for AGI would be the best way to protect humanity from potential risks.
Causes for concern include unintended consequences of developing deep learning models, such as a loss of control, or AGI surpassing human intelligence and decision-making abilities. This could carry ethical, social and economic implications for humanity as well as for enterprises.
AI Magazine is a BizClik brand