AI Godfather raises warnings about machine learning growth

In the wake of Geoffrey Hinton’s resignation from Google, companies must weigh future developments in artificial intelligence against the reality of its risks

As he enters retirement, the ‘AI godfather’ has issued warnings and regrets about the risks AI tools now present. Geoffrey Hinton resigned from Google over the weekend, expressing concerns over the dangers of AI, stating that soon it could become more intelligent than humans and could be easily manipulated by so-called bad actors. 

Quoted by the BBC, he said that some of the dangers of AI chatbots were "quite scary".

"Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

AI has dominated the news recently, with experts sharing concerns over the growing intelligence of these systems following the rapid development of OpenAI's GPT-4 and its widespread accessibility. With anxieties mounting over disinformation, businesses would do well to inform themselves of the realities of machine learning.

Businesses have had to become more technologically savvy when it comes to machine learning. In a notable recent example, Samsung was reported to have banned ChatGPT use company-wide following a data leak via the tool last month. According to a memo seen by Bloomberg, the restriction is temporary and expected to last until the company builds “security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency.”

While supportive of these developments, those working in the technology sector have also spoken out about the risks associated with the evolution of AI and ChatGPT. John Smith, EMEA CTO at Veracode, has stated that it is important to remember that “publicly available models, like ChatGPT and Bard, are still in their infancy.”

“There’s no doubt OpenAI will be taking steps to mature its approach in this area. In the meantime, businesses should carefully weigh up the risks of sending sensitive information outside of their organisations … Sharing private or sensitive information into a system capable of learning and reusing it – even if that specific feature is not currently enabled – could cause big issues later down the line.”

What is clear is that businesses must learn to use AI tools like ChatGPT more effectively in order to stay ahead and keep pace with developments.

Alec Boere, Associate Partner for AI and Automation, Europe at Infosys Consulting, suggests that to mitigate fears surrounding AI, responsibility should be at the forefront of the enterprise when implementing AI models, with particular focus placed on the five core pillars of trust.

“These core focus areas in the delivery of AI-based solutions stem from the human and cultural approach led from within the enterprise. For example, businesses must have diverse teams to avoid transferring human bias into the technical design of AI - as the AI is driven by human input. Businesses should also avoid using outdated data, because these algorithms will then only amplify the patterns from the past and not design new ones for the future.” 

“Whilst OpenAI has opened the ChatGPT door, greater controls need to be put in place, allowing for the management of data sources and more guardrails to ensure trust. To help maintain this trust, every organisation should have policies to ensure they are being AI responsible and should be working with organisations like the CBI and TechUK to help shape government policies too.”

