AI Godfather raises warnings about machine learning growth

In the wake of Geoffrey Hinton’s resignation from Google, companies must weigh up future developments of artificial intelligence alongside the reality

As he enters retirement, the ‘AI godfather’ has issued warnings and regrets about the risks AI tools now present. Geoffrey Hinton resigned from Google over the weekend, expressing concern that AI could soon become more intelligent than humans and could easily be manipulated by bad actors.

He told the BBC that some of the dangers of AI chatbots were "quite scary".

"Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

AI has dominated the news recently, with experts sharing concerns over the growing intelligence of these systems following the rapid development of OpenAI's GPT-4 and its widespread accessibility. With anxieties mounting over disinformation, businesses would do well to inform themselves about the realities of machine learning.

Businesses have had to become more technologically savvy when it comes to machine learning. In one notable recent example, Samsung was reported to have banned ChatGPT use company-wide following a data leak via the tool last month. According to a memo seen by Bloomberg, the restriction is temporary and expected to last until the company builds “security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency.”

While supportive of these developments, those working in the technology sector have also spoken out about the risks associated with the evolution of AI and ChatGPT. John Smith, EMEA CTO at Veracode, has stated that it is important to remember that “publicly available models, like ChatGPT and Bard, are still in their infancy.”

“There’s no doubt OpenAI will be taking steps to mature its approach in this area. In the meantime, businesses should carefully weigh up the risks of sending sensitive information outside of their organisations … Sharing private or sensitive information into a system capable of learning and reusing it – even if that specific feature is not currently enabled – could cause big issues later down the line.”

What is clear is that businesses must learn to use AI tools like ChatGPT more effectively in order to stay ahead and keep pace with developments.

Alec Boere, Associate Partner for AI and Automation, Europe at Infosys Consulting, suggests that to mitigate fears surrounding AI, responsibility should be at the forefront of the enterprise when implementing AI models, with a particular focus on the five core pillars of trust.

“These core focus areas in the delivery of AI-based solutions stem from the human and cultural approach led from within the enterprise. For example, businesses must have diverse teams to avoid transferring human bias into the technical design of AI - as the AI is driven by human input. Businesses should also avoid using outdated data, because these algorithms will then only amplify the patterns from the past and not design new ones for the future.” 

“Whilst OpenAI has opened the ChatGPT door, greater controls need to be put in place, allowing for the management of data sources and more guardrails to ensure trust. To help maintain this trust, every organisation should have policies to ensure they are being AI responsible and should be working with organisations like the CBI and TechUK to help shape government policies too.”

