Amazon Takes On AI Hallucinations Across Its AI Portfolio
Amazon is moving to address the issue of AI hallucinations through a series of significant AI updates across several of its AWS enterprise applications.
The announcement reflects the pressure organisations face to improve the reliability and accuracy of Gen AI products, a major obstacle to rolling the technology out across more enterprise applications.
With GlobalData forecasting that the Gen AI sector will grow from US$1.8bn in 2022 to US$33bn in 2027, addressing the issue early in this rapid growth cycle is proving crucial.
According to Amazon, the update will reduce hallucinations by around 75% for a number of Gen AI use cases.
Amazon’s efforts for accuracy
At the core of this update is the enhancement of Amazon's Gen AI agents with additional memory capacity.
“This allows agents to provide more personalised and more seamless experiences, especially for complicated tasks,” said Vasi Philomin, Amazon’s Vice President of Gen AI.
This memory improvement is crucial in addressing the issue of AI hallucinations, as it gives the system a more comprehensive understanding of the conversation or task.
Examining AI hallucinations
AI hallucinations are instances where AI systems, particularly large language models (LLMs) generate false, inaccurate, or nonsensical information that is not grounded in reality or the given input data.
AI hallucinations have been a significant concern in the tech industry, with recent incidents highlighting the potential risks.
For instance, before Bard was rebranded to Gemini and updated, Google’s flagship Gen AI platform generated non-existent or fabricated information in response to prompts.
The danger of such hallucinations in an enterprise setting is that an LLM could return false information in response to a prompt, have that answer taken at face value, and then become a faulty component in a larger system built on top of it. An application built on AWS’ Amazon Bedrock, a cloud platform for building AI applications, could see just such an occurrence.
Amazon announced that its updates will therefore extend to its Bedrock service, as well as to its Amazon Q chatbot, enabling it to make improved suggestions for writing software code and addressing one of the more popular uses of generative AI.
Amazon’s Q chatbot uses retrieval augmented generation (RAG), an AI technique that enhances the output of LLMs by incorporating external, up-to-date information sources.
RAG is deemed to provide more accurate responses from Gen AI programmes limited by training cut-off dates, although the approach remains susceptible to hallucinations. By adding more memory, RAG systems like the Q chatbot can access a larger pool of accurate, up-to-date data, improving retrieval accuracy and enabling better handling of complex queries.
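To illustrate the RAG pattern described above, here is a minimal sketch in Python. It is not Amazon Q’s actual implementation: the document corpus is invented, and keyword overlap stands in for the vector-similarity search a production RAG system would use. The core idea is the same, though: retrieve relevant source material first, then ground the model’s prompt in it.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by keyword overlap with the query (a simple
    stand-in for real vector-similarity retrieval) and return the
    top_k best matches."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query, documents):
    """Ground the model by prepending retrieved context to the prompt,
    so the response can be attributed back to the source data."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )


# Hypothetical enterprise documents, for illustration only.
docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
    "All laptops ship with a one-year hardware warranty.",
]

prompt = build_prompt("What is the refund window?", docs)
print(prompt)
```

Because the LLM is instructed to answer only from the retrieved context, a well-behaved model will cite the 30-day policy rather than invent one, which is the hallucination-reducing effect RAG aims for.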
Accuracy in AI
By ensuring that a model's response can be attributed to the correct enterprise source data and is relevant to the user's original query, Amazon is taking a significant step towards more reliable AI outputs.
And by focusing on reducing hallucinations and improving the accuracy of AI-generated content, Amazon is addressing one of the most significant barriers to widespread AI adoption.
******
Make sure you check out the latest edition of AI Magazine and also sign up to our global conference series - Tech & AI LIVE 2024
******
AI Magazine is a BizClik brand