Anthropic Unveils Claude 3: Its Most Powerful AI Chatbot Yet

Anthropic describes Claude 3 as its most powerful family of AI models to date, unlocking a new range of business use cases while promoting responsible AI

AI start-up Anthropic has announced Claude 3, a new family of AI models offering a wide range of new capabilities.

The Opus and Sonnet models are available now, with Haiku expected to follow soon. Each model offers increasingly powerful performance, allowing users to select the optimal balance of intelligence, speed and cost for their specific business use cases.

Significantly, Anthropic states that the Claude 3 family will be available at a lower cost than competitor models currently on the market. Unlike earlier versions, Claude 3 can understand both text and image inputs, making it multimodal.
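As a rough illustration of what multimodal input looks like in practice, the minimal sketch below sends an image alongside a text prompt. It assumes the official Anthropic Python SDK (the `anthropic` package) and the Opus model identifier published at launch; the file name and prompt are purely illustrative.

```python
import base64
import anthropic

# Read and base64-encode a local image (file name is illustrative).
with open("chart.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Send the image and a text question together in a single user message.
message = client.messages.create(
    model="claude-3-opus-20240229",  # assumed: Opus identifier as published at launch
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": image_data,
                    },
                },
                {"type": "text", "text": "Summarise the key trend shown in this chart."},
            ],
        }
    ],
)

print(message.content[0].text)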

New features to enable larger-scale AI deployments

Claude 3 comes with a range of improvements, including answering a wider range of questions, following longer and more complex instructions, and giving more accurate responses. Because the models can handle longer context, Anthropic says they can process more information in a single request.

The largest and most intelligent model, Claude 3 Opus, delivers the strongest performance on highly complex tasks, navigating open-ended prompts with remarkable fluency and human-like understanding.

Also available is Claude 3 Sonnet, which delivers strong performance at a lower cost and is engineered for high endurance in large-scale AI deployments. These models are designed to be easier to use and simpler to instruct for use cases such as natural language classification and sentiment analysis, as in the sketch below.
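The following minimal sketch shows how a sentiment-classification call might look. It again assumes the `anthropic` Python SDK and the Sonnet model identifier published at launch; the system prompt, function name and example review are illustrative rather than taken from Anthropic's documentation.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify_sentiment(text: str) -> str:
    """Ask Claude 3 Sonnet to label a piece of text as positive, negative or neutral."""
    message = client.messages.create(
        model="claude-3-sonnet-20240229",  # assumed: Sonnet identifier as published at launch
        max_tokens=10,
        system="You are a sentiment classifier. Reply with exactly one word: positive, negative or neutral.",
        messages=[{"role": "user", "content": text}],
    )
    return message.content[0].text.strip().lower()

print(classify_sentiment("The checkout process was quick and the support team was brilliant."))
```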

Anthropic also highlights that the new models improve on its previous model, Claude 2.1, with Sonnet in particular operating twice as fast and excelling in rapid-response tasks such as knowledge retrieval and sales automation.

“In our quest to have a highly harmless model, Claude 2 would sometimes over-refuse,” Anthropic co-founder Daniela Amodei told CNBC. “When somebody would kind of bump up against some of the spicier topics or the trust and safety guardrails, sometimes Claude 2 would trend a little bit conservative in responding to those questions.”

“We’ve tried very diligently to make these models the intersection of as capable and as safe as possible.”

A commitment to responsible AI

Anthropic has developed the Claude 3 family of models to be trustworthy as well as capable. In its announcement, it highlights that it has several dedicated teams tracking risks including misinformation, election interference and autonomous replication.

Having conducted research into the importance of safety training, Anthropic is dedicated to building AI models that are safe and reliable. In January 2024, the company joined a new US pilot programme to democratise access to safe AI, alongside Microsoft, OpenAI and others.

As it continues to develop methods that improve the safety and transparency of the Claude models, Anthropic is working to further mitigate privacy issues. Part of this work is addressing bias in increasingly sophisticated models: the company states that Claude 3 shows less bias than its predecessors, as part of its efforts to promote greater neutrality in its AI models.

In its statement, Anthropic says: “As we push the boundaries of AI capabilities, we’re equally committed to ensuring that our safety guardrails keep apace with these leaps in performance. 

“Our hypothesis is that being at the frontier of AI development is the most effective way to steer its trajectory towards positive societal outcomes.”
