Anthropic Unveils Claude 3: Its Most Powerful AI Chatbot Yet

Anthropic describes Claude 3 as its most powerful family of AI models to date, offering a new range of business use cases and promoting responsible AI

AI start-up Anthropic has announced Claude 3, a new family of AI models offering a wide range of new capabilities.

The Opus and Sonnet models are available now, with Haiku expected to follow soon. Each model offers a different balance of intelligence, speed and cost, allowing users to select the optimal trade-off for specific business use cases.

Significantly, Anthropic states that the Claude 3 family will be available at a lower cost than competing models currently on the market. Unlike earlier versions, Claude 3 can understand both text and image inputs, making it multimodal.

New features to enable larger-scale AI deployments

The Claude 3 models bring a range of improvements: they answer more questions, follow longer instructions and produce more accurate responses. Because the models can understand more context, Anthropic says, they can process more information.

Claude 3 Opus, the largest and most intelligent model in the family, delivers the strongest performance on highly complex tasks, navigating open-ended prompts with remarkable fluency and human-like understanding.

Also available is Claude 3 Sonnet, which delivers strong performance at a lower cost and is engineered for endurance in large-scale AI deployments. These models are designed to be easier to use and simpler to instruct for use cases such as natural language classification and sentiment analysis.

Anthropic also highlights improvements over its previous model, Claude 2.1: Sonnet in particular runs twice as fast and excels at rapid-response tasks such as knowledge retrieval and sales automation.

“In our quest to have a highly harmless model, Claude 2 would sometimes over-refuse,” Anthropic co-founder Daniela Amodei told CNBC. “When somebody would kind of bump up against some of the spicier topics or the trust and safety guardrails, sometimes Claude 2 would trend a little bit conservative in responding to those questions.”

“We’ve tried very diligently to make these models the intersection of as capable and as safe as possible.”

A commitment to responsible AI

Anthropic has developed the Claude 3 family of models to be trustworthy as well as capable. In its announcement, the company highlights that several dedicated teams track risks including misinformation, election interference and autonomous replication skills.

Having conducted research into the importance of safety training, Anthropic is committed to building AI models that are safe and reliable. In January 2024, the company joined a new US pilot programme to democratise access to safe AI, alongside Microsoft, OpenAI and others.

Anthropic continues to develop methods that improve the safety and transparency of its Claude models and is working to further mitigate privacy issues. It is also addressing bias in increasingly sophisticated models: the company states that Claude 3 shows less bias than its previous models, part of its efforts to promote greater neutrality in its AI systems.

In its statement, Anthropic says: “As we push the boundaries of AI capabilities, we’re equally committed to ensuring that our safety guardrails keep apace with these leaps in performance. 

“Our hypothesis is that being at the frontier of AI development is the most effective way to steer its trajectory towards positive societal outcomes.”



AI Magazine is a BizClik brand
