Meta’s Llama 2: The next generation of open source LLMs

Meta has released Llama 2, an open-source AI model, in the hopes that it will further promote responsible and safe use of AI and LLMs within the industry

Meta has announced the launch of Llama 2, making it freely available for research and commercial use.

It is exciting for many within the technology and AI sectors to see a large organisation such as Meta engage with open-source tools. Open-sourcing Llama 2, and making it free to use, allows users to build on and learn from its architecture.

According to Meta, Llama 2's pretrained models are trained on two trillion tokens, with double the context length of Llama 1. Its fine-tuned models have also been trained on over one million human annotations.

Open-source AI leading the charge

Meta’s previous Llama model, released earlier in 2023, also aimed to let researchers without access to substantial infrastructure study large language models, democratising access to the rapidly advancing field of AI.

The launch comes swiftly after that of Meta’s heavily publicised new social media platform Threads, which acts as an extension of Instagram. Meta claims that since 2016 it has invested more than US$16bn in building the teams and technologies needed to protect users, whilst remaining focused on advancing integrity efforts and investments to protect its online community.

The company has partnered with Microsoft to introduce the next generation of its large language models (LLMs). Llama 2 was pretrained on publicly available online data sources, with the fine-tuned model, Llama-2-chat, leveraging publicly available instruction datasets and over one million human annotations.

Meta has also released a responsible use guide for Llama 2 that provides best practices and considerations for building products powered by LLMs responsibly, covering the various stages of development from inception to deployment.

Azure customers can also use the platform to deploy the 7B, 13B and 70B-parameter Llama 2 models more easily and safely.

Paving the way for more responsible use

There have been global calls for more regulation to enforce the responsible use of AI. The UN in particular has long advocated the responsible use of AI, calling for global discussions about how international collaboration can prevent continued increases in fraud, ransomware, cyber attacks and even surveillance operations.

In particular, Meta has stated that Llama 2 has undergone testing by external partners and internal teams to identify performance gaps. This aims to mitigate problematic responses in chat use cases and ultimately enhance safety and performance.

The company has spoken about the importance of policies around generative AI being informed by global experts and has created a forum as a result. The forum is intended to act as a governance tool to deliberate on the values underpinning AI, LLMs and other new AI technologies.

With funding for LLMs increasing worldwide, the broad adoption of such systems could triple national productivity growth rates, making them a key driver of economic growth.
