Llama 2

by Stephen M. Walker II, Co-Founder / CEO

Llama 2 is an open-source large language model (LLM) developed by Meta AI, the AI research arm of Meta (Facebook's parent company). It is a family of pre-trained and fine-tuned large language models ranging in scale from 7 billion to 70 billion parameters. Llama 2 is the successor to the original Llama model and includes both foundational models and models fine-tuned for dialogue, known as Llama 2-Chat.

The Llama 2 models were trained on 2 trillion tokens from publicly available sources such as Common Crawl, Wikipedia, and public-domain books from Project Gutenberg. The models are trained to predict the most plausible follow-on text using a neural network with billions of parameters.

Llama 2 improves on the original Llama model in several ways. Like its predecessor, it is a transformer modified with RMSNorm pre-normalization, the SwiGLU activation function, and rotary positional embeddings (RoPE). The main architectural changes in Llama 2 are a context length doubled from 2048 tokens in Llama 1 to 4096 tokens, and grouped-query attention in its larger models.

The Llama 2 family also includes Code Llama, a code generation model built on Llama 2 and trained on 500 billion tokens of code. Code Llama supports many widely used programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.

Llama 2 is freely available for research and commercial use, with some restrictions. Companies with more than 700 million monthly active users must request a special license from Meta, which effectively puts the model off limits for the likes of Apple, Google, and Amazon.

In terms of performance, Llama 2 outperforms other open-source language models on many external benchmarks, including tests of reasoning, coding, proficiency, and knowledge. It still trails proprietary models such as GPT-4 and PaLM 2 on some of those benchmarks, however.

Llama 2

Llama 2: The second iteration of Meta's open-source LLM. It's not a single model but a family trained at four parameter counts: 7B, 13B, 34B, and 70B. It uses a neural network with billions of parameters and the same transformer-based development approach as counterparts like OpenAI's GPT-3.5 and Google's PaLM 2.

Three of those sizes were publicly released: 7B, 13B, and 70B (the 34B model was trained but not released). The variants differ in inference speed and capability, but all can generate coherent text responses to user prompts.

The Llama 2 chat models were trained with a specific prompt structure that relies on special tokens: <s> marks the beginning of the entire sequence, <<SYS>>\n and \n<</SYS>>\n\n delimit the system message, and [INST] and [/INST] mark the beginning and end of each user instruction.
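To make that structure concrete, here is a minimal sketch that assembles a single-turn chat prompt from those tokens; the system and user strings are illustrative placeholders.

```python
# A minimal sketch of the Llama 2 single-turn chat prompt format.
# The system_prompt and user_message values are placeholders.
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Summarize grouped query attention in one sentence.",
)
print(prompt)
```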

Llama 2 adopts the model architecture of Llama 1 with a few modifications. It uses rotary positional embeddings (RoPE), which balance the absolute and relative position of each token in a sequence: the absolute position is encoded with a rotation matrix applied to the query and key vectors, so relative position information enters the self-attention operation directly.
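The sketch below shows the core RoPE computation under simplifying assumptions (a single attention head, interleaved dimension pairs); it is illustrative rather than Meta's exact implementation.

```python
import torch

# Rotary positional embeddings: each pair of dimensions in a query or key
# vector is rotated by an angle proportional to the token's position, so
# query-key dot products depend on the relative offset between tokens.
def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    seq_len, dim = x.shape  # dim must be even
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)          # (seq_len, 1)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)  # (dim/2,)
    angles = pos * freqs                                                   # (seq_len, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = rope(torch.randn(16, 64))  # rotate queries for a 16-token sequence
```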

Llama 2 is trained with a longer context length of 4K tokens, compared to the 2K context length of Llama 1. Its larger variants (34B and 70B) additionally adopt grouped query attention (GQA) in their attention layers.

Llama 2 was pre-trained solely on publicly available data, a deliberate choice that lets anyone with sufficient compute openly replicate the pre-training process. Compared to Llama 1, Llama 2 adopts a new mixture of pre-training data that samples sources known to be high-quality and factual more heavily, and it increases the size of the pre-training dataset by 40%.

Llama 2 was fine-tuned on a large dataset in a manner similar to proprietary models, producing the Llama 2-Chat model optimized for dialogue applications. This alignment process, which teaches the model the desired output style and format, aimed at reducing hallucinations, declining unsafe requests, following detailed instructions, and more.

The Llama 2 models were trained in bfloat16, but the reference inference code uses float16. The checkpoints uploaded to the Hugging Face Hub set torch_dtype = 'float16', which the AutoModel API uses to cast the weights from torch.float32 to torch.float16 when loading.
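In practice, that means passing the dtype explicitly when loading, as in this sketch (it assumes you have been granted access to the gated meta-llama repositories on the Hub):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a Llama 2 checkpoint in float16. Without torch_dtype, transformers
# would materialize the weights in float32 and use twice the memory.
model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
```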

To use Llama 2, you can load it with Hugging Face Transformers. The most capable option is the largest chat-tuned model, Llama-2-70b-chat-hf. Keep in mind, however, that the 70B model needs on the order of 140GB of GPU memory in float16 (roughly 35GB with 4-bit quantization), which limits it to high-end GPUs like the A100.
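A minimal generation example follows, using the 7B chat model so it fits on a single GPU; swap in Llama-2-70b-chat-hf if you have the hardware:

```python
import torch
from transformers import pipeline

# Text generation with the smallest chat-tuned Llama 2 model.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto",  # place layers on available GPUs automatically
)
output = generator(
    "<s>[INST] What is grouped query attention? [/INST]",
    max_new_tokens=128,
)
print(output[0]["generated_text"])
```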

Transformer Architecture

The foundation of Llama 2: the attention-based neural network architecture underlying nearly all modern large language models.

RMSNorm

Short for Root Mean Square Normalization, a pre-normalization technique Llama 2 applies to the input of each transformer layer. It stabilizes training by rescaling activations by their root mean square, a cheaper alternative to LayerNorm's full mean-and-variance normalization.
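A compact sketch of an RMSNorm layer in the Llama style (the epsilon value is an illustrative default):

```python
import torch

# RMSNorm: scale activations by the reciprocal of their root mean square,
# apply a learned gain, and skip the mean subtraction LayerNorm performs.
class RMSNorm(torch.nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = torch.nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return x * rms * self.weight
```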

SwiGLU Activation Function

The activation function Meta chose for Llama 2's feed-forward layers. It combines the Swish (SiLU) activation with a gated linear unit, which tends to train better than plain ReLU-style activations.
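A sketch of a SwiGLU feed-forward block in the Llama style; the projection sizes are left as parameters and are not Llama 2's actual dimensions:

```python
import torch
import torch.nn.functional as F

# SwiGLU feed-forward block: one projection is passed through SiLU (Swish)
# and used to gate a second projection, then the product is projected back.
class SwiGLU(torch.nn.Module):
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w_gate = torch.nn.Linear(dim, hidden_dim, bias=False)
        self.w_up = torch.nn.Linear(dim, hidden_dim, bias=False)
        self.w_down = torch.nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))
```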

RoPE

Short for Rotary Positional Embedding, a mathematical method used in Llama 2 that encodes each token's position by rotating its query and key vectors, letting attention capture both absolute and relative word order.

Ghost Attention (GAtt)

A fine-tuning method introduced by Meta for Llama 2. It helps control dialogue flow over multiple turns by synthetically concatenating the 'act as' instruction to all user messages in a conversation.

Context Length

The amount of information Llama 2 can consider from previous inputs. Llama 2 has a context length of 4096 tokens, twice the context length of Llama 1.

Grouped Query Attention (GQA)

An attention variant used in Llama 2's larger models in which groups of query heads share a smaller set of key and value heads, shrinking the key-value cache and improving inference scalability.
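The sketch below illustrates the core idea with toy shapes (8 query heads sharing 2 key/value heads); it omits masking and batching, and the head counts are not Llama 2's actual configuration:

```python
import torch

# Grouped query attention: many query heads attend over a smaller number of
# shared key/value heads, so the KV cache shrinks by the sharing factor.
n_q_heads, n_kv_heads, head_dim, seq = 8, 2, 64, 16
q = torch.randn(n_q_heads, seq, head_dim)
k = torch.randn(n_kv_heads, seq, head_dim)
v = torch.randn(n_kv_heads, seq, head_dim)

# Expand K and V so each group of query heads sees the same shared heads.
k = k.repeat_interleave(n_q_heads // n_kv_heads, dim=0)
v = v.repeat_interleave(n_q_heads // n_kv_heads, dim=0)

scores = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1)
out = scores @ v  # (n_q_heads, seq, head_dim)
```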

Fine-tuning

The process of adjusting the weights of a pre-trained model to make it perform better on a desired task. Llama 2 uses fine-tuning methods including supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and initial and iterative reward modeling.

Fill-in-the-middle (FIM) Capability

A feature in Code Llama that allows it to insert code into existing code, supporting tasks like code completion.
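As described in the Code Llama paper, infilling prompts interleave the code before and after the gap with sentinel tokens; a sketch of the format follows (the exact sentinel spellings come from the Code Llama tokenizer, and this example assumes a base Code Llama model):

```python
# Fill-in-the-middle prompt: the model sees the code before (<PRE>) and
# after (<SUF>) the gap, then generates the missing middle after <MID>.
prefix = "def add(a, b):\n    "
suffix = "\n    return result\n"
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"
# Feeding this prompt to a Code Llama base model yields the infilled code.
```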

Instruction Tuning

A training process where the model is fed a natural language instruction input and the expected output. This makes it better at understanding what people expect out of their prompts. Used in Code Llama Instruct.

Llama 2-Chat

A chatbot version of Llama 2, fine-tuned for chat-style interactions through supervised fine-tuning and reinforcement learning from human feedback (RLHF).

Code Llama

A code generation model built on Llama 2, trained on 500B tokens of code. It supports common programming languages including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.

