
RAGAS

by Stephen M. Walker II, Co-Founder / CEO

RAGAS (Retrieval Augmented Generation Assessment) provides a suite of metrics for evaluating different aspects of RAG systems, most of them without relying on ground-truth human annotations. These metrics fall into two categories: retrieval and generation.

RAGAS Metrics

  1. Retrieval Metrics: These metrics evaluate the performance of the retrieval system. They include:

    • Context Relevancy: This measures the signal-to-noise ratio in the retrieved contexts, i.e., how much of what was retrieved is actually relevant to the question.
    • Context Recall: This measures the retriever's ability to surface all the information needed to answer the question. It is computed by asking an LLM whether each statement in the provided ground-truth answer can be attributed to the retrieved context (see the sketch after this list).
  2. Generation Metrics: These metrics evaluate the performance of the generation system. They include:

    • Faithfulness: This measures the factual consistency of the generated answer with the retrieved context; claims not supported by the context count as hallucinations and lower the score.
    • Answer Relevancy: This measures how to-the-point the answer is for the question; incomplete or redundant answers score lower.
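
For intuition, once an LLM has judged whether each ground-truth statement can be attributed to the retrieved context, context recall reduces to a simple ratio. A minimal sketch with hypothetical verdicts:

# Hypothetical LLM verdicts: can each ground-truth statement
# be attributed to the retrieved context?
attributed = [True, True, False, True]

# Context recall = attributable statements / total statements
context_recall = sum(attributed) / len(attributed)
print(context_recall)  # 0.75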

The harmonic mean of these four metrics yields the RAGAS score, a single measure of your QA system's performance across all of these aspects.
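
As a quick illustration of the aggregation (the per-metric scores below are made up), note that the harmonic mean punishes any single weak aspect more than an arithmetic mean would:

from statistics import harmonic_mean

# Hypothetical per-metric scores for one pipeline run
scores = {
    "faithfulness": 0.95,
    "answer_relevancy": 0.88,
    "context_relevancy": 0.71,
    "context_recall": 0.90,
}

# One poor metric drags the overall score down noticeably
ragas_score = harmonic_mean(scores.values())
print(f"ragas_score: {ragas_score:.2f}")  # ≈ 0.85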

How to Use RAGAS

To use RAGAS, you need a handful of questions and, if you're using context recall, reference answers. Most of the metrics require no labeled data, so you can run evaluations without first building a human-annotated test dataset.
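
RAGAS also exposes a top-level evaluate() helper that runs the metrics over a Hugging Face Dataset. Here is a minimal sketch; the data is made up, and the column names ("question", "answer", "contexts", "ground_truths") assume an early ragas release, so check your version's expected schema:

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_relevancy, context_recall

# Hypothetical evaluation data: one question, the pipeline's answer,
# the retrieved contexts, and a reference answer for context_recall
data = {
    "question": ["What does RAGAS evaluate?"],
    "answer": ["RAGAS evaluates retrieval and generation quality in RAG pipelines."],
    "contexts": [["RAGAS provides metrics for RAG systems, covering retrieval and generation."]],
    "ground_truths": [["RAGAS evaluates RAG pipelines."]],
}

results = evaluate(
    Dataset.from_dict(data),
    metrics=[faithfulness, answer_relevancy, context_relevancy, context_recall],
)
print(results)  # per-metric scores for the dataset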

For LangChain-based pipelines, RAGAS also ships evaluator chains. Here's a Python code snippet showing how to use them:

from ragas.metrics import faithfulness, answer_relevancy, context_relevancy, context_recall
from ragas.langchain import RagasEvaluatorChain

# Build one evaluator chain per metric, keyed by the metric's name
eval_chains = {
    m.name: RagasEvaluatorChain(metric=m)
    for m in [faithfulness, answer_relevancy, context_relevancy, context_recall]
}

# `result` is the output dict of a QA chain run (a sketch of how it
# might be produced follows below)
for name, eval_chain in eval_chains.items():
    score_name = f"{name}_score"
    print(f"{score_name}: {eval_chain(result)[score_name]}")

In this code, RagasEvaluatorChain wraps each metric in a LangChain evaluator chain. Calling an evaluator chain (its __call__() method) on the output of the QA chain runs the evaluation and returns that metric's score.
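
For context, here is a hedged sketch of how result might be produced. It assumes a LangChain RetrievalQA chain with return_source_documents=True, since the evaluator chains read the query, the answer, and the retrieved documents from the output dict; the vectorstore variable is an assumption standing in for any existing vector store:

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# `vectorstore` is assumed to exist (e.g., a Chroma or FAISS index)
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,  # evaluator chains need the retrieved contexts
)

# The output dict carries "query", "result", and "source_documents"
result = qa_chain({"query": "What does RAGAS evaluate?"})

Note that the context_recall evaluator additionally needs ground-truth answers alongside the chain output, which is why reference answers were called out above.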

RAGAS is a powerful tool for evaluating RAG pipelines, providing actionable metrics with minimal annotated data, at lower cost, and with faster turnaround. It helps developers ensure their QA systems are robust and ready for deployment.
