RAGAS provides a suite of metrics to evaluate different aspects of RAG systems, mostly without relying on ground-truth human annotations. These metrics are divided into two categories: retrieval and generation.
Metrics of RAGAS
- Retrieval Metrics: These metrics evaluate the performance of the retrieval system. They include:
- Context Relevancy: This measures the signal-to-noise ratio in the retrieved contexts.
- Context Recall: This measures the ability of the retriever to retrieve all the information needed to answer the question. It is calculated by using an LLM to check whether each statement in the provided ground-truth answer can be found in the retrieved context.
- Generation Metrics: These metrics evaluate the performance of the generation system. They include:
- Faithfulness: This measures hallucination by checking whether the claims in the generated answer are supported by the retrieved context.
- Answer Relevancy: This measures how to-the-point the answers are to the question.
The harmonic mean of these four metrics gives you the RAGAS score, a single measure of your QA system's performance across all of these aspects.
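For intuition, here is a minimal sketch of that aggregation; the scores are placeholders, not output from a real evaluation run:

from statistics import harmonic_mean

# Placeholder scores for the four aspects, each in [0, 1] (illustrative only).
scores = {
    "context_relevancy": 0.81,
    "context_recall": 0.90,
    "faithfulness": 0.95,
    "answer_relevancy": 0.87,
}

# The overall score is the harmonic mean of the four metrics, so a single weak
# aspect (e.g. poor retrieval) drags the aggregate down sharply.
ragas_score = harmonic_mean(list(scores.values()))
print(f"ragas_score: {ragas_score:.3f}")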
How to Use RAGAS
To use RAGAS, you need a handful of questions and, if you're using context recall, a reference answer for each. Most of the metrics do not require any labeled data, so you can run an evaluation without first building a human-annotated test dataset.
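The evaluator chains shown below consume the output of a QA chain. Here is a minimal sketch of producing that output with LangChain's RetrievalQA, assuming an llm and a retriever have already been configured elsewhere (both are placeholders here):

from langchain.chains import RetrievalQA

# Assumption: `llm` and `retriever` are defined elsewhere, e.g. a chat model and a
# vector-store retriever. return_source_documents=True keeps the retrieved contexts
# in the output so the evaluator chains can inspect them.
qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=retriever,
    return_source_documents=True,
)

# The output dict contains "query", "result" (the generated answer) and "source_documents".
result = qa_chain({"query": "What does RAGAS measure?"})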
Here's a Python code snippet showing how to use RAGAS for evaluation:
from ragas.metrics import faithfulness, answer_relevancy, context_relevancy, context_recall
from ragas.langchain import RagasEvaluatorChain

# make one evaluator chain per metric
eval_chains = {
    m.name: RagasEvaluatorChain(metric=m)
    for m in [faithfulness, answer_relevancy, context_relevancy, context_recall]
}

# evaluate: `result` is the output dict of the QA chain
for name, eval_chain in eval_chains.items():
    score_name = f"{name}_score"
    print(f"{score_name}: {eval_chain(result)[score_name]}")
In this code, RagasEvaluatorChain is used to create an evaluator chain for each metric. The __call__() method of each evaluator chain is then invoked with the outputs of the QA chain to run the evaluations.
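One caveat: as noted above, context recall needs a reference answer, and the loop above only passes the QA chain's output. The sketch below supplies one; the exact input key expected by the evaluator chain depends on the ragas version, so treat the ground_truths key here as an assumption to check against your installed release:

# Assumption: older ragas releases expect the reference answer under a
# "ground_truths" key (a list of strings) merged into the QA chain output.
inputs = {**result, "ground_truths": ["<your reference answer here>"]}
recall = eval_chains["context_recall"](inputs)["context_recall_score"]
print(f"context_recall_score: {recall}")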
RAGAS is a powerful tool for evaluating RAG pipelines: it provides actionable metrics with as little annotated data as possible, making evaluation cheaper and faster, and it helps developers ensure their QA systems are robust and ready for deployment.