Zach Anderson
February 26, 2025 12:07
LangChain introduces OpenEvals and AgentEvals to simplify the evaluation of large language models, providing developers with prebuilt tools and frameworks.
LangChain, a prominent player in the artificial intelligence field, has launched two new packages, OpenEvals and AgentEvals, aimed at simplifying the evaluation of large language model (LLM) applications. According to LangChain, the packages give developers robust frameworks and sets of prebuilt evaluators that streamline the evaluation of LLM-driven applications and agents.
Understanding the role of evaluation
Evaluations, often called evals, are essential for determining the quality of LLM outputs. They involve two main components: the data being evaluated and the metrics used to evaluate it. The quality of the data has a significant impact on how well an evaluation reflects real-world usage, and LangChain emphasizes the importance of curating high-quality datasets tailored to specific use cases.
Evaluation metrics are usually customized to the application's goals. To address common evaluation needs, LangChain developed OpenEvals and AgentEvals to share prebuilt solutions that capture common evaluation patterns and best practices.
General evaluation types and best practices
OpenEvals and AgentEvals focus on two main approaches to evaluation:
- Customizable evaluators: broadly applicable LLM-as-a-judge evaluators that developers can adapt from prebuilt examples to meet specific requirements.
- Use-case-specific evaluators: designed for particular applications, such as extracting structured content from documents or managing tool calls and agent trajectories. LangChain plans to expand these libraries with more targeted evaluation techniques.
LLM-as-a-Judge Evaluation
LLM-as-a-judge evaluation is widely used because it is well suited to assessing natural language outputs. Such evaluations can be reference-free, allowing judgments to be made without ground-truth answers. OpenEvals supports this process by providing customizable starter prompts, support for few-shot examples, and reasoning comments from the judge for transparency.
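As an illustration, the snippet below sketches how a reference-free judge could be built with OpenEvals in Python. It follows the pattern shown in the package's launch materials; the prompt constant, feedback key, model identifier, and result fields shown here are assumptions and may differ from the current API.

```python
# A minimal sketch of a reference-free LLM-as-a-judge evaluator with OpenEvals.
# Requires `pip install openevals` and an OpenAI API key; prompt constant,
# feedback key, and model string are illustrative assumptions.
from openevals.llm import create_llm_as_judge
from openevals.prompts import CONCISENESS_PROMPT

# Build a judge from one of the prebuilt starter prompts.
conciseness_judge = create_llm_as_judge(
    prompt=CONCISENESS_PROMPT,
    feedback_key="conciseness",
    model="openai:o3-mini",
)

# Score a single input/output pair; no reference answer is needed, and the
# result includes the judge's reasoning for transparency.
result = conciseness_judge(
    inputs="How is the weather in San Francisco?",
    outputs="Thanks for asking! The current weather in San Francisco is sunny and 90 degrees.",
)
print(result)  # e.g. {"key": "conciseness", "score": False, "comment": "..."}
```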
Structured Data Evaluation
For applications that require structured output, OpenEvals provides tools to check that a model's output conforms to a predefined format. This is important for tasks such as extracting structured information from documents or validating parameters for tool calls. OpenEvals supports exact-match comparison of structured outputs as well as LLM-as-a-judge validation.
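A minimal sketch of the exact-match approach is shown below. The module path and function name (`openevals.exact.exact_match`) are taken from the package's early documentation and should be treated as assumptions.

```python
# A hedged sketch of checking structured output against a reference with
# OpenEvals; module path and function name are assumptions.
from openevals.exact import exact_match

# Structured output produced by the model (e.g. fields extracted from a document).
outputs = {"name": "Ada Lovelace", "birth_year": 1815}

# The expected structured result for this document.
reference_outputs = {"name": "Ada Lovelace", "birth_year": 1815}

# Exact-match comparison of the two structures; for fuzzier checks an
# LLM-as-a-judge evaluator can be used instead.
result = exact_match(outputs=outputs, reference_outputs=reference_outputs)
print(result)  # e.g. {"key": "exact_match", "score": True}
```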
Agent Evaluation: Trajectory Evaluation
Agent evaluation focuses on the sequence of actions an agent takes to accomplish a task, including the trajectory of tool selection and tool use. AgentEvals provides mechanisms to assess whether an agent calls the correct tools and follows the appropriate sequence.
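The sketch below illustrates how a trajectory-match evaluator might be configured with AgentEvals, comparing an agent's message-and-tool-call trajectory against a reference. The factory name, module path, match mode, and message format are assumptions based on the package's launch README.

```python
# A hedged sketch of trajectory matching with AgentEvals; trajectories are
# modeled as OpenAI-style message lists that include tool calls.
from agentevals.trajectory.match import create_trajectory_match_evaluator

# "superset" allows the agent to call extra tools beyond the reference,
# as long as all reference tool calls appear.
evaluator = create_trajectory_match_evaluator(trajectory_match_mode="superset")

reference_trajectory = [
    {"role": "user", "content": "What is the weather in San Francisco?"},
    {
        "role": "assistant",
        "tool_calls": [
            {"function": {"name": "get_weather", "arguments": '{"city": "San Francisco"}'}}
        ],
    },
    {"role": "tool", "content": "It's 75 degrees and sunny."},
    {"role": "assistant", "content": "It's 75 degrees and sunny in San Francisco."},
]

# In practice this would come from the agent under test; reusing the reference
# here just demonstrates the call shape.
agent_trajectory = reference_trajectory

result = evaluator(outputs=agent_trajectory, reference_outputs=reference_trajectory)
print(result)  # e.g. {"key": "trajectory_superset_match", "score": True, ...}
```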
Tracking and future development
LangChain recommends using LangSmith to track evaluations over time. LangSmith provides tracing, evaluation, and experimentation tools that support the development of LLM applications. Notable companies such as Elastic and Klarna use LangSmith to evaluate their GenAI applications.
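For example, an OpenEvals judge can be plugged into a LangSmith experiment roughly as follows. The dataset name and target function here are hypothetical, and the `evaluate` call reflects recent LangSmith SDK conventions, so details may need adjusting for a given SDK version.

```python
# A hedged sketch of running an OpenEvals judge inside a LangSmith experiment.
# Assumes LANGSMITH_API_KEY and OPENAI_API_KEY are set and that a dataset named
# "my-qa-dataset" (hypothetical) already exists.
from langsmith import Client
from openevals.llm import create_llm_as_judge
from openevals.prompts import CORRECTNESS_PROMPT

correctness_judge = create_llm_as_judge(
    prompt=CORRECTNESS_PROMPT,
    feedback_key="correctness",
    model="openai:o3-mini",
)

def correctness_evaluator(inputs: dict, outputs: dict, reference_outputs: dict):
    # Delegate scoring to the OpenEvals judge; the returned score is logged
    # as feedback on the experiment run.
    return correctness_judge(
        inputs=inputs,
        outputs=outputs,
        reference_outputs=reference_outputs,
    )

def target(inputs: dict) -> dict:
    # Hypothetical application under test; replace with your own chain or agent.
    return {"answer": "Paris is the capital of France."}

client = Client()
client.evaluate(
    target,
    data="my-qa-dataset",
    evaluators=[correctness_evaluator],
    experiment_prefix="openevals-correctness",
)
```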
LangChain's initiative to codify best practices is ongoing, with plans to introduce more specialized evaluators for common use cases. Developers are encouraged to contribute their own evaluators or suggest improvements through GitHub.
Image Source: Shutterstock