
[Webinar] How to Evaluate LLMs: Benchmarks, Vibe Checks, Judges, and Beyond

This session focuses on why LLM evaluation matters for improving models and applications, assessing an LLM’s suitability for a task, and deciding whether fine-tuning or alignment is needed. With multiple evaluation methods available, each with its own pros and cons, we will guide you through choosing the right approach. We’ll also demonstrate how to use LangChain evaluators and LangSmith to run comparative analyses of leading open-source LLMs in the 7B parameter class.

Key takeaways:

  • Explore the different LLM evaluation methods, from academic benchmarks and vibe checks to human evaluations and LLMs as judges.
  • Learn how to use LangChain evaluators to run LLM-as-judge evals and log your traces to LangSmith (see the sketch after this list).
  • See LLMs used as judges in real evaluations of prominent 7B models, including Gemma 7B.
  • Discover essential tips and best practices for a comprehensive approach to LLM evaluation.
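To give a flavor of what that looks like in practice, here is a minimal sketch of an LLM-as-judge evaluation using LangChain’s built-in criteria evaluator, with traces logged to LangSmith via environment variables. The judge model, project name, criterion, and example prompt/response are illustrative assumptions, not the exact setup from the webinar, and import paths may vary with your installed LangChain version.

import os

from langchain.evaluation import load_evaluator
from langchain_openai import ChatOpenAI

# Log evaluation traces to LangSmith (assumes LANGCHAIN_API_KEY is set in your environment).
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "7b-model-evals"  # illustrative project name

# A capable chat model acts as the judge; swap in whichever judge model you prefer.
judge_llm = ChatOpenAI(model="gpt-4", temperature=0)

# The built-in "criteria" evaluator asks the judge to score an output against a named criterion.
evaluator = load_evaluator("criteria", criteria="helpfulness", llm=judge_llm)

# Hypothetical output from a 7B model under evaluation.
result = evaluator.evaluate_strings(
    input="Explain what perplexity measures in language modeling.",
    prediction="Perplexity measures how well a model predicts a sample; lower is better.",
)
print(result["score"], result["value"])
print(result["reasoning"])

Running the same prompts through several 7B models and comparing the logged scores side by side in LangSmith is one way to set up the kind of comparison discussed in the session.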


Watch now!

If you want to learn more, get started with Deci’s Generative AI Platform today.
