Video

Webinar: Open Source LLMs vs APIs: Pros, Cons & Everything in Between

Delivering high-performing generative AI applications hinges on having control over your models, from fine-tuning and optimization to deployment. Closed-source models served via APIs are convenient but provide little to no control over a model's parameters or deployment. Open source models offer a path to greater flexibility; however, achieving strong performance and customization with them can be an uphill battle. To unlock this potential fully, developers need tools designed specifically for the challenges of generative AI.

In this webinar, Yonatan Geifman, Co-Founder and CEO of Deci, navigates these complexities. Expect insights into advanced inference acceleration techniques, strategic deployment, and ways to optimize your workflow and enhance model performance.

Watch now to broaden your generative AI expertise:

  • Gain a deep understanding of the generative AI inference stack and learn how to make informed decisions when selecting tools for optimal resource allocation and latency reduction.
  • Discover strategies to accelerate LLM inference, including efficient batching techniques, multi-GPU utilization, selective quantization, and hybrid compilation.
  • Become familiar with Deci’s high-performance SDK, designed to supercharge your models’ performance in on-premises deployments.
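One of the acceleration strategies above, selective quantization, can be illustrated with a minimal sketch using PyTorch's built-in dynamic quantization. The toy two-layer model below is a hypothetical stand-in for an LLM block (it is not Deci's SDK or any specific model from the webinar); the key idea is that only the `nn.Linear` layers are converted to int8, while the rest of the graph stays in float32:

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM feed-forward block (hypothetical, for illustration only).
model = nn.Sequential(
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
)

# Selective quantization: only nn.Linear modules are replaced with int8
# dynamically quantized equivalents; activations and other ops remain float32.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
with torch.no_grad():
    out = quantized(x)
print(out.shape)  # same output shape as the float model
```

Restricting quantization to the weight-heavy layers is what keeps the accuracy loss small while still shrinking memory footprint and speeding up matrix multiplies on CPU.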


If you want to learn more about optimizing your generative AI applications, book a demo here.

You May Also Like

[Webinar] How to Speed Up YOLO Models on Snapdragon: Beyond Naive Quantization

[Webinar] How to Evaluate LLMs: Benchmarks, Vibe Checks, Judges, and Beyond

[Webinar] How to Boost Accuracy & Speed in Satellite & Aerial Image Object Detection
