
Webinar: Can You Achieve GPU Performance When Running CNNs on a CPU?

A GPU is often the first choice for running large deep learning models efficiently. But is there a way to optimize inference performance on a CPU to achieve GPU-like performance?

In this webinar, you’ll learn how to maximize CPU compute power, enabling you to deploy larger models and implement new use cases on existing hardware or reduce your cloud costs.

Join our deep learning research engineers, Amos Gropp and Akhiad Bercovich, to:

  • Learn actionable tips on accelerating inference performance on CPU
  • Discover new state-of-the-art models that deliver unparalleled accuracy and runtime performance on CPU

Watch the webinar now.

You May Also Like

[Webinar] How to Speed Up YOLO Models on Snapdragon: Beyond Naive Quantization

[Webinar] How to Evaluate LLMs: Benchmarks, Vibe Checks, Judges, and Beyond

[Webinar] How to Boost Accuracy & Speed in Satellite & Aerial Image Object Detection

The snippet below loads a pretrained ResNet-50 image classifier from Hugging Face Transformers, a typical starting point for experimenting with CNN inference on a CPU:
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Load the preprocessing pipeline and the pretrained ResNet-50 classifier
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
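
As a minimal sketch of what CPU inference with this model could then look like (the image path and thread setting below are illustrative assumptions, not part of the original snippet), you could run:

import os
import torch
from PIL import Image

# Illustrative: use all available CPU cores for intra-op parallelism
torch.set_num_threads(os.cpu_count())

# Hypothetical local image used only for this example
image = Image.open("example.jpg")

# Preprocess the image and run a forward pass on the CPU
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to a human-readable class label
print(model.config.id2label[logits.argmax(-1).item()])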