Video

Webinar: Can You Achieve GPU Performance When Running CNNs on a CPU?

A GPU is often the first choice for running large deep learning models efficiently. But is there a way to optimize inference on a CPU to achieve GPU-like performance?

In this webinar, you’ll learn how to maximize CPU compute power so you can deploy larger models, implement new use cases on existing hardware, and reduce your cloud costs.

Join our deep learning research engineers, Amos Gropp and Akhiad Bercovich, to:

  • Learn actionable tips on accelerating inference performance on CPU
  • Discover new state-of-the-art models that deliver unparalleled accuracy and runtime performance on CPU

Watch the webinar now.

You May Also Like

Webinar: How to Optimize Latency for Edge AI Deployments

Gen AI Models: Open Source vs Closed Source—Pros, Cons & Everything in Between

Webinar: The Making of YOLO-NAS, a Foundation Model, with NAS

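The snippet below loads a pretrained ResNet-50 image classifier, a standard CNN, with Hugging Face Transformers; it is the kind of model discussed in the webinar and serves as the starting point for the CPU inference sketch that follows.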
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Load the feature extractor that handles image preprocessing (resizing, normalization)
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")

# Load the pretrained ResNet-50 image classification model
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
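For context, here is a minimal sketch of how this model could then be run for inference on a CPU, assuming PyTorch and the Pillow library are installed; the image path, thread count, and variable names are illustrative and not taken from the webinar.

import torch
from PIL import Image

# Choose the number of intra-op CPU threads (tune to the cores on your machine)
torch.set_num_threads(4)

# Hypothetical local image; replace with your own input
image = Image.open("example.jpg")

# Preprocess the image and run a single forward pass on CPU
inputs = extractor(images=image, return_tensors="pt")
model.eval()
with torch.no_grad():
    logits = model(**inputs).logits

# Map the top logit back to a human-readable class label
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])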