Case Study

The Future of CPU in Deep Learning Inference


Find Out How Deci and Intel Boosted the Deep Learning Inference Performance of ResNet-50 by 11.8x while Maintaining Accuracy

Running efficient deep learning models on CPUs rather than GPUs has the potential to dramatically open up the market for AI applications, delivering practical new solutions across multiple industries and technologies. However, deep learning inference on CPUs is challenging: without optimization, it typically suffers from high latency and low throughput.

In this case study you will learn about:

  • The complexities, challenges, and opportunities of running deep learning models on CPUs.
  • How Deci’s AutoNAC technology, together with Intel’s OpenVINO toolkit, reduced ResNet-50’s latency by a factor of 11.8x (a general OpenVINO usage sketch follows this list).
  • How your company can dramatically cut cloud costs and enable real-time performance on edge devices by moving from GPUs to CPUs.
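
As context for the OpenVINO side of the pipeline, the sketch below shows the general pattern for compiling and running a model on a CPU with OpenVINO's Python runtime. It is not the case study's actual pipeline; the ONNX file name, input shape, and dummy input are illustrative assumptions.

from openvino.runtime import Core
import numpy as np

# Read a ResNet-50 model exported to ONNX
# ("resnet50.onnx" is a placeholder for your own export).
core = Core()
model = core.read_model("resnet50.onnx")

# Compile the model for CPU execution.
compiled_model = core.compile_model(model, "CPU")

# Run inference on a dummy batch of one 224x224 RGB image.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled_model([dummy_input])[compiled_model.output(0)]
print(result.shape)  # e.g. (1, 1000) class scores for an ImageNet classifier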

Complete the form to get immediate access to the case study.

Access Case Study Now

For reference, the baseline ResNet-50 model and its feature extractor can be loaded from the Hugging Face Hub with the Transformers library:
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Load the preprocessing pipeline (resizing and normalization) for ResNet-50.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")

# Load the pretrained ResNet-50 image-classification model.
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
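
A minimal usage sketch might then look like the following; the sample image URL is an illustrative assumption, and any RGB image works.

from PIL import Image
import requests
import torch

# Fetch a sample image (this COCO URL is just an example).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and run a forward pass without tracking gradients.
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its ImageNet class label.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])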