A GPU is often the first choice for running large deep learning models effectively. But is there a way to optimize inference on a CPU to achieve GPU-like performance?
In this webinar, you’ll learn how to maximize CPU compute power, enabling you to deploy larger models and implement new use cases on existing hardware, or reduce your cloud costs.
Join our deep learning research engineers, Amos Gropp and Akhiad Bercovich, to:
- Get actionable tips for accelerating inference performance on CPU
- Discover new state-of-the-art models that deliver unparalleled accuracy and runtime performance on CPU
Watch the webinar now.