Video

GTC Talk: How to Improve Model Efficiency with Hardware Aware Neural Architecture Search

Running successful and efficient inference at scale requires meeting multiple performance criteria at once, including accuracy, latency, throughput, and model size.

Neural Architecture Search (NAS) can automate the cumbersome deep learning model development process and quickly generate deep neural networks designed to meet specific production constraints. Deci’s AutoNAC (Automated Neural Architecture Construction) technology does this by searching for the architecture that best balances the many parameters required to build powerful and efficient deep learning models.
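To make the idea concrete, the sketch below shows a toy hardware-aware random search: candidate architectures are sampled from a small search space, their latency is measured on the current device, candidates that miss an assumed latency budget are rejected, and the rest are ranked by a placeholder accuracy proxy. This is only an illustration of the general NAS loop, not Deci’s AutoNAC algorithm; the search space, latency budget, and scoring function are all assumptions.

import random
import time
import torch
import torch.nn as nn

# Hypothetical search space: depth and channel width of each candidate network.
SEARCH_SPACE = {"num_blocks": [2, 4, 6], "width": [32, 64, 128]}
LATENCY_BUDGET_MS = 5.0  # assumed production constraint for the target device

def build_candidate(num_blocks, width):
    # Assemble a simple convolutional classifier for the sampled configuration.
    layers = [nn.Conv2d(3, width, 3, padding=1), nn.ReLU()]
    for _ in range(num_blocks - 1):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 10)]
    return nn.Sequential(*layers)

def measure_latency_ms(model, runs=20):
    # Time forward passes on the hardware this script runs on (CPU here).
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        for _ in range(5):  # warm-up
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return (time.perf_counter() - start) / runs * 1000

def accuracy_proxy(num_blocks, width):
    # Placeholder score; a real NAS would train candidates or use a predictor.
    return 0.5 + 0.05 * num_blocks + 0.001 * width

best = None
for _ in range(10):  # random search over the toy space
    nb = random.choice(SEARCH_SPACE["num_blocks"])
    w = random.choice(SEARCH_SPACE["width"])
    latency = measure_latency_ms(build_candidate(nb, w))
    if latency > LATENCY_BUDGET_MS:
        continue  # reject candidates that miss the hardware constraint
    score = accuracy_proxy(nb, w)
    if best is None or score > best[0]:
        best = (score, nb, w, latency)

print("best candidate (score, blocks, width, latency_ms):", best)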

In this talk, Yonatan Geifman, Deci’s CEO and Co-founder, covers the evolution of NAS technology and the recent advances that are making NAS viable for industry applications and commercial use. He outlines the algorithmic optimization process with case studies and best practices for achieving best-in-class accuracy and latency results on NVIDIA T4 GPUs, Jetson Nano, and Xavier NX devices.

Learn how NAS-powered AutoNAC can optimize your models according to your specific use case.


Loading a Pretrained ResNet-50 with Hugging Face Transformers
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Load the matching preprocessing pipeline for the checkpoint
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")

# Load the pretrained ResNet-50 image classification model
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
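As a quick check that the snippet above works end to end, the loaded extractor and model can classify a single image. This is a minimal sketch: the image path is a placeholder for any local RGB image.

import torch
from PIL import Image

# Placeholder path; substitute any local RGB image
image = Image.open("example.jpg")

# Preprocess the image into the tensor format the model expects
inputs = extractor(images=image, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its ImageNet label
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)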