Video

GTC Talk: An End-to-End Walkthrough for Deploying Deep Learning Models on Jetson

Join Ofer Baratz, Deep Learning Product Manager, and Nadav Cohen, VP Product, for a technical session packed with practical tips and tricks, covering everything from model selection and training tools to running successful inference at the edge.

They demonstrate how to benchmark different models, apply training best practices, and easily implement TensorRT-based compilation and quantization, all while using the latest open-source libraries and other free community tools.
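The talk walks through these steps live on a Jetson device. As a rough illustration of the export-and-compile flow it describes (not the speakers' exact code), the sketch below exports a torchvision ResNet-50 to ONNX and then calls trtexec, the command-line tool bundled with TensorRT, to build an FP16 engine; the model choice, file names, and flags are assumptions for illustration only.

# Minimal sketch of an export-and-compile flow. Model choice, file names,
# and trtexec flags are illustrative assumptions, not taken from the talk.
import subprocess

import torch
import torchvision

# 1. Start from a trained PyTorch model (here, an off-the-shelf ResNet-50).
model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()

# 2. Export the model to ONNX with a fixed input shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=17,
)

# 3. Compile the ONNX graph into a TensorRT engine on the Jetson.
#    --fp16 enables half precision; --int8 (plus calibration data)
#    would enable INT8 quantization instead.
subprocess.run(
    [
        "trtexec",
        "--onnx=resnet50.onnx",
        "--saveEngine=resnet50_fp16.engine",
        "--fp16",
    ],
    check=True,
)

The resulting engine file can then be loaded with the TensorRT runtime for low-latency inference on the device.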

After watching this talk, you will have practical knowledge of how to cut the guesswork, quickly reach SOTA performance, maximize your Jetson devices’ compute power, and boost runtime performance for any AI-based application.


from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Load a pretrained ResNet-50 image classifier and its matching
# preprocessing configuration from the Hugging Face Hub.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
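As a quick sanity check before any device-specific optimization, the classifier can be run on a single image as sketched below (illustrative only; the image path is a placeholder, and torch and Pillow are assumed to be installed).

import torch
from PIL import Image

# Preprocess one image and run a forward pass (placeholder file name).
image = Image.open("cat.jpg")
inputs = extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its ImageNet label.
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])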