Join Ofer Baratz, Deep Learning Product Manager, and Nadav Cohen, VP Product, for a technical session packed with practical tips and tricks, covering everything from model selection and training tools to running successful inference at the edge.
They demonstrate how to benchmark different models, leverage training best practices, and easily implement TensorRT-based compilation and quantization, all while using the latest open-source libraries and other free community tools.
After watching this talk, you will have practical knowledge of how to cut the guesswork, quickly reach SOTA performance, maximize your Jetson devices’ compute power, and boost runtime performance for any AI-based application.