Knowledge Distillation


Knowledge distillation eases the challenge of deploying models on edge devices with limited resources by transferring knowledge from a large model (the teacher) to a small one (the student) without compromising accuracy.

As a result, the student becomes a compressed, less expensive version of the large model and can be deployed effectively on less powerful hardware.
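In its most common form, the student is trained to match both the ground-truth labels and the teacher's softened output distribution. Below is a minimal sketch of that combined loss in PyTorch; the temperature and alpha values are illustrative defaults, not values prescribed here.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    # Soften both distributions with the temperature so the teacher's
    # information about non-target classes is preserved.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term between the softened distributions, scaled by T^2 to keep
    # gradient magnitudes comparable as the temperature changes.
    kd_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Ordinary cross-entropy against the hard labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    # alpha balances imitation of the teacher against supervised learning.
    return alpha * kd_loss + (1 - alpha) * ce_loss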

To learn more about knowledge distillation, read the blog.

For example, a large pretrained image classifier such as ResNet-50 can be loaded with Hugging Face Transformers and serve as the teacher:
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Preprocessing pipeline that resizes and normalizes images the way ResNet-50 expects.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")

# Pretrained ResNet-50 with an ImageNet classification head.
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
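To show how such a pretrained teacher might be used, here is a minimal distillation sketch, assuming a smaller student checkpoint ("microsoft/resnet-18") and a dummy batch in place of a real dataloader; it reuses the distillation_loss function defined above.

import torch
from transformers import AutoModelForImageClassification

# Frozen teacher and a smaller trainable student (the student checkpoint is an
# illustrative assumption, not taken from the original snippet).
teacher = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50").eval()
student = AutoModelForImageClassification.from_pretrained("microsoft/resnet-18")
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

# Dummy batch so the sketch runs end to end; real training would iterate over a
# preprocessed image dataset (e.g. batches produced by the extractor above).
pixel_values = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))

with torch.no_grad():
    teacher_logits = teacher(pixel_values=pixel_values).logits  # soft targets
student_logits = student(pixel_values=pixel_values).logits

loss = distillation_loss(student_logits, teacher_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()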