Knowledge Distillation

Knowledge distillation eases the deployment of models on resource-constrained edge devices by transferring knowledge from a large teacher model to a smaller student model without compromising accuracy.

As a result, the student becomes a compressed, less expensive version of the large model that can be deployed effectively on less powerful hardware.
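
To make the idea concrete, below is a minimal sketch of the classic soft-label distillation loss (Hinton et al., 2015), one common way to transfer knowledge from teacher to student. The temperature and weighting values, along with the `teacher` and `student` model names, are illustrative assumptions rather than settings prescribed here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend the hard-label loss with a soft-label loss that
    transfers the teacher's knowledge to the student.

    temperature and alpha are illustrative hyperparameters.
    """
    # Standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # KL divergence between the softened teacher and student
    # distributions; a temperature > 1 exposes the teacher's
    # information about relative class similarities.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_soft_student, soft_targets,
                         reduction="batchmean") * temperature ** 2

    return alpha * hard_loss + (1 - alpha) * soft_loss

# Illustrative training step: `teacher` and `student` are any
# classification models with matching output dimensions.
# teacher.eval()
# with torch.no_grad():
#     teacher_logits = teacher(inputs)
# loss = distillation_loss(student(inputs), teacher_logits, labels)
```

During training, the teacher runs in inference mode and only the student's weights are updated; the squared-temperature factor keeps the soft-loss gradients on the same scale as the hard-label loss.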

To learn more about knowledge distillation, read the full blog post.
