Model Zoo

Get access to SOTA text, code, and image generation models, and computer vision models across various tasks including classification, segmentation, and detection.

YOLO-NAS-Sat Model Card

YOLO-NAS-Sat is a small-object detection model, pre-trained on COCO and fine-tuned on the DOTA 2.0 dataset. Its medium variant, YOLO-NAS-Sat M, runs 2.38 times faster than its YOLOv8 counterpart.

DeciCoder-6B

DeciCoder-6B is a 6 billion parameter decoder-only code completion model trained on the Python, Java, Javascript, Ruby, Rust, C++, C, and C# subsets of the Starcoder Training Dataset.

DeciDiffusion 2.0

DeciDiffusion 2.0 is a 732 million parameter text-to-image latent diffusion model.

DeciLM-7B-Instruct

DeciLM-7B-Instruct is a derivative of the recently released DeciLM-7B, a pre-trained, high-efficiency generative text model with 7 billion parameters. It has been fine-tuned for short-form instruction following.
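As a minimal sketch of trying the model with Hugging Face Transformers (the repo id `Deci/DeciLM-7B-instruct` and the need for `trust_remote_code=True` are assumptions based on the model's public release, not details stated on this page):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id for DeciLM-7B-Instruct
model_id = "Deci/DeciLM-7B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory versus float32
    trust_remote_code=True,      # the repo ships custom model code
)

prompt = "How do I bake a simple sourdough loaf?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Greedy decoding (`do_sample=False`) keeps the output deterministic, which is handy for quick smoke tests.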

DeciLM-7B Model Card

DeciLM-7B is a 7.05 billion parameter decoder-only text generation model. This Apache 2.0-licensed model is currently the top-performing 7 billion parameter base language model on the Open LLM Leaderboard.

YOLO-NAS Pose Model Card

YOLO-NAS Pose offers a superior latency-accuracy balance compared to YOLOv8 Pose. Specifically, the medium-sized version, YOLO-NAS Pose M, outperforms the large YOLOv8 variant with a 38.85% reduction in latency on a 4th-gen Intel Xeon CPU, all while achieving a 0.27 boost in [email protected] score.

Discover Easy-to-Use Training

Simplify deep learning development with SuperGradients, an open-source, production-ready library for training PyTorch-based computer vision models.

DeciDiffusion 1.0 Model Card

DeciDiffusion 1.0 is an 820 million parameter text-to-image latent diffusion model trained on the LAION-v2 dataset and fine-tuned on the LAION-ART dataset.

DeciLM 6B

DeciLM 6B is a 5.7 billion parameter decoder-only text generation model. It outpaces pretrained models in its class, with throughput up to 15 times that of Llama 2 7B.

DeciCoder 1B

DeciCoder 1B is a 1 billion parameter decoder-only code completion model trained on the Python, Java, and Javascript subsets of the Starcoder Training Dataset.
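A minimal code-completion sketch with Transformers (the repo id `Deci/DeciCoder-1b` and the `trust_remote_code` requirement are assumptions from the model's public release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id for DeciCoder 1B
checkpoint = "Deci/DeciCoder-1b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Ask the model to complete a Python function body
inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=40)
completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(completion)
```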

YOLO-NAS Model Card

YOLO-NAS is a groundbreaking object detection foundation model, pre-trained on prominent datasets such as COCO and Objects365, and evaluated on the COCO and RF100 benchmarks.

T5

Dive into Google’s T5, a powerful Text-to-Text Transformer model. Understand its capabilities, applications, and how to use it efficiently.

DEKR

DEKR is a pose estimation model pre-trained on the COCO 2017 and CrowdPose datasets. It was introduced on April 6, 2021, in the paper "Bottom-Up Human Pose Estimation via Disentangled Keypoint Regression" by Zigang Geng, Ke Sun, Bin Xiao, Zhaoxiang Zhang, and Jingdong Wang.

ResNet-50 with Hugging Face Transformers
# Load Microsoft's ResNet-50 image classifier from the Hugging Face Hub
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# The feature extractor handles resizing and normalization of input images
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")

# The model maps preprocessed pixels to ImageNet class logits
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
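A self-contained sketch of running inference with that extractor/model pair (the input here is a synthetic blank image standing in for a real photo, so the predicted label is not meaningful):

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")

# Synthetic stand-in for a real photo; replace with Image.open("photo.jpg")
image = Image.new("RGB", (224, 224), color="white")
inputs = extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its ImageNet class name
label = model.config.id2label[logits.argmax(-1).item()]
print(label)
```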