Make the most out of your Intel CPUs

Accelerate inference performance and
maximize the utilization of your Intel CPUs.

Deci delivers breakthrough inference performance on 4th Gen Intel Xeon Scalable processors

Deci’s hardware-aware models, custom-generated with the AutoNAC engine, enable AI developers to achieve GPU-like inference performance on CPUs in production for both computer vision and NLP tasks.


Build & Deploy Better DL Models, Faster

Deci Platform

1. Foundation or Custom Models: choose an ultra-performant model or generate a custom one.
   - AutoNAC: Neural Architecture Search Engine (SaaS)
   - DataGradients: Dataset Analyzer (on-prem)

2. Train or Fine-Tune: use Deci’s library and custom recipes to train on-prem (see the sketch after this list).
   - SuperGradients: PyTorch Training Library (on-prem)

3. Optimize & Run: apply acceleration techniques and run self-hosted inference anywhere.
   - Infery: Optimization & Inference Engine SDK (on-prem)
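
As a taste of the train and fine-tune step, the sketch below loads a pretrained architecture from SuperGradients’ model zoo and runs a forward pass on CPU. The model name, class count, and pretrained weights are illustrative defaults, and exact call signatures may vary between SuperGradients versions.

import torch
from super_gradients.training import models

# Illustrative example: load a ResNet-18 from the SuperGradients model zoo.
# The "imagenet" pretrained weights are an assumption for this model name.
model = models.get("resnet18", num_classes=1000, pretrained_weights="imagenet")
model.eval()

# Single forward pass on CPU with a dummy ImageNet-sized batch.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)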

Get Similar Results for Your Specific Use Case

Enable Real-Time Inference on CPU

A multinational manufacturer and distributor of electricity and gas was looking to reduce the latency of automatic text extraction from documents and images using OCR and NLP.

Using Deci’s AutoNAC engine, the customer achieved 8.3x lower latency than its original model while also improving word-level accuracy from 77.17% to 80.21%.

How can Deci help your CPU-based computer vision and NLP applications?

Find the Optimal Intel CPU for Your Model

Benchmark your models’ expected inference performance across multiple hardware types on Deci’s online hardware fleet.
Get actionable insights for the ideal hardware and production settings.
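
Deci’s hardware fleet runs these measurements for you in the cloud. For intuition, a rough local equivalent is to time a baseline model directly on your own Intel CPU, as in this minimal sketch (the torchvision ResNet-50 and input shape are placeholders, not Deci-optimized models):

import time
import torch
from torchvision.models import resnet50

# Placeholder baseline: a torchvision ResNet-50 evaluated on the local CPU.
model = resnet50(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # Warm-up iterations so first-run overhead does not skew the numbers.
    for _ in range(10):
        model(dummy)
    # Average latency over repeated forward passes.
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    elapsed = time.perf_counter() - start

print(f"Mean CPU latency: {1000 * elapsed / runs:.2f} ms")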

Speed Up Inference with a Click of a Button

Easily compile and quantize your computer vision and NLP models with Intel OpenVINO to improve runtime on your Intel CPUs.
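
Within the Deci platform this is a one-click flow; a rough standalone equivalent with the OpenVINO Python API might look like the sketch below. The ONNX file path is a placeholder, and INT8 post-training quantization (typically done with NNCF) is left out for brevity.

import numpy as np
import openvino as ov

core = ov.Core()

# Convert an exported ONNX model to OpenVINO's in-memory representation.
ov_model = ov.convert_model("model.onnx")  # placeholder path to your exported model

# Compile for the local Intel CPU and run a single inference request.
compiled = core.compile_model(ov_model, device_name="CPU")
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(dummy)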

Build custom CNN architectures tailored for your Intel CPU and applications

Automatically find accurate and efficient architectures tailored for your specific Intel device
and performance targets with Deci’s Automatic Neural Architecture Search (AutoNAC) engine.

Easily compare various models' performance on your existing Intel CPUs

Measure inference time of different models on various Intel CPUs directly on Deci’s cloud-based hardware fleet.
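
For reference, a standard baseline classifier can be pulled straight from the Hugging Face Hub. The snippet below loads an off-the-shelf ResNet-50 image classifier, shown here as a generic starting point rather than a Deci-optimized model: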

from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Load the preprocessing pipeline and the pretrained ResNet-50 classifier
# from the Hugging Face Hub.
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
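
Running a forward pass on CPU then only takes preprocessing an image and calling the model; a minimal example, assuming a local image file at a placeholder path:

import torch
from PIL import Image

image = Image.open("sample.jpg")  # placeholder image path

# Preprocess to the tensor format ResNet-50 expects and classify on CPU.
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])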