Make the most out of your Intel CPUs

Accelerate inference performance and
maximize the utilization of your Intel CPUs.

Deci delivers breakthrough inference performance on Intel's 4th Gen Intel Xeon Scalable processors

Deci’s hardware-aware models, custom generated with the AutoNAC engine, enable AI developers to achieve GPU-like AI inference performance on CPUs in production for both computer vision and NLP tasks.


Build & Deploy Better DL Models, Faster

Deci Platform

Foundation or Custom Models
Choose an ultra performant model or generate a custom one.
Tools (on-prem): Neural Architecture Search Engine, Dataset Analyzer

Train
Use Deci's library & custom recipes to train on-prem.
Tool (on-prem): PyTorch Training Library

Optimize & Run
Apply acceleration techniques. Run self-hosted inference anywhere.
Tool (on-prem): Optimization & Inference Engine SDK

Get Similar Results for Your Specific Use Case

Enable Real-Time Inference on CPU

A multinational manufacturer and distributor of electricity and gas was looking to improve the latency of automatic text extraction from documents and images using OCR and NLP.

Using Deci’s AutoNAC engine, the customer achieved an 8.3x latency speedup over its original model while also improving word-level accuracy from 77.17% to 80.21%.

How can Deci help your CPU based
computer vision and NLP applications?

Find The Optimal Intel CPU for your Model

Benchmark your models’ expected inference performance across multiple hardware types on Deci’s online hardware fleet.
Get actionable insights for the ideal hardware and production settings.
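The core of such benchmarking is a warmup-then-measure timing loop. The sketch below is a minimal stand-alone illustration using only the Python standard library; `predict` is a hypothetical stand-in for a model's forward pass, not part of Deci's SDK.

```python
import time
import statistics

def predict(x):
    # Stand-in workload for a model forward pass
    return sum(i * i for i in range(10_000))

def benchmark(fn, arg, warmup=5, runs=50):
    for _ in range(warmup):  # warm caches before measuring
        fn(arg)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(arg)
        latencies.append((time.perf_counter() - start) * 1000)  # ms
    latencies.sort()
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
    }

stats = benchmark(predict, None)
print(stats)
```

Running this on each candidate CPU gives comparable mean and tail-latency numbers; Deci's hardware fleet automates this across many device types.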

Speed Up Inference with a Click of a Button

Easily compile and quantize your computer vision and NLP models with Intel OpenVINO to improve runtime on your Intel CPUs.

Build custom CNN architectures tailored for your Intel CPU and applications

Automatically find accurate and efficient architectures tailored for your specific Intel device
and performance targets with Deci’s Automatic Neural Architecture Search (AutoNAC) engine.

Easily compare various models' performance on your existing Intel CPUs

Measure inference time of different models on various Intel CPUs directly on Deci’s cloud-based hardware fleet.

Baseline example: loading a pretrained ResNet-50 image classifier with Hugging Face Transformers.

from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Preprocessing pipeline (resize/normalize) matching ResNet-50's training setup
extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-50")

# The pretrained image-classification model itself
model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")