Deep Learning
Acceleration Platform

Hardware is no longer a barrier. Build, optimize, and deploy models with stellar inference speed.

Accelerate Inference on Edge or Cloud

Get a 3x-15x inference speedup in throughput and latency while maintaining accuracy, enabling new use cases on your hardware of choice.

Reach Production Faster

Shorten your development cycle from months to weeks with automated tools. No more endless iterations or juggling dozens of different libraries.

Maximize Your Hardware Potential

Scale up on your existing hardware. No infrastructure changes or extra costs required. Gain up to an 80% reduction in compute costs.

The First Production-Aware Deep Learning Platform

The Deci platform builds production constraints, such as optimizing the model for the target inference hardware, into the development process, dramatically increasing your success rate in production.

Accelerated by Revolutionary Technology

Deci provides unmatched, end-to-end, accuracy-preserving inference acceleration for your neural models at the edge, on mobile, or in the cloud.

AI Made by AI
with AutoNAC™

Deci’s platform is powered by AutoNAC (Automated Neural Architecture Construction) technology, our proprietary algorithmic optimization engine that squeezes the most out of any hardware. The AutoNAC engine contains a neural architecture search (NAS) component that revises a given trained model to optimally speed up its runtime, while preserving the model’s baseline accuracy.
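
To make the idea of hardware-aware neural architecture search concrete, here is a minimal conceptual sketch in PyTorch. This is purely illustrative and is not Deci's AutoNAC implementation: the toy search space, the parameter-count "accuracy" proxy, and the 1 ms latency target are all assumptions made for the example.

    # Conceptual sketch only: a toy hardware-aware architecture search loop.
    # NOT Deci's AutoNAC; the search space, latency measurement, and accuracy
    # proxy below are illustrative assumptions.
    import random
    import time

    import torch
    import torch.nn as nn


    def build_candidate(width: int, depth: int) -> nn.Module:
        """Build a small MLP whose size is controlled by the searched parameters."""
        layers, in_dim = [], 128
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers.append(nn.Linear(in_dim, 10))
        return nn.Sequential(*layers)


    def measure_latency(model: nn.Module, runs: int = 50) -> float:
        """Crude latency proxy: average forward-pass time on the current device."""
        x = torch.randn(1, 128)
        with torch.no_grad():
            model(x)  # warm-up
            start = time.perf_counter()
            for _ in range(runs):
                model(x)
        return (time.perf_counter() - start) / runs


    def search(budget: int = 20, latency_target: float = 1e-3) -> nn.Module:
        """Random search: keep the best-scoring candidate under the latency target.
        Here 'score' is a stand-in (parameter count); a real system would train
        and evaluate each candidate's accuracy."""
        best, best_score = None, float("-inf")
        for _ in range(budget):
            width, depth = random.choice([64, 128, 256]), random.choice([1, 2, 4])
            model = build_candidate(width, depth)
            if measure_latency(model) > latency_target:
                continue  # reject candidates that miss the hardware constraint
            score = sum(p.numel() for p in model.parameters())  # accuracy stand-in
            if score > best_score:
                best, best_score = model, score
        return best


    if __name__ == "__main__":
        winner = search()
        if winner is None:
            print("no candidate met the latency target")
        else:
            print("selected architecture:", winner)

A production system would replace the random search with a guided search and the parameter-count proxy with measured accuracy, but the core loop of proposing architectures, checking them against a hardware latency budget, and keeping the best performer is the same.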

Works Seamlessly with the Deep Learning Ecosystem

We made sure you can use the frameworks and tools of your choice. Deci is accessible through a beautiful, friendly UI or a simple API, with full documentation and support.
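
As a rough illustration of what driving such a platform from code could look like, here is a hypothetical sketch of submitting a model to an optimization service over HTTP. The endpoint URL, parameter names, and response shape are placeholders invented for this example, not Deci's documented API.

    # Hypothetical illustration only: the endpoint, parameters, and response
    # shape below are placeholders, not Deci's documented API.
    import requests

    API_URL = "https://api.example-optimizer.com/v1/optimize"  # placeholder endpoint


    def request_optimization(model_path: str, target_hardware: str, api_key: str) -> dict:
        """Upload a trained model and ask the service to optimize it for a target device."""
        with open(model_path, "rb") as model_file:
            response = requests.post(
                API_URL,
                headers={"Authorization": f"Bearer {api_key}"},
                files={"model": model_file},
                data={"hardware": target_hardware, "preserve_accuracy": "true"},
            )
        response.raise_for_status()
        return response.json()  # e.g. a job id to poll for the optimized artifact


    # Example call with placeholder values:
    # job = request_optimization("resnet50.onnx", "nvidia-t4", api_key="YOUR_KEY")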

Join the Brightest Deep Learning Leaders

“Deci delivers optimized deep learning inference on Intel processors, as highlighted in MLPerf, allowing our customers to meet performance SLAs, reduce cost, decrease time to deployment, and scale effectively.”

Monica Livingston

AI Solutions and Sales Director, Intel

"By collaborating with Deci, we aim to help our customers accelerate AI innovation and deploy AI solutions everywhere using our industry-leading platforms, from data centers to edge systems that accelerate high-throughput inference."

Arti Garg

Head of Advanced AI Solutions & Technologies, HPE

"We are excited to be working with Deci's platform - it provided amazing results and achieved 4.6x acceleration on a model we ran in production and helped us provide faster service to our customers.”

Daniel Shichman

CEO, WSC Sports Technologies

"With Deci we were able to get 6.4x higher throughput on our detection model. Its easy-to-use platform allowed us to quickly optimize and benchmark for choosing our best configuration of hardware and parameters."

Santiago Tamagno

CTO, UNX Digital by Grupo Prominente

"The classification model I uploaded and integrated using Infery achieved a 33% performance boost, which is very cool for 10 minutes of work!"

Amir Zait

Algorithm Developer, Sight Diagnostics

“With Deci, we increased the inference throughput of one of our multiclass classification models running on V100 machines by 2.6x, without losing accuracy. Deci can cut 50% off the inference costs in our cloud deployments worldwide.”

Chaim Linhart

CTO and Co-Founder, IBEX Medical Analysis

“Deci’s AISO [AI software optimization] is suitable for both training and inference modes. Deci has advanced innovation in the search for optimal neural network architectures. The solution excels in every area of our assessment.”

Michael Azoff

Chief Analyst, Kisaco Research

Proud to Partner with

Unleash Your
Deep Learning Models