Achieve Real-time
Deep Learning Inference
on Any Hardware

Deci's platform accelerates your deep learning models' inference — cutting latency and boosting throughput — to enable real-time performance on any hardware, without compromising accuracy.

SIGN UP FOR FREE

The Challenge

Successfully deploying deep learning models in commercial applications demands high performance in terms of latency, throughput, and resulting response time. Models with slow response times can lead to a myriad of undesirable outcomes, from poor user experience and reduced customer satisfaction to safety hazards in autonomous driving. In many cases, substandard performance can be a showstopper, even preventing the model’s deployment in production. At best, the workarounds mean either compromising on the model’s accuracy or incurring higher operational costs by defaulting to more expensive hardware.

Deci’s Deep Learning Platform

Deci’s platform enables your deep learning-based applications to run on your existing hardware. Set high-throughput and low-latency goals for your target inference hardware, and the AutoNAC technology optimizes your model to meet them while maintaining its original accuracy.

Benefits:

  • Achieve your real-time inference performance objectives by reducing latency (or increasing throughput) on your existing hardware by up to 10x
  • Run one or multiple models on constrained hardware, enabling inference on edge devices and mobile phones

All while maintaining the same model accuracy!

Achieve State-of-the-Art Performance, Powered by AutoNAC™

Boost your trained model’s throughput and latency on any hardware with Deci’s AutoNAC algorithmic optimization, without compromising on accuracy.

GPU benchmark: ResNet-50 on ImageNet, NVIDIA T4
CPU benchmark: ResNet-50 on ImageNet, Intel Xeon Gold 6328H (8-core)
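Benchmark figures like the ResNet-50 results above are easy to mismeasure — a topic covered by the blog post linked below, "The Correct Way to Measure Inference Time of Deep Neural Networks". As a general illustration of that discipline (warm-up iterations before timing, then averaging over many runs), and not a depiction of Deci's tooling, here is a framework-agnostic Python sketch; `dummy_model` is a hypothetical stand-in for a real network:

```python
import time
import statistics

def measure_latency(model, batch, warmup=10, runs=100):
    """Measure mean latency (seconds) and throughput (samples/sec)
    of `model(batch)`, discarding warm-up iterations first."""
    for _ in range(warmup):               # warm-up: caches, allocators, JIT
        model(batch)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        model(batch)
        timings.append(time.perf_counter() - start)
    mean_latency = statistics.mean(timings)
    throughput = len(batch) / mean_latency
    return mean_latency, throughput

def dummy_model(xs):                      # hypothetical stand-in "model"
    return sum(xs)

latency, throughput = measure_latency(dummy_model, list(range(1000)))
```

On a GPU, each timed call would also need a device synchronization (e.g. `torch.cuda.synchronize()` in PyTorch) before reading the clock, since kernel launches are asynchronous.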

Applications

  • Video Analytics
  • Security Cameras
  • Autonomous Vehicles
  • Manufacturing
  • Image Editing
  • Your Application

Deployment Options

  • Cloud / Data Center (CPU and GPU)
  • On-prem / Edge Server
  • Edge Device / Mobile


"Intel and Deci partnered to break a new record at the MLPerf benchmark, accelerating deep learning by 11x on Intel’s Cascade Lake CPU. That’s amazing!
Deci’s platform and technology have what it takes to unleash a whole new world of opportunities for deep learning inference on CPUs."

Guy Boudoukh, Deep Learning Research, Intel AI Research

Relevant Resources

Blog

Deci and Intel Hit 11.8x Inference Acceleration at MLPerf

Read Blog Post

Blog

The Correct Way to Measure Inference Time of Deep Neural Networks

Read Blog Post

Press Release

Deci Named One of CB Insights' 100 Most Innovative Startups

Read Press Release

Optimize Your Deep Learning Models for FREE