Platform Pricing

Deep learning lifecycle made simple, with outstanding performance.

Pre-trained Foundation Models

Get access to Deci’s ultra-performant, NAS-generated foundation models.

End-to-End Vision

Simplify and shorten the development process. Accelerate inference and deploy to production in no time.

Enterprise

Meet specific performance goals for highly customized use cases.

Standard

For developers looking to accelerate inference and deploy to production in no time.

Professional

For deep learning teams looking to achieve better-than-SOTA accuracy and inference performance.

Enterprise

For deep learning experts looking to meet specific performance goals for highly customized use cases.

Compare plans: Standard, Professional, Enterprise

Runtime Layer Optimization
Hardware Benchmarking
Compilation & Post Training Quantization (FP16 / INT8)
Inference Engine

Algorithmic Layer Optimization
AutoNAC Engine Runs
Quantization Aware Training

Hardware & Frameworks
Supported Hardware: A wide range of hardware including edge devices (Standard, Professional); any hardware (Enterprise)
Supported Frameworks: Standard (Standard, Professional); any framework (Enterprise)

Support & Legal
Support: Support Team / 24 hours (Standard); Dedicated DL expert / 12 hours (Professional); Dedicated DL expert / custom SLA (Enterprise)
Legal Terms: Standard (Standard, Professional); Custom (Enterprise)

Testimonials

“Using Deci, we swiftly developed a model that enabled us to expand our offering and further scale our solution on existing CPU infrastructure with significant cost-efficiency.”

Zvika Ashani
CTO at Irisity

“Controlling our inference cloud spend without compromising on performance is key for our business success. Deci enabled us to scale our workloads while reducing costs and improving our users’ experience.”

Dr. Yair Adato
Founder & CEO at BRIA

“At Adobe, we deliver excellent AI-based solutions across a wide range of cloud and edge environments. By using Deci, we significantly shortened our time to market and transitioned inference workloads from cloud to edge devices. As a result we improved the user experience and dramatically reduced our spend on cloud inference cost.”

Pallav Vyas
Senior Engineering Manager, Document AI & Innovation at Adobe

“Our advanced text-to-video solution is powered by proprietary and complex generative AI algorithms. Deci allows us to reduce our cloud computing cost and improve our user experience with faster time to video by accelerating our models’ inference performance and maximizing GPU utilization on the cloud.”

Lior Hakim
Co-Founder & CTO at HourOne

“Applied Materials is at the forefront of materials engineering solutions and leverages AI to deliver best-in-class products. We have been working with Deci on optimizing the performance of our AI model, and managed to reduce its GPU inference time by 33%. This was done on an architecture that was already optimized. We will continue using the Deci platform to build more powerful AI models to increase our inspection and production capacity with better accuracy and higher throughput.”

Amir Bar
Head of SW and Algorithm, Applied Materials

“Deci delivers optimized deep learning inference on Intel processors as highlighted in MLPerf, allowing our customers to meet performance SLAs, reduce cost, decrease time to deployment, and scale effectively.”

Monica Livingston
AI Solutions and Sales Director, Intel

“At RingCentral, we strive to provide our customers with the best AI-based experiences. With Deci’s platform, we were able to exceed our deep learning performance goals while shortening our development cycles. Working with Deci allows us to launch superior products faster.”

Vadim Zhuk
Senior Vice President R&D, RingCentral

“By collaborating with Deci, we aim to help our customers accelerate AI innovation and deploy AI solutions everywhere using our industry-leading platforms, from data centers to edge systems that accelerate high-throughput inference.”

Arti Garg
Head of Advanced AI Solutions & Technologies, HPE

Explore the Deep Learning Development Platform

With Deci’s deep learning development platform, developers can accelerate inference performance by up to 5x on any hardware without compromising accuracy, cut compute costs by up to 80%, and reduce models’ time to production.

Selected as one of the top 100 AI startups in the world

Recognized as a tech innovator for edge AI

Achieved 11.8x acceleration in collaboration with Intel

Frequently asked questions

Are the plans annual subscriptions?
Yes, the Basic, Professional, and Enterprise Plans are all annual subscriptions.

Can I start with the Basic plan and upgrade later?
Absolutely. The Basic plan allows you to quickly start optimizing your deep learning models. You can upgrade your plan at any time by contacting us.

What is AutoNAC?
AutoNAC, short for Automated Neural Architecture Construction, is Deci’s proprietary optimization technology. It is a Neural Architecture Search (NAS) algorithm that provides you with end-to-end, accuracy-preserving, hardware-aware inference acceleration. AutoNAC considers and leverages all components in the inference stack, including compilers, pruning, and quantization.
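
As a rough conceptual illustration only (this is not Deci’s implementation, and none of the names below come from Deci’s API), a hardware-aware NAS loop scores each candidate architecture on both its accuracy and its latency as measured on the target device:

# Conceptual sketch of a hardware-aware architecture search objective.
# All names here are illustrative placeholders, not Deci API calls.
def search(candidate_architectures, evaluate_accuracy, measure_latency_ms,
           latency_budget_ms=10.0, penalty=0.05):
    """Pick the candidate with the best accuracy that respects a latency
    budget benchmarked on the actual target hardware."""
    best_arch, best_score = None, float("-inf")
    for arch in candidate_architectures:
        accuracy = evaluate_accuracy(arch)       # e.g. validation top-1
        latency = measure_latency_ms(arch)       # measured on the target device
        # Penalize candidates that exceed the latency budget instead of
        # rejecting them, so the search can trade accuracy for speed.
        score = accuracy - penalty * max(0.0, latency - latency_budget_ms)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch

The key point is that latency is benchmarked on the actual deployment hardware rather than estimated from FLOPs, which is what makes such a search hardware-aware.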

How do I deploy my models and run inference?
Deci’s deep learning development platform enables you to push models from the Deci Lab, optimized or not, directly to our inference engine for seamless deployment and ultra-fast serving. Quickly run inference on your models with Infery, a Python inference runtime engine that simplifies deep learning model inference across multiple frameworks and hardware using only 3 lines of code.
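
For context, the “3 lines of code” flow mentioned above typically looks something like the sketch below; the parameter names (model_path, framework_type, inference_hardware) are assumptions and may not match the current Infery API exactly.

import infery
import numpy as np

# Load a model into the Infery runtime (parameter names are assumptions, not confirmed API).
model = infery.load(model_path="model.onnx", framework_type="onnx", inference_hardware="gpu")

# Run inference on a dummy input batch.
outputs = model.predict(np.random.rand(1, 3, 224, 224).astype(np.float32))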

Can I integrate Deci into my existing workflow?
Absolutely. You can easily integrate Deci’s deep learning acceleration platform using our API access. Read more about our API access.

Unleash Your Deep Learning Models.
