Platform
Pricing

Build, optimize, and deploy outstanding deep learning models

Choose your plan

Benchmark Models
Automatic benchmarking of inference performance across multiple models, hardware types, and batch sizes.
Inference Optimization
Seamless inference optimization that automatically applies techniques such as weight quantization and graph compilation.
AutoNAC Inference Optimization
Our proprietary AutoNAC technology provides unmatched accuracy-preserving inference acceleration in the cloud, at the edge, or on mobile. Learn more.
Optimized Model Usage
The maximum number of deep learning models you can use or download out of those optimized in the platform.
Target Hardware
The inference hardware. Deci’s optimization is hardware-aware, meaning our technology squeezes maximum utilization out of the hardware targeted for inference in production.
Model Framework
The deep learning development frameworks that are supported.
Deployment and Serving Options
The inference server module that can run Deci platform models.
API Access
API access for easy integration with your existing tools and CI/CD processes.
On-Premise
The platform is hosted on Deci’s servers; we offer an on-premises deployment for enterprises.
Legal Terms
Support & SLA

Community

For developers looking to achieve unparalleled deep learning capabilities

-
Up to 5 models
Common cloud machines (CPU, GPU)
Common frameworks
Optimize models trained on TensorFlow 2.0, TorchScript, ONNX, and Keras.
With RTiC
Maximize the utilization of your CPU or GPU with Deci’s Runtime Inference Container (RTiC). Learn more.
-
Standard
Read our platform terms of use to learn more.
Basic
View Details
Benchmark models
Inference optimization
Use up to 5 optimized models
Optimize for common cloud machines (CPU, GPU)
Supports common development frameworks
Deployment and serving with RTiC
API access
Standard legal terms
Basic support and SLA

Organizations

For organizations looking to maximize the potential of deep learning

Unlimited
Any hardware - including edge and mobile
Any framework
Get extended support for any deep learning model framework or format.
Custom
Download the model in the format of your choice and seamlessly plug it into your existing inference stack.
Custom
Extended
Contractual Service Level Agreement.
View Details
Benchmark models
Inference optimization
AutoNAC inference optimization
Unlimited use of optimized models
Optimize for any hardware - including edge and mobile
Use any model development framework
Custom deployment and serving
API access
On-premise
Custom legal terms
Extended support & SLA

Frequently Asked Questions & Answers

  • Can I start with the Community plan and upgrade to the Organizations plan?

    Absolutely. The Community plan lets you get started quickly with your own deep learning models or with a pre-loaded model from our ModelHub. You can upgrade your plan at any time by contacting us.

  • Is there a trial period for the Community plan?

    No. It’s free forever. Knock yourself out.

  • Do I need a credit card?

    No, a credit card is not needed.

  • Is there a limit to the number of models I can upload to the platform?

    No limit, go for it.

  • Is there a limit to the number of model optimizations I can carry out with the platform?

    There is no limit to the number of optimizations you can carry out. Keep in mind that if you are on the Community plan, you are limited to using any 5 of those optimized models.

  • What is AutoNAC inference optimization?

    AutoNAC, short for Automated Neural Architecture Construction, is Deci’s proprietary optimization technology. It is a Neural Architecture Search (NAS) algorithm that provides you with end-to-end accuracy-preserving hardware-aware inference acceleration. AutoNAC considers and leverages all components in the inference stack, including compilers, pruning, and quantization.

    Learn more about our technology.
    Learn more about the kind of results you can get.
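    As a rough illustration of the idea only: a hardware-aware, accuracy-preserving search can be sketched as a loop that keeps the lowest-latency candidate architecture whose accuracy stays above a floor. The actual AutoNAC algorithm is proprietary and far more sophisticated; every function, number, and unit below is a toy stand-in, not a Deci API.

```python
import random

def evaluate_accuracy(arch):
    # Toy stand-in: deeper/wider nets score higher, capped at 0.99.
    return min(0.99, 0.70 + 0.01 * arch["depth"] + 0.002 * arch["width"])

def measure_latency(arch):
    # Toy stand-in: latency grows with depth and width (fabricated units).
    return 1.0 + 0.5 * arch["depth"] + 0.01 * arch["width"]

def hardware_aware_search(n_trials=200, min_accuracy=0.85, seed=0):
    """Random search over a tiny architecture space: keep the candidate
    with the lowest measured latency that still meets the accuracy floor."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        arch = {"depth": rng.randint(4, 30),
                "width": rng.choice([64, 128, 256, 512])}
        if evaluate_accuracy(arch) < min_accuracy:
            continue  # accuracy-preserving constraint
        latency = measure_latency(arch)
        if best is None or latency < best[0]:
            best = (latency, arch)
    return best

best = hardware_aware_search()
```

    Because latency is measured (not estimated from parameter counts), the same loop would pick different winners on different target hardware, which is the essence of hardware-aware optimization.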

  • Do I get the same level of optimization through the Community plan as opposed to the Organizations plan? If not, what is the difference?

    Both plans offer optimization, but at different levels. The key difference is the availability of AutoNAC, which accelerates your neural model’s inference runtime in the cloud or at the edge for maximum performance while preserving accuracy.

    The Community plan offers out-of-the-box optimization techniques such as quantization. In addition, you can maximize the utilization of your CPU or GPU with the optimization capabilities integrated into our Runtime Inference Container (RTiC).

    Learn more about our technology.
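    The quantization mentioned above can be illustrated with a generic textbook scheme: mapping float weights onto 8-bit integers via an affine scale and zero-point. This sketch is not Deci's implementation; the weight values are arbitrary.

```python
def quantize_int8(weights):
    """Map float weights onto the int8 range [-128, 127] with an affine
    scale/zero-point, returning the quantized values plus the parameters
    needed to recover approximate floats."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float weights from the int8 values.
    return [(qi - zero_point) * scale for qi in q]

w = [-0.42, 0.0, 0.13, 0.97, -1.5]          # arbitrary example weights
q, scale, zero_point = quantize_int8(w)
w_hat = dequantize(q, scale, zero_point)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

    Each weight now fits in one byte instead of four, at the cost of a small reconstruction error bounded by the scale, which is why quantization speeds up inference with little accuracy loss.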

  • Is there a limit to the usage of optimized models?

    You can upload and optimize as many deep learning models as you want.

    The Community plan limits you to using any 5 of those optimized models for inference. Once you see how the different optimizations add value to the original model, you can choose which of the optimized models you want to deploy.

    The Organizations plan includes unlimited usage of optimized models. Get in touch to get a quote.

  • Can I invite a co-worker to collaborate with me on the platform?

    Absolutely. You can invite an unlimited number of co-workers using the ‘Invite’ button within the platform. All of you will be able to collaborate on the same workspace.

  • What data is leaving my perimeter and goes to Deci?

    Your data’s confidentiality and privacy are our highest priority. We comply with strict information security practices and are ISO 27001 and ISO 27799 compliant.

    When you optimize your own model, it is uploaded to our server. We never use your model for any purpose other than its optimization on the platform.

    For the Community plan, you do not need to upload any data (e.g., datasets) aside from the model itself. Alternatively, you can use a pre-loaded model from our ModelHub.

    For the Organizations plan, your data will serve as another input to our optimization engine to ensure that you get the full value of AutoNAC. The process can be done entirely on your premises, without the data ever leaving your site.

    To learn more, please refer to our Privacy Policy.

  • Will Deci work with my current tools and CI/CD?

    Absolutely. You can easily integrate Deci’s platform using our API access. Read more about our API access.
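    An integration of this kind typically boils down to authenticated HTTP calls from your CI/CD scripts. The sketch below builds such a request with Python's standard library; the host, path, and payload fields are placeholders for illustration, not Deci's actual API, so consult the platform's API documentation for real endpoints.

```python
import json
import urllib.request

API_BASE = "https://api.example-platform.ai"  # placeholder host, not a real endpoint

def build_benchmark_request(api_key, model_id, hardware="T4", batch_size=1):
    """Construct (but do not send) a POST request asking a REST-style
    platform API to benchmark a model on a given hardware target."""
    payload = json.dumps({"model_id": model_id,
                          "hardware": hardware,
                          "batch_size": batch_size}).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/benchmarks",           # placeholder path
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_benchmark_request("MY_KEY", "resnet50-optimized")
# Sending would be: urllib.request.urlopen(req) inside your pipeline step.
```

    In a CI/CD pipeline, a step like this would run after training, and the job would gate deployment on the returned benchmark numbers.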

  • Have other questions about how Deci works or the pricing plans available?

AutoNAC White Paper

Explore the Deep Learning Algorithmic Platform

With Deci’s platform, deep learning developers can accelerate inference performance by 2x to 10x, for any hardware, without compromising accuracy; cut up to 80% on compute costs; and reduce time-to-production of models.

EXPLORE PLATFORM

“Intel and Deci partnered to break a new record at the MLPerf benchmark, accelerating deep learning by 11x on Intel’s Cascade Lake CPU. That’s amazing!

Deci’s platform and technology have what it takes to unleash a whole new world of opportunities for deep learning inference on CPUs.”

Guy Boudoukh

Deep Learning Research, Intel AI Research

“Deci shares our vision that AI in production requires operating in heterogeneous infrastructure environments. Their technology makes inferencing accessible and affordable anywhere, from the edge to the data center.

By collaborating with Deci, we aim to help our customers accelerate AI innovation and deploy AI solutions everywhere using our industry-leading platforms, from data center systems that are ideally suited to train deep learning models to edge systems that accelerate high-throughput inference.”

Arti Garg

Head of Advanced AI Solutions & Technologies, HPE

“We are excited to be working with Deci's platform - it provided amazing results and achieved 4.6x acceleration on a model we ran in production and helped us provide faster service to our customers.”

Daniel Shichman

CEO, WSC Sports Technologies

“Using Deci’s platform we achieved a 2.6x increase in inference throughput of one of our heavy multiclass classification models running on V100 machines - without losing accuracy. Deci can cut 50% off the deep learning inference compute costs in our cloud deployments worldwide. We are very impressed by Deci's technology!”

Chaim Linhart

CTO and Co-Founder, Ibex Medical Analytics

Discover What Makes Deci Unique

How Deci and Intel Hit 11.8x Inference Acceleration at MLPerf

Blog Post

READ MORE

Deci RTiC – The Case for Containerization of AI Inference

Blog Post

READ MORE

Accelerate Deep Neural Network Inference with AutoNAC

White Paper

DOWNLOAD NOW

Maximize the Potential of Deep Learning