The Most Advanced Neural Architecture Search Technology

Deci is powered by groundbreaking Automated Neural Architecture Construction (AutoNAC™) technology. Deci’s AutoNAC™ engine democratizes the use of Neural Architecture Search for every organization and helps teams quickly generate fast, accurate and efficient deep learning models.

Main Advantages

Inference Speedup 3-10x

Same or Better Accuracy

Full Awareness of Data and Hardware

Fast and Compute Efficient Search

Accelerate Deep Neural Network Inference on Any Hardware while Preserving Accuracy

Automate Performance Optimization Across the Entire Inference Stack

The most ambitious algorithmic acceleration technique for aggressive speedups is neural architecture search (NAS). To apply NAS, you define an architecture space and then use a search strategy to find, within that space, an architecture that satisfies the desired properties. NAS is behind some of deep learning's landmark results: MobileNet-V3, EfficientNet, and EfficientDet were all discovered using NAS.
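To make the two ingredients concrete, here is a minimal, purely illustrative sketch of NAS: a small architecture space (depth, width, kernel size) and the simplest possible search strategy, random sampling under a budget. The space, the random-search loop, and the evaluate() stub are hypothetical examples for explanation only; they are not Deci's AutoNAC algorithm.

```python
import random

# 1) Architecture space: each candidate is a choice of depth, width, and kernel size.
SEARCH_SPACE = {
    "depth":       [2, 3, 4],
    "width":       [32, 64, 128],
    "kernel_size": [3, 5],
}

def sample_candidate(rng: random.Random) -> dict:
    """Draw one architecture configuration from the space."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def evaluate(candidate: dict) -> float:
    """Placeholder score. In a real system this would train (or estimate) the
    candidate and return validation accuracy, possibly penalized by latency."""
    return -(candidate["depth"] * candidate["width"] * candidate["kernel_size"])

def random_search(budget: int, seed: int = 0) -> dict:
    """2) Search strategy: random sampling under a fixed evaluation budget."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = sample_candidate(rng)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

if __name__ == "__main__":
    print(random_search(budget=20))
```

Real NAS systems replace random sampling with far more sample-efficient strategies (evolutionary, gradient-based, or predictor-guided search), and replace the stub evaluation with actual training or accuracy estimation, which is where the heavy compute cost comes from.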

NAS algorithms typically require enormous computational resources, so applying them at scale in production is extremely challenging and expensive. Deci's AutoNAC brings into play a new generation of fast, compute-efficient NAS algorithms, allowing it to operate cost-effectively and at scale. The AutoNAC engine is hardware- and data-aware and considers every component in the inference stack, including compilers and quantization.
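The sketch below shows one common interpretation of "hardware-aware" search: scoring a candidate by measuring its latency on the actual target device rather than relying on a proxy such as FLOPs, and rejecting candidates that exceed a latency budget. The candidate model, the 5 ms budget, the fixed accuracy value, and the scoring rule are illustrative assumptions, not AutoNAC internals.

```python
import time
import torch
import torch.nn as nn

def measure_latency_ms(model: nn.Module, input_shape, device: str, runs: int = 50) -> float:
    """Average forward-pass latency on the target device, after a short warm-up."""
    model = model.eval().to(device)
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(10):  # warm-up iterations
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000.0

def hardware_aware_score(accuracy: float, latency_ms: float, budget_ms: float) -> float:
    """Reject candidates over the latency budget; otherwise rank by accuracy."""
    return accuracy if latency_ms <= budget_ms else float("-inf")

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    candidate = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
    latency = measure_latency_ms(candidate, (1, 3, 224, 224), device)
    # accuracy=0.76 is a placeholder value standing in for a measured validation accuracy.
    print(f"{latency:.2f} ms on {device}",
          hardware_aware_score(accuracy=0.76, latency_ms=latency, budget_ms=5.0))
```

Because latency is measured on the deployment hardware, after whatever compilation and quantization the inference stack applies, the search optimizes for the speed the model will actually deliver in production rather than a theoretical estimate.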

See The Results

Deploy Efficient Models to Production with Deci’s Deep Learning Development Platform

The Ultimate Guide to Inference Acceleration of Deep Learning-Based Applications

Learn 12 inference acceleration techniques that you can immediately implement to improve the speed, efficiency, and accuracy of your existing AI models.