Discover deep learning insights and trends. Read stories from Deci’s platform users. Stay in the know about our product and technology as we make DL tools more accessible to developers across industries.
Starting a deep learning project? Here are 13 free resources and model zoos for computer vision models that can help simplify your work.
Explore common PyTorch-based neural network training libraries and tools before selecting the most appropriate one for your use case.
SOTA DNNs can help you handle image data. In this post, we cover important SOTA DNNs for classification, object detection, and semantic segmentation.
Learn how you can build a model that is not too complex or large to run on the edge but still makes the most of the available hardware.
Learn how to convert a PyTorch model to NVIDIA’s TensorRT™ model in just 10 minutes. It’s simple and you don’t need any prior knowledge.
Finding the right hardware for model inference can be daunting. Here are 4 key parameters to consider when selecting hardware for inference.
After the joint submission to MLPerf, Intel and Deci collaborated again to accelerate three off-the-shelf models: ResNet-50, ResNeXt101, and SSD MobileNet V1.
Join us on our journey towards making AI efficient, accessible, and scalable for all organizations.
Your deep learning model’s inference speed can be boosted by as much as 8x compared to your current runs. By the end of this article, you’ll know how to apply these techniques to your use case with minimal effort.
To achieve optimal performance, the entire inference pipeline in production needs to be fast, not only the model. Here are 5 ways to optimize it.
Learn how AutoNAC creates proprietary DeciNet architectures that extend the efficient frontiers of latency/accuracy tradeoffs on NVIDIA cloud and edge GPUs.
Learn about the different types and technical implementations of object detection algorithms, a key domain of deep learning and computer vision.
Listen to this episode of The Data Exchange Podcast to learn why you should optimize your deep learning inference platform.
Learn about ONNX and how to convert a ResNet-50 model to ONNX. Then, try optimizing it to reduce its latency and increase its throughput.
Let’s take a deeper dive into how graph compilers work to better understand how, when used correctly, they can deliver substantial acceleration.
In this post, you’ll learn how the Deci platform can optimize your machine learning models. We use YOLOv5 in our example, but the platform lets you optimize any model.
MLPerf provides fair and standardized benchmarks for measuring the inference performance of machine learning and deep learning models. Our submission to MLPerf proved that our AutoNAC technology reduces runtime while preserving the accuracy of the base model.
To date, there is no widely accepted definition of AutoML, and the industry uses this term to refer to many different things.
Deep neural networks have become common practice in many machine learning applications.
Deep neural networks can achieve state-of-the-art performance on many machine learning tasks.
Accurately measuring the inference time of neural networks is not as trivial as it sounds. In this post, we review some of the main issues that must be addressed to measure latency correctly.
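Among the pitfalls such measurements must avoid are skipping warm-up runs, timing a single call, and ignoring asynchronous GPU execution. A minimal CPU-only sketch of the first two fixes, using only the Python standard library (the helper name and iteration counts are illustrative):

```python
import statistics
import time

def measure_latency(fn, warmup=10, runs=100):
    """Return (median, p95) latency of fn() in milliseconds.

    Warm-up calls are discarded so caches, JIT compilation, and lazy
    initialization do not pollute the measurement; aggregating many runs
    with the median is far more robust to outliers than a single timing.
    """
    for _ in range(warmup):
        fn()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000.0)
    timings.sort()
    return statistics.median(timings), timings[int(0.95 * len(timings)) - 1]
```

On a GPU, a further step is required: kernel launches return before the work finishes, so the device must be synchronized (e.g. with `torch.cuda.synchronize()` in PyTorch) before reading the clock.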
Analyses of efficient models often assume that FLOPs serve as an accurate proxy for efficiency, including runtime. But this assumption is wrong.