
How to Convert a Model from PyTorch to ONNX in 5 Minutes


In this article, we will walk you through how to convert a model from PyTorch to ONNX, as well as some best practices that you can apply to model conversion in general. 

A deep learning framework is a library or tool that allows practitioners to build deep learning models quickly and easily. Along with the surge in both the quantity and capabilities of DL applications, there has also been a proliferation of deep learning frameworks. Some of the most common include PyTorch, TensorFlow, MXNet, and Keras. While this diversity of options is not a bad thing for the AI community, it can lead to other issues.

Not every framework is suited to every hardware target, so the framework you select to build your model must be compatible with the hardware you deploy on. If it isn't, you have to go through the cumbersome process of converting your model to another framework. Switching from one framework to another (which is often necessary) is a challenge because they support different operations and different data types.

The ONNX framework addresses these challenges by providing a standard for both the operations and the data types. ONNX models can also be executed with ONNX Runtime, a Python inference runtime (installable with `pip install onnxruntime`). You'll want to install it, because we'll use it later to run inference on the exported `onnx` model.

ONNX: An Open Standard for ML Interoperability

ONNX, short for Open Neural Network Exchange, is an open source standard that enables developers to port machine learning models between different frameworks. This interoperability allows developers to move easily from one machine learning framework to another.

ONNX supports all the popular machine learning frameworks, including Keras, TensorFlow, Scikit-learn, PyTorch, and XGBoost.

ONNX prevents developers from getting locked into any particular machine learning framework by providing tools that make it easy to move from one to another. It does this in the following ways:

  • Defining an extensible computation graph – Initially, various frameworks would have different graph representations. ONNX provides a standard graph representation for all of them. The ONNX graph represents the model graph through various computational nodes and can be visualized using tools such as Netron.
  • Creating standard data types – Each node in a graph usually has a certain data type. To provide interoperability between various frameworks, ONNX defines standard data types including int8, int16, and float16, just to name a few.
  • Built-in operators – These operators map operator types from the source framework to ONNX. If you are converting a PyTorch model to ONNX, each PyTorch operator is mapped to its corresponding operator in ONNX. For example, a PyTorch sigmoid operation is converted to the matching sigmoid operation in ONNX.
  • Provision of a single file format – Each machine learning library has its own file format. For instance, Keras models can be saved with the `h5` extension, PyTorch models as `pt`, and scikit-learn models as pickle files. ONNX provides a single standard for saving and exporting model files: the `onnx` file extension. A short sketch of loading and validating such a file follows this list.
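
For example, here is a minimal sketch (assuming the `onnx` Python package is installed and a model has already been exported under an illustrative file name) of loading an `onnx` file and checking that its graph is well formed:

import onnx

# Load the serialized model from the standard `.onnx` file format
onnx_model = onnx.load("model.onnx")

# Verify that the graph and its nodes are well formed
onnx.checker.check_model(onnx_model)

# Print a human-readable summary of the computation graph
print(onnx.helper.printable_graph(onnx_model.graph))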

ONNX also makes it easier to optimize machine learning models using ONNX-compatible runtimes and tools that can improve the model’s performance across different hardware.
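
For instance, ONNX Runtime can apply graph-level optimizations (such as node fusion and constant folding) when it loads a model. A minimal sketch, assuming `onnxruntime` is installed and using an illustrative file name:

import onnxruntime as onnxrt

# Ask ONNX Runtime to apply its full set of graph optimizations
sess_options = onnxrt.SessionOptions()
sess_options.graph_optimization_level = onnxrt.GraphOptimizationLevel.ORT_ENABLE_ALL

session = onnxrt.InferenceSession("model.onnx", sess_options)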

Now that you understand what ONNX is, let’s take a look at how to convert a PyTorch model to ONNX.

Convert Your Model from PyTorch to ONNX

Converting deep learning models from PyTorch to ONNX is quite straightforward. Start by loading a pre-trained ResNet-50 model from torchvision's model zoo.

import torch
import torchvision.models as models

# Load a ResNet-50 pre-trained on ImageNet
model = models.resnet50(pretrained=True)

The PyTorch to ONNX conversion process requires the following:

  • The model in eval mode. Some operations, such as batch normalization and dropout, behave differently during inference than during training.
  • A dummy input in the shape the model expects, i.e., [batch_size, channels, image_size, image_size]. For an ImageNet-trained ResNet-50, channels is 3 and image_size is 224.
  • The input and output names that you would like to use for the exported model.

Let’s start by ensuring that the model is in eval mode.

model.eval()

Next, we create that dummy input variable.

dummy_input = torch.randn(1, 3, 224, 224)

Let’s also define the input and output names.

input_names = [ "actual_input" ]
output_names = [ "output" ]

The next step is to use the `torch.onnx.export` function to convert the model to ONNX. This function requires the following data:

  • Model
  • Dummy input
  • Name of the exported file
  • Input names
  • Output names
  • `export_params`, which determines whether the trained parameter weights are stored in the exported model file

torch.onnx.export(model,
                  dummy_input,
                  "resnet50.onnx",
                  verbose=False,
                  input_names=input_names,
                  output_names=output_names,
                  export_params=True,
                  )

That’s it, folks. You just converted the model from PyTorch to ONNX!

Assuming you would like to use the model for inference, you can create an inference session with the `onnxruntime` Python package and use it to make predictions. Here's how it's done, reusing the dummy input from above as a stand-in for a real preprocessed image:

import onnxruntime as onnxrt

def to_numpy(tensor):
    # Convert a PyTorch tensor to a NumPy array for ONNX Runtime
    return tensor.detach().cpu().numpy()

onnx_session = onnxrt.InferenceSession("resnet50.onnx")
onnx_inputs = {onnx_session.get_inputs()[0].name: to_numpy(dummy_input)}
onnx_output = onnx_session.run(None, onnx_inputs)
img_label = onnx_output[0].argmax()  # index of the highest-scoring class


Now that you understand the basic process for converting your models, here are some important things to take into consideration.

Best Practices for Model Conversion from PyTorch to ONNX (and in General)

1. Fixed vs. Dynamic Dimensions

A deep learning model's batch size determines the number of samples that are propagated through the network with each forward pass. Nowadays, all well-known model representation formats (including ONNX) support models with a dynamic batch size. This means, for example, that you could pass 3 images or 8 images through the same ONNX model and receive a corresponding, varying number of results as your model's output.

However, a model’s batch size may not be its only dynamic input axis. Many computer vision models will work on varying image sizes and some language models may work on varying sequence lengths.

To deal with these degrees of freedom, PyTorch's ONNX export function allows you to mark input dimensions as dynamic and, as a result, receive an ONNX model that can be used on variable-size inputs. Continuing the example above, we will demonstrate how to use the `dynamic_axes` parameter of the export command to create a ResNet-50 ONNX model that works with variable batch sizes and variable input image sizes.

You can begin with the same setup:

import torch
import torchvision.models as models
model = models.resnet50(pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
input_names = [ "actual_input" ]
output_names = [ "output" ]

Now, the only difference is the call to the export function. You must provide a dictionary for the `dynamic_axes` parameter of the `torch.onnx.export` function. For example:

dynamic_axes_dict = {
    "actual_input": {
        0: "bs",
        2: "img_x",
        3: "img_y"
    },
    "output": {
        0: "bs"
    }
}

torch.onnx.export(model,
                  dummy_input,
                  "resnet50.onnx",
                  verbose=False,
                  input_names=input_names,
                  output_names=output_names,
                  dynamic_axes=dynamic_axes_dict,
                  export_params=True,
                  )

In this example, we told PyTorch to make the axes at indices 0, 2, and 3 of "actual_input" dynamic, and to make the axis at index 0 of "output" dynamic. A dynamic dimension is represented by an arbitrary string rather than a numerical value (e.g., `img_x` and `img_y` instead of 224 and 224).
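
To see the effect, you can inspect the exported graph's input shape. Here is a quick sketch using the `onnx` package; the symbolic names `bs`, `img_x`, and `img_y` come from the dictionary above:

import onnx

model_proto = onnx.load("resnet50.onnx")

# Dynamic axes appear as symbolic names (dim_param) instead of fixed sizes
for dim in model_proto.graph.input[0].type.tensor_type.shape.dim:
    print(dim.dim_param or dim.dim_value)  # prints: bs, 3, img_x, img_y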

In a similar manner, you may pass a dictionary with list values, specifying the dimensions that you’d like to export as dynamic. PyTorch will automatically name the dynamic axes for you. The equivalent command would therefore be the following:

dynamic_axes_dict = {
    "actual_input": [0, 2, 3],
    "output": [0]
}

torch.onnx.export(model,
                  dummy_input,
                  "resnet50.onnx",
                  verbose=False,
                  input_names=input_names,
                  output_names=output_names,
                  dynamic_axes=dynamic_axes_dict,
                  export_params=True,
                  )
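
To confirm that the dynamic axes work as intended, here is a minimal sketch (assuming the model above was exported as `resnet50.onnx` and that `onnxruntime` and NumPy are installed) that runs two different batch and image sizes through the same session:

import numpy as np
import onnxruntime as onnxrt

session = onnxrt.InferenceSession("resnet50.onnx")
input_name = session.get_inputs()[0].name

# Both calls succeed because the batch and spatial axes are dynamic
for shape in [(1, 3, 224, 224), (4, 3, 256, 256)]:
    batch = np.random.randn(*shape).astype(np.float32)
    output = session.run(None, {input_name: batch})[0]
    print(shape, "->", output.shape)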

Notice that your model’s architecture must support dynamic input dimensions. 

If you make assumptions about a specific layer's size within the model (for example, at the input to a fully-connected layer that follows a convolutional layer with no global average pooling in between), then you will not be able to export or use the model with any input size that violates these assumptions.

2. ONNX OpSet Versions

ONNX uses operator sets ("opsets") to version different implementations of its operators. By utilizing opsets, deep learning frameworks can keep track of which of their operators are supported in which version of ONNX. For example, the PyTorch documentation lists which TorchScript operators can (and cannot) be exported to different ONNX opset versions.

This means that if you have an operator that is in the "unsupported operators" section, you will not be able to export that part of the model (or the model in general) to ONNX. Similarly, if your model uses an operator that is only supported since opset 10, you will not be able to export the model to ONNX using opset 9. The opset to which you export your DL model is controlled by the `opset_version` parameter of the `torch.onnx.export` function.

It is generally recommended to export to the default version that PyTorch supports (as listed in the `torch.onnx.export` documentation), and to maintain that opset version throughout your work with the model so that no new operators suddenly affect your previous results.

Here is an example of exporting the static batch-sized ResNet-50 from the examples above to ONNX, using opset 14:

torch.onnx.export(model,
                  dummy_input,
                  "resnet50.onnx",
                  verbose=False,
                  input_names=input_names,
                  output_names=output_names,
                  opset_version=14,
                  export_params=True,
                  )

Finally, if you hit a RuntimeError due to an unsupported node, you can do one of three things:

  1. Alter the model so that it does not use this node.
  2. Register a custom symbolic function that converts the operator (see the sketch after this list).
  3. Contribute to PyTorch and add a symbolic function to torch.onnx.
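
As an illustration of the second option, here is a minimal sketch using `torch.onnx.register_custom_op_symbolic`. The op name `mynamespace::my_relu` is hypothetical; the symbolic function simply tells the exporter to emit the standard ONNX Relu operator in its place:

from torch.onnx import register_custom_op_symbolic

def my_relu_symbolic(g, input):
    # Map the (hypothetical) custom op to the standard ONNX Relu node
    return g.op("Relu", input)

register_custom_op_symbolic("mynamespace::my_relu", my_relu_symbolic, opset_version=14)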

Check out the PyTorch documentation for more details. 

3. Multiple Inputs and Outputs

PyTorch natively supports the export of models with multiple inputs and outputs. If your model has several inputs or outputs, simply pass several input or output names to the export function, along with a tuple of dummy inputs (one per model input). PyTorch allocates the names in order, so the first input name passed corresponds to the first input of the model, and so on.

For example, if "model" has 3 inputs and 2 outputs, the export call should look something like this (where `dummy_input_0` through `dummy_input_2` are illustrative dummy tensors, one per input):

torch.onnx.export(model,
                  (dummy_input_0, dummy_input_1, dummy_input_2),
                  "multiple_input_model.onnx",
                  verbose=False,
                  input_names=["input_0", "input_1", "input_2"],
                  output_names=["output_0", "output_1"],
                  opset_version=14,
                  export_params=True,
                  )

Beyond PyTorch to ONNX Conversion

At this point, you’ve learned about the ONNX deep learning framework and how to quickly and easily convert a model from PyTorch to ONNX. You’ve also learned about some best practices and points of consideration when you are engaging in this process. 

This should help ease the process when you encounter a platform or hardware stack that requires a different model format, which is often the case. Given the number of formats that exist, the ability to seamlessly convert your models is essential. To that end, we provide two additional tutorials: one covering how to convert a model from PyTorch to TensorRT, and another detailing conversion from PyTorch to CoreML.

Now it’s time to take your model to the next level. Head to the Deci deep learning development platform and get started with optimizing your model accuracy and inference performance. The Deci platform currently supports the ONNX, PyTorch, TensorFlow (TF2 saved-model), and Keras (h5 format) frameworks. 

To start optimizing your model, click on this link to read more and talk to an expert.
