File Exchange

Deep Learning Toolbox Converter for ONNX Model Format

Import and export ONNX™ models within MATLAB for interoperability with other deep learning frameworks

45 Downloads

Updated 11 Sep 2019

Import and export ONNX™ (Open Neural Network Exchange) models within MATLAB for interoperability with other deep learning frameworks. ONNX enables models to be trained in one framework and transferred to another for inference.

Opening the onnxconverter.mlpkginstall file from your operating system or from within MATLAB will initiate the installation process for the release you have.
This mlpkginstall file is functional for R2018a and beyond.

Usage example:
%% Export to ONNX model format
net = squeezenet; % pretrained network to be exported
filename = 'squeezenet.onnx';
exportONNXNetwork(net, filename);

%% Import the network that was exported
net2 = importONNXNetwork('squeezenet.onnx', 'OutputLayerType', 'classification');

%% Compare the predictions of the two networks on a random input image
img = rand(net.Layers(1).InputSize);
y = predict(net, img);
y2 = predict(net2, img);

max(abs(y - y2)) % largest difference; should be close to zero

To import an ONNX network in MATLAB, refer to:
https://www.mathworks.com/help/deeplearning/ref/importonnxnetwork.html

To export an ONNX network from MATLAB, refer to:
https://www.mathworks.com/help/nnet/ref/exportonnxnetwork.html
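If importONNXNetwork warns that some ONNX operators are unsupported, the layers can be imported instead so the unsupported spots can be located. A minimal sketch (the file name is from the example above; importONNXLayers and findPlaceholderLayers are the functions named in the import warning quoted in the comments below):

```matlab
% Import as a layer graph; unsupported operators become placeholder layers
lgraph = importONNXLayers('squeezenet.onnx', 'OutputLayerType', 'classification');

% Locate any placeholder layers that would need to be replaced manually
placeholders = findPlaceholderLayers(lgraph)
```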

Comments and Ratings (45)

Still not working with biLSTM layers with a regression output layer.

Hi, when I import the layers from a Keras network, an error occurs:
'Importing 'LSTM' layers in Keras models built with the functional API is not yet supported'
So what should I do to overcome this problem?

cui

Dear MathWorks Deep Learning Toolbox Team:
I hope that future versions will support ONNX operators more fully, not just the current 28 operators.
So far the "importCaffeNetwork" function has performed very poorly.
Keep it up!

Hi @cui

we pretty much followed the examples in

https://github.com/microsoft/onnxruntime/blob/master/csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/CXX_Api_Sample.cpp

and the network seems to work. I have not figured out how to reset the internal state of the LSTM layers for a new sequence, so we just reload the model.

Best wishes

Andreas

cui

Dear Ting Su,
I can import and export the mobilenetv2 model that comes with MATLAB very freely and conveniently, but when I import a mobilenetv2.onnx saved from the PyTorch-ONNX framework, the last averagePooling layer can't be imported correctly. Why?
https://github.com/tonylins/pytorch-mobilenet-v2 : the onnx library can import and export this reliably; MATLAB can't...

Warning: Unable to import some ONNX operators, because they are not supported yet. They have been replaced by placeholder layers. To find
these layers, call the function findPlaceholderLayers on the returned object.
> In nnet.internal.cnn.onnx.importONNXLayers (line 13)
In importONNXLayers (line 48)

cui

Hi @Andreas Herzog:
May I ask how you used the LSTM ONNX model in a C++ interface function? Is it also a sequence of features, as in MATLAB? Is there example code I could refer to? Thank you!

Dear Matlab Team,

exporting and loading the LSTM model now works fine, and scoring also works using the C++ interface.

One minor thing I noticed: the output tensor is not named after the last layer in the network, but as a combination of the layer name and the last compute-graph operation.

So my last layer is named "fc_2" (a standard name from Deep Learning Toolbox), but the output tensor has to be retrieved in the C++ interface using "fc_2_Add", which is also what is displayed when you load the ONNX file with the Netron app.

Is this naming necessary? We save in our model description a serialized ONNX file and the name of a certain layer as the output tensor, to control which compute-graph node acts as the output. For certain model types this does not necessarily need to be the last layer (the encoding of an autoencoder, for example).

So could this be set back to the previous behaviour, or can we read from the layer structure what the correct tensor name should look like?

Best wishes and thanks a lot for the effort.

Andreas

Dear Matlab Team,

the new version (1st Aug) seems to resolve our problems with the LSTM export.

Tested with the given example on GitHub using Python onnxruntime 0.5.

Many thanks!

Andreas

Dear Ting Su,

any word on the new version that writes LSTM compatible to ONNX runtime?

Sorry for being a pain, but we need that piece of functionality to deliver a model for our customer.

Best wishes

Andreas

I was able to point the installation location to a folder with enough space to get this done, using matlabshared.supportpkg.setSupportPackageRoot().

Thanks.

I am hitting an issue with the installation for ONNX; I am not even able to download the file. I am using RHEL 7.5. Any idea what the issue is?

Hi,
I am trying to export a model to use it in TensorFlow. It is basically the same as this:
https://de.mathworks.com/help/deeplearning/examples/cocktail-party-source-separation-using-deep-learning-networks.html
I get the warning "Warning: ONNX does not support layer 'BiasedSigmoidLayer'. Exporting to ONNX operator 'com.MathWorks.Placeholder'." because one of the layers is a custom sigmoid layer.
When importing into TensorFlow, I get the error
ValidationError: No Schema registered for Placeholder with domain_version of 1
==> Context: Bad node spec: input: "fc_1" output: "layer_1" name: "layer_1" op_type: "Placeholder" doc_string: "Placeholder operator" domain: "com.mathworks"
Is there any way I can solve this?
My ONNX model: https://drive.google.com/open?id=1c5ItcPoYU2xkmOZNiUgrIetLsixEewYK

Thanks in advance

Dear Ting Su,

excellent! :D

Best wishes

Andreas

Ting Su

Hi Andreas,
The new version will be released soon.

Kevin Chng

Does it work with YOLOv2?

Dear Ting Su,

any word on a new version that can resolve the issue with the LSTM (see the GitHub ticket)? We would like to deploy some models into an application with onnxruntime.

Best wishes

Andreas

cui

Dear Ting Su,
The ONNX model exported by exportONNXNetwork() does not produce the same result when run in OpenCV as in MATLAB. I posted my issue here:
https://ww2.mathworks.cn/matlabcentral/answers/464550-the-onnx-model-exported-by-exportonnxnetwork-is-not-the-same-as-the-result-of-running-in-opencv-an

cui

Hi Ting Su,
I noticed there was a recent update of the converter, but LSTMs still don't seem to work properly. I posted my issue here:
https://de.mathworks.com/matlabcentral/answers/457176-onnx-export-yields-error-in-windows-ml?s_tid=prof_contriblnk

cui

Dear Ting Su,
Does the current ONNX version support exporting object-detection networks, such as the YOLOv2 network (export to yolov2.onnx)?

Dear Ting Su,

yes, that's the issue I opened on GitHub.

https://github.com/microsoft/onnxruntime/issues/1016

Best wishes

Andreas

Ting Su

Hi Andreas,
We noticed that some LSTM models exported by MATLAB ONNX Converter don't work well with ONNX Runtime, although they could be loaded into other frameworks, as ONNX Runtime strictly follows ONNX spec for the shape requirement. A new release of MATLAB ONNX converter will be released soon and it will work with ONNX Runtime better.

Ting Su

Hi Andreas,
Thanks for the question. Is this the same issue reported in the following link?
https://github.com/microsoft/onnxruntime/issues/1016
We are looking into this and will get back to you soon.

Dear Matlab Team,

we are exporting an LSTM model (basically built as described in the sequence-to-sequence regression example with the turbofan engine example data).

We get an error message when importing it into onnxruntime (built from source, 0.4.0 release):

Load model from temp.onx failed:Node:fc_2 Output:fc_2 [ShapeInferenceError] Mismatch between number of source and target dimensions. Source=2 Target=3

We can load the ONNX file in Netron just fine, and it has an fc_2 output with a somewhat odd <1x1x1> dimension. Could there be a confusion in the expected output dimensions?

Could we send the ONNX file / MATLAB network to you for some help?

Would be much appreciated.

Exporting models from MATLAB to other runtime engines doesn't work apart from trivial examples. I've seen strange shape flipping on output ONNX network layers, which causes failures when importing into Python frameworks or C#.

When I import the model in C++, I don't get the same result as the output layer in MATLAB. Can you supply an example in C++ (OpenCV or TensorFlow) that gets the same layer output as MATLAB, for a conv layer for example?

Hong Wang

Thanks to Jihang Wang; with your help I set up this tool.

Hi Jihang, thanks for sharing this information, unfortunately it didn't resolve the problem in my case.

Jihang Wang

Hi everyone, I found out why it doesn't work, with the help of the MathWorks Technical Support team, and I want to share my experience here. Basically, there was a function on my path that was shadowing one of the built-in MATLAB functions. I reset my MATLAB path using the code below:
>> restoredefaultpath
>> rehash toolboxcache
>> savepath % note: this command will overwrite my current path preferences.

After that, I downloaded and reinstalled the converter app from this page and reran the export code. Problem solved :) Hope this helps.

Hi Andreas, I just used a custom CNN and checked it with WinMLRunner, I didn't try any pretrained models though.

Hi Gabriel,
Could you tell me which CNN you used?
As mentioned before, I tried the basic googlenet and I couldn't use it with Microsoft ML.
It would be very helpful if I could use the ONNX file exchange.
Thanks in advance

Hi Ting, thanks a lot for the Opset update. However, now I get the same error as Andreas for LSTM networks: "First input does not have rank 2". If I have more than one LSTM layer in the network, the error message changes to "First input tensor must have rank 3". CNNs seem to work, though.

Ting Su

Hi Andreas and Jihang, Can you reach our technical support and send model to us?

Hi Ting, I ran into the same issue with C#. I can export the network in different versions. If I try to load the model into Windows ML, I get a "ShapeInferenceError": the first input does not have rank 2. With Opset v6 it is possible to load the file, but it can't be used. I tested googlenet and compared the ONNX models with a program called "Netron". The difference I found was that the first layer, "Sub", changed from [3x244x244] to [1x3x244x244], but I'm not sure if this is the problem. A second thing: with ONNX v6, Visual Studio can generate a model class automatically, but not with v7 or higher; it seems the file is not recognized as an ONNX model. Can you give advice on how to use MATLAB-trained models in C#?

Jihang Wang

Hi Ting, I have the same issue when loading the ONNX model in C#. I tried saving the model to different Opset versions, but none of them works. Please advise.

Ting Su

Hi Gabriel,
We recently added support for ONNX Opsets 7, 8 and 9. You can specify which Opset to use via the optional input argument 'OpsetVersion' during export. You should be able to download it if you have MATLAB R2018b.

Ting Su

Hi Kenneth,
We saw a similar issue and the fix will be released soon. It will be great if you could send us your MATLAB model to allow us to test it.

It would be great if the export could be updated to Opset version 7 or 8 to allow use with Windows ML.

exportONNXNetwork does not work properly with CNTK and Python. The conversion produces "ValueError: Gemm: Invalid shape, input A and B are expected to be rank=2 matrices."

Hui Yin Lee

Hi, is the code or toolbox available to export a Faster R-CNN model? I get an error saying the model is not a DAGNetwork. I hope I can get some feedback or help here.

Do you know when support for the Constant operator will be added?

Error using importONNXNetwork (line 39)
Node 'node_20': Constant operator is not supported yet.

umit kacar

This code worked for me :) It is very good. Thank you.

Ting Su

Hi Trinh,
We would like to hear more details on the problem with importONNXNetwork(). Have you installed an old version of this converter before?

Trinh Pham

The function importONNXNetwork() doesn't work when I use example above!

MATLAB Release Compatibility
Created with R2018a
Compatible with R2018a to R2019b
Platform Compatibility
Windows macOS Linux