Issue
I have a model that I trained myself, saved in .h5 format. It works fine on its own, but I need to convert it to .onnx format to deploy it inside the Unity Engine. I searched for how to convert .h5 models to .onnx and stumbled upon the keras2onnx library. Following some tutorials, I ended up with this:
# Install the converter packages:
!pip install git+https://github.com/microsoft/onnxconverter-common
!pip install git+https://github.com/onnx/keras-onnx

import keras
import keras2onnx
import onnx
from tensorflow.keras.models import load_model

# Load the trained .h5 model:
model = load_model('/content/drive/MyDrive/Sae/TesisProgra/CNNs/ParagrapshVsDrawings/REDPropiaFinal.h5')

# Convert the Keras model to ONNX (this is the line that raises the error):
onnx_model = keras2onnx.convert_keras(model, model.name)

# Save the converted model:
temp_model_file = '/content/drive/MyDrive/Sae/TesisProgra/CNNs/ParagrapshVsDrawings/REDPropiaFinal.onnx'
onnx.save_model(onnx_model, temp_model_file)
The problem is that I keep getting the error: AttributeError: 'KerasTensor' object has no attribute 'graph'. I tried different implementations of the code, but I keep getting that same error, so maybe there is something wrong with my trained model? I really don't know and I'm quite lost. Can anyone help? This is my first time dealing with the .onnx format.
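For reference, this is how I check which versions the Colab runtime actually has installed, in case this is a version mismatch rather than a problem with the model itself:
# Show the installed versions of the relevant packages in the Colab runtime:
!pip show tensorflow keras2onnx onnxconverter-common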
If you want to see my model, here is a link. It was trained to analyze images and distinguish between images containing drawings and images containing handwriting. Lastly, my CNN's model.summary() is this one:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input (Conv2D) (None, 436, 308, 64) 640
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 218, 154, 64) 0
_________________________________________________________________
conv2d (Conv2D) (None, 216, 152, 32) 18464
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 108, 76, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 106, 74, 16) 4624
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 53, 37, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 31376) 0
_________________________________________________________________
dense (Dense) (None, 64) 2008128
_________________________________________________________________
dense_1 (Dense) (None, 32) 2080
_________________________________________________________________
dense_2 (Dense) (None, 16) 528
_________________________________________________________________
dense_3 (Dense) (None, 1) 17
=================================================================
Total params: 2,034,481
Trainable params: 2,034,481
Non-trainable params: 0
_________________________________________________________________
I'm currently using Google Colaboratory for this implementation, and my model was trained with TensorFlow 2.0.
Solution
This doesn't directly answer your question, but here is a workaround:
You could try using the tf2onnx package for conversion.
The flow is:
- Export the model to the SavedModel format.
- Convert the exported SavedModel to ONNX.
I had success converting the provided .h5 model:
# Install helper packages:
!pip install tf2onnx onnx onnxruntime
# Load model from .h5 and save as Saved Model:
import tensorflow as tf
model = tf.keras.models.load_model("REDPropiaFinal.h5")
tf.saved_model.save(model, "tmp_model")
# Convert in bash:
!python -m tf2onnx.convert --saved-model tmp_model --output "REDPropiaFinal.onnx"
The above should create the REDPropiaFinal.onnx file.
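If you prefer to stay in Python and skip the intermediate SavedModel directory, recent tf2onnx versions also expose a from_keras helper. A minimal sketch, assuming tf2onnx 1.9 or newer is installed and that opset 13 is supported by your runtime:
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("REDPropiaFinal.h5")

# Describe the model input; the batch dimension is left dynamic.
spec = (tf.TensorSpec((None, *model.input_shape[1:]), tf.float32, name="input"),)

# Convert the in-memory Keras model and write the .onnx file in one call.
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="REDPropiaFinal.onnx"
)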
Let us check inference:
# Get original output
noise = tf.random.uniform((1, *model.input_shape[1:]))
original_out = model.predict(noise)
print(original_out) # [[1.]]
# Get converted output:
import onnxruntime
onnx_session = onnxruntime.InferenceSession("REDPropiaFinal.onnx")
onnx_inputs = {onnx_session.get_inputs()[0].name: noise.numpy()}
onnx_output = onnx_session.run(None, onnx_inputs)[0]
print(onnx_output) # [[1.]]
# Assure the same:
tf.debugging.assert_near(original_out, onnx_output)
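As an extra sanity check on the exported file itself, the onnx package installed above ships a structural validator:
import onnx

# Load the exported model and validate its graph structure.
exported = onnx.load("REDPropiaFinal.onnx")
onnx.checker.check_model(exported)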
Hope this helped!
Answered By - sebastian-sz