Issue
I created and trained a model with TF 2.4 (CUDA 11.0, Python 3.7) following the /models/research/object_detection tutorial. There were no errors and training appeared to run fine for 25,000 steps; everything looked normal and TensorBoard showed a total loss below 0.5. It produced a saved_model.pb per the tutorial. I now want to convert it to a frozen graph for inference.
It appears to load fine (this code was run in a Jupyter notebook):
!ls {model_path} -l
model = tf.compat.v2.saved_model.load(export_dir=model_path)
print(type(model))
output:
total 13232
drwxr-xr-x 2 jay jay 4096 Dec 21 10:41 assets
-rw-r--r-- 1 jay jay 13538598 Dec 21 10:41 saved_model.pb
drwxr-xr-x 2 jay jay 4096 Dec 21 10:41 variables
<class 'tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject'>
However, when I begin to convert it, I get an error:
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
output:
AttributeError Traceback (most recent call last)
<ipython-input-73-50e1947f8357> in <module>
2 full_model = tf.function(lambda x: model(x))
3 full_model = full_model.get_concrete_function(
----> 4 tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
AttributeError: '_UserObject' object has no attribute 'inputs'
In addition, saved_model_cli seems to work:
!saved_model_cli show --dir {model_path} --all
abbreviated output:
2020-12-22 11:38:23.453843: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_UINT8
shape: (1, -1, -1, 3)
name: serving_default_input_tensor:0
<content removed for brevity>
Defined Functions:
Function Name: '__call__'
Option #1
Callable with:
Argument #1
input_tensor: TensorSpec(shape=(1, None, None, 3), dtype=tf.uint8, name='input_tensor')
Is my model bad or am I doing something wrong here? Should I be using tf.keras to load the model?
tf.keras.models.load_model(model_path, custom_objects=None, compile=True, options=None)
When I used tf.keras, I received an error on loading:
~/anaconda3/envs/tf24/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py in infer_inputs_from_restored_call_function(fn)
980 return tensor_spec.TensorSpec(defun.common_shape(x.shape, y.shape),
981 x.dtype, x.name)
--> 982 spec = fn.concrete_functions[0].structured_input_signature[0][0]
983 for concrete in fn.concrete_functions[1:]:
984 spec2 = concrete.structured_input_signature[0][0]
IndexError: list index out of range
Solution
You could normally use Keras to help you get to a frozen graph, but (as of 2021-01-04) that doesn't work: you'll encounter TensorFlow issue 43527, which is the IndexError noted above.
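If you specifically need a frozen graph, the usual non-Keras route is to freeze the serving signature's concrete function directly. A rough sketch (I haven't verified this against your exact model; model_path is the same directory as in the question):

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.saved_model.load(model_path)

# The serving signature is already a ConcreteFunction, so there is no
# need for model.inputs (which only exists on Keras models).
concrete_func = model.signatures['serving_default']

# Inline the model's variables as constants, producing a frozen graph.
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Serialize the frozen GraphDef to disk.
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir='.',
                  name='frozen_graph.pb',
                  as_text=False)

The key point is that model.signatures['serving_default'] is already a concrete function, so you never touch the model.inputs attribute that caused your AttributeError.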
For inference there is a workaround that does not use Keras. Go through the Colab tutorials in tensorflow/models/research/object_detection/colab_tutorials/.
Specifically, work through inference_from_saved_model_tf2_colab.ipynb. With minor edits it will run locally; you don't have to run it on Colab. It works well and will show you the pattern for using your model without the Keras problems, as sketched below.
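For reference, the core pattern in that notebook looks roughly like this (a minimal sketch; model_path is the same directory as above and the test image is placeholder data):

import numpy as np
import tensorflow as tf

# Load the SavedModel; the returned object is directly callable.
detect_fn = tf.saved_model.load(model_path)

# A single test image as an HxWx3 uint8 array (placeholder data).
image_np = np.zeros((480, 640, 3), dtype=np.uint8)

# The model expects a batched uint8 tensor of shape (1, H, W, 3),
# matching the serving_default signature shown by saved_model_cli.
input_tensor = tf.convert_to_tensor(image_np[None, ...], dtype=tf.uint8)

detections = detect_fn(input_tensor)

# Outputs are batched tensors keyed by name: detection_boxes,
# detection_scores, detection_classes, num_detections, etc.
num_detections = int(detections.pop('num_detections'))
print(num_detections)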
Answered By - jduff1075