
How To Verify Optimized Model In Tensorflow

I'm following a tutorial from codelabs. They use this script to optimize the model:

python -m tensorflow.python.tools.optimize_for_inference \
    --input=tf_files/retrained_graph.pb
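For context, the optimize_for_inference tool also takes an output path and the graph's input and output node names. A minimal sketch of a full invocation, where optimized_graph.pb and the node names "input" and "final_result" are assumptions that must match your retrained graph:

python -m tensorflow.python.tools.optimize_for_inference \
    --input=tf_files/retrained_graph.pb \
    --output=tf_files/optimized_graph.pb \
    --input_names="input" \
    --output_names="final_result"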

Solution 1:

I've solved the error by changing the placeholder's type to tf.float32 when exporting the model:

def my_serving_input_fn():
    # Use a float32 placeholder so the exported model accepts float inputs
    input_data = {
        "featurename": tf.placeholder(tf.float32, [None, 4], name='inputtensors')
    }
    return tf.estimator.export.ServingInputReceiver(input_data, input_data)
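This serving input function is then passed to the estimator's export call. A minimal sketch, assuming an already-trained tf.estimator.Estimator named classifier (a hypothetical name) and an export base directory of "export":

# "classifier" is a hypothetical, already-trained tf.estimator.Estimator;
# export a SavedModel that uses the float32 serving input function above
export_dir = classifier.export_savedmodel("export", my_serving_input_fn)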

Then change the prediction function above to:

import tensorflow as tf

def predict(model_path, input_data):
    # Load the frozen graph and the names of its input/output tensors
    tf_model, tf_input, tf_output = load_graph(model_path)

    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)

    with tf.Session(graph=tf_model) as sess:
        # Feed a single example as a batch of size 1
        predictions = sess.run(y, feed_dict={x: [input_data]})

    return predictions
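The function above relies on a load_graph helper that isn't shown here. A minimal sketch, assuming the model is a frozen GraphDef (.pb) and that the default tensor names are the placeholder name used above plus a hypothetical output name:

import tensorflow as tf

def load_graph(model_path, input_name="inputtensors:0", output_name="prediction:0"):
    # Read the frozen GraphDef from disk
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(model_path, "rb") as f:
        graph_def.ParseFromString(f.read())

    # Import it into a fresh graph without a name scope prefix
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")

    # The output tensor name is an assumption; adjust it to your model
    return graph, input_name, output_name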

After freezing the model, the prediction code above works. Unfortunately, it raises another error when trying to load the exported .pb directly, without freezing it first.
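The exported saved_model.pb is not a frozen GraphDef, which is likely why loading it directly fails; freezing folds the checkpoint variables into constants first. A minimal sketch of the freeze step using the freeze_graph tool, where the timestamped export directory and the output node name "prediction" are placeholders that must match your model:

python -m tensorflow.python.tools.freeze_graph \
    --input_saved_model_dir=export/1234567890 \
    --output_node_names=prediction \
    --output_graph=tf_files/frozen_graph.pb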
