Further, the model needs to be built with operations supported by the TF-TRT integration; otherwise the system will report errors for the unsupported operations. See the reference list for a further description [13].
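As a point of reference, a minimal sketch of how the TF-TRT conversion call is typically invoked on a frozen graph is shown below. The parameter values are illustrative only, and frozen_graph_def and the output node names are assumed to come from the frozen-graph conversion described in the following steps:

import tensorflow.contrib.tensorrt as trt

# Illustrative TF-TRT call: rewrite the TensorRT-compatible segments of a
# frozen GraphDef, leaving the remaining operations as native TensorFlow ops.
trt_graph_def = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,    # frozen GraphDef (see conversion below)
    outputs=["probabilities"],           # hypothetical output node name(s)
    max_batch_size=8,                    # illustrative batch size
    max_workspace_size_bytes=1 << 30,    # 1 GB TensorRT workspace
    precision_mode="FP32",               # "FP32", "FP16", or "INT8"
    minimum_segment_size=3)              # minimum nodes per converted segment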
Figure 8: Workflow for Creating a TensorRT Inference Graph from a TensorFlow Model in Frozen Graph Format
Import the TensorFlow-TensorRT integration library:
import tensorflow.contrib.tensorrt as trt
Convert a SavedModel to a frozen graph and save it to disk:
If not already converted, the trained model needs to be frozen before using TensorRT™ through the frozen-graph method; the function below performs the conversion:
def convert_savedmodel_to_frozen_graph(savedmodel_dir, output_dir):
    # Load the serving MetaGraphDef and look up its default serving signature.
    meta_graph = get_serving_meta_graph_def(savedmodel_dir)
    signature_def = tf.contrib.saved_model.get_signature_def_by_key(
        meta_graph,
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY)
    # Collect the output tensor names declared by the serving signature.
    outputs = [v.name for v in signature_def.outputs.values()]