Inference of a TensorFlow model using OpenVINO

Diazonic Labs
2 min read · Mar 10, 2024


Step 1: Keep your trained TensorFlow model ready. OpenVINO originally shipped a command-line conversion tool, but starting with the 2023 releases it also offers a Python API, which is what this blog uses. You can install the OpenVINO library using pip:

%pip install -q openvino

Step 2: Instantiate the Core class. This is the primary entry point for loading, compiling, and running inference on models.

import openvino as ov
core = ov.Core()

Step 3: Convert the model into OpenVINO format using convert_model. The input argument fixes the input shape (here, a batch of one 180×180 RGB image).

ir_model = ov.convert_model(final_model, input=[1,180,180,3])

Step 4: To save the model in OpenVINO format, use the save_model function.

ov.save_model(ir_model, "<filename.xml>")

A trained model generally consists of one or more files that fully represent the neural network (its architecture and weights). A model can be stored in different ways. For example:

OpenVINO IR: pair of .xml and .bin files
TensorFlow: .pb file
Keras: .keras file

IR (Intermediate Representation) is OpenVINO's own format, consisting of an .xml file (topology) and a .bin file (weights). Once saved as OpenVINO IR, the model can be deployed with maximum performance: because it is already optimized for OpenVINO inference, it can be read, compiled, and run with no additional conversion delay. save_model also compresses weights to FP16 by default.

Step 5: Read the model back from the OpenVINO IR file (read_model also accepts other formats, such as ONNX).

model = core.read_model('<filename.xml>')

Step 6: Compile the model for one or more target devices. This produces an ov.CompiledModel, which applies device-specific optimizations and is ready for inference. If no device is specified, OpenVINO selects one automatically.

compiled_model = core.compile_model(model=model)

Full code: https://colab.research.google.com/drive/1KAcuMv9w9iSL4VV37qPMiwMekcID1XwL#scrollTo=TfxU7POtnUHv


Written by Diazonic Labs

Internet of Things, Cloud Computing, Edge Computing, Artificial Intelligence
