SSDMobileNetv2 implementation for OpenVINO
Saksham Sharma
Chennai, Tamil Nadu
Implementation of SSDMobileNetv2 in OpenVINO
Project status: Published/In Market
oneAPI, Artificial Intelligence
Overview / Usage
This project demonstrates how to convert a TensorFlow model to run on OpenVINO and shows that the inference results are identical between the two frameworks.
Methodology / Approach
- Convert the model: Use the OpenVINO Model Optimizer to convert the SSD-MobileNetv2 model from TensorFlow to the Intermediate Representation (IR) format used by OpenVINO. This will generate two files: an XML file that describes the network topology and a binary file that contains the weights.
- Run inference with TensorFlow: Load the original SSD-MobileNetv2 model in TensorFlow and run inference on a test image. Record the results for later comparison.
- Run inference with OpenVINO: Load the IR files generated in step 1 into the OpenVINO Inference Engine and run inference on the same test image used in step 2. Record the results.
- Compare the results: Check that the TensorFlow and OpenVINO outputs match (e.g., within a small numerical tolerance), confirming that the conversion preserved the model's behavior.
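The steps above can be sketched in Python. This is a minimal illustration, not the project's actual code: the paths (`saved_model/`, `ssd_mobilenet_v2.xml`) and the test image are placeholders, and it assumes a recent OpenVINO release where `openvino.convert_model` and `openvino.save_model` are available (older releases invoked the Model Optimizer via the `mo` command-line tool instead).

```python
import numpy as np


def results_match(tf_out, ov_out, tol=1e-4):
    """Return True if two inference outputs agree within a small tolerance."""
    return np.allclose(tf_out, ov_out, atol=tol)


def run_comparison(saved_model_dir, image):
    """Convert a TF SavedModel to OpenVINO IR and compare inference results.

    `saved_model_dir` and `image` are placeholders for the project's
    SSD-MobileNetv2 SavedModel directory and a preprocessed test image.
    """
    import tensorflow as tf
    import openvino as ov

    # Step 1: convert the TensorFlow model to OpenVINO IR.
    # save_model writes the XML topology file plus a BIN weights file.
    ov_model = ov.convert_model(saved_model_dir)
    ov.save_model(ov_model, "ssd_mobilenet_v2.xml")

    # Step 2: run inference with the original TensorFlow model.
    tf_model = tf.saved_model.load(saved_model_dir)
    tf_out = tf_model(image)

    # Step 3: run inference on the same image with OpenVINO.
    compiled = ov.Core().compile_model("ssd_mobilenet_v2.xml", "CPU")
    ov_out = compiled(np.asarray(image))[0]

    # Step 4: verify the two frameworks produce matching results.
    return results_match(tf_out, ov_out)
```

In practice, detection models return several outputs (boxes, scores, classes), so each output tensor would be compared individually with the same tolerance check.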
Technologies Used
- TensorFlow: for loading the original model and running baseline inference
- OpenVINO: for model conversion and inference on the converted model