Inference using Neural Compute Stick with custom models

Sayak Paul

Kolkata, West Bengal


This project is about exporting custom models to an NCS-compatible graph and then using the NCS for inference.

Project status: Under Development

Artificial Intelligence

Intel Technologies
Intel Python, Movidius NCS

Overview / Usage

This project demonstrates how to employ a Neural Compute Stick (NCS) for inference on a custom model trained with the `keras` high-level API that ships with TensorFlow. If you are not familiar with the NCS, you can check out this article for a good overview. Briefly, an NCS is a VPU (vision processing unit) for performing inference with deep learning models developed in either Caffe or TensorFlow. The advantage is that it is extremely low-powered and requires no cloud infrastructure, yet it still gives a significant boost in inference speed. The good news is that it also works seamlessly with Raspberry Pis. The NCS is something of a little beast and encourages heavy computation to be performed on edge devices.
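As a sketch of what inference on the stick looks like, the NCSDK v2 Python API (`mvnc`) follows an open-device / allocate-graph / queue-inference pattern. The graph file name (`graph`), graph name, and input shape below are illustrative assumptions, not the project's actual values:

```python
import numpy as np
from mvnc import mvncapi

# Find and open the first attached NCS device.
devices = mvncapi.enumerate_devices()
device = mvncapi.Device(devices[0])
device.open()

# Load the compiled graph file produced by mvNCCompile
# ('graph' is a placeholder file name).
with open('graph', 'rb') as f:
    graph_buffer = f.read()
graph = mvncapi.Graph('custom_model')

# Allocate the graph on the device along with input/output FIFO queues.
input_fifo, output_fifo = graph.allocate_with_fifos(device, graph_buffer)

# Assumed input: a single 28x28 grayscale image, float32 in [0, 1].
input_tensor = np.random.rand(1, 28, 28, 1).astype(np.float32)

# Queue the inference and read back the result.
graph.queue_inference_with_fifo_elem(input_fifo, output_fifo, input_tensor, None)
output, _ = output_fifo.read_elem()
print('Predicted class:', np.argmax(output))

# Release device resources.
input_fifo.destroy()
output_fifo.destroy()
graph.destroy()
device.close()
device.destroy()
```

Note that the FIFO-based API shown here is specific to NCSDK v2 (which this project uses); the v1 API used `LoadTensor`/`GetResult` calls instead.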

Methodology / Approach

For training purposes, I used Google Colab together with `keras`, the high-level API that ships with TensorFlow nowadays. After developing the model with `keras`, I converted it to a native TensorFlow graph. That graph was then used to produce another graph, which is the actual file the NCS consumes during inference.
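The conversion steps above can be sketched roughly as follows with TensorFlow 1.x: save the trained `tf.keras` model's underlying session as a native TensorFlow checkpoint, then compile the resulting `.meta` file with the NCSDK's `mvNCCompile` tool. The model architecture, layer names, node names, and file paths here are illustrative assumptions, not the ones used in the repository:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# Build (and, in practice, train) a tf.keras model as usual.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(784,),
                          name='dense_1'),
    tf.keras.layers.Dense(10, activation='softmax', name='dense_2'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Grab the TF 1.x session that tf.keras is using and save it as a
# native TensorFlow checkpoint (.meta graph plus weights).
sess = K.get_session()
saver = tf.train.Saver()
saver.save(sess, './tf_model/custom_model')

# The checkpoint is then compiled into the binary graph file the NCS
# loads at inference time, e.g. (input/output node names are guesses):
#   mvNCCompile tf_model/custom_model.meta -s 12 \
#       -in dense_1_input -on dense_2/Softmax -o graph
```

`mvNCCompile` parses the saved graph between the named input and output ops and emits the graph file that the stick executes.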

All of the remaining development was done locally on a 64-bit Ubuntu 16.04 VM.

Technologies Used

  • NCSDK v2

  • Python 3.5.2

  • TensorFlow 1.11.1 and TensorFlow 1.13.1

  • VirtualBox

Repository

https://github.com/sayakpaul/NCS_With_Custom_Models_In_TF_Keras
