Implementation of a Low-Power Image Inference Engine on FPGA

Sanjaya MV


Bengaluru, Karnataka


This project aims to develop a hardware accelerator on FPGA for accelerating the inference of deep learning models. Incorporating low-power design techniques would make it a good candidate for inference on the edge.

Project status: Under Development

Artificial Intelligence

Intel Technologies
Intel FPGA

Overview / Usage

The high computational complexity of deep learning models makes them unsuitable for deployment on edge devices that run on conventional processors. I intend to accelerate these computations by offloading them to a hardware accelerator implemented on an FPGA. The hardware is optimized for low power consumption. The developed system is intended for use in embedded platforms for image and speech recognition applications. I aim to test and benchmark the design by running various visual recognition models on the developed platform.

Methodology / Approach

A basic processing unit that performs multiply-accumulate (MAC) operations is designed in Verilog. Multiple instances of this MAC unit operate in parallel to perform the computation involved in the forward-propagation phase of the network. One of the optimizations is to detect a zero operand and skip that operation, saving power and cycles. I am using a heterogeneous computing platform (FPGA + processor) during the development phase of the project.
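As a rough illustration of the zero-skipping MAC idea, the sketch below shows one possible Verilog formulation. The module name, port names, and bit widths are assumptions for illustration only and are not taken from the actual design.

// Minimal sketch of a zero-skipping multiply-accumulate (MAC) unit.
// All names and widths are hypothetical, not from the project source.
module mac_zero_skip #(
    parameter DATA_W = 8,    // operand width (e.g. 8-bit quantized values)
    parameter ACC_W  = 32    // accumulator width chosen to avoid overflow
) (
    input  wire                     clk,
    input  wire                     rst,
    input  wire                     valid,       // operands are valid this cycle
    input  wire signed [DATA_W-1:0] activation,
    input  wire signed [DATA_W-1:0] weight,
    output reg  signed [ACC_W-1:0]  acc
);
    // Skip the operation when either operand is zero: the product would be
    // zero anyway, so the accumulator is left unchanged and the multiplier
    // datapath does not toggle, reducing dynamic power.
    wire skip = (activation == {DATA_W{1'b0}}) || (weight == {DATA_W{1'b0}});

    always @(posedge clk) begin
        if (rst)
            acc <= {ACC_W{1'b0}};
        else if (valid && !skip)
            acc <= acc + activation * weight;
    end
endmodule

In a full MAC array, the skip signal would typically also gate the operand registers or the multiplier's clock so the datapath stays idle on skipped cycles, which is where most of the dynamic power saving comes from.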
