Shunt connection: An intelligent skipping of contiguous blocks for optimizing MobileNet-V2

Brijraj Singh


Roorkee, Uttarakhand


The proposed deep network connection is applied to the state-of-the-art MobileNet-V2 architecture and yields two cases, ranging from a 33.5% reduction in FLOPs (one connection) up to a 43.6% reduction in FLOPs (two connections) with minimal impact on accuracy.

Project status: Published/In Market

Artificial Intelligence

Intel Technologies
AI DevCloud / Xeon


Overview / Usage

Enabling deep neural networks in tightly resource-constrained environments such as mobile phones and cameras is a pressing need. Existing optimized architectures like SqueezeNet and MobileNet are devised to serve this purpose by using parameter-friendly operations and structures such as point-wise convolutions and bottleneck layers. This work focuses on reducing the number of floating-point operations involved in inference through an already compressed deep learning architecture. The optimization exploits the advantage of residual connections at a macroscopic level. The paper proposes a novel connection on top of a deep learning architecture: locate the blocks of a pretrained network that carry a relatively low knowledge quotient, then bypass those blocks with an intelligent skip connection, named here the Shunt connection. The proposed method replaces computationally heavy blocks with a computation-friendly shunt connection. In the given architecture, up to two vulnerable locations are selected: 6 contiguous blocks are selected and skipped at the first location and 2 contiguous blocks at the second, leveraging 2 shunt connections. Applied to the state-of-the-art MobileNet-V2 architecture, the proposed connection manifests two cases, ranging from a 33.5% reduction in FLOPs (one connection) up to a 43.6% reduction in FLOPs (two connections) with minimal impact on accuracy.
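As a rough illustration of the idea, the PyTorch sketch below defines a hypothetical `ShuntConnection` module (the name and internals are assumptions for illustration, not the paper's exact architecture): a strided 1x1 convolution adapts the channel count and spatial size so the shunt's output matches what the skipped contiguous blocks would have produced.

```python
import torch
import torch.nn as nn

class ShuntConnection(nn.Module):
    """Illustrative sketch (not the paper's exact design): a lightweight
    bypass that replaces a run of contiguous blocks. A strided point-wise
    (1x1) convolution adapts channels and spatial resolution so the
    shunt's output shape matches that of the skipped blocks."""

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1,
                      stride=stride, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU6(inplace=True),
        )

    def forward(self, x):
        return self.proj(x)

# Suppose the skipped blocks map (N, 32, 56, 56) to (N, 96, 14, 14),
# i.e. a total stride of 4; the shunt reproduces that shape cheaply.
shunt = ShuntConnection(32, 96, stride=4).eval()
with torch.no_grad():
    out = shunt(torch.randn(1, 32, 56, 56))
print(out.shape)  # torch.Size([1, 96, 14, 14])
```

The shunt costs only a single point-wise convolution, which is why swapping it in for several inverted-residual blocks cuts FLOPs so sharply.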

Methodology / Approach

This work proposes a novel connection that extends the idea of the residual connection in deep learning architectures and is used to bypass computationally expensive intermediate blocks. 'Shunt' is a position-dependent connection that exploits the concept of residual connections and can be used with any deep neural network.

Novelty and contribution of the work can be pointed out as:

• This work proposes a novel connection in the deep learning architecture, referred to as the ‘Shunt’ connection, which can be placed as a flyover between any two locations of a given network.

• The proposed connection reduces the number of floating-point operations by up to 43.6% with minimal impact on the model’s performance.

Technologies Used

Software: PyTorch, Python, Anaconda, Intel Dev Cloud.

Hardware: Intel Xeon processors, Nvidia 1080Ti GPU.

Repository

https://www.sciencedirect.com/science/article/pii/S0893608019301790
