ICE Tech OPTIMIZATION WIN

Inference performance plays a critical role in CPU-based video image analysis workloads. High throughput and low latency are key factors in AI tasks such as classification and object detection, which are commonly used in video image analysis use cases.

Project status: Published/In Market

oneAPI, Artificial Intelligence

Intel Technologies
oneAPI, Intel Opt ML/DL Framework, Intel VTune, OpenVINO, DevCloud

Overview / Usage

ICE Tech provides solutions based on video analytics and AI algorithms such as object detection and human/face detection.

The workload is inference on Intel Xeon (Cascade Lake, CLX) processors using Intel Optimized Caffe with common topologies.
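As a baseline illustration only (not the exact production code), a minimal FP32 inference pass with the pycaffe API might look like the sketch below; the file names, input blob name, and input shape are placeholders for whichever topology is deployed.

```python
import numpy as np
import caffe  # Intel Optimized Caffe exposes the same pycaffe interface

# Hypothetical model files; substitute the actual deployed topology.
PROTOTXT = "deploy.prototxt"
WEIGHTS = "model.caffemodel"

caffe.set_mode_cpu()  # inference runs on the Xeon CPU
net = caffe.Net(PROTOTXT, WEIGHTS, caffe.TEST)

# Dummy input matching a typical 224x224 RGB classification blob (NCHW).
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
net.blobs["data"].reshape(*image.shape)  # "data" is the usual deploy input name
net.blobs["data"].data[...] = image

# Forward pass; the output blob name depends on the topology.
output = net.forward()
probs = output[list(output.keys())[0]]
print("Top-1 class:", int(np.argmax(probs)))
```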

We have deployed these models in parking security systems, gate control systems, shopping centre camera surveillance, and more.

Methodology / Approach

OpenVINO played a dominant role in the classification tasks with MobileNet-series topologies, where we observed 28.40x and 25.74x performance gains respectively over the Intel Optimized Caffe FP32 baseline.
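For illustration, a minimal classification inference with the OpenVINO Inference Engine Python API could look like the following sketch. The IR file names are placeholders for models converted with the Model Optimizer, and the exact class names depend on the OpenVINO release (newer versions use openvino.runtime.Core instead).

```python
import numpy as np
from openvino.inference_engine import IECore  # legacy Inference Engine API

# Hypothetical IR files produced by the Model Optimizer from the Caffe model.
MODEL_XML = "mobilenet_v2.xml"
MODEL_BIN = "mobilenet_v2.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Dummy 224x224 RGB input in NCHW layout.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = exec_net.infer(inputs={input_name: image})
print("Top-1 class:", int(np.argmax(result[output_name])))
```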

In the object detection tasks, Intel Optimized Caffe with oneDNN and an INT8 quantization implementation achieved 2.58x and 2.09x boosts with the SSD-VGG and RPN-VGG topologies respectively, compared to Intel Optimized Caffe with FP32. We also used Intel VTune as the analysis tool during the optimization work.
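The core idea behind the INT8 path can be sketched independently of any framework: weights and activations are mapped to 8-bit integers with a per-tensor scale, so convolutions run on narrower data at higher throughput. The snippet below only illustrates symmetric quantization; it is not the calibration tooling used in the project.

```python
import numpy as np

def quantize_int8(tensor: np.ndarray):
    """Symmetric per-tensor INT8 quantization: x_q = round(x / scale)."""
    scale = np.max(np.abs(tensor)) / 127.0  # map the observed range to [-127, 127]
    q = np.clip(np.round(tensor / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation from the INT8 representation."""
    return q.astype(np.float32) * scale

# Example: the quantization error stays small relative to the tensor's range.
weights = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(dequantize(q, scale) - weights).max()
print(f"scale={scale:.6f}, max abs error={error:.6f}")
```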

Technologies Used

We tried different approaches, such as inference acceleration with the OpenVINO toolkit and an INT8 quantization implementation, and achieved further performance gains compared to the Intel Optimized Caffe FP32 baseline with oneDNN.
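The speedup figures above are throughput ratios against the FP32 baseline. A simple way to measure such ratios is sketched below, assuming hypothetical wrappers (`run_inference_fp32`, `run_inference_int8`) around single-image inference calls like the ones shown earlier.

```python
import time

def throughput_fps(run_inference, iterations: int = 200) -> float:
    """Average frames per second over a fixed number of single-image inferences."""
    start = time.perf_counter()
    for _ in range(iterations):
        run_inference()
    return iterations / (time.perf_counter() - start)

# run_inference_fp32 / run_inference_int8 are hypothetical wrappers around the
# FP32 baseline and the optimized (OpenVINO or INT8) inference paths.
# fps_fp32 = throughput_fps(run_inference_fp32)
# fps_opt = throughput_fps(run_inference_int8)
# print(f"Speedup: {fps_opt / fps_fp32:.2f}x")
```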
