Benchmarking of Deep learning models on modern Intel platforms

Ayan Das

In this project, we explore the capabilities of modern Intel hardware and extract the maximum computational benefit from it. In particular, we focus on deep learning, which in practice demands enormous computational power. We benchmarked the standard VGG-16 network on the Skylake platform with an optimized software stack and execution policies to ensure maximum resource utilisation.

Project status: Concept

Artificial Intelligence

Intel Technologies
MKL

Overview / Usage

At a high level, this project aims to make optimal use of all the resources available to an application. Deep learning, being one of the most compute-heavy workloads, requires not only a large amount of raw computation but also careful management of thread execution and memory. This project combines methodologies that favour the kind of high-density computation found in deep learning applications. We focus mainly on the "training" part of the deep learning pipeline, which for a long time has not been dominated by CPU platforms.

Methodology / Approach

In this project, we gather methodologies that allow applications to make maximal use of CPU resources. A few of them are listed here, with a small illustrative sketch after the list:

  1. Vectorization (SIMD)
  2. Controlled memory allocation
  3. Core binding and controlled thread execution
  4. etc.
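
As a rough illustration of items 1-3, the C sketch below is our own minimal example rather than the project's actual benchmark code; the core IDs, thread count, 64-byte alignment and matrix size are assumptions chosen for illustration. It pins the process to a few cores, caps the MKL thread pool to match, allocates aligned buffers, and times a single-precision GEMM, the kind of SIMD-optimized kernel that exercises the AVX-512 units on Skylake. The program must be built and linked against MKL (link line omitted here).

    /* Illustrative sketch only: core IDs, thread count and matrix size
     * are placeholders, not the project's actual benchmark settings. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mkl.h>

    int main(void) {
        /* Core binding: restrict the process to cores 0-3. */
        cpu_set_t mask;
        CPU_ZERO(&mask);
        for (int c = 0; c < 4; ++c) CPU_SET(c, &mask);
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* Controlled thread execution: one MKL thread per bound core. */
        mkl_set_num_threads(4);

        /* Controlled memory allocation: 64-byte aligned buffers so the
         * AVX-512 units on Skylake can use aligned loads/stores. */
        const int n = 2048;
        float *A = mkl_malloc((size_t)n * n * sizeof(float), 64);
        float *B = mkl_malloc((size_t)n * n * sizeof(float), 64);
        float *C = mkl_malloc((size_t)n * n * sizeof(float), 64);
        if (!A || !B || !C) return 1;
        for (long i = 0; i < (long)n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

        /* Vectorization: MKL's SGEMM kernel is SIMD-optimized internally;
         * timing it gives a rough view of sustained throughput. */
        double t0 = dsecnd();
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, A, n, B, n, 0.0f, C, n);
        double t1 = dsecnd();
        printf("SGEMM %d x %d: %.3f s\n", n, n, t1 - t0);

        mkl_free(A); mkl_free(B); mkl_free(C);
        return 0;
    }

In a full benchmark, the same controls (the affinity mask and the MKL/OpenMP thread counts) would be applied to the process running the VGG-16 training workload rather than to a standalone GEMM.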

Technologies Used

  1. Intel Xeon processors (Skylake)
  2. MKL
  3. NUMA-aware memory placement (libnuma); a small sketch follows below
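
To show how the libnuma entry above might be used, here is a small, self-contained sketch (the buffer size and build details are assumptions, not the project's code; link with -lnuma). It allocates a buffer on the NUMA node local to the CPU the thread is running on and touches it from that thread, so subsequent accesses avoid remote-node memory traffic.

    /* Illustrative sketch of node-local allocation with libnuma.
     * The 256 MiB buffer size is an arbitrary placeholder. */
    #define _GNU_SOURCE
    #include <numa.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "libnuma: NUMA not available on this system\n");
            return 1;
        }

        /* Find the NUMA node local to the CPU this thread is running on. */
        int cpu  = sched_getcpu();
        int node = numa_node_of_cpu(cpu);
        printf("running on cpu %d, local node %d\n", cpu, node);

        /* Allocate the buffer on that node so accesses stay node-local. */
        size_t bytes = 256UL << 20;           /* 256 MiB */
        void  *buf   = numa_alloc_onnode(bytes, node);
        if (buf == NULL) {
            fprintf(stderr, "numa_alloc_onnode failed\n");
            return 1;
        }
        memset(buf, 0, bytes);                /* touch pages from this thread */

        numa_free(buf, bytes);
        return 0;
    }

For whole-process placement, a common alternative is to launch the benchmark under numactl, e.g. numactl --cpunodebind=0 --membind=0 <benchmark>.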