Arhat



Project status: Published/In Market

oneAPI, Artificial Intelligence

Intel Technologies
oneAPI, Intel Iris Xe, OpenVINO

Overview / Usage

Arhat is a cross-platform deep learning framework that converts neural network descriptions into lean standalone executable code. Arhat has been designed for deployment of deep learning inference workflows in the cloud and at the edge. The project objectives are twofold:

  • provide a unified platform-agnostic approach towards deep learning deployment
  • facilitate performance evaluation of various platforms on a common basis

Arhat aims to overcome key challenges of modern deep learning: high fragmentation of the hardware and software landscape, and the cumbersome software stacks that this fragmentation makes necessary.

Methodology / Approach

Unlike conventional deep learning frameworks, Arhat translates neural network descriptions directly into platform-specific executable code. This code interacts directly with the platform's library of deep learning primitives and has no other external dependencies. Furthermore, the generated code includes only the functionality essential for running the given source model on the selected platform. This approach yields lean code and substantially streamlines the deployment process.

Code generation for different platforms is implemented via replaceable back-ends. Back-ends are currently available for Intel (oneDNN), NVIDIA (CUDA / cuDNN or TensorRT), and AMD (HIP / MIOpen).
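The replaceable back-end idea can be illustrated with a minimal Go sketch. The types and names below (`Op`, `Backend`, `Generate`, and the emitted function names) are hypothetical illustrations, not Arhat's actual API: a single code generator walks the op sequence, and swapping the back-end retargets the emitted source.

```go
package main

import (
	"fmt"
	"strings"
)

// Op is a hypothetical node in a network description
// (illustrative only, not Arhat's real representation).
type Op struct {
	Kind   string // e.g. "conv", "relu"
	Inputs []string
	Output string
}

// Backend emits platform-specific source for one op;
// swapping implementations retargets the generated code.
type Backend interface {
	Emit(op Op) string
}

// oneDNNBackend sketches emission of oneDNN-flavored calls.
type oneDNNBackend struct{}

func (oneDNNBackend) Emit(op Op) string {
	return fmt.Sprintf("onednn_%s(%s, &%s);", op.Kind, strings.Join(op.Inputs, ", "), op.Output)
}

// cudnnBackend sketches emission of cuDNN-flavored calls.
type cudnnBackend struct{}

func (cudnnBackend) Emit(op Op) string {
	return fmt.Sprintf("cudnn_%s(handle, %s, &%s);", op.Kind, strings.Join(op.Inputs, ", "), op.Output)
}

// Generate walks the op sequence once and emits a standalone code body;
// the same driver serves every back-end.
func Generate(ops []Op, b Backend) string {
	var sb strings.Builder
	for _, op := range ops {
		sb.WriteString(b.Emit(op))
		sb.WriteString("\n")
	}
	return sb.String()
}

func main() {
	net := []Op{
		{Kind: "conv", Inputs: []string{"x", "w"}, Output: "y"},
		{Kind: "relu", Inputs: []string{"y"}, Output: "z"},
	}
	fmt.Print(Generate(net, oneDNNBackend{}))
	fmt.Print(Generate(net, cudnnBackend{}))
}
```

Because only `Emit` differs between back-ends, adding support for a new platform library amounts to supplying one more implementation of the interface.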

Technologies Used

The Arhat framework is implemented in the Go programming language. Thin platform-specific runtime libraries required to run the generated code are implemented in C++. Arhat generates code that can run on any modern Intel computing hardware, including Xeon CPUs and Xe GPUs, and relies on the oneDNN library for an efficient cross-platform implementation of deep learning operations on Intel hardware.

Arhat includes an OpenVINO interoperability layer that consumes models produced by the OpenVINO Model Optimizer. This approach facilitates integration of Arhat into the Intel deep learning ecosystem and opens a way to use Intel tools for native deployment of neural networks on non-Intel platforms. In particular, Arhat can deploy OpenVINO models on a wide range of NVIDIA GPUs using the cuDNN and TensorRT inference libraries.
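The Model Optimizer emits a model as an IR pair: an `.xml` topology file plus a `.bin` weights file. A minimal Go sketch of what an interoperability layer's first step might look like is reading the layer list out of the topology XML (the struct names here are illustrative; only a few IR attributes are modeled, and the weights file is ignored):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// IRNet models just enough of the OpenVINO IR .xml topology
// for this sketch (illustrative subset of the format).
type IRNet struct {
	XMLName xml.Name  `xml:"net"`
	Name    string    `xml:"name,attr"`
	Layers  []IRLayer `xml:"layers>layer"`
}

// IRLayer captures the per-layer attributes used here.
type IRLayer struct {
	ID   string `xml:"id,attr"`
	Name string `xml:"name,attr"`
	Type string `xml:"type,attr"`
}

// ParseIR extracts the layer list from IR XML; an interop layer
// would then map each layer type onto an internal op before
// handing the graph to code generation.
func ParseIR(data []byte) (IRNet, error) {
	var net IRNet
	err := xml.Unmarshal(data, &net)
	return net, err
}

func main() {
	ir := []byte(`<net name="demo" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
</net>`)
	net, err := ParseIR(ir)
	if err != nil {
		panic(err)
	}
	for _, l := range net.Layers {
		fmt.Printf("%s: %s (%s)\n", l.ID, l.Name, l.Type)
	}
}
```

A real interoperability layer also has to resolve edges between layers and load the binary weights, but the topology file alone already carries the operation types needed to drive back-end selection.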
