Scientific Deep Learning on Intel FPGA
Dave Ojika
Demonstration of multiple AI workloads on multiple FPGAs.
Project status: Under Development
Intel Technologies
OpenVINO, Intel FPGA
Overview / Usage
AI and deep learning are experiencing explosive growth in almost every domain involving the analysis of big data. Deep learning with Deep Neural Networks (DNNs) has shown great promise for such scientific data analysis applications. However, traditional CPU-based sequential computing can no longer meet the requirements of mission-critical applications, which are compute-intensive and demand low latency and high throughput. Heterogeneous computing (HGC), in which CPUs are integrated with accelerators such as GPUs and FPGAs, offers unique capabilities to accelerate DNNs. Collaborating researchers from SHREC at the University of Florida, NERSC at Lawrence Berkeley National Lab, CERN Openlab, Dell EMC, and Intel are studying the application of HGC to scientific problems using DNN models. This demo focuses on the use of FPGAs to accelerate the inference stage of the HGC workflow. We demonstrate four use cases and results for inference with state-of-the-art DNN models for scientific data analysis, using the Intel Distribution of OpenVINO toolkit running on an Intel Programmable Acceleration Card (PAC) equipped with an Arria 10 GX FPGA. The use cases are CosmoGAN for cosmology, HEP-CNN for high-energy physics, 3D-GAN for simulation, and 3D-UNET for radiology.
Methodology / Approach
The demo uses the Intel Distribution of OpenVINO toolkit to run DNN inference on an Intel FPGA (a PAC equipped with an Arria 10 GX), as sketched below.
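As a rough illustration of this inference path, the sketch below loads a converted OpenVINO IR model and runs a single batch through the FPGA using the HETERO:FPGA,CPU device of the legacy Inference Engine Python API (the releases that shipped the FPGA plugin for the PAC). The model file names, input data, and exact API calls are illustrative assumptions, not the project's actual code; in the real workflow the trained DNN models would first be converted to IR format with the Model Optimizer.

```python
# Minimal sketch: OpenVINO IR inference on an Intel FPGA (PAC with Arria 10 GX)
# using the legacy Inference Engine Python API. Paths and shapes are placeholders.
import numpy as np
from openvino.inference_engine import IECore

MODEL_XML = "hep_cnn.xml"   # hypothetical IR produced by the Model Optimizer
MODEL_BIN = "hep_cnn.bin"

ie = IECore()
net = ie.read_network(model=MODEL_XML, weights=MODEL_BIN)

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))

# HETERO:FPGA,CPU offloads supported layers to the FPGA and falls back to the
# CPU for any layers the FPGA plugin cannot execute.
exec_net = ie.load_network(network=net, device_name="HETERO:FPGA,CPU")

# Dummy batch shaped like the network's input; real detector, cosmology, or
# radiology data would be fed here in the actual workflow.
batch = np.random.rand(*net.inputs[input_blob].shape).astype(np.float32)

result = exec_net.infer(inputs={input_blob: batch})
print(result[output_blob].shape)
```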