Heavy Likelihood MCMC
Gene Stoltz
Markov chain Monte Carlo (MCMC) simulations need to calculate a likelihood millions of times, so calculating the likelihood using the full potential of the hardware is essential for timely execution. The project aims to ease the implementation of likelihoods calculated on heterogeneous hardware.
Project status: Under Development
oneAPI, HPC, Artificial Intelligence
Intel Technologies
DevCloud, oneAPI, DPC++, Intel FPGA
Overview / Usage
The project uses a Markov chain Monte Carlo sampling method to optimise a large set of parameters where the likelihood is computationally expensive. The likelihood calculation consists of a single function that maps a set of parameters to a single value. The likelihood function can be data-intensive, and multiple MCMC chains can be run at the same time. A framework that allows high-performance likelihood calculations without requiring knowledge of heterogeneous computing is being explored.
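To make the structure concrete, here is a minimal sketch of a Metropolis-Hastings sampler where the likelihood is exactly the kind of pluggable "parameters in, single value out" function described above. The Gaussian likelihood and all names are illustrative, not the project's actual model.

```python
import numpy as np

def log_likelihood(theta, data):
    # Hypothetical stand-in for the expensive likelihood: a Gaussian model
    # with unknown mean and log standard deviation.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return np.sum(-0.5 * ((data - mu) / sigma) ** 2 - np.log(sigma))

def metropolis_hastings(log_like, data, theta0, n_steps=10000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta, data)  # one expensive likelihood call
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        ll_prop = log_like(proposal, data)  # one likelihood call per step
        # Accept with probability min(1, exp(ll_prop - ll)).
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        chain[i] = theta
    return chain

data = np.random.default_rng(1).normal(2.0, 0.5, size=1000)
chain = metropolis_hastings(log_likelihood, data, theta0=[0.0, 0.0])
print(chain[-1000:].mean(axis=0))  # posterior mean estimate of (mu, log_sigma)
```

Because the sampler only ever calls `log_like`, the expensive inner function can be swapped for a heterogeneous (OpenCL/oneAPI) implementation without changing the sampling loop, which is the separation of concerns the framework targets.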
We have implemented the algorithm for an HMM (hidden Markov model) that categorises nonvolcanic tremor data. Furthermore, we have integrated the algorithm into an R package for Bayesian analysis, using the OpenCL framework with Python under the hood. A CUDA implementation is expected to achieve higher data throughput on NVIDIA GPUs, but it would restrict the algorithm to a single vendor. OpenCL, on the other hand, allows execution of the algorithm on any OpenCL-compliant device, such as Intel CPUs, AMD CPUs and GPUs, Qualcomm processors, Xilinx FPGAs (field-programmable gate arrays) and even NVIDIA GPUs.
Here is a paper illustrating what the project is aiming at:
https://arxiv.org/abs/2003.03508
Methodology / Approach
The first set of tests consists of a long-chain hidden Markov model with a small (< 25 states) transition matrix, implemented in OpenCL so that various state kernels can be used. The next step is to port the OpenCL code to oneAPI and observe any performance increases or decreases.
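The per-step likelihood such a test computes can be sketched as the scaled forward algorithm for an HMM: a long observation chain folded through a small transition matrix. This NumPy version is only a CPU reference for what an OpenCL/oneAPI kernel would accelerate; the 3-state model and all parameter values are invented for illustration.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) for a discrete-emission HMM.

    pi: (K,) initial state probabilities
    A:  (K, K) transition matrix (rows sum to 1)
    B:  (K, M) emission probabilities
    obs: length-T sequence of integer observation symbols
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()  # rescale to avoid underflow on long chains
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik

# Hypothetical small model: 3 states, binary emissions.
pi = np.array([0.6, 0.3, 0.1])
A = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
B = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
obs = np.random.default_rng(0).integers(0, 2, size=5000)
print(hmm_log_likelihood(obs, pi, A, B))
```

The work per step is a K-by-K matrix-vector product with K < 25, so a single evaluation is cheap; the cost comes from the chain length and from the millions of evaluations an MCMC run requires, which is what motivates offloading to heterogeneous devices.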