Hyperdimensional Computing Acceleration using oneAPI

Ian Peitzsch

Pittsburgh, Pennsylvania

This project seeks to accelerate hyperdimensional computing algorithms on FPGAs using Intel's oneAPI. Hyperdimensional computing is a style of computing based around operating on high-dimensional vectors to derive relationships between data. A major application is developing classification models.

Project status: Under Development

oneAPI, HPC, Artificial Intelligence

Intel Technologies
DevCloud, oneAPI, DPC++, Intel FPGA

Overview / Usage

Background on Hyperdimensional Computing:

Hyperdimensional computing is a style of computing based on representing data as extremely high-dimensional vectors, or hypervectors (often around 10,000 dimensions). These hypervectors can then be combined using three core operations: bundling, binding, and permutation, as sketched in the example after the list below.

  • bundling: Given 2 orthogonal hypervectors A and B, we can construct another hypervector C by "bundling" A and B (denoted C = A + B). The bundled hypervector C is then similar to both A and B, but dissimilar to any hypervector D that is orthogonal to both A and B. A common form of bundling is element-wise addition or element-wise XOR.
  • binding: Given 2 orthogonal hypervectors A and B, we can construct another hypervector C by "binding" A and B (denoted C = A * B). The bound hypervector C is then dissimilar to both A and B.
  • permutation: Given a hypervector A, we can permute A by performing a rotational shift on the elements of A (denoted R(A)). This permuted hypervector R(A) is then dissimilar to A.
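
To make these operations concrete, here is a minimal sketch in plain C++ over bipolar hypervectors (elements in {-1, +1}). The dimension, the bipolar representation, and the tie-breaking rule are illustrative assumptions, not necessarily the exact encoding used in this project.

```cpp
#include <array>

constexpr int D = 10000;                 // hyperdimension (illustrative)
using Hypervector = std::array<int, D>;  // bipolar elements in {-1, +1}

// Bundling: element-wise addition followed by a sign threshold, so the
// result C stays similar to both A and B.
Hypervector bundle(const Hypervector& a, const Hypervector& b) {
    Hypervector c;
    for (int i = 0; i < D; ++i) {
        int s = a[i] + b[i];
        c[i] = (s > 0) ? 1 : (s < 0 ? -1 : 1);  // ties broken toward +1 (a simplification)
    }
    return c;
}

// Binding: element-wise multiplication; the result C is dissimilar to both A and B.
Hypervector bind(const Hypervector& a, const Hypervector& b) {
    Hypervector c;
    for (int i = 0; i < D; ++i) c[i] = a[i] * b[i];
    return c;
}

// Permutation: rotate the elements by one position; R(A) is dissimilar to A.
Hypervector permute(const Hypervector& a) {
    Hypervector c;
    for (int i = 0; i < D; ++i) c[(i + 1) % D] = a[i];
    return c;
}
```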

These operations can then be applied to train a model. This is possible by taking each training feature vector f with label l and using binding and permutation to map f to a corresponding hypervector F. Then a set of class hypervectors can be constructed by bundling all hypervectors F with the same label l into a hypervector C_l. After this training, the model can classify some input x by simply mapping x to its corresponding hypervector X and then finding the class l that has the maximum similarity between X and C_l.
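
As a sketch of this train/classify flow, the following plain C++ uses a simple random-projection encoder, per-class accumulation (bundling), and dot-product similarity. The encoder, class count, and similarity metric are assumptions for illustration and may differ from the project's actual pipeline.

```cpp
#include <cstddef>
#include <limits>
#include <random>
#include <vector>

constexpr int D = 10000;         // hyperdimension (illustrative)
constexpr int NUM_CLASSES = 10;  // illustrative

// Illustrative encoder: each hyperdimension is the sign of the feature vector
// projected onto a fixed random basis hypervector. The project's actual
// binding/permutation-based encoding may differ.
std::vector<float> encode(const std::vector<float>& features,
                          const std::vector<std::vector<float>>& basis) {
    std::vector<float> hv(D);
    for (int i = 0; i < D; ++i) {
        float acc = 0.0f;
        for (std::size_t j = 0; j < features.size(); ++j)
            acc += basis[i][j] * features[j];
        hv[i] = (acc >= 0.0f) ? 1.0f : -1.0f;
    }
    return hv;
}

// Fixed random basis hypervectors, one per hyperdimension.
std::vector<std::vector<float>> make_basis(std::size_t num_features, unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::normal_distribution<float> dist(0.0f, 1.0f);
    std::vector<std::vector<float>> basis(D, std::vector<float>(num_features));
    for (auto& row : basis)
        for (auto& x : row) x = dist(gen);
    return basis;
}

// Training: bundle (accumulate) every encoded sample F into the class
// hypervector C_l that matches its label l.
std::vector<std::vector<float>> train(const std::vector<std::vector<float>>& samples,
                                      const std::vector<int>& labels,
                                      const std::vector<std::vector<float>>& basis) {
    std::vector<std::vector<float>> classes(NUM_CLASSES, std::vector<float>(D, 0.0f));
    for (std::size_t n = 0; n < samples.size(); ++n) {
        std::vector<float> hv = encode(samples[n], basis);
        for (int i = 0; i < D; ++i) classes[labels[n]][i] += hv[i];
    }
    return classes;
}

// Inference: encode the query X and return the label l whose class hypervector
// C_l has the maximum dot-product similarity with X.
int classify(const std::vector<float>& features,
             const std::vector<std::vector<float>>& classes,
             const std::vector<std::vector<float>>& basis) {
    std::vector<float> hv = encode(features, basis);
    int best = 0;
    float best_sim = std::numeric_limits<float>::lowest();
    for (int c = 0; c < NUM_CLASSES; ++c) {
        float sim = 0.0f;
        for (int i = 0; i < D; ++i) sim += classes[c][i] * hv[i];
        if (sim > best_sim) { best_sim = sim; best = c; }
    }
    return best;
}
```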

The benefits of this style of learning over neural networks are that hyperdimensional computing models are often smaller, more energy efficient, and benefit more from hardware acceleration. The trade-off is that for some applications hyperdimensional computing models have significantly lower accuracy than neural networks, though recent developments in training algorithms for hyperdimensional models have quickly closed this accuracy gap.

This Project

This project seeks to use Intel's oneAPI and Stratix 10 FPGAs to accelerate the training and inference pipelines for hyperdimensional learning tasks. Most research on accelerating hyperdimensional computing has focused on edge FPGAs and GPUs, so evaluating how well such models can be accelerated on server-class FPGAs allows hyperdimensional computing to expand to large-scale applications.

Methodology / Approach

The current inference pipeline is composed of 2 kernels: encoding and classification. Each of these kernels has its own challenges. For the encoding kernel, the current encoding method requires a large amount of on-chip memory to store the basis hypervectors used to map features into the hyperdimensional space; currently, this is remedied by reducing the number of hyperdimensions to around 2000. For the classification kernel, the main challenge is the serial dependency of computing the maximum only after all of the similarities have been calculated.
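
A rough DPC++ sketch of such a classification kernel is shown below: it computes all class similarities and only then resolves the argmax, which is the dependency described above. The buffer layout, dimension, class count, and unroll factor are illustrative assumptions, not the project's actual kernel.

```cpp
#include <sycl/sycl.hpp>
#include <limits>
#include <vector>

constexpr int D = 2000;          // reduced hyperdimension (per the text above)
constexpr int NUM_CLASSES = 10;  // illustrative

// Computes the similarity of an encoded query against every class hypervector,
// then resolves the argmax. The argmax can only be finalized after all
// similarities are known, which is the dependency described above.
int classify(sycl::queue& q,
             const std::vector<float>& query,     // encoded hypervector, length D
             const std::vector<float>& classes) { // NUM_CLASSES x D, row-major
    int result = -1;
    {
        sycl::buffer<float, 1> query_buf(query.data(), sycl::range<1>(D));
        sycl::buffer<float, 1> class_buf(classes.data(), sycl::range<1>(NUM_CLASSES * D));
        sycl::buffer<int, 1> out_buf(&result, sycl::range<1>(1));

        q.submit([&](sycl::handler& h) {
            sycl::accessor x(query_buf, h, sycl::read_only);
            sycl::accessor c(class_buf, h, sycl::read_only);
            sycl::accessor out(out_buf, h, sycl::write_only, sycl::no_init);

            // single_task is the typical style for oneAPI FPGA kernels.
            h.single_task([=]() {
                float best_sim = std::numeric_limits<float>::lowest();
                int best = 0;
                for (int cls = 0; cls < NUM_CLASSES; ++cls) {
                    float sim = 0.0f;
                    #pragma unroll 8  // unrolled so the compiler can infer dot-product DSPs
                    for (int i = 0; i < D; ++i)
                        sim += c[cls * D + i] * x[i];
                    if (sim > best_sim) { best_sim = sim; best = cls; }
                }
                out[0] = best;
            });
        });
    }  // buffer destruction here copies the result back to the host
    return result;
}
```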

Additional optimizations applied to both kernels include memory banking to match the kernels' memory access patterns, loop unrolling, structuring the code so the compiler infers dot-product DSP blocks, and quantization of the input data. Together, these optimizations have led to a 1.67x speedup of the inference pipeline on the Stratix 10 over a similar implementation on a Xeon CPU.
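
As a small illustration of what some of these optimizations might look like in DPC++ FPGA code, the fragment below banks an on-chip array, unrolls the dot-product loop so the compiler can map it to DSP blocks, and operates on 8-bit quantized data. The attribute values, unroll factor, and data types are assumptions for illustration rather than the settings used in this project.

```cpp
#include <sycl/sycl.hpp>
#include <cstdint>

constexpr int D = 2000;  // reduced hyperdimension (per the methodology above)

// Device-side fragment: quantized dot product between an input hypervector x
// and one hypervector w held in banked on-chip memory.
SYCL_EXTERNAL int quantized_dot(const int8_t* x, const int8_t* w) {
    // Bank the local copy so multiple elements can be read each cycle.
    [[intel::numbanks(16), intel::bankwidth(1)]]
    int8_t local_w[D];
    for (int i = 0; i < D; ++i) local_w[i] = w[i];

    int acc = 0;
    #pragma unroll 16  // unrolling exposes the multiply-adds as a dot-product DSP pattern
    for (int i = 0; i < D; ++i)
        acc += static_cast<int>(x[i]) * static_cast<int>(local_w[i]);
    return acc;
}
```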
