Distributed and Multicore Nonlinear SVMs

Roberto Diaz Morales

Madrid, Community of Madrid


A distributed and multicore hybrid MPI/OpenMP implementation of Support Vector Machines.

Project status: Under Development

HPC, Artificial Intelligence

Intel Technologies
Intel CPU

Overview / Usage

There has been a noticeable increase in the number of HPC infrastructures. This growth has driven the adaptation of traditional machine learning techniques so that they can address large-scale problems in distributed environments. Kernel methods like support vector machines (SVMs) suffer from scalability problems due to their nonparametric nature and the complexity of their training procedures.

In this project, I propose a new and efficient distributed implementation of a training procedure for nonlinear semiparametric SVMs. This project is an evolution of a previous multicore (but not distributed) implementation of SVMs:

https://devmesh.intel.com/projects/libirwls-a-parallel-irwls-library-to-solve-svms-and-semiparametric-svms

I have benchmarked it against other state-of-the-art methods, such as P-packSVM and PSVM. Experimental results show that the proposed algorithm achieves higher accuracy while controlling the size of the final model, and it also offers high performance in terms of run time and efficiency when processing very large datasets (the computation time grows linearly with the number of training patterns).

Methodology / Approach

To allow distributed execution, I use the Message Passing Interface (MPI) and combine it with OpenMP to exploit the multiple cores available in every node.
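
As a rough illustration of this hybrid scheme (a minimal sketch, not the project's actual code; the variable names and the trivial workload are placeholders), each MPI process owns a chunk of the training data and uses OpenMP threads to process that chunk on its local cores, while partial results are combined across nodes with an MPI reduction:

```c
/* Minimal hybrid MPI/OpenMP sketch; the workload inside the loop is a
   placeholder standing in for per-pattern work such as kernel evaluations. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size;
    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1000000;        /* hypothetical number of training patterns */
    int chunk = n / size;         /* each MPI process owns one chunk of the data */
    double local_sum = 0.0;

    /* OpenMP spreads the work of the local chunk over the node's cores. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < chunk; i++) {
        local_sum += 1.0;         /* placeholder for real per-pattern work */
    }

    /* Partial results from all nodes are combined with an MPI reduction. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("combined result: %f (threads per node: %d)\n",
               global_sum, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```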

The model is semiparametric, which keeps the size and complexity of the SVM under control even as the training set grows.
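
For reference, a generic semiparametric SVM restricts the decision function to a reduced set of basis elements, so the model size is fixed in advance instead of growing with the number of training patterns (the notation below is generic, not taken from the project's code):

```latex
f(\mathbf{x}) = \sum_{j=1}^{m} \beta_j \, k(\mathbf{x}, \mathbf{c}_j) + b,
\qquad m \ll n
```

Here the $\mathbf{c}_j$ are a small set of centroids, $k(\cdot,\cdot)$ is the kernel function, and $n$ is the number of training patterns; only the $m$ weights $\beta_j$ and the bias $b$ need to be learned.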

Technologies Used

C, Python, MPI and OpenMP
