ParallelGSO
Amrut Phadke
Chennai, Tamil Nadu
- 0 Collaborators
A highly scalable parallel version of the Galactic Swarm Optimization (GSO) algorithm. Galactic Swarm Optimization is a state-of-the-art meta-heuristic optimization algorithm inspired by the motion of stars, galaxies, and superclusters interacting with each other under the influence of gravity. Take a look at a detailed introduction to our project - https://github.com/shubham0704/parallelGSO/blob/master/white%20paper.ipynb
Project status: Under Development
Intel Technologies
Intel VTune
Overview / Usage
Project Description
Deep Learning is a family of machine learning algorithms that use a deeper cascade of smaller learning models than traditional machine learning models. These algorithms are used extensively to automate tasks which were previously thought to be possible only for humans. Today the go-to algorithm for optimizing deep learning models is gradient descent. Although it works very well in practice, it gets stuck when there are multiple optimal solutions for a given function, i.e. a multimodal function. In practice a variant called Stochastic Gradient Descent introduces stochasticity to help navigate out of tough spots, but the requirement of computing gradients limits its applicability to functions for which gradients exist.
Fortunately, we have a family of algorithms, the metaheuristic algorithms, that are very good at dealing with highly non-convex functions, the kind for which gradients cannot be computed. A research work recently published by our university proposed an algorithm called Galactic Swarm Optimization (GSO), which is currently the state of the art. GSO does not need any gradient computation, so even non-convex functions with multiple minima/maxima are suitable targets. What it lacked was the ability to scale with additional hardware.
Our primary contribution is a modification of the GSO algorithm that lets it scale with increases in hardware. We have also created a library which is open-sourced on GitHub, along with provisions to rerun the benchmarks.
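To give a flavour of the parallelization idea, the sketch below runs independent galaxy subswarms on separate CPU cores and then combines their best solutions. It is only an illustration under assumptions: the names (run_subswarm, rastrigin) and the multiprocessing-based approach are chosen for this example and are not the actual parallelGSO implementation, which is built on Python and Numba.

# Hedged sketch: galaxy subswarms are independent, so each can be
# optimized on its own core and only the per-galaxy best solutions
# are combined afterwards. Illustrative only, not the library's API.
from multiprocessing import Pool
import numpy as np

def rastrigin(x):
    """Multimodal benchmark function, a typical PSO/GSO test case."""
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def run_subswarm(seed, dim=2, swarm_size=50):
    """Stand-in for one galaxy's PSO run; returns its best (position, value)."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-5.12, 5.12, (swarm_size, dim))
    # ... a full PSO loop would go here; this sketch just scores the particles ...
    vals = np.array([rastrigin(p) for p in particles])
    best = np.argmin(vals)
    return particles[best], vals[best]

if __name__ == "__main__":
    n_galaxies = 8
    with Pool() as pool:                      # one worker per available core
        results = pool.map(run_subswarm, range(n_galaxies))
    best_pos, best_val = min(results, key=lambda r: r[1])
    print("best over all galaxies:", best_pos, best_val)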
Methodology / Approach
The GSO algorithm is a modification of the PSO algorithm which eliminates PSO's pain points. Most variants of PSO start with a full exploration phase that gradually becomes full exploitation, using learning-rate decay to strike the balance. GSO instead runs multiple cycles of exploration and exploitation by dividing the search into epochs, which allows it to locate the global minimum more accurately. Consider each galaxy as a subswarm with a centre of mass. These galaxies are part of a larger supercluster, inside which they behave like revolving point masses. Using PSO we find the best solution, a.k.a. the centre of mass of a galaxy, which represents that galaxy in the supercluster. These representative points are then used to find the centre of mass of the larger supercluster. This hierarchy could be extended further, but we currently restrict it to 2 levels. We use PSO to find the centre of mass of both a galaxy and the supercluster, as sketched below.
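As a rough illustration of this two-level scheme, the sketch below runs a basic PSO inside each galaxy and then a second PSO over the galaxy representatives. All names (pso, gso_two_level, the parameter defaults) are hypothetical and chosen for this example; they are not the parallelGSO library's API.

# Minimal, illustrative sketch of the two-level GSO scheme described above.
import numpy as np

def sphere(x):
    """Simple convex test function; any objective f: R^d -> R works."""
    return np.sum(x ** 2)

def pso(f, init_positions, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Basic PSO over a given initial population; returns (best_pos, best_val)."""
    pos = init_positions.copy()
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    g_idx = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
    for _ in range(iters):
        r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, *bounds)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g_idx = np.argmin(pbest_val)
        if pbest_val[g_idx] < gbest_val:
            gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
    return gbest, gbest_val

def gso_two_level(f, n_galaxies=4, swarm_size=20, dim=2, epochs=5):
    """Level 1: PSO inside each galaxy (exploration).
    Level 2: PSO over the galaxy representatives (exploitation)."""
    galaxies = [np.random.uniform(-5, 5, (swarm_size, dim)) for _ in range(n_galaxies)]
    best_pos, best_val = None, np.inf
    for _ in range(epochs):
        # Level 1: each galaxy elects its "centre of mass" (its best particle).
        reps = np.array([pso(f, g)[0] for g in galaxies])
        # Level 2: the representatives form the supercluster swarm.
        pos, val = pso(f, reps)
        if val < best_val:
            best_pos, best_val = pos, val
    return best_pos, best_val

if __name__ == "__main__":
    x, fx = gso_two_level(sphere)
    print("best position:", x, "best value:", fx)

Because the level-1 galaxy runs within an epoch are independent of one another, they are the natural unit of work to distribute across cores, which is where the parallel version gains its scalability.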
Technologies Used
Python, Numba, Intel Parallel Studio XE, Intel VTune Amplifier