H-LSTM distributed neural network with Intel Edison

Sandro Bovelli

Massa Martana, Umbria

A smart, highly optimized distributed neural network based on Intel Edison "Receptive" Nodes

Modern Code, Artificial Intelligence, Internet of Things

Description

Training ‘complex multi-layer’ neural networks is referred to as deep learning, since these multi-layer architectures interpose many neural processing layers between the input data and the predicted output; hence the word "deep" in the deep-learning catchphrase.

While training a large-scale network is computationally expensive, evaluating the resulting trained network is not. This is why trained networks are so valuable: they can very quickly perform complex, real-world pattern-recognition tasks on a variety of low-power devices.

These trained networks can tackle real-world applications ranging from real-time anomaly detection in Industrial IoT to energy-performance optimization in complex industrial systems. Such high-value, high-accuracy models (sometimes better than human) can be deployed nearly everywhere, which explains the recent resurgence of machine learning, and of deep-learning neural networks in particular.

These architectures can be implemented efficiently on Intel Edison modules to process information quickly and economically, especially in Industrial IoT applications.

Our architectural model is based on a proprietary algorithm, called Hierarchical LSTM, that captures and learns the internal dynamics of physical systems simply by observing the evolution of the related time series.
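Since the Hierarchical LSTM algorithm itself is proprietary, the snippet below is only a minimal, illustrative sketch of the idea behind a single layer of this kind: an LSTM autoencoder that compresses a window of time-series readings into a short feature vector. The library choice (Keras), window length, channel count, and code size are all assumptions, not details of the actual algorithm.

```python
# Illustrative only: the project's Hierarchical LSTM is proprietary, so this
# sketches the general idea of a single layer -- an LSTM autoencoder that
# compresses a window of time-series readings into a short feature vector.
# Window length, channel count, code size and the use of Keras are assumptions.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 64      # time steps per input window (assumed)
CHANNELS = 3     # sensor channels, e.g. accelerometer x/y/z (assumed)
CODE = 16        # length of the encoded feature vector (assumed)

inputs = layers.Input(shape=(WINDOW, CHANNELS))
code = layers.LSTM(CODE, name="encoder")(inputs)              # compress the window
x = layers.RepeatVector(WINDOW)(code)                         # expand the code back in time
x = layers.LSTM(CODE, return_sequences=True)(x)
outputs = layers.TimeDistributed(layers.Dense(CHANNELS))(x)   # reconstruct the input

autoencoder = models.Model(inputs, outputs)
encoder = models.Model(inputs, code)          # used to emit features to the next layer
autoencoder.compile(optimizer="adam", loss="mse")

# Train on raw windows; the encoder then yields the compact features that the
# next layer in the hierarchy would consume.
windows = np.random.rand(256, WINDOW, CHANNELS).astype("float32")   # placeholder data
autoencoder.fit(windows, windows, epochs=2, batch_size=32, verbose=0)
features = encoder.predict(windows, verbose=0)                      # shape: (256, CODE)
```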

To train the system efficiently, we implemented a greedy, layer-based parameter optimization approach: each device trains one layer at a time and sends the encoded features to the upper-level device, which learns higher levels of abstraction of the signal dynamics.
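The greedy, layer-wise scheme can be pictured roughly as in the sketch below: a node fits only its own layer on local data, forwards the resulting codes to the device one level up, and that device repeats the process on the codes it receives. The helper function, the grouping of codes into short sequences, and all sizes are assumptions, not the project's actual training code.

```python
# Rough sketch of greedy, layer-wise training (not the proprietary code):
# layer k is trained only on the frozen encodings produced by layer k-1; in the
# distributed setup those encodings are exactly what a device transmits upward.
import numpy as np
from tensorflow.keras import layers, models

def fit_layer(x, code_size, epochs=2):
    """Train one LSTM autoencoder layer on sequences x; return its encoder."""
    steps, channels = x.shape[1], x.shape[2]
    inp = layers.Input(shape=(steps, channels))
    code = layers.LSTM(code_size)(inp)
    h = layers.RepeatVector(steps)(code)
    h = layers.LSTM(code_size, return_sequences=True)(h)
    out = layers.TimeDistributed(layers.Dense(channels))(h)
    ae = models.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(x, x, epochs=epochs, batch_size=32, verbose=0)
    return models.Model(inp, code)

raw = np.random.rand(128, 64, 3).astype("float32")   # placeholder sensor windows
enc1 = fit_layer(raw, code_size=16)                   # trained on the lowest-level device
codes1 = enc1.predict(raw, verbose=0)                 # what gets sent to the parent node

# The upper-level device regroups consecutive codes into short sequences and
# greedily trains its own layer on them (group length of 8 is an assumption).
codes1_seq = codes1.reshape(-1, 8, 16)
enc2 = fit_layer(codes1_seq, code_size=8)             # higher level of abstraction
codes2 = enc2.predict(codes1_seq, verbose=0)          # forwarded further up or to the cloud
```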

Using Intel Edison modules as the layers' "core computing units", we can sustain higher sampling rates and more frequent retraining close to the system under observation, without the need for a complex cloud architecture: only a small amount of encoded data is sent to the cloud.
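As a hedged sketch of what the edge-side loop might look like (the endpoint URL, payload format, and both helper stand-ins below are hypothetical, not taken from the project): the Edison samples at full rate locally, encodes each window with its trained layer, and ships only the compact code to the cloud.

```python
# Hypothetical edge-side loop: sample at a high rate locally, encode each
# window on the Edison, and send only the short feature vector to the cloud.
# The endpoint URL, payload format and both helper stand-ins are assumptions.
import json
import time
import urllib.request
import numpy as np

CLOUD_URL = "http://cloud.example.com/features"   # hypothetical collection endpoint

def read_sensor_window(window=64, channels=3, rate_hz=100.0):
    """Stand-in for high-rate local sampling (e.g. via mraa/UPM on the Edison)."""
    time.sleep(window / rate_hz)                   # emulate the acquisition time
    return np.random.rand(window, channels)        # placeholder readings

def encode(window):
    """Stand-in for the trained layer's encoder; returns a short code vector."""
    return window.mean(axis=0)                     # a few values instead of 64 x 3 samples

while True:
    w = read_sensor_window()                       # full-rate data never leaves the node
    code = encode(w)
    payload = json.dumps({"node": "edison-1", "code": code.tolist()}).encode("utf-8")
    req = urllib.request.Request(CLOUD_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)         # only the compact code goes to the cloud
```

In the real system the code would come from the trained LSTM encoder rather than a simple mean, but the bandwidth argument is the same: a handful of values leave the node instead of every raw sample.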
