H-LSTM distributed neural network with Intel Edison

Sandro Bovelli

Massa Martana, Umbria

A smart, highly optimized distributed neural network, based on Intel Edison "Receptive" Nodes

Modern Code, Artificial Intelligence, Internet of Things

Description

Training complex multi-layer neural networks is referred to as deep learning, as these multi-layer architectures interpose many neural processing layers between the input data and the predicted output – hence the word "deep" in the deep-learning catchphrase.

While the training procedure for a large-scale network is computationally expensive, evaluating the resulting trained network is not. This is what makes trained networks so valuable: they can very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices.

These trained networks can perform complex pattern recognition tasks for real-world applications ranging from real-time anomaly detection in Industrial IoT to energy performance optimization in complex industrial systems. Such high-value, high-accuracy models (sometimes better than human) can be deployed nearly everywhere, which explains the recent resurgence of machine learning, and of deep-learning neural networks in particular.

These architectures can be efficiently implemented on Intel Edison modules to process information quickly and economically, especially in Industrial IoT applications.

Our architectural model is based on a proprietary algorithm, called Hierarchical LSTM (H-LSTM), which captures and learns the internal dynamics of physical systems simply by observing the evolution of the related time series.
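
The H-LSTM algorithm itself is proprietary, but the building block it relies on is an LSTM layer that folds a time series into a fixed-size hidden state. Below is a minimal NumPy sketch of that idea, using the standard LSTM equations and randomly initialized weights purely for illustration (none of the names or sizes come from the actual project):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One standard LSTM step: gates computed from the current input and the previous hidden state."""
    n = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b          # stacked pre-activations for the four gates
    i = sigmoid(z[0*n:1*n])               # input gate
    f = sigmoid(z[1*n:2*n])               # forget gate
    o = sigmoid(z[2*n:3*n])               # output gate
    g = np.tanh(z[3*n:4*n])               # candidate cell update
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state (the encoded feature)
    return h, c

def encode_series(series, W, U, b, hidden_size):
    """Run the LSTM over a univariate series and return the final hidden state as its encoding."""
    h = np.zeros(hidden_size)
    c = np.zeros(hidden_size)
    for x_t in series:
        h, c = lstm_step(np.atleast_1d(x_t), h, c, W, U, b)
    return h

# Hypothetical example: encode a 100-sample sensor window into an 8-dimensional feature vector.
rng = np.random.default_rng(0)
n, d = 8, 1
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
features = encode_series(rng.normal(size=100), W, U, b, n)
```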

To train the system efficiently, we implemented a greedy, layer-based parameter optimization approach: each device trains one layer at a time and sends the encoded features to the upper-level device, which learns higher levels of abstraction of the signal dynamics, as sketched below.
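
Since the exact H-LSTM layer and training procedure are proprietary, the following sketch stands in with a generic LSTM autoencoder per layer (Keras, the windowing scheme, and all sizes are assumptions): each device trains its own layer on local windows by reconstruction, then forwards only the encoded features, regrouped into windows, to the upper-level device, which repeats the procedure.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def train_layer(windows, code_size, epochs=20):
    """Train one LSTM autoencoder layer on local windows and return the encoder part.

    windows: array of shape (n_windows, timesteps, n_features)
    """
    timesteps, n_features = windows.shape[1], windows.shape[2]
    inputs = keras.Input(shape=(timesteps, n_features))
    code = layers.LSTM(code_size)(inputs)                        # sequence -> fixed-size code
    x = layers.RepeatVector(timesteps)(code)                     # feed the code to the decoder at every step
    outputs = layers.LSTM(n_features, return_sequences=True)(x)  # reconstruct the input sequence
    autoencoder = keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(windows, windows, epochs=epochs, verbose=0)  # unsupervised reconstruction objective
    return keras.Model(inputs, code)                             # the encoder is what stays on the node

def encode_and_rewindow(encoder, windows, upper_timesteps):
    """Encode each window, then group consecutive codes into windows for the upper-level device."""
    codes = encoder.predict(windows, verbose=0)                  # (n_windows, code_size)
    n = (len(codes) // upper_timesteps) * upper_timesteps
    return codes[:n].reshape(-1, upper_timesteps, codes.shape[1])

# Hypothetical two-level hierarchy on synthetic sensor data.
raw = np.random.default_rng(0).normal(size=(256, 50, 1))         # 256 windows of 50 samples, 1 sensor
enc1 = train_layer(raw, code_size=8)                             # lower node trains layer 1 greedily
upper_windows = encode_and_rewindow(enc1, raw, upper_timesteps=16)
enc2 = train_layer(upper_windows, code_size=4)                   # upper node learns a higher abstraction
```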

Using Intel Edison modules as the layers' core computing units, we can sustain higher sampling rates and frequent retraining close to the system we are observing, without the need for a complex cloud architecture: only a small amount of encoded data is sent to the cloud.
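
As a rough illustration of the "small encoded payload to the cloud" step, the snippet below posts a feature vector to a plain HTTPS ingestion endpoint; the URL, field names, and JSON format are hypothetical and not part of the original project.

```python
import json
import urllib.request

def push_features(features, device_id, url="https://example.invalid/ingest"):
    """Send only the compact encoded feature vector upstream instead of the raw sensor stream."""
    payload = json.dumps({"device": device_id,
                          "features": [float(v) for v in features]}).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# e.g. push_features(features, "edison-node-1")
```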
