Stellar

Eskil Steenberg

Stockholm,

Stellar is a software platform for controlling elaborate LED lighting setups.

Internet of Things, Modern Code, Networking, Virtual Reality

Description

Stellar is a portable, highly multithreaded application that can control hundreds of thousands of lights at 100+ Hz.
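
To illustrate the kind of structure such a rate-driven, multithreaded renderer implies, here is a minimal Python sketch (not Stellar's actual code): a fixed-rate loop splits one RGB frame buffer across worker threads each tick. The fixture count, thread count, and the send_frame output stage are invented for illustration.

```python
import threading
import time

FIXTURE_COUNT = 10_000   # hypothetical rig size
WORKERS = 8              # assumed worker-thread count
FRAME_HZ = 100           # the 100+ Hz update rate described above

frame = bytearray(FIXTURE_COUNT * 3)  # one RGB triple per light

def render_slice(start, end, t):
    """Fill one contiguous slice of the frame buffer (placeholder effect)."""
    for i in range(start, end):
        frame[i * 3] = int(t * 255) % 256  # red channel ramps over time

def render_frame(t):
    """Fan the frame out across worker threads, then wait for all of them."""
    step = FIXTURE_COUNT // WORKERS
    threads = []
    for w in range(WORKERS):
        end = FIXTURE_COUNT if w == WORKERS - 1 else (w + 1) * step
        threads.append(threading.Thread(target=render_slice, args=(w * step, end, t)))
    for th in threads:
        th.start()
    for th in threads:
        th.join()

def run(seconds=1.0):
    """Fixed-rate loop: render, (hypothetically) send, then sleep the remainder."""
    period = 1.0 / FRAME_HZ
    start = time.perf_counter()
    while (now := time.perf_counter()) - start < seconds:
        render_frame(now - start)
        # send_frame(frame)  # hypothetical output stage (e.g. an Art-Net/sACN sender)
        time.sleep(max(0.0, period - (time.perf_counter() - now)))
```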

Stellar lets you design and map complex lighting setups in full 3D, so that lighting effects can be controlled with high precision.
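
As a rough sketch of what such a 3D map could contain (an assumption, not Stellar's actual data model), each light can carry a 3D position plus an output address, so effects can be evaluated spatially:

```python
from dataclasses import dataclass

@dataclass
class Fixture:
    """One light in the 3D map: where it sits and where its data goes."""
    x: float
    y: float
    z: float
    universe: int   # hypothetical output grouping (e.g. a DMX/Art-Net universe)
    channel: int    # first channel of the fixture's RGB triple

# A tiny hand-built map; a real rig would be authored in Stellar's 3D editor.
fixtures = [
    Fixture(0.0, 2.5, 0.0, universe=1, channel=1),
    Fixture(1.0, 2.5, 0.0, universe=1, channel=4),
    Fixture(2.0, 2.5, 0.0, universe=1, channel=7),
]

def brightness_by_height(f: Fixture) -> float:
    """Spatial effect example: brightness follows the fixture's height."""
    return min(1.0, max(0.0, f.y / 3.0))
```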

Stellar then gives the user a familiar 3D application interface to design and animate light effects, using a powerful plugin system where the user can combine layers of filters to create a wide range of effects.
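
A minimal sketch of what "layers of filters" can mean, assuming (not taken from Stellar) that each layer is a callable evaluated per fixture position and time, with later layers transforming the output of earlier ones:

```python
import math

# A "filter" here is any callable (x, y, z, t, rgb_in) -> rgb_out.
def wave(x, y, z, t, rgb):
    """Base layer: a sine wave travelling along the x axis."""
    v = 0.5 + 0.5 * math.sin(x * 2.0 - t * 3.0)
    return (v, v, v)

def tint_blue(x, y, z, t, rgb):
    """Filter layer: keep only the blue component of whatever is below."""
    return (0.0, 0.0, rgb[2])

def apply_layers(layers, x, y, z, t):
    """Run the layer stack bottom-up at one fixture position and time."""
    rgb = (0.0, 0.0, 0.0)
    for layer in layers:
        rgb = layer(x, y, z, t, rgb)
    return rgb

# Evaluate a two-layer stack at one fixture position and one point in time.
print(apply_layers([wave, tint_blue], x=1.0, y=2.5, z=0.0, t=0.25))
```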

Finally, Stellar lets users play back these effects live like a DJ, controlling selected parameters and tying them to music and to hardware devices such as MIDI controllers and game controllers.
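
As an illustration of tying hardware controls to effect parameters (the parameter names and binding table below are invented, not Stellar's), a MIDI Control Change handler can simply map controller numbers to named parameters and normalise the 0..127 value range:

```python
# Hypothetical parameter store for the currently playing effect stack.
params = {"speed": 0.5, "brightness": 1.0}

# Hypothetical binding table: MIDI controller number -> parameter name.
midi_bindings = {1: "speed", 7: "brightness"}

def on_midi_cc(controller: int, value: int):
    """Handle a MIDI Control Change message (value is 0..127)."""
    name = midi_bindings.get(controller)
    if name is not None:
        params[name] = value / 127.0  # normalise to the 0..1 range effects expect

on_midi_cc(7, 64)            # a fader at half travel...
print(params["brightness"])  # ...sets brightness to roughly 0.5
```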

Stellar works on multiple platforms and supports VR.

Moloti N. created project Intelligent Home Security: Africa Motion Content encoder decoder using Deep Neural Networks

Intelligent Home Security: Africa Motion Content encoder decoder using Deep Neural Networks

We propose the use of drones to help communities enhance their security initiatives and to identify criminals during the day and at night. We use multiple sensors and computer vision algorithms to recognize/detect motion and content in real time, then automatically send messages to community members' cell phones about the criminal activity. Hence, community members may be able to stop house break-ins before they even occur.
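
To make the detect-then-notify flow concrete, here is a minimal OpenCV frame-differencing sketch; notify_members, the thresholds, and the video source are placeholders and not the project's actual pipeline:

```python
import cv2

def notify_members(message: str):
    """Placeholder: a real deployment would send an SMS/push notification here."""
    print("ALERT:", message)

def watch(source=0, threshold_area=5000):
    cap = cv2.VideoCapture(source)      # 0 = default camera; could be a drone-feed URL
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(prev, gray)                        # pixel-wise change
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > threshold_area:           # enough pixels changed
            notify_members("motion detected in monitored area")
        prev = gray
    cap.release()
```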

Machine Intelligence Algorithm Design Methodology

AMCnet: https://github.com/AfricaMachineIntelligence/AMCnet https://devmesh.intel.com/projects/africa-motion-content-network-amcnet

We propose a deep neural network for the prediction of future frames in natural video sequences using the CPU. To effectively handle the complex evolution of pixels in videos, we propose to decompose motion and content, the two key components generating dynamics in videos. The model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame's content by the identified motion features, which simplifies the task of prediction. The model we aim to build should be end-to-end trainable over multiple time steps and naturally learn to decompose motion and content without separate training. We evaluate the proposed network architecture on the human AVA and UCF-101 datasets and show state-of-the-art performance in comparison to recent approaches. This is an end-to-end trainable network architecture running on the CPU, with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.
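
The description above maps to a two-branch network. The following PyTorch sketch is a heavily simplified stand-in for illustration only, not the AMCnet code: the motion branch here is a plain CNN over frame differences rather than the Convolutional LSTM described above, and all layer sizes and shapes are assumptions.

```python
import torch
import torch.nn as nn

class MotionContentSketch(nn.Module):
    """Simplified sketch of the motion/content decomposition described above.

    The content encoder sees only the last observed frame (spatial layout);
    the motion encoder sees stacked frame differences (temporal dynamics;
    the described model uses a ConvLSTM here, replaced by a plain CNN to
    keep the sketch short).  Their features are concatenated and decoded
    into the predicted next frame.
    """
    def __init__(self, channels=3, hist=4, feat=32):
        super().__init__()
        self.content_enc = nn.Sequential(
            nn.Conv2d(channels, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.motion_enc = nn.Sequential(
            nn.Conv2d(channels * hist, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, channels, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):                       # frames: (batch, time, C, H, W)
        content = self.content_enc(frames[:, -1])    # layout of the last observed frame
        diffs = frames[:, 1:] - frames[:, :-1]        # temporal dynamics as differences
        motion = self.motion_enc(diffs.flatten(1, 2)) # stack the diffs on the channel axis
        return self.decoder(torch.cat([content, motion], dim=1))

model = MotionContentSketch()
clip = torch.rand(2, 5, 3, 64, 64)   # 5 observed frames per sample
next_frame = model(clip)             # predicted frame, shape (2, 3, 64, 64)
```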

We then use this pretrained AMCnet model on the video feed from the DJI Spark drone, integrated with the Movidius NCS to accelerate real-time object detection neural networks.
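
A rough sketch of that integration step, assuming the NCSDK v1 Python API (mvnc); the compiled graph file name, the drone-feed URL, and the 300x300 float16 preprocessing are all assumptions about the deployed detection network, not details from the project:

```python
import cv2
import numpy as np
from mvnc import mvncapi as mvnc   # Movidius NCSDK v1 Python API

GRAPH_PATH = "amcnet_detector.graph"          # hypothetical compiled graph file

devices = mvnc.EnumerateDevices()             # find an attached Neural Compute Stick
device = mvnc.Device(devices[0])
device.OpenDevice()
with open(GRAPH_PATH, "rb") as f:
    graph = device.AllocateGraph(f.read())    # load the compiled network onto the stick

cap = cv2.VideoCapture("rtsp://spark-feed")   # placeholder drone-feed URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.resize(frame, (300, 300)).astype(np.float16)  # assumed input size/format
    graph.LoadTensor(blob, None)              # queue a frame for inference on the stick
    output, _ = graph.GetResult()             # blocking read of the raw network output
    # ...parse `output` according to the network's detection layout...

graph.DeallocateGraph()
device.CloseDevice()
cap.release()
```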

Ayyüce K. created project People Counting and Tracking from Crowd Video

People Counting and Tracking from Crowd Video

• In the first step of the project, the density of crowded scenes will be calculated.
• In the second step, the positions and flow tendency of pedestrians in a crowded scene will be detected.
• In the last step, individuals with abnormal activities, such as falling or moving against the general movement of the crowd, as well as rioting or overcrowded scenes, will be detected (a sketch of these steps follows below).
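
As a sketch of the second and third steps, the snippet below uses OpenCV's dense optical flow to estimate the dominant crowd motion and flag regions moving against it; the grid size and thresholds are illustrative assumptions, not the project's actual method:

```python
import cv2
import numpy as np

def flow_step(prev_gray, gray):
    """Dense optical flow between two grayscale frames (Farneback method)."""
    return cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def against_the_crowd(flow, cell=32, min_mag=1.0):
    """Mark grid cells whose motion opposes the scene's dominant direction."""
    mean_dir = flow.reshape(-1, 2).mean(axis=0)               # dominant flow vector
    h, w = flow.shape[:2]
    flags = []
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            v = flow[y:y + cell, x:x + cell].reshape(-1, 2).mean(axis=0)
            if np.linalg.norm(v) > min_mag and np.dot(v, mean_dir) < 0:
                flags.append((x, y))                          # moving against the crowd
    return flags
```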
