Predictive analytics for level of fill data from IoT device

Alfred Ongere

Nairobi, Nairobi County

Design an IoT device for collecting level-of-fill data, then use machine learning to make predictions from the collected data

Artificial Intelligence, Internet of Things

Description

The goal is to design an MLMD (Mountable Level Monitoring Device) that can be used to monitor the level of fill of solids and liquids in regularly shaped containers. This data will then be passed through a machine learning algorithm to make predictions based on the data collected.
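A minimal sketch of the prediction step, assuming the MLMD logs timestamped fill-percentage readings to a CSV file (the file name and column names here are hypothetical): a simple linear-regression model can forecast the next reading and estimate when the container will be full.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical log produced by the MLMD: one row per reading,
    # with columns "timestamp" and "fill_percent".
    readings = pd.read_csv("fill_levels.csv")
    t = pd.to_datetime(readings["timestamp"]).astype("int64") / 1e9  # seconds since epoch
    X = t.to_numpy().reshape(-1, 1)
    y = readings["fill_percent"].to_numpy()

    model = LinearRegression().fit(X, y)

    # Forecast the fill level one hour after the last reading.
    next_t = np.array([[t.iloc[-1] + 3600]])
    print("Predicted fill in 1 h: %.1f%%" % model.predict(next_t)[0])

    # Rough estimate of when the container reaches 100%, if the trend is rising.
    if model.coef_[0] > 0:
        t_full = (100.0 - model.intercept_) / model.coef_[0]
        print("Estimated full at:", pd.to_datetime(t_full, unit="s"))

A linear trend is only a placeholder; once enough data has been collected, the same pipeline could feed a more capable model.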

Peter M. created project Anti-Snoozer

Anti-Snoozer

Anti-Snoozer is a facial-detection system for when people are falling asleep at the wheel. You have a camera, ideally on your dashboard, that detects when you're blinking rapidly, when your eyes are closing, when you're yawning, as well as when you're looking around too much.

When that's detected, a sound alarm is activated, as well as the car's hazard lights (if you have access to that) to warn the other drivers. In order to stop it, you can either look directly at the road or raise your hand to signal, like in a boxing match, to stop the annoying sounds. There are other features, like text messages sent to your loved ones, but those aren't the kind of immediate feedback you get in the moment.
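As an illustration of how the blinking and eye-closing cues can be detected, here is a hedged sketch using OpenCV and dlib's 68-point landmark model (the model file path is an assumption); the Anti-Snoozer project itself builds on the Intel RealSense camera and SDK rather than this exact pipeline.

    import cv2
    import dlib
    from scipy.spatial import distance as dist

    def eye_aspect_ratio(eye):
        # Ratio of vertical to horizontal landmark distances; it drops when the eye closes.
        a = dist.euclidean(eye[1], eye[5])
        b = dist.euclidean(eye[2], eye[4])
        c = dist.euclidean(eye[0], eye[3])
        return (a + b) / (2.0 * c)

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local file

    EAR_THRESHOLD = 0.22       # tune per camera and driver
    CLOSED_FRAMES_ALARM = 20   # roughly 0.7 s at 30 fps
    closed = 0

    cap = cv2.VideoCapture(0)  # dashboard camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            pts = predictor(gray, face)
            left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]   # left-eye landmarks
            right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]  # right-eye landmarks
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            closed = closed + 1 if ear < EAR_THRESHOLD else 0
            if closed >= CLOSED_FRAMES_ALARM:
                print("DROWSY: trigger the alarm")  # hand off to the alarm hardware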

It consists of an Intel® RealSense™ camera on the dashboard and an Intel® Edison chip, which gives you all of the alarms. In the center is a computer. For this one, we use a thing called NUC. It’s just a “next unit of computing,” a tiny little computer—a Mac mini is probably the easiest comparison. It runs the operating system that can run the software, and it can be mounted anywhere. Ideally this would be integrated inside the car, and not as a third-party product.

They open-sourced the whole thing, so anyone who wants to can play with the technology or build one for themselves. There are two chips inside.

The first chip is the Haswell chip that's in all of our computers. The 3D camera does require a lot of processing power, because we're doing facial recognition and voice synthesis, but without the Internet. Normally, on something like your phone, voice-to-text and text-to-voice involve sending the file off to be processed and getting something back, but we want to do all that processing on board, so it took a pretty powerful chip.

The second chip is an Edison chip that comes from the Internet of Things. If you use the camera as a sensor and you're sensing "drowsy," you need to set off some alarms in order to attempt to wake the user. The Edison chip is used for that. It can connect via USB and send out alarms: noise, but you could also vibrate the chair or the wheel if it's integrated in the car itself.
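Purely as an illustration of that alarm step, a buzzer on an Edison could be pulsed with the mraa library roughly like this (the pin number and wiring are assumptions, not taken from the project):

    import time
    import mraa

    BUZZER_PIN = 6                  # assumed digital pin with a buzzer attached
    buzzer = mraa.Gpio(BUZZER_PIN)
    buzzer.dir(mraa.DIR_OUT)

    def sound_alarm(beeps=5, on_s=0.3, off_s=0.2):
        # Pulse the buzzer a few times to try to wake the driver.
        for _ in range(beeps):
            buzzer.write(1)
            time.sleep(on_s)
            buzzer.write(0)
            time.sleep(off_s)

    if __name__ == "__main__":
        sound_alarm()               # in Anti-Snoozer this would fire on the "drowsy" signal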

Peter M. created project Vehicle Rear Vision

Vehicle Rear Vision

Why did we build Vehicle Rear Vision? Back-up collisions have been a major problem. The U.S. Centers for Disease Control reported that from 2001–2003, an estimated 7,475 children (2,492 per year) under the age of 15 were treated for automobile back-over incidents. About 300 fatalities per year result from backup collisions. By 2018, all cars sold in the United States will be required to have a backup camera.

How do we solve the problem? Most of the cars on the market today still do not have a backup camera; that includes about half of the cars being sold in the US today, and far more than half across the globe. We can solve this problem by installing a camera on the back of the car, in the space around the license plate.

The Walabot will be able to detect the distance to the target closest to the vehicle.

The Intel RealSense R200 camera will give us greater detail on what's being seen, including in low-light situations.

The Intel Joule developer kit is powerful enough to run the RealSense camera along with the Walabot, which leaves room to add many more features in the future that could improve the car's functionality. A Raspberry Pi isn't powerful enough to run a RealSense 3D camera; the same project can run on a Pi with a normal USB camera, but it won't be good at night.

An Android phone or tablet is used to display the backup camera feed, which avoids the cost of an additional screen. An iOS version can be built upon request.

With those components, we can build a rear-vision system that shows the user what is behind the car.
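For the Walabot side of that setup, a rough sketch of reporting the nearest obstacle might look like the following; the call names follow the Walabot Python API examples as I recall them, and the arena ranges and threshold are placeholder assumptions rather than the project's settings.

    import math
    import WalabotAPI as wlbt

    wlbt.Init()
    wlbt.SetSettingsFolder()
    wlbt.ConnectAny()
    wlbt.SetProfile(wlbt.PROF_SENSOR)
    wlbt.SetArenaR(10, 300, 5)      # radial range 10-300 cm (assumed)
    wlbt.SetArenaTheta(-20, 20, 10)
    wlbt.SetArenaPhi(-45, 45, 5)
    wlbt.SetThreshold(35)
    wlbt.Start()

    try:
        while True:
            wlbt.Trigger()
            targets = wlbt.GetSensorTargets()
            if targets:
                # Straight-line distance of the nearest target, in cm.
                nearest = min(math.sqrt(t.xPosCm**2 + t.yPosCm**2 + t.zPosCm**2)
                              for t in targets)
                print("Closest obstacle: %.0f cm" % nearest)
    finally:
        wlbt.Stop()
        wlbt.Disconnect()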

Victor J. created project Sonic the Hedgehog 2 HD

Sonic the Hedgehog 2 HD

From the very beginning of the project, S2HD has focused on maintaining the feel that has made Sonic 2 a classic while using new tools to re-imagine the original as it could have been were it made today. The game's art direction reflects this, but S2HD has also given equal attention to unseen essential elements in the physics and audio. The result is an unprecedented re-interpretation, in high-definition graphics, music, and gameplay, of Sonic 2, a game that continues to set quality standards to this day.

We firmly believe 2D artwork is the foundation of Sonic's retro roots. With this in mind, our goal was to produce a shading style similar to the original Japanese concept artwork and set it in motion with hand-drawn animations. Twenty years after Sonic 2's release, S2HD can now faithfully represent the original's cast of characters and environments in a world where technology no longer imposes artistic limitations.

Erika H. updated project SensiCare status

Erika Harvey

We'd like to introduce our new CTO, Farzad, a Berkeley EECS grad who has worked at Lawrence Berkeley Labs for the last eight years and has taken over the soldering from me. So at least that has definitely improved!

Bob D. added a comment on John F.'s activity feed event

Thanks, John, for joining DevMesh. Please post any projects, research, or even development ideas you have here. There is a lot of interest in how factories can reduce inefficiencies and become more automated through Commercial IoT and AI. I'd love to hear more about that kind of work.

John F. updated status

John Finlay

Hi,

My name is Jack Finlay and I am currently studying for a dual Masters. The Masters I started with is in Mechanical Engineering, and I was then nominated to study a Masters in Mechatronic Systems. I will graduate, all being well, in June 2018. Right now I am studying for my Mechatronic Systems degree, and my current projects involve using a combination of OPC UA and Microsoft Azure to send requests to, and receive responses from, a server over the cloud. This should lay a firm foundation for my Mechanical Engineering final-year project next year, in which I will be attempting to implement cloud-based communication in a factory setting, where this field has not really been considered yet.

My ambitions for the future very much revolve around the automation field, so I think that this opportunity would give me the insight I need to discover what advancements need to be made to bring the future to now.

Jack
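As a hedged sketch of the kind of bridge Jack describes, the python-opcua package can read a node from an OPC UA server and the azure-iot-device SDK can forward the value to Azure IoT Hub; the endpoint URL, node id, and connection string below are placeholders, not details from the post.

    import json
    from opcua import Client
    from azure.iot.device import IoTHubDeviceClient, Message

    OPC_ENDPOINT = "opc.tcp://192.168.0.10:4840"   # placeholder factory OPC UA server
    NODE_ID = "ns=2;i=2"                           # placeholder node, e.g. a sensor value
    IOTHUB_CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder

    # Request/response against the local OPC UA server.
    opc = Client(OPC_ENDPOINT)
    opc.connect()
    try:
        value = opc.get_node(NODE_ID).get_value()
    finally:
        opc.disconnect()

    # Forward the reading to the cloud via Azure IoT Hub.
    hub = IoTHubDeviceClient.create_from_connection_string(IOTHUB_CONN_STR)
    hub.connect()
    hub.send_message(Message(json.dumps({"node": NODE_ID, "value": value})))
    hub.disconnect()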

Bob D.

Thanks, John, for joining DevMesh. Please post any projects, research, or even development ideas you have here. There is a lot of interest in how factories can reduce inefficiencies and become more automated through Commercial IoT and AI. I'd love to hear more about that kind of work.
