Human Activity Recognition

Suraj Ravishankar

Tempe, Arizona

Human Activity Recognition, which uses sensors to recognize human actions, has long been studied with the aim of producing simpler systems.

Android, Artificial Intelligence, Internet of Things


Description

Human Activity Recognition (HAR), which uses sensors to recognize human actions, has long been studied with the aim of producing simpler systems with higher precision. However, very few projects have investigated a HAR system built directly into the smartphone. A great advantage of such an integrated system is real-time, full-time monitoring. HAR built into a smartphone promises to open a new direction not only in monitoring and health care but also in other fields.

Smartphones are set to become far more widespread over the next five years: according to Ericsson's mobility report, the 2.6 billion smartphone users recorded in 2014 will grow to 6.1 billion by 2020. These smartphones are also equipped with many sensors. The accelerometer measures acceleration along three orthogonal axes. Since every object on Earth is affected by gravity, the linear acceleration sensor reports the acceleration caused by the device's own movement, with the effect of Earth's gravity removed. The gyroscope measures the device's rate of rotation, which helps determine its orientation. These sensors make a smartphone application for human activity recognition realistic.

Android is the most popular mobile OS in the world, running on more than 82% of phones, and with Google continuously improving it based on user feedback, it is only going to become more popular. Google provides a set of APIs (Application Programming Interfaces) that let developers connect their Android app to Google services and receive human activity recognition results. The Google API can recognize six types of activity: "in vehicle", "on bicycle", "on foot", "running", "still" and "walking". However, this system requires a connection to a Google server in order to send requests and receive results.
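As a concrete illustration, the Java sketch below shows how an Android app might subscribe to these results through the Play Services ActivityRecognitionClient. The service name, detection interval and log handling are placeholder choices for this sketch, not details taken from the project above.

// Minimal sketch: subscribing to Google's activity recognition updates.
// Requires the Google Play Services location dependency (e.g.
// com.google.android.gms:play-services-location) and the
// com.google.android.gms.permission.ACTIVITY_RECOGNITION permission.
import android.app.IntentService;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

import com.google.android.gms.location.ActivityRecognition;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class ActivityDetectionService extends IntentService {

    public ActivityDetectionService() {
        super("ActivityDetectionService");
    }

    // Call once (e.g. from an Activity) to start receiving updates;
    // the 10-second detection interval is our own choice.
    public static void requestUpdates(Context context) {
        final long DETECTION_INTERVAL_MS = 10_000;
        Intent intent = new Intent(context, ActivityDetectionService.class);
        PendingIntent pending = PendingIntent.getService(
                context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
        ActivityRecognition.getClient(context)
                .requestActivityUpdates(DETECTION_INTERVAL_MS, pending);
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (!ActivityRecognitionResult.hasResult(intent)) return;
        ActivityRecognitionResult result =
                ActivityRecognitionResult.extractResult(intent);
        // The most probable of the six activity classes ("in vehicle",
        // "on bicycle", "on foot", "running", "still", "walking"),
        // with a confidence score from 0 to 100.
        DetectedActivity activity = result.getMostProbableActivity();
        Log.d("HAR", "type=" + activity.getType()
                + " confidence=" + activity.getConfidence());
    }
}

Each detection is delivered through the PendingIntent to the service, where the app can read the most probable activity class and its confidence score.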


Srihari J. created project Novel approach to avoid glare while driving cars on highways at night.


Novel approach to avoid glare while driving cars on highways at night.

Abstract: APT, i.e. "Accident Prevention Technology". All we see on the second page of any newspaper is unfortunate incidents: accidents. We are building features for four-wheelers which, supported by statistical reports, could prevent 50% of accidents. That is the minimum margin we set. Our quick-response systems based on an Arduino, ultrasonic sensors and servo motors, together with an Android map application, take care of the obstructions that lead drivers to accidents: glare hitting the eyes when oncoming vehicles zoom in with high beams on at night on highways, and junctions that are very difficult to spot when you are driving at 40+ mph with no one to warn you about the nearest intersection (our Android application using the G-Maps API takes care of that). Aren't intersections, glare and drowsiness the main causes of accidents? We take the utmost care to address every aspect. We prevent glare from hitting the driver's eyes with a completely automated system, implementable in every lower-end/economical car, using a robotic arm guided by an image-processing system together with light-dependent resistor (LDR) readings. We also keep the vehicle's speed in check, mostly at junctions. We also want to take care of the "care" part of our project: we make sure emergency vehicles reach hospitals quickly, using IoT and GPS. We are automating the communication system using the best of the available resources. We use cost-effective sensors and connect every device, producing a working prototype. Our total expense comes to under a thousand rupees for the complete, ready-to-implement system. We feel this is the perfect platform to turn our innovations into working models and reach the world. We eagerly await the chance to implement our ideas and save lives!

Objectives: Prevent accidents and save lives by stopping the glare of oncoming vehicles from disturbing the driver; warn about the next intersection/junction when the driver's speed is above the limit for that specific road; and provide automated signal systems that create a green corridor for emergency vehicles.

Outcomes: A 50% reduction in the accident rate.

The overall block diagram: (figure not included)

Brief description of the methodology:

a) Glare prevention system: A small camera fixed on the dashboard automatically photographs the driver every second and sends the result to the servo motor via an Arduino board. Using MATLAB, we coded the drowsiness and glare detection. If the output indicates drowsiness, an alarm (beep) rings. If glare is detected, the servo motor is given the right value for the inclination needed to block the light from falling on the driver's eyes; we used an X-ray sheet as a translucent material that blocks light in one direction (an illustrative sketch of this detection step follows below). The communication between the devices is automated and wireless.

b) Junction alarms: If the vehicle is travelling faster than the prescribed speed for that highway/road, our Android application sounds an alarm that there is a junction 600 m ahead and advises the driver to slow down.

c) Emergency vehicle systems: Pure IoT concepts inform the traffic police about an ambulance's arrival at the next signal. One application also communicates with the signal system, creating a green corridor and improving the chances of saving lives.
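The project's actual detection runs in MATLAB, which is not shown here. Purely as an illustration of the core idea in part (a), the following Java sketch flags glare when a large fraction of a dashboard frame is near-saturated and maps the bright region's horizontal position to a servo angle; the thresholds, file name and the hand-off to the Arduino are assumptions.

// Illustrative glare detection: count near-saturated pixels in a
// dashboard frame and, above a threshold, derive a servo angle from
// where the bright region sits horizontally.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class GlareDetector {

    public static void main(String[] args) throws Exception {
        BufferedImage frame = ImageIO.read(new File("frame.jpg")); // placeholder path
        int brightCount = 0;
        long brightXSum = 0;
        for (int y = 0; y < frame.getHeight(); y++) {
            for (int x = 0; x < frame.getWidth(); x++) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                // Perceptual luminance; values above 240 are treated as glare.
                double lum = 0.299 * r + 0.587 * g + 0.114 * b;
                if (lum > 240) {
                    brightCount++;
                    brightXSum += x;
                }
            }
        }
        double brightFraction =
                brightCount / (double) (frame.getWidth() * frame.getHeight());
        if (brightFraction > 0.02) { // more than 2% of the frame saturated
            // Map the bright region's horizontal centre to a 0-180 degree angle.
            double centreX = brightXSum / (double) brightCount;
            int servoAngle = (int) (180 * centreX / frame.getWidth());
            System.out.println("Glare detected; servo angle: " + servoAngle);
            // A real system would write this angle to the Arduino over serial.
        }
    }
}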

Expected results: Humankind is the end user! The ideas themselves are philanthropic; no one wishes to end their life on the road. Not everyone can afford the automated systems that come in a Benz or BMW. Our glare prevention technology can be installed for less than 2,000 rupees. The junction app is free and always will be. The ambulance system app is also free, though working with government officials will be needed to complete the job.

Conclusion: This is an essential implementation that should not be ignored. Statistics say that intersections and glare together account for 50% of accidents, and we address both. We look forward to implementing it on the best platform we have, the ABB Makeathon.

Future scope: Research and development take things to the next level. Our glare prevention system, given its mechanical constraints, responds in less than 1.2 seconds; higher levels of automation and accuracy can be pursued. To our knowledge, no concept of this kind exists anywhere to date.



Daniel T. created project CAVSIM


CAVSIM

In recent years, research on information exchange between vehicles has been growing, with the goal of improving traffic safety and efficiency. This project describes the development of a multi-agent system for simulating connected vehicles at road crossings, in order to remove the need for traffic lights. The system was developed in the C# programming language, with the Boris.NET platform handling communication between agents. The user specifies the desired environments in JSON files that are read at simulation startup. A 2D graphical interface was also created to view the simulation; it can follow an agent or stay fixed at a map position.
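CAVSIM's actual agent protocol is not detailed above, and the system itself is written in C#. To keep the examples on this page in one language, the following Java sketch only illustrates the general idea of a crossing without traffic lights: a manager grants each vehicle the earliest time slot that does not overlap an existing reservation. All class and method names here are hypothetical.

// Illustrative first-come-first-served reservation manager for a
// traffic-light-free crossing. Granted slots never overlap, so sorting
// by start time and pushing the candidate slot forward is sufficient.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class IntersectionManager {

    /** A granted crossing slot: [start, start + duration) in seconds. */
    record Slot(String vehicleId, double start, double duration) {}

    private final List<Slot> reservations = new ArrayList<>();

    /** Grant the earliest non-overlapping slot at or after the vehicle's ETA. */
    public Slot request(String vehicleId, double eta, double duration) {
        reservations.sort(Comparator.comparingDouble(Slot::start));
        double start = eta;
        for (Slot s : reservations) {
            // Push the candidate start past any slot it would overlap.
            if (start < s.start() + s.duration() && s.start() < start + duration) {
                start = s.start() + s.duration();
            }
        }
        Slot granted = new Slot(vehicleId, start, duration);
        reservations.add(granted);
        return granted;
    }

    public static void main(String[] args) {
        IntersectionManager m = new IntersectionManager();
        System.out.println(m.request("car-1", 10.0, 3.0)); // granted at t=10
        System.out.println(m.request("car-2", 11.0, 3.0)); // pushed back to t=13
    }
}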


Adam M. created project TASS PVL Computer Vision Hub


TASS PVL Computer Vision Hub

DESCRIPTION:

TASS PVL is a sister project to the original TASS Hub project. Like TASS Hub, TASS PVL is a local server that hosts an IoT-connected A.I. powered by the Intel® Computer Vision SDK Beta. The hub can connect to multiple IP cameras and two RealSense cameras. First, the program detects whether a face, or faces, are present in the frames; if so, it passes the frames through the trained model to determine whether each face belongs to a known person or an intruder. In either case, the server communicates with the IoT JumpWay, which executes the relevant commands set by rules, for instance controlling other devices on the network or raising alarms in applications.
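The detect-then-identify flow can be sketched as follows. The Intel Computer Vision SDK Beta API is not shown in the description above, so this Java sketch substitutes OpenCV's bindings for the face-detection step; identify() and publishAlert() are hypothetical placeholders for the trained model and the IoT JumpWay call.

// Sketch of the detect-then-identify flow, using OpenCV's Java
// bindings as a stand-in for the Intel Computer Vision SDK Beta.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class FacePipeline {

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");

        Mat frame = Imgcodecs.imread("camera_frame.jpg"); // placeholder frame
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(frame, faces); // step 1: find faces

        for (Rect face : faces.toArray()) {
            Mat crop = frame.submat(face);
            // Step 2: classify each face as a known person or intruder.
            String who = identify(crop);
            publishAlert(who.equals("unknown") ? "intruder" : who);
        }
    }

    // Placeholder for the trained recognition model.
    private static String identify(Mat faceCrop) { return "unknown"; }

    // Placeholder for the IoT JumpWay notification described above.
    private static void publishAlert(String subject) {
        System.out.println("ALERT: " + subject);
    }
}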

INTEL® TECHNOLOGY:

TASS PVL uses the following Intel technologies:

  • Intel® Core i7 NUC
  • Intel® Computer Vision SDK Beta
  • Intel® RealSense (R200, F200)

IOT CONNECTIVITY:

IoT connectivity is managed by the TechBubble IoT JumpWay, TechBubble Technologies' IoT PaaS, which at this point primarily uses the secure MQTT protocol. Rules can be set up that are triggered by sensor values, warning messages, device status messages, and identified-person or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.
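As an illustration of such a rule, the following Java sketch uses the Eclipse Paho MQTT client to subscribe to an alert topic and publish a command when an intruder alert arrives. The broker URL, credentials, topic names and payload format are assumptions; the IoT JumpWay's actual scheme is not documented here.

// Sketch of a device-to-device rule over MQTT using Eclipse Paho.
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RuleListener {

    public static void main(String[] args) throws Exception {
        // Placeholder broker and credentials, secured over TLS.
        MqttClient client =
                new MqttClient("ssl://broker.example.com:8883", "rule-listener");
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("device-user");
        opts.setPassword("device-pass".toCharArray());

        client.setCallback(new MqttCallback() {
            @Override public void connectionLost(Throwable cause) {}
            @Override public void deliveryComplete(IMqttDeliveryToken token) {}

            @Override
            public void messageArrived(String topic, MqttMessage msg) throws Exception {
                // Rule: on an intruder alert, command another device.
                if (new String(msg.getPayload()).contains("intruder")) {
                    client.publish("devices/alarm/commands",
                            new MqttMessage("ON".getBytes()));
                }
            }
        });

        client.connect(opts);
        client.subscribe("devices/camera/alerts"); // hypothetical alert topic
    }
}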

ARTIFICIAL INTELLIGENCE:

TASS PVL uses the Intel Computer Vision SDK Beta to provide the system with Artificial Intelligence. For other uses of A.I. in the sister project, TASS Hub, follow this link.

INTELLILAN MANAGEMENT:

The IntelliLan Management Console/Applications are essentially IoT JumpWay applications capable of controlling all IntelliLan devices on their network and communicating with the IoT JumpWay. Users can manage their devices by voice through the console, powered by TIA, an A.I. agent developed to help home and business owners use TechBubble web and IoT systems.


Rishabh S. updated status


Rishabh Shukla

Hello there!

I'm an Information Technology undergrad at Maharaja Agrasen Institute of Technology in India. I am a backend and mobile developer specialising in Node, Android, Python and iOS. I've also created some augmented reality projects in the past with ARKit and Vuforia. My latest research is in the field of image processing, in which I created an IoT-based rover that follows a required path without errors, using an edge-detection technique. I am also very interested in deep learning and NLP, and I've worked with TensorFlow and Keras. I'm excited to hear what projects you've been working on in those fields!

