Scin

Scin is an app that utilizes a deep feed forward convolutional neural network to detect and diagnose both skin and plant diseases.

Modern Code, Artificial Intelligence

Description

One out of every twenty people does not have access to medical facilities. Skin diseases and afflictions affect more than 80% of the world’s population and are sometimes a sign of internal problems. Skin diseases are primarily diagnosed visually, and then by more invasive procedures (dermoscopic analysis, biopsy, and histopathological examination). A similar issue arises with plants: more than 30% of crops in less affluent areas die from disease. Owing to this loss of crops, one in nine people suffers from chronic undernourishment.

Automatically detecting and diagnosing these lesions has been challenging, owing to the variable properties of each disease image. Deep convolutional neural networks (CNNs) are a machine learning approach that has proven extremely promising at recognizing images subject to real-world variables (lighting, focus, etc.).

In this project, I classified skin and plant diseases using a specially developed CNN trained end-to-end on images of the conditions, with only pixels and disease labels as inputs. I trained the CNN on a dataset of 200,000 clinical and horticultural images covering 13 human diseases and 17 plant diseases. Deployed on an iOS device, my application classifies skin and plant diseases with a level of competence comparable to dermatologists and plant pathologists. The user simply aims the smartphone camera at the diseased area, and the application provides a real-time diagnosis by classifying the image with the CNN. The CNN achieved performance far above any other system I tested, and its efficiency and ease of use make it a helpful tool for people around the world. With roughly six billion mobile subscriptions currently in place, my application could provide low-cost universal access to vital diagnostics.
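The pixels-in, label-out pipeline described above can be sketched minimally. This is a toy illustration, not Scin's actual model: the labels, random weights, single filter, and 32x32 input are hypothetical stand-ins for a trained network with 13 skin and 17 plant classes.

```python
import numpy as np

# Hypothetical labels and weights -- stand-ins for a trained model.
LABELS = ["eczema", "melanoma", "healthy"]

rng = np.random.default_rng(0)
KERNEL = rng.standard_normal((3, 3, 3))             # one 3x3 filter over RGB
CLASSIFIER = rng.standard_normal((1, len(LABELS)))  # pooled feature -> logits

def conv2d(image, kernel):
    """Valid-mode 2D convolution of an HxWx3 image with a 3x3x3 kernel."""
    h, w, _ = image.shape
    kh, kw, _ = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw, :] * kernel)
    return out

def classify(image):
    """Pixels in, label out: conv -> ReLU -> global average pool -> softmax."""
    feat = np.maximum(conv2d(image, KERNEL), 0.0)  # ReLU activation
    pooled = np.array([[feat.mean()]])             # global average pooling
    logits = pooled @ CLASSIFIER
    probs = np.exp(logits - logits.max())          # numerically stable softmax
    probs /= probs.sum()
    return LABELS[int(np.argmax(probs))], probs.ravel()

frame = rng.random((32, 32, 3))   # stand-in for one camera frame
label, probs = classify(frame)
print(label, probs.round(3))
```

In the real app the same structure repeats across many convolutional layers, and the camera feed supplies the input frame.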

Gallery

Chaplin M. updated status

Well, it's now 7am and I have been up for about 30 hours.... but the image recognition is now actually working in Azure!! Now time to take a power nap and then do some optimization with Intel's awesome suite of tools!

Chaplin M. updated status

Fifth night in a row I find myself still in front of the computer at 2AM.... I think I got up at least twice today though!! Progress... Oh well, the future doesn't build itself! ...... yet....

Moloti N. created project Intelligent Home Security: Africa Motion Content encoder decoder using Deep Neural Networks

Intelligent Home Security: Africa Motion Content encoder decoder using Deep Neural Networks

We propose the use of drones to help communities enhance their security initiatives and identify criminals during the day and at night. We use multiple sensors and computer vision algorithms to recognize and detect motion and content in real time, then automatically send messages about the criminal activity to community members' cell phones. Community members may thus be able to stop housebreakings before they even occur.

Machine Intelligence Algorithm Design Methodology

AMCnet: https://github.com/AfricaMachineIntelligence/AMCnet
https://devmesh.intel.com/projects/africa-motion-content-network-amcnet

We propose a deep neural network for predicting future frames in natural video sequences, running on a CPU. To effectively handle the complex evolution of pixels in videos, we decompose motion and content, the two key components generating dynamics in videos. The model is built upon an Encoder-Decoder Convolutional Neural Network and a Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By modeling motion and content independently, predicting the next frame reduces to converting the extracted content features into the next frame's content via the identified motion features, which simplifies the task of prediction. The model we aim to build should be end-to-end trainable over multiple time steps and should naturally learn to decompose motion and content without separate training. We evaluate the proposed network architecture on the AVA and UCF-101 human action datasets and show state-of-the-art performance in comparison to recent approaches.

We then use this pretrained AMCnet model on the video feed from the DJI Spark drone, integrated with the Movidius NCS to accelerate real-time object detection neural networks.
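The motion/content decomposition above can be illustrated with a deliberately tiny sketch. There are no learned encoders here (the real AMCnet uses an Encoder-Decoder CNN and a ConvLSTM); this is just a first-order frame extrapolation on a toy 4x4 grayscale sequence, showing how separating appearance (content) from temporal change (motion) simplifies next-frame prediction:

```python
import numpy as np

def predict_next_frame(frames):
    """Toy motion/content split: the content path keeps the latest frame's
    appearance, the motion path encodes the change between the last two
    frames, and the 'decoder' recombines the two into the next frame."""
    content = frames[-1]               # content encoder: appearance
    motion = frames[-1] - frames[-2]   # motion encoder: temporal change
    return content + motion            # decoder: apply motion to content

# A brightness ramp: each frame equals the previous frame plus a constant,
# so first-order extrapolation recovers the next frame exactly.
f0 = np.zeros((4, 4))
f1 = f0 + 1.0
pred = predict_next_frame([f0, f1])
print(pred[0, 0])   # -> 2.0
```

In the full model, both paths are learned feature encoders and the recombination is a decoder network, but the division of labor is the same.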
