Drift Detection for Edge IoT Applications

This concept drift project runs on video and image datasets so that an overall precision and standard error can be calculated. The drift detection technique identifies true positives and false negatives using both real and virtual drift detection.

Project status: Concept

oneAPI

Intel Technologies
DevCloud, oneAPI, DPC++, Intel Python, OpenVINO, Movidius NCS


Overview / Usage

Project Overview

The concept drift detector applies a binary classification technique to image data extracted from video. It follows a Bayesian approach, representing the decision boundary as a set of P-value images. The detection technique is integrated with IP cameras to report a **partial precision** of drift detection.

Problems Solved

The concept drift work deals with Drift Understanding scenarios in which PSNR is interpreted as Knowledge Loss and the MSE (mean squared error) of the P-value images is interpreted as Drift Severity. Through simulation, the technique derives drifted reconstructed images from a drift-detection sensitivity parameter, so that distances from the detected images to an image manifold also become features. Any image within CIFAR or ImageNet is translated to a representation on an error-based image manifold. An extensive feature-engineering process is being developed to raise the reported accuracy of the original ML model used within an IP camera to a desired level.
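
As a concrete sketch of these two metrics, the snippet below computes PSNR (Knowledge Loss) and MSE (Drift Severity) with NumPy. The frame and its reconstruction are synthetic stand-ins for illustration, not project data:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two equally sized images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio, in dB, of a reconstruction."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

# Hypothetical example: a detected frame and a noisy drift reconstruction.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
recon = np.clip(frame + rng.normal(0, 5, size=(32, 32)), 0, 255)

knowledge_loss = psnr(frame, recon)   # lower PSNR -> more knowledge lost
drift_severity = mse(frame, recon)    # applied to P-value images, this reads as severity
```

In a real run the inputs would be the P-value images and reconstructions described above rather than random frames.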

In Production

In production, there are two routes:

(1) AI route: execute the drift detection algorithm (under development on GitHub) within a Jupyter notebook to produce a video inference result

(2) Networking route: install the drift detection algorithm inside an IP camera and instruct it to generate a video inference result

At purchase time, the results of the drift detection technique for the cameras will be available within the technical documentation as articles, blogs, or other documents. IP cameras used in industry will be tested with live video. oneAPI code that can run on heterogeneous systems will be deployed to the processors under consideration: RISC-V, ARM Cortex, and Intel. **FPGAs** used in industrial applications will be deployed with the drift detection algorithm so that the machines can support them in a production environment.

Methodology / Approach

Drift Detection

In this project, we present three different techniques for drift detection:

(1) Anomaly Detection: a real-drift algorithm that integrates the ML model to report drift. The anomaly detection is based on Euclidean and cosine distances between drifted and non-drifted images.
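
A minimal sketch of such a distance-based anomaly score, assuming flattened images or embeddings as input vectors; the function names and thresholds are illustrative, not the project's tuned values:

```python
import numpy as np

def anomaly_scores(reference: np.ndarray, candidate: np.ndarray):
    """Euclidean and cosine distances between two flattened image vectors."""
    ref = reference.astype(np.float64).ravel()
    cand = candidate.astype(np.float64).ravel()
    euclidean = float(np.linalg.norm(ref - cand))
    cosine = 1.0 - float(ref @ cand / (np.linalg.norm(ref) * np.linalg.norm(cand)))
    return euclidean, cosine

def is_drifted(reference, candidate, eu_thresh=50.0, cos_thresh=0.1):
    """Flag real drift when either distance exceeds its (tuned) threshold."""
    eu, cos = anomaly_scores(reference, candidate)
    return eu > eu_thresh or cos > cos_thresh
```

Using both distances catches complementary cases: cosine distance ignores brightness scaling, while Euclidean distance catches magnitude shifts that preserve direction.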

(2) Real-drift Classifier approach: this algorithm integrates the ML model to report drift based on the image-manifold error and **epsilon** values. The classifier used could be a **GaussianProcessClassifier**, which reduces the training-error and testing-error bias.
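
A hedged sketch of this classifier using scikit-learn's GaussianProcessClassifier; the two features (manifold error and epsilon) and the synthetic, well-separated training clusters are assumptions for illustration only:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical features per detection: [image-manifold error, epsilon].
rng = np.random.default_rng(1)
n = 40
no_drift = np.column_stack([rng.normal(0.1, 0.02, n), rng.normal(0.05, 0.01, n)])
drift = np.column_stack([rng.normal(0.5, 0.05, n), rng.normal(0.3, 0.05, n)])
X = np.vstack([no_drift, drift])
y = np.array([0] * n + [1] * n)  # 0 = no drift, 1 = drift

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2), random_state=0)
clf.fit(X, y)

# Probabilistic drift decision for a new detection.
proba = clf.predict_proba([[0.45, 0.28]])[0]
```

The Gaussian process yields calibrated class probabilities rather than hard labels, which suits a drift detector that must report a precision with error bars.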

(3) Virtual-drift Classifier approach: this algorithm is used within the project because it is a virtual drift algorithm that requires no ML model. It is used to integrate the IP camera with drift detection so that a partial precision is obtained. The image manifold is either built from an autoencoder and an MDS-based manifold, or uses patch-based techniques to represent a single pixel across a set of images.
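
The patch-based pixel representation could look like the following NumPy sketch, where a pixel is described by its k x k neighbourhood across a stack of images; the function name and the edge-padding choice are hypothetical:

```python
import numpy as np

def pixel_patch_representation(images: np.ndarray, row: int, col: int, k: int = 3) -> np.ndarray:
    """Represent pixel (row, col) by its k x k neighbourhood in every image.

    images: (N, H, W) stack; returns an (N, k*k) matrix, one patch row per image.
    """
    pad = k // 2
    # Edge-pad so border pixels still get a full k x k patch.
    padded = np.pad(images, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    patches = padded[:, row:row + k, col:col + k]
    return patches.reshape(images.shape[0], -1)
```

Each row of the result is one image's view of that pixel, so the matrix can feed an MDS embedding or a distance computation without any trained model.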

For face aging, we find that the Real-drift Classifier is the best of the three algorithms.

Real Time Video Transfer

In this project, real-time video transfer is achieved by a command-line application that sends frames to an FFmpeg server, which is viewed over HTTP by a Node.js-based application. In a real deployment where an IP camera is installed, a P2P transfer is made from the camera to the server using RTP and RTSP, relaying the video to another domain. This ensures that people from various organizations can view the real-time video inference output from their own devices.

In every processed video, objects are detected using an object detection ML model. The model detects relevant images with their bounding boxes and converts them to P-value images through image correlation tests. Each detected image is converted to its **reconstructed** image with an additional drift-sensitivity parameter obtained through simulation. The P-value images are evaluated for both the original detected image and its reconstruction. The **MSE** of the P-value images and the **PSNR** between the original and reconstructed images form the features for the binary classifier.
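
One possible form of the patchwise correlation test is sketched below, using `scipy.stats.pearsonr` to build a P-value map between an image and its reconstruction; the patch size and the handling of constant patches are assumptions, and the project's exact test statistic may differ:

```python
import numpy as np
from scipy.stats import pearsonr

def p_value_image(original: np.ndarray, reconstructed: np.ndarray, patch: int = 8) -> np.ndarray:
    """Patchwise correlation-test p-values between an image and its reconstruction.

    Both inputs are (H, W) arrays; returns one p-value per non-overlapping patch.
    """
    h, w = original.shape
    rows, cols = h // patch, w // patch
    pvals = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = original[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch].ravel()
            b = reconstructed[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch].ravel()
            # pearsonr is undefined for constant patches; treat as no evidence of drift.
            if a.std() == 0 or b.std() == 0:
                pvals[i, j] = 1.0
            else:
                pvals[i, j] = pearsonr(a.astype(float), b.astype(float))[1]
    return pvals
```

Low p-values mark patches where original and reconstruction remain strongly correlated; the MSE over this map then serves as the Drift Severity feature described above.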

Video Analytics into Databases

The analyzed video is transformed into per-frame features, which are stored in a database. A partial precision is calculated from these features for every frame, and a Grafana server connected to the database visualizes the output.
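
A minimal sketch of the per-frame feature store, using SQLite via Python's built-in `sqlite3`; the schema, the example rows, and the flagged-frame ratio below are illustrative stand-ins, since the exact partial-precision formula and database backend are not specified here:

```python
import sqlite3

# In-memory store standing in for the database Grafana would query.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE frame_features (
    frame INTEGER PRIMARY KEY,
    psnr REAL,
    mse REAL,
    drift_detected INTEGER)""")

# Hypothetical per-frame feature rows produced by the detector.
rows = [(1, 34.2, 25.1, 1), (2, 41.0, 5.2, 0), (3, 30.5, 58.3, 1)]
conn.executemany("INSERT INTO frame_features VALUES (?, ?, ?, ?)", rows)

# A running drift ratio a Grafana panel could chart per frame window;
# a real partial-precision query would join flagged frames against ground truth.
flagged, = conn.execute(
    "SELECT COUNT(*) FROM frame_features WHERE drift_detected = 1").fetchone()
total, = conn.execute("SELECT COUNT(*) FROM frame_features").fetchone()
partial_precision = flagged / total
```

Grafana can point at the same table through its SQL data sources and refresh the panel as new frames are inserted.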

Technologies Used

HTTP / WebRTC, FFMPEG, Node.js, Python, oneAPI

Libraries Used, Software and Hardware

FPGAs on Intel DevCloud, Grafana, Jupyter Notebook

Intel Technologies

FPGAs on Intel DevCloud, Intel oneAPI Base Toolkit, Intel HPC

Documents and Presentations

Repository

https://github.com/blackout-ai/Face_Aging_Concept_Drift
