TASS Computer Vision Hub

An artificially intelligent, IoT-connected CCTV hub, as seen at the Intel booth at Codemotion Amsterdam 2017.

Artificial Intelligence, Internet of Things

Description

DESCRIPTION: TASS is a state-of-the-art, IoT-connected computer vision server and API for advanced home and business security and automation. The hub can connect to multiple IP cameras and RealSense cameras, and it utilizes the Intel® Computer Vision SDK Beta to bring industry-standard computer vision to the project.
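
For illustration, the sketch below shows how a hub process might pull frames from several IP cameras at once. This is a minimal sketch using plain OpenCV rather than the Intel® Computer Vision SDK, and the camera names and RTSP URLs are hypothetical placeholders.

```python
# Minimal multi-camera capture sketch. Assumptions: plain OpenCV stands in for
# the Intel Computer Vision SDK, and the camera names / RTSP URLs are placeholders.
# RealSense frames would come from the RealSense SDK (e.g. pyrealsense2), not shown here.
import cv2

CAMERAS = {
    "front_door": "rtsp://192.168.1.10:554/stream1",  # hypothetical IP camera
    "warehouse": "rtsp://192.168.1.11:554/stream1",   # hypothetical IP camera
}

def open_streams(cameras):
    """Open a cv2.VideoCapture for each configured camera."""
    return {name: cv2.VideoCapture(url) for name, url in cameras.items()}

def read_frames(streams):
    """Grab one frame per camera; skip any camera whose read failed."""
    frames = {}
    for name, cap in streams.items():
        ok, frame = cap.read()
        if ok:
            frames[name] = frame
    return frames

if __name__ == "__main__":
    streams = open_streams(CAMERAS)
    frames = read_frames(streams)
    print("Got frames from:", list(frames.keys()))
```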

TASS first detects whether one or more faces are present in the frames captured from the cameras; if so, it passes the frames through a computer vision algorithm to determine whether each face belongs to a known person or an intruder. In either case, the server communicates with the IoT JumpWay, which executes the relevant commands set by rules, for instance controlling other devices on the network or raising alarms in applications.
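
A minimal sketch of that detect-then-identify flow is shown below. It assumes OpenCV's bundled Haar cascade for detection, a hypothetical identify_face() helper standing in for the identification model, and the generic paho-mqtt client standing in for the IoT JumpWay library; the broker address and topic names are illustrative only.

```python
# Sketch of the detect -> identify -> notify flow. Assumptions: paho-mqtt stands in
# for the IoT JumpWay client library, identify_face() is a hypothetical stand-in for
# the identification model, and broker/topic names are placeholders.
import json
import cv2
import paho.mqtt.client as mqtt

# Haar cascade bundled with OpenCV; the real hub may use a different detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

client = mqtt.Client()

def identify_face(face_img):
    """Hypothetical stand-in for the identification model.
    Returns (person_id, confidence), or (None, 0.0) if the face is unknown."""
    return None, 0.0

def process_frame(frame, camera_id):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        person, confidence = identify_face(gray[y:y + h, x:x + w])
        event = {"camera": camera_id, "confidence": confidence}
        if person is not None:
            event["person"] = person
            client.publish("tass/notifications/known", json.dumps(event))
        else:
            client.publish("tass/notifications/intruder", json.dumps(event))

if __name__ == "__main__":
    client.connect("broker.example.com", 1883)  # placeholder broker address
    frame = cv2.imread("test.jpg")              # placeholder test frame
    if frame is not None:
        process_frame(frame, camera_id="front_door")
```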

TASS officially debuted at the Intel booth at Codemotion Amsterdam 2017. More recently, TASS was demonstrated at Web Summit alongside A.I. E-Commerce, debuting the current version running on the latest Intel NUC.

IOT CONNECTIVITY: IoT connectivity is managed by the TechBubble IoT JumpWay, the TechBubble Technologies IoT PaaS, which primarily, at this point, uses the secure MQTT protocol. Rules can be set up that are triggered by sensor values, warning messages, device status messages, and identified known-person or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home or business environment.
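
As a rough illustration of how such a rule could look on the device side, the sketch below subscribes to a hypothetical intruder-alert topic and reacts by publishing a command to another device. It uses the generic paho-mqtt client rather than the actual IoT JumpWay library, and all hostnames, credentials, and topic names are placeholders.

```python
# Sketch of a rule: when an intruder alert arrives, command another device.
# Assumptions: paho-mqtt stands in for the IoT JumpWay client library; the broker
# host, credentials, and topic names are placeholders. TLS is omitted for brevity,
# although the JumpWay itself uses secure MQTT.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"                   # placeholder broker
ALERT_TOPIC = "tass/notifications/intruder"     # placeholder topic
COMMAND_TOPIC = "devices/alarm-1/commands"      # placeholder topic

def on_message(client, userdata, msg):
    alert = json.loads(msg.payload)
    # Rule: any intruder alert switches the alarm device on.
    command = {"action": "ON", "reason": "intruder", "camera": alert.get("camera")}
    client.publish(COMMAND_TOPIC, json.dumps(command))

client = mqtt.Client()
client.username_pw_set("device-id", "device-key")  # placeholder credentials
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(ALERT_TOPIC)
client.loop_forever()
```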

ARTIFICIAL INTELLIGENCE: During the ongoing development of TASS, 8 A.I. solutions have been used and tested before settling on the current one.

    1. The first solution used OpenCV and Haar Cascades with an Eigenfaces model; users could upload their training data, which was sent to the device via MQTT for training. This solution was good as a proof of concept, but identification was not accurate enough. It has since been opened up as an example for the IoT JumpWay Developer Program and has been open-sourced, with a tutorial available. A minimal sketch of this Haar Cascade/Eigenfaces approach appears after this list.
    2. The second solution was developed at the IoT Solutions World Congress Hackathon in Barcelona and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution used OpenCV to detect faces and Caffe to identify them. Although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but we had a great time working on the project and were honoured to win the award.
    3. The third solution used OpenCV to detect faces and pass them through a custom-trained Inception V3 model using TensorFlow. We added the ability to carry out transfer learning directly on the device (a Raspberry Pi); users could upload their training data, which was sent to the device via MQTT for training. This solution was a massive improvement, and accuracy for identifying trained people was almost 100%. Unfortunately, I identified an issue, which I now know to be a common one, where the network would identify anyone unknown as one of the trained people. I am currently writing a Python wrapper for the TensorFlow/Inception/IoT JumpWay method, and the project will soon be released as an IoT JumpWay example.
    4. The fourth solution was built on the foundations of OpenFace. We moved the A.I. to a local Ubuntu server rather than doing the identification on-board, as on-board identification on a Raspberry Pi was quite poor. This move means that training is only required on the server rather than on every device. As with the TensorFlow implementation, we came across the issue of unknown people being identified as known people. We have so far resolved this through the use of an unknown class. Although this solution may not work across the board, we are working with the OpenFace GitHub community on additional approaches that incorporate multiple models to verify the identification.
    5. For the fifth solution, the A.I. server was re-homed onto an Intel NUC and the structure of the network changed. The program that handles facial recognition and identification can now connect to multiple IP cameras directly; previously, the camera devices sent their frames to the broker over MQTT. With this move, the identification process became significantly more efficient, the camera devices only need to stream (they no longer need to connect to the communication broker), and third-party devices are now supported. In addition to managing multiple IP cameras, the hub can now process and classify frames from a RealSense camera. This version has now been open-sourced, with a tutorial available.
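
For reference, the sketch below illustrates the kind of OpenCV Haar Cascade plus Eigenfaces approach described in the first solution. It is a minimal sketch, not the project's actual code: it assumes the opencv-contrib build (for the cv2.face module) and a hypothetical training directory layout in which each person's images sit in a folder named by a numeric ID.

```python
# Minimal Haar Cascade + Eigenfaces sketch (first-solution style). Assumptions:
# opencv-contrib-python is installed (for cv2.face), and training images live in
# data/<numeric_person_id>/*.jpg -- this layout is hypothetical, not the project's own.
import glob
import os
import cv2
import numpy as np

FACE_SIZE = (100, 100)  # Eigenfaces needs every training sample at the same size
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(image):
    """Return the first detected face as a fixed-size grayscale crop, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE)

samples, labels = [], []
for path in glob.glob("data/*/*.jpg"):
    face = extract_face(cv2.imread(path))
    if face is not None:
        samples.append(face)
        labels.append(int(os.path.basename(os.path.dirname(path))))  # folder name = person ID

recognizer = cv2.face.EigenFaceRecognizer_create()
recognizer.train(samples, np.array(labels))

# Predict on a new image; a lower distance means a closer match.
face = extract_face(cv2.imread("test.jpg"))
if face is not None:
    label, distance = recognizer.predict(face)
    print("Predicted person", label, "with distance", distance)
```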

To view the full information about the latest versions of TASS, you can visit the official project page:

https://ai.techbubbletechnologies.com/projects/computer-vision/project/tass-techbubble-autonomous-sight-system

Links

  • Raspberry Pi Computer Vision Example
  • IoT JumpWay
  • GitHub
