TASS Computer Vision Hub

An artificially intelligent, IoT-connected CCTV hub. As seen at Codemotion Amsterdam 2017 at the Intel booth.

Artificial Intelligence, Internet of Things


Description

THIS PROJECT IS NOW ON GITHUB! See the links below.

The TASS Hub is a local server that hosts an IoT-connected convolutional neural network. The hub can connect to multiple IP cameras; it first detects whether a face, or faces, are present in the frames, and if so passes the frames through the trained model to determine whether each face belongs to a known person or an intruder. In either case the server communicates with the IoT JumpWay, which executes the relevant commands set by your rules, for instance controlling other devices on the network or raising alarms in applications.
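The decision flow above can be sketched in a few lines. This is a minimal, illustrative sketch only: `triage_face` and the 0.80 confidence cutoff are hypothetical stand-ins for the real OpenFace classifier and its tuned threshold, not the actual TASS code.

```python
# Hypothetical sketch of the hub's triage step: given the classifier's
# (name, confidence) scores for one detected face, decide which event
# the hub would raise to the IoT JumpWay.

KNOWN_THRESHOLD = 0.80  # assumed cutoff; the real value would be tuned

def triage_face(predictions):
    """predictions: list of (name, confidence) pairs for one face."""
    name, confidence = max(predictions, key=lambda p: p[1])
    if name != "unknown" and confidence >= KNOWN_THRESHOLD:
        return {"event": "known_person", "person": name}
    return {"event": "intruder"}

# A strong match to a trained person is a known-person event; a weak or
# unknown-class match is treated as an intruder.
known = triage_face([("adam", 0.93), ("unknown", 0.05)])
stranger = triage_face([("adam", 0.41), ("unknown", 0.55)])
```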

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which, at this point, primarily uses the secure MQTT protocol. Rules can be set up that are triggered by sensor values, warning messages, device status messages, and known-person or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.
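As a rough illustration of what such a warning message might look like on the wire, here is a sketch that builds a topic and JSON payload. The topic layout and field names are invented for illustration and are not the JumpWay's actual schema; with an MQTT client such as paho-mqtt, the pair would be passed to `client.publish(topic, payload)`.

```python
import json

def build_event(location_id, device_id, person):
    """Build a hypothetical MQTT topic and JSON payload for an
    identification event. All names here are illustrative only."""
    topic = f"{location_id}/Devices/{device_id}/Warnings"  # assumed shape
    warning = "Known Person" if person else "Intruder"
    payload = json.dumps({"WarningType": warning, "Person": person})
    return topic, payload

# An intruder event from device 7 at location 42.
topic, payload = build_event(42, 7, None)
```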

ARTIFICIAL INTELLIGENCE:

During the development phase, five A.I. solutions were built and tested before settling on the current one.

  • The first solution used OpenCV Haar cascades with an Eigenfaces model; users could upload their training data, which was sent to the device via MQTT for training. This solution was good as a proof of concept, but identification was not accurate enough. It has since been opened up as an example for the IoT JumpWay Developer Program (see links below).

  • The second solution was developed at the IoT Solutions World Congress Hackathon in Barcelona and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. It used OpenCV to detect faces and Caffe to identify them. Although we managed to build the network on the Joule, we were unable to complete the full functionality, but we had a great time working on the project and were honoured to win the award.

  • The third solution used OpenCV to detect faces and passed them through a custom-trained Inception V3 model using TensorFlow, with the ability to carry out transfer learning directly on the device (a Raspberry Pi). Users could upload their training data, which was sent to the device via MQTT for training. This was a massive improvement, and accuracy for detecting trained people was almost 100%. Unfortunately, I identified what I now know to be a common issue: the network would identify anyone unknown as one of the trained people. I am currently writing a Python wrapper for the TensorFlow/Inception/IoT JumpWay method, and the project will soon be released as an IoT JumpWay example.

  • The fourth solution, which I built on the foundations of OpenFace, moved the A.I. to a local Ubuntu server rather than doing the identification onboard, as onboard identification on a Raspberry Pi was quite poor. This means training is only required on the server rather than on every device. As with the TensorFlow implementation, I came across the issue of unknown people being identified as known people. So far I have resolved it through the use of an unknown class, although this may not work across the board; I am working with the OpenFace GitHub community on additional solutions that use multiple models to verify each identification.

  • For the fifth and current solution, the A.I. server has been re-homed onto an Intel NUC, and the structure of the network has changed: the program that handles facial recognition and identification now connects to multiple IP cameras directly, whereas previously the camera devices sent their frames to the broker over MQTT. With this move the identification process is more efficient; the camera devices only need to stream, they no longer need to connect to the communication broker, and third-party devices are now supported. In addition to managing multiple IP cameras, the hub can now process and classify frames from a RealSense camera.
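The fifth solution's camera handling can be sketched as a simple polling loop over the configured streams. The URLs and `read_frame` callable below are hypothetical stand-ins; in the real hub, `read_frame` would wrap something like `cv2.VideoCapture(url).read()` on each IP camera's stream.

```python
# Sketch of the hub polling several IP cameras itself, instead of the
# cameras pushing frames over MQTT. `read_frame` is an injected stand-in
# for a real stream reader (e.g. OpenCV's VideoCapture).

CAMERA_URLS = ["http://cam1/stream", "http://cam2/stream"]  # illustrative

def poll_cameras(read_frame, urls, rounds):
    """Round-robin over the cameras, collecting (url, frame) pairs."""
    frames = []
    for _ in range(rounds):
        for url in urls:
            frames.append((url, read_frame(url)))
    return frames

# With a stub reader, two rounds over two cameras yield four frames.
frames = poll_cameras(lambda url: f"frame-from-{url}", CAMERA_URLS, rounds=2)
```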

CURRENT ISSUES:

  • The open set recognition issue: a neural network will identify someone it has not been trained on as someone it has. In this version of TASS we seem to have solved this with an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments this may not be sufficient, but in small environments such as a home or office it should be.

  • Lighting: lighting is a large problem that we have not yet fully solved. We get the best results when there is bright light in front of the face.
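The unknown-class workaround amounts to adding one catch-all label to the training set alongside the known people, populated with the LFW crops. The file names and the single known person below are purely illustrative; only the 500-image unknown class comes from the description above.

```python
# Sketch of assembling a training set with the catch-all "unknown" class.
# Known people map to their own labels; ~500 LFW face crops (represented
# here by placeholder file names) all share the single "unknown" label.

known = {"adam": ["adam_01.jpg", "adam_02.jpg"]}          # illustrative
lfw_sample = [f"lfw_{i:03d}.jpg" for i in range(500)]      # stand-in for LFW

dataset = [(img, person) for person, imgs in known.items() for img in imgs]
dataset += [(img, "unknown") for img in lfw_sample]

labels = {label for _, label in dataset}
```

At identification time, any face that lands in the unknown class is treated as an intruder, which is why the approach works in a small home or office but may dilute as the population of possible strangers grows.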

Links

  • GitHub

  • IoT JumpWay

  • Raspberry Pi Computer Vision Example

Adam

Adam M. added photos to project TASS Computer Vision Hub

Medium 3ba83b51 fade 4ba4 bb88 3b695a7e1f94

TASS Computer Vision Hub

THIS PROJECT IS NOW ON GITHUB! See links below..

The TASS Hub is a local server which homes an IoT connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects if there is a face, or faces, present in the frames, and if so passes the frames through the trained model to determine whether the face is a known person or an intruder. In the event of a known person or intruder the server communicates with the IoT JumpWay which executes the relevant commands that set by rules, for instance, controlling other devices on the network or raising alarms in applications etc.

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which primarily, at this point, uses secure MQTT protocol. Rules can be set up that can be triggered by sensor values/warning messages/device status messages and identified known people or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.

ARTIFICIAL INTELLIGENCE:

During the development phase, 5 A.I. solutions have been used and tested before settling at the current solution.

- The first solution was to use OpenCV and Haarcascades with an Eigenfaces model, users could upload their training data which was sent to the device via MQTT for training. This solution was good as a POC, but identification was not accurate enough. The solution has now been opened up as an example for the IoT JumpWay Developer Program. (See links below).

- The second solution was developed whilst at the IoT Solutions World Congress Hackathon in Barcelona, and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution included OpenCV to detect faces, and Caffe to identify them, although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but had a great time working on the project and were honoured to win the Intel Experts Award.

- The third solution was to use OpenCV to detect faces and pass them through a custom trained Inception V3 model using Tensorflow. I created the ability to carry out transfer learning directly on the device (Raspberry Pi). Users could upload their training data which was sent to the device via MQTT for training. This solution was a massive improvement and accuracy for detecting trained people was almost 100%, unfortunately I identified an issue which I now know to be a common issue at the moment, where the network would identify anyone that was unknown as one of the trained people. I am currently writing a Python wrapper for the Tensorflow/Inception/IoT JumpWay method and the project will soon be released as an IoT JumpWay example.

- For the 4th, I now use a system that I developed on the foundations of OpenFace. I moved to using a local server to house the A.I. (Ubuntu) rather than doing the identification onboard as the identification onboard using an RPI was quite poor. This move means that training is only required on the server rather than all devices. As with the Tensorflow implementation, I came across the issue of unknown people being identified as known people. I have so far resolved this issue through the use of an unknown class, although this solution may not work across the board, I am working on additional solutions with the OpenFace GitHub community which incorporate multiple models that will verify the identification.

- For the 5th and current solution, the server used for the A.I has been re-homed onto an Intel Nuc. The structure of the network has also changed, now the program that handles the facial recognition and identification can connect to multiple IP cams, previously the camera devices would send the frames through MQTT to the broker, with this move, the identification process is more efficient, and also the camera devices now only need to stream, they do not need to connect to the communication broker and 3rd party devices are now supported. In addition to the ability to manage multiple IP cams, the hub can now process frames from a Realsense camera and classify the image.

CURRENT ISSUES:

- The Open Set Recognition Issue: The Open Set Recognition Issue is where a neural network will identify someone that it has not been trained on, as someone that it has. In this version of TASS we have seemed to have solved this issue with the use of an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments, this may not solve this issue, but in small environments such as a home or office it should.

- Lighting: Lighting is unfortunately quite a large problem that we have not been able to solve as of yet. We find we have best results when there is bright light in front of the face.

Adam

Adam M. added photos to project TASS Computer Vision Hub

Medium e9fb0246 bc68 44a4 8fe5 d64257ccf5cf

TASS Computer Vision Hub

THIS PROJECT IS NOW ON GITHUB! See links below..

The TASS Hub is a local server which homes an IoT connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects if there is a face, or faces, present in the frames, and if so passes the frames through the trained model to determine whether the face is a known person or an intruder. In the event of a known person or intruder the server communicates with the IoT JumpWay which executes the relevant commands that set by rules, for instance, controlling other devices on the network or raising alarms in applications etc.

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which primarily, at this point, uses secure MQTT protocol. Rules can be set up that can be triggered by sensor values/warning messages/device status messages and identified known people or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.

ARTIFICIAL INTELLIGENCE:

During the development phase, 5 A.I. solutions have been used and tested before settling at the current solution.

- The first solution was to use OpenCV and Haarcascades with an Eigenfaces model, users could upload their training data which was sent to the device via MQTT for training. This solution was good as a POC, but identification was not accurate enough. The solution has now been opened up as an example for the IoT JumpWay Developer Program. (See links below).

- The second solution was developed whilst at the IoT Solutions World Congress Hackathon in Barcelona, and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution included OpenCV to detect faces, and Caffe to identify them, although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but had a great time working on the project and were honoured to win the Intel Experts Award.

- The third solution was to use OpenCV to detect faces and pass them through a custom trained Inception V3 model using Tensorflow. I created the ability to carry out transfer learning directly on the device (Raspberry Pi). Users could upload their training data which was sent to the device via MQTT for training. This solution was a massive improvement and accuracy for detecting trained people was almost 100%, unfortunately I identified an issue which I now know to be a common issue at the moment, where the network would identify anyone that was unknown as one of the trained people. I am currently writing a Python wrapper for the Tensorflow/Inception/IoT JumpWay method and the project will soon be released as an IoT JumpWay example.

- For the 4th, I now use a system that I developed on the foundations of OpenFace. I moved to using a local server to house the A.I. (Ubuntu) rather than doing the identification onboard as the identification onboard using an RPI was quite poor. This move means that training is only required on the server rather than all devices. As with the Tensorflow implementation, I came across the issue of unknown people being identified as known people. I have so far resolved this issue through the use of an unknown class, although this solution may not work across the board, I am working on additional solutions with the OpenFace GitHub community which incorporate multiple models that will verify the identification.

- For the 5th and current solution, the server used for the A.I has been re-homed onto an Intel Nuc. The structure of the network has also changed, now the program that handles the facial recognition and identification can connect to multiple IP cams, previously the camera devices would send the frames through MQTT to the broker, with this move, the identification process is more efficient, and also the camera devices now only need to stream, they do not need to connect to the communication broker and 3rd party devices are now supported. In addition to the ability to manage multiple IP cams, the hub can now process frames from a Realsense camera and classify the image.

CURRENT ISSUES:

- The Open Set Recognition Issue: The Open Set Recognition Issue is where a neural network will identify someone that it has not been trained on, as someone that it has. In this version of TASS we have seemed to have solved this issue with the use of an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments, this may not solve this issue, but in small environments such as a home or office it should.

- Lighting: Lighting is unfortunately quite a large problem that we have not been able to solve as of yet. We find we have best results when there is bright light in front of the face.

Adam

Adam M. added photos to project TASS Computer Vision Hub

Medium 2879f23f 9a70 4a24 b8e3 3546574614bb

TASS Computer Vision Hub

THIS PROJECT IS NOW ON GITHUB! See links below..

The TASS Hub is a local server which homes an IoT connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects if there is a face, or faces, present in the frames, and if so passes the frames through the trained model to determine whether the face is a known person or an intruder. In the event of a known person or intruder the server communicates with the IoT JumpWay which executes the relevant commands that set by rules, for instance, controlling other devices on the network or raising alarms in applications etc.

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which primarily, at this point, uses secure MQTT protocol. Rules can be set up that can be triggered by sensor values/warning messages/device status messages and identified known people or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.

ARTIFICIAL INTELLIGENCE:

During the development phase, 5 A.I. solutions have been used and tested before settling at the current solution.

- The first solution was to use OpenCV and Haarcascades with an Eigenfaces model, users could upload their training data which was sent to the device via MQTT for training. This solution was good as a POC, but identification was not accurate enough. The solution has now been opened up as an example for the IoT JumpWay Developer Program. (See links below).

- The second solution was developed whilst at the IoT Solutions World Congress Hackathon in Barcelona, and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution included OpenCV to detect faces, and Caffe to identify them, although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but had a great time working on the project and were honoured to win the Intel Experts Award.

- The third solution was to use OpenCV to detect faces and pass them through a custom trained Inception V3 model using Tensorflow. I created the ability to carry out transfer learning directly on the device (Raspberry Pi). Users could upload their training data which was sent to the device via MQTT for training. This solution was a massive improvement and accuracy for detecting trained people was almost 100%, unfortunately I identified an issue which I now know to be a common issue at the moment, where the network would identify anyone that was unknown as one of the trained people. I am currently writing a Python wrapper for the Tensorflow/Inception/IoT JumpWay method and the project will soon be released as an IoT JumpWay example.

- For the 4th, I now use a system that I developed on the foundations of OpenFace. I moved to using a local server to house the A.I. (Ubuntu) rather than doing the identification onboard as the identification onboard using an RPI was quite poor. This move means that training is only required on the server rather than all devices. As with the Tensorflow implementation, I came across the issue of unknown people being identified as known people. I have so far resolved this issue through the use of an unknown class, although this solution may not work across the board, I am working on additional solutions with the OpenFace GitHub community which incorporate multiple models that will verify the identification.

- For the 5th and current solution, the server used for the A.I has been re-homed onto an Intel Nuc. The structure of the network has also changed, now the program that handles the facial recognition and identification can connect to multiple IP cams, previously the camera devices would send the frames through MQTT to the broker, with this move, the identification process is more efficient, and also the camera devices now only need to stream, they do not need to connect to the communication broker and 3rd party devices are now supported. In addition to the ability to manage multiple IP cams, the hub can now process frames from a Realsense camera and classify the image.

CURRENT ISSUES:

- The Open Set Recognition Issue: The Open Set Recognition Issue is where a neural network will identify someone that it has not been trained on, as someone that it has. In this version of TASS we have seemed to have solved this issue with the use of an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments, this may not solve this issue, but in small environments such as a home or office it should.

- Lighting: Lighting is unfortunately quite a large problem that we have not been able to solve as of yet. We find we have best results when there is bright light in front of the face.

Adam

Adam M. added photos to project TASS Computer Vision Hub

Medium 7a12dfc0 b6fc 4fb3 9f3c 1b49afa38b35

TASS Computer Vision Hub

THIS PROJECT IS NOW ON GITHUB! See links below..

The TASS Hub is a local server which homes an IoT connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects if there is a face, or faces, present in the frames, and if so passes the frames through the trained model to determine whether the face is a known person or an intruder. In the event of a known person or intruder the server communicates with the IoT JumpWay which executes the relevant commands that set by rules, for instance, controlling other devices on the network or raising alarms in applications etc.

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which primarily, at this point, uses secure MQTT protocol. Rules can be set up that can be triggered by sensor values/warning messages/device status messages and identified known people or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.

ARTIFICIAL INTELLIGENCE:

During the development phase, 5 A.I. solutions have been used and tested before settling at the current solution.

- The first solution was to use OpenCV and Haarcascades with an Eigenfaces model, users could upload their training data which was sent to the device via MQTT for training. This solution was good as a POC, but identification was not accurate enough. The solution has now been opened up as an example for the IoT JumpWay Developer Program. (See links below).

- The second solution was developed whilst at the IoT Solutions World Congress Hackathon in Barcelona, and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution included OpenCV to detect faces, and Caffe to identify them, although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but had a great time working on the project and were honoured to win the Intel Experts Award.

- The third solution was to use OpenCV to detect faces and pass them through a custom trained Inception V3 model using Tensorflow. I created the ability to carry out transfer learning directly on the device (Raspberry Pi). Users could upload their training data which was sent to the device via MQTT for training. This solution was a massive improvement and accuracy for detecting trained people was almost 100%, unfortunately I identified an issue which I now know to be a common issue at the moment, where the network would identify anyone that was unknown as one of the trained people. I am currently writing a Python wrapper for the Tensorflow/Inception/IoT JumpWay method and the project will soon be released as an IoT JumpWay example.

- For the 4th, I now use a system that I developed on the foundations of OpenFace. I moved to using a local server to house the A.I. (Ubuntu) rather than doing the identification onboard as the identification onboard using an RPI was quite poor. This move means that training is only required on the server rather than all devices. As with the Tensorflow implementation, I came across the issue of unknown people being identified as known people. I have so far resolved this issue through the use of an unknown class, although this solution may not work across the board, I am working on additional solutions with the OpenFace GitHub community which incorporate multiple models that will verify the identification.

- For the 5th and current solution, the server used for the A.I has been re-homed onto an Intel Nuc. The structure of the network has also changed, now the program that handles the facial recognition and identification can connect to multiple IP cams, previously the camera devices would send the frames through MQTT to the broker, with this move, the identification process is more efficient, and also the camera devices now only need to stream, they do not need to connect to the communication broker and 3rd party devices are now supported. In addition to the ability to manage multiple IP cams, the hub can now process frames from a Realsense camera and classify the image.

CURRENT ISSUES:

- The Open Set Recognition Issue: The Open Set Recognition Issue is where a neural network will identify someone that it has not been trained on, as someone that it has. In this version of TASS we have seemed to have solved this issue with the use of an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments, this may not solve this issue, but in small environments such as a home or office it should.

- Lighting: Lighting is unfortunately quite a large problem that we have not been able to solve as of yet. We find we have best results when there is bright light in front of the face.

Adam

Adam M. added photos to project TASS Computer Vision Hub

Medium 1fc87808 fe99 4540 b27a 64325f5ba017

TASS Computer Vision Hub

THIS PROJECT IS NOW ON GITHUB! See links below..

The TASS Hub is a local server which homes an IoT connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects if there is a face, or faces, present in the frames, and if so passes the frames through the trained model to determine whether the face is a known person or an intruder. In the event of a known person or intruder the server communicates with the IoT JumpWay which executes the relevant commands that set by rules, for instance, controlling other devices on the network or raising alarms in applications etc.

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which primarily, at this point, uses secure MQTT protocol. Rules can be set up that can be triggered by sensor values/warning messages/device status messages and identified known people or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.

ARTIFICIAL INTELLIGENCE:

During the development phase, 5 A.I. solutions have been used and tested before settling at the current solution.

- The first solution was to use OpenCV and Haarcascades with an Eigenfaces model, users could upload their training data which was sent to the device via MQTT for training. This solution was good as a POC, but identification was not accurate enough. The solution has now been opened up as an example for the IoT JumpWay Developer Program. (See links below).

- The second solution was developed whilst at the IoT Solutions World Congress Hackathon in Barcelona, and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution included OpenCV to detect faces, and Caffe to identify them, although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but had a great time working on the project and were honoured to win the Intel Experts Award.

- The third solution was to use OpenCV to detect faces and pass them through a custom trained Inception V3 model using Tensorflow. I created the ability to carry out transfer learning directly on the device (Raspberry Pi). Users could upload their training data which was sent to the device via MQTT for training. This solution was a massive improvement and accuracy for detecting trained people was almost 100%, unfortunately I identified an issue which I now know to be a common issue at the moment, where the network would identify anyone that was unknown as one of the trained people. I am currently writing a Python wrapper for the Tensorflow/Inception/IoT JumpWay method and the project will soon be released as an IoT JumpWay example.

- For the 4th, I now use a system that I developed on the foundations of OpenFace. I moved to using a local server to house the A.I. (Ubuntu) rather than doing the identification onboard as the identification onboard using an RPI was quite poor. This move means that training is only required on the server rather than all devices. As with the Tensorflow implementation, I came across the issue of unknown people being identified as known people. I have so far resolved this issue through the use of an unknown class, although this solution may not work across the board, I am working on additional solutions with the OpenFace GitHub community which incorporate multiple models that will verify the identification.

- For the 5th and current solution, the server used for the A.I has been re-homed onto an Intel Nuc. The structure of the network has also changed, now the program that handles the facial recognition and identification can connect to multiple IP cams, previously the camera devices would send the frames through MQTT to the broker, with this move, the identification process is more efficient, and also the camera devices now only need to stream, they do not need to connect to the communication broker and 3rd party devices are now supported. In addition to the ability to manage multiple IP cams, the hub can now process frames from a Realsense camera and classify the image.

CURRENT ISSUES:

- The Open Set Recognition Issue: The Open Set Recognition Issue is where a neural network will identify someone that it has not been trained on, as someone that it has. In this version of TASS we have seemed to have solved this issue with the use of an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments, this may not solve this issue, but in small environments such as a home or office it should.

- Lighting: Lighting is unfortunately quite a large problem that we have not been able to solve as of yet. We find we have best results when there is bright light in front of the face.

Adam

Adam M. added photos to project TASS Computer Vision Hub

Medium 5a5d7f6e 2588 48b1 8c63 69e51fadd59e

TASS Computer Vision Hub

THIS PROJECT IS NOW ON GITHUB! See links below..

The TASS Hub is a local server which homes an IoT connected Convolutional Neural Network. The hub can connect to multiple IP cameras and first detects if there is a face, or faces, present in the frames, and if so passes the frames through the trained model to determine whether the face is a known person or an intruder. In the event of a known person or intruder the server communicates with the IoT JumpWay which executes the relevant commands that set by rules, for instance, controlling other devices on the network or raising alarms in applications etc.

IOT CONNECTIVITY:

The IoT connectivity is managed by the TechBubble IoT JumpWay, an IoT PaaS I have developed which primarily, at this point, uses secure MQTT protocol. Rules can be set up that can be triggered by sensor values/warning messages/device status messages and identified known people or intruder alerts. These rules allow connected devices to interact with each other autonomously, providing an automated smart home/business environment.

ARTIFICIAL INTELLIGENCE:

During the development phase, 5 A.I. solutions have been used and tested before settling at the current solution.

- The first solution was to use OpenCV and Haarcascades with an Eigenfaces model, users could upload their training data which was sent to the device via MQTT for training. This solution was good as a POC, but identification was not accurate enough. The solution has now been opened up as an example for the IoT JumpWay Developer Program. (See links below).

- The second solution was developed whilst at the IoT Solutions World Congress Hackathon in Barcelona, and won our team the Intel Experts Award for building a deep learning neural network on the Intel Joule. This solution included OpenCV to detect faces, and Caffe to identify them, although we managed to build the network on the Joule, we were unfortunately unable to complete the full functionality, but had a great time working on the project and were honoured to win the Intel Experts Award.

- The third solution used OpenCV to detect faces and passed them through a custom-trained Inception V3 model using TensorFlow. I added the ability to carry out transfer learning directly on the device (a Raspberry Pi); users could upload their training data, which was sent to the device via MQTT for training. This was a massive improvement, and accuracy for detecting trained people was almost 100%. Unfortunately I identified an issue, which I now know to be common, where the network would identify anyone unknown as one of the trained people. I am currently writing a Python wrapper for the TensorFlow/Inception/IoT JumpWay method, and the project will soon be released as an IoT JumpWay example.

- For the fourth solution, I built a system on the foundations of OpenFace. I moved the A.I. to a local Ubuntu server rather than doing the identification on board, as on-board identification on a Raspberry Pi was quite poor. This means training is only required on the server rather than on every device. As with the TensorFlow implementation, I came across the issue of unknown people being identified as known people. I have so far resolved this through the use of an unknown class; although this may not work across the board, I am working with the OpenFace GitHub community on additional solutions that incorporate multiple models to verify each identification.

- For the fifth and current solution, the A.I. server has been re-homed on an Intel NUC. The structure of the network has also changed: the program that handles facial recognition and identification now connects to multiple IP cameras directly, whereas previously the camera devices sent their frames to the broker over MQTT. With this move the identification process is more efficient, the camera devices only need to stream (they no longer connect to the communication broker), and third-party devices are now supported. In addition to managing multiple IP cameras, the hub can now process and classify frames from a RealSense camera.
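The Eigenfaces idea behind the first solution can be illustrated from scratch: project face vectors onto the principal components of the training set and label a new face by its nearest neighbour in that space. The first solution actually used OpenCV's recogniser; this NumPy toy (with 4-pixel "faces" and made-up names) only shows the technique.

```python
import numpy as np

def train(faces, labels, k=2):
    X = np.asarray(faces, dtype=float)
    mean = X.mean(axis=0)
    # Principal components of the centred training faces ("eigenfaces").
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:k]
    return mean, basis, basis @ (X - mean).T, labels

def predict(model, face):
    mean, basis, projections, labels = model
    p = basis @ (np.asarray(face, dtype=float) - mean)
    # Nearest neighbour in eigenface space.
    return labels[int(np.argmin(np.linalg.norm(projections.T - p, axis=1)))]

faces = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 1, 1]]
model = train(faces, ["adam", "eve", "bob"])
print(predict(model, [0.9, 1.1, 1.0, 1.0]))
```

One reason this struggled in practice is visible in the maths: the projection is linear, so lighting changes shift a face in eigenspace as much as identity does.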
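The transfer-learning step of the third solution amounts to keeping Inception V3's convolutional layers as a frozen feature extractor and training only a small classifier head on the resulting bottleneck features. This toy stand-in uses a fixed matrix as the "backbone" and a nearest-centroid head; everything here is illustrative, not the real model.

```python
import numpy as np

# Hypothetical frozen backbone: stands in for Inception V3's convolutional
# layers, which transfer learning leaves untouched.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])

def extract_features(image):
    return np.asarray(image, dtype=float) @ W

def train_head(images, labels):
    # Only the small "head" is trained: one feature centroid per person.
    feats = {}
    for img, label in zip(images, labels):
        feats.setdefault(label, []).append(extract_features(img))
    return {label: np.mean(v, axis=0) for label, v in feats.items()}

def identify(head, image):
    f = extract_features(image)
    return min(head, key=lambda label: np.linalg.norm(head[label] - f))

head = train_head([[1, 0, 0, 0], [0, 0, 0, 1]], ["adam", "eve"])
print(identify(head, [0.9, 0.1, 0, 0]))
```

The open-set failure described above also falls out of this structure: `identify` always returns the nearest known person, however far away every centroid is.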
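The hub-side loop over several camera streams in the current solution can be sketched as a round-robin poll. In the real system each stream is an IP camera feed read with OpenCV; here the cameras are plain frame iterators so the scheduling logic stands alone.

```python
def poll_cameras(streams, handle):
    """One round-robin pass: pull at most one frame per camera."""
    for name, stream in streams.items():
        frame = next(stream, None)
        if frame is not None:
            handle(name, frame)

# Hypothetical camera names and frames; note the cameras only stream --
# none of them needs to speak to the MQTT broker.
seen = []
streams = {"door": iter(["frame-a", "frame-b"]), "hall": iter(["frame-c"])}
poll_cameras(streams, lambda name, frame: seen.append((name, frame)))
poll_cameras(streams, lambda name, frame: seen.append((name, frame)))
print(seen)
```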

CURRENT ISSUES:

- The Open Set Recognition Issue: this is where a neural network identifies someone it has not been trained on as someone it has. In this version of TASS we seem to have solved the issue by using an unknown class consisting of 500 images of random people from the LFW dataset. In larger environments this may not be sufficient, but in small environments such as a home or office it should be.

- Lighting: lighting is unfortunately quite a large problem that we have not yet been able to solve. We find we get the best results when there is bright light in front of the face.
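The unknown-class mitigation can be sketched as a decision rule over per-class scores: any frame whose best score lands on the unknown class is treated as an intruder. The confidence floor shown here is an illustrative extra safeguard beyond the unknown class itself, and the scores are made up.

```python
def decide(scores, threshold=0.6):
    """Map per-class scores to a known-person label or 'intruder'."""
    best = max(scores, key=scores.get)
    if best == "unknown" or scores[best] < threshold:
        return "intruder"
    return best

print(decide({"adam": 0.9, "eve": 0.05, "unknown": 0.05}))  # a confident match
print(decide({"adam": 0.3, "eve": 0.2, "unknown": 0.5}))    # unknown class wins
```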

Adam
