Pose Estimation Based on YOLOv7: Edge Computing and Cloud Processing Analysis

Nitin Mane

Aurangabad, Maharashtra


A case study of the YOLOv7 model on edge devices, analysing multi-process execution on the Intel DevCloud server and presenting the results.

Project status: Under Development

Internet of Things, Artificial Intelligence, Cloud, Performance Tuning, Reviews

Intel Technologies
DevCloud, oneAPI, Intel FPGA, Intel CPU, OpenVINO, 10th Gen Intel® Core™ Processors, AI DevCloud / Xeon

Overview / Usage

Pose estimation refers to the process of determining the pose (i.e., position and orientation) of an object or a person in an image or video. YOLOv7 is a state-of-the-art object detection model that can also be used for pose estimation. In this study, we investigate the use of YOLOv7 for pose estimation on both edge computing and cloud processing platforms, comparing the two in terms of accuracy and computational efficiency. Our results show that YOLOv7 performs well on both platforms, but edge computing may be more suitable for real-time applications due to its lower latency. Ultimately, the choice between edge computing and cloud processing depends on the application's specific requirements.

Methodology / Approach

The methodology for using YOLOv7 for pose estimation with Intel OpenVINO and oneAPI can be outlined as follows:

  1. Install Intel OpenVINO and oneAPI on the development machine.
  2. Download the YOLOv7 model and convert it to the OpenVINO IR format using the Model Optimizer.
  3. Pre-process the input image or video to be compatible with the YOLOv7 model. This may include resizing, normalizing, and converting the image to the expected layout.
  4. Use the OpenVINO Inference Engine to run the YOLOv7 model on the input image or video.
  5. Post-process the output of the YOLOv7 model to extract the pose information. This may include decoding the output tensors, applying non-maximum suppression, and calculating the pose from the bounding boxes and class probabilities.
  6. Visualize the pose estimation results on the input image or video.
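As a minimal illustration of steps 2–4, the sketch below pre-processes one frame and runs a converted model with the OpenVINO Python runtime. The model filename `yolov7-pose.xml` and the 640×640 input size are assumptions for illustration; the real names depend on how the model was exported and converted.

```python
import numpy as np

def preprocess(frame):
    """Convert an HWC uint8 frame to an NCHW float32 blob in [0, 1]."""
    blob = frame.astype(np.float32) / 255.0   # normalize pixel values
    blob = blob.transpose(2, 0, 1)            # HWC -> CHW
    return blob[np.newaxis, ...]              # add batch dimension -> NCHW

def run_pose_model(model_xml, frame, device="CPU"):
    """Run a Model Optimizer-converted model (e.g. 'yolov7-pose.xml')."""
    from openvino.runtime import Core  # lazy import: requires OpenVINO installed

    core = Core()
    model = core.read_model(model_xml)            # load the IR (.xml + .bin)
    compiled = core.compile_model(model, device)  # "CPU", "GPU", ...
    result = compiled([preprocess(frame)])        # single synchronous inference
    return result[compiled.output(0)]             # raw detection tensor
```

The returned tensor still has to be decoded and filtered (step 5) before poses can be drawn on the frame.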

Note that this is a general outline of the methodology and the specific steps may vary depending on the application and the desired level of accuracy and efficiency.
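The non-maximum suppression mentioned in step 5 can be sketched in plain NumPy. The `(x1, y1, x2, y2)` box layout and the 0.45 IoU threshold are illustrative assumptions, not values fixed by the YOLOv7 model.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.45):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return keep
```

Detections that survive this filter are then decoded into keypoints and drawn for step 6.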
