Use cases for YOLOv3 on IoT Edge

Introduction 

Data-intensive applications are growing at an increasing rate, and with them grows the need to solve scalability and high-performance issues. Cloud technologies have made it possible, and more feasible, to harness remote resources to build and deploy these applications.

In recent years, a new set of applications and services based on the Internet of Things (IoT) has emerged that requires processing large amounts of data in very little time. Among them, detecting objects of interest within a business context has gained prime importance. We see these models applied in areas such as:

  • Self-driving cars, which rely heavily on video object recognition systems
  • The healthcare sector, where object detection supports diagnostic tools like MRI and CT scans
  • The retail industry, where object detection can assist in inventory management, monitoring stock levels, and identifying stockouts.

Why use YOLOv3 as an object detection model 

YOLOv3 (You Only Look Once, version 3) is a real-time object detection algorithm used to identify objects in videos, live feeds, and images. YOLO "learns" to detect objects using a deep neural network, specifically a convolutional neural network (CNN).

Some benefits of YOLOv3 include:

    • It's fast, which makes it suitable for real-time processing.
    • Predictions (object locations and classes) are made by a single network, which can be trained end to end to improve accuracy.
    • YOLOv3 generalizes well. It outperforms other methods when generalizing from natural images to other domains such as artwork.
    • YOLO detects one object per grid cell, which enforces spatial diversity in its predictions.

YOLOv3 uses weights pre-trained on COCO (Common Objects in Context), which means that, out of the box, you can use YOLO to detect any of the 80 classes that come with the COCO dataset. COCO is a good option for beginners because it minimizes the need to create or redesign code.
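As a quick illustration, the snippet below is a minimal sketch of loading the pre-trained YOLOv3 weights and the COCO class names with OpenCV's DNN module. The file names (yolov3.cfg, yolov3.weights, coco.names) are the conventional Darknet release names and are assumptions here, not files shipped with this article.

# Minimal sketch: load pre-trained YOLOv3 (COCO weights) with OpenCV's DNN module.
# The file names are the conventional Darknet release names (assumed, not provided here).
import cv2

# Read the 80 COCO class labels, one per line.
with open("coco.names") as f:
    classes = [line.strip() for line in f]

# Load the network definition and the pre-trained weights.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

# YOLOv3 expects a square input; 416 x 416 matches the container's REST endpoint used later.
image = cv2.imread("camera.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())  # raw predictions from the YOLO output layers

print(len(classes), "classes loaded;", len(outputs), "output scales")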

The following 80 classes are available using COCO’s pretrained weights: 

 

YOLOv3 object detection algorithm

 

YOLO object detection using OpenCV and Python

 

YOLO object detection

How does YOLO work? 

  • The YOLO algorithm works by dividing the image into an S x S grid of cells, and each cell is responsible for detecting the objects whose centers fall inside it.
  • Each cell predicts B bounding boxes, with coordinates expressed relative to the cell, together with the object's class and the probability that an object is present in the cell.
  • This approach significantly lowers computation because detection and classification are both handled in a single pass over the image.
  • It does, however, produce many duplicate predictions, since several cells may predict the same object with slightly different bounding boxes.
  • YOLO uses Non-Maximal Suppression to deal with this issue, keeping only the highest-confidence box for each object (see the sketch below).
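The sketch below is a minimal, illustrative NumPy implementation of Non-Maximal Suppression (NMS), assuming boxes are given as [x1, y1, x2, y2] with one confidence score per box. It is not the exact routine used inside YOLOv3; it is only meant to show the idea.

# Minimal sketch of Non-Maximal Suppression (NMS); boxes are assumed to be [x1, y1, x2, y2].
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box and drop boxes that overlap it above the IoU threshold."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best remaining box with all other remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Keep only the boxes that overlap the chosen box less than the threshold.
        order = order[1:][iou < iou_threshold]
    return keep

# Example: two overlapping detections of the same object and one separate object.
print(nms([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], [0.9, 0.8, 0.7]))  # -> [0, 2]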

YOLOv3 architecture

 

How to build a Docker container with the YOLOv3 model?

  1. Install Docker and curl on your machine.
  2. Git clone https://github.com/Azure/video-analyzer.git
  3. cd video-analyzer/edge-modules/extensions/yolo/yolov3/http-cpu/
  4. You can view the yolov3.dockerfile under that folder.

Finding the yolov3.dockerfile in the folder

 

5. To build the Docker image locally, run the command below with admin privileges in the same directory where the Dockerfile is stored.

 sudo docker build -f yolov3.dockerfile . -t objectdetection:yolov3

 Note: The REST endpoint accepts images of size 416 px x 416 px.

6. To run the container, use the command below.

 docker run --name my_yolo_container -p 8080:80 -d -i objectdetection:yolov3

7. You can get a list of detected objects using the following command. 

 curl -X POST http://127.0.0.1:8080/score -H "Content-Type: image/jpeg" --data-binary @<image_file_in_jpeg>

 Ex: curl -X POST http://127.0.0.1:8080/score -H "Content-Type: image/jpeg" --data-binary @/home/ashish/image/camera.jpg
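If you prefer to score images from code rather than curl, the snippet below is a minimal sketch using the Python requests library against the same endpoint. It resizes the image to 416 x 416 first, since that is the size the REST endpoint expects; the file path is just an example.

# Minimal sketch: resize an image to 416 x 416 and POST it to the container's /score endpoint.
# Assumes the container is running locally as started above; the file path is only an example.
import io

import requests
from PIL import Image

image = Image.open("/home/ashish/image/camera.jpg").convert("RGB")
image = image.resize((416, 416))  # the endpoint expects 416 px x 416 px

buffer = io.BytesIO()
image.save(buffer, format="JPEG")  # re-encode the resized image as JPEG bytes

response = requests.post(
    "http://127.0.0.1:8080/score",
    headers={"Content-Type": "image/jpeg"},
    data=buffer.getvalue(),
)
print(response.status_code)
print(response.text)  # JSON describing the detected objects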

List of detected objects in command

 

8. After testing, you can push the image to the Azure Container Registry (see the sketch below).
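One way to tag and push the image from Python is with the docker SDK (pip install docker), as sketched below. The registry name is a placeholder, and your local Docker client must already be authenticated against your own Azure Container Registry (for example via az acr login or docker login); treat this as an optional sketch, not a required step.

# Minimal sketch: tag the local image and push it to an Azure Container Registry
# using the docker Python SDK. "myregistry.azurecr.io" is a placeholder registry name,
# and the local Docker client must already be logged in to that registry.
import docker

client = docker.from_env()

# Tag the locally built image with the registry-qualified name.
image = client.images.get("objectdetection:yolov3")
image.tag("myregistry.azurecr.io/objectdetection", tag="yolov3")

# Push the tagged image; push() streams the registry's progress output.
for line in client.images.push("myregistry.azurecr.io/objectdetection", tag="yolov3", stream=True, decode=True):
    print(line)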

9. Follow the instructions in Deploy module from the Azure portal to deploy the container image as an IoT Edge module (use the IoT Edge module option).

Use case: Detect objects using the YOLOv3 model at the Edge location 

This scenario tracks objects in a live feed from a (simulated) IP camera, but the same approach supports many other use cases, such as counting people in an area during a pandemic or identifying livestock details in the agriculture industry.

Companies like Amazon have already taken advantage of these technologies to analyze customer behavior in stores, detect anomalies in wheat fields, identify rust on industrial equipment, and more. Such scenarios generate large amounts of data that can easily be used to train models for customized requirements.

You will also see how to apply the YOLOv3 model with the Azure Video Analyzer (AVA) edge module to track objects.

  1. To deploy all the necessary resources for this project, click here. This will deploy the resources mentioned below in your Azure subscription.

Note: Make sure you have an active Azure subscription with owner-level permission on it. 

    • Video Analyzer account - an Azure service used to register the Video Analyzer edge module and play back the recorded video.
    • Storage account - stores the recorded video and the video analytics.
    • Managed identity - the user-assigned managed identity used to control access to the storage account above.
    • Virtual machine - a virtual machine that will serve as your simulated edge device.
    • IoT Hub - acts as a central message hub for bi-directional communication between your IoT application and the edge devices it manages.

2. Generate the AVA provisioning token by opening the Azure Video Analyzer account; on the left-hand side there is an Edge Modules option. Click on it, and you will see an option to generate a token, as shown below.

Generating AVA provisioning token

 

3. Download the deployment manifest from here, change $AVA_PROVISIONING_TOKEN to the value generated in the step above, and then save it as a deployment.json file.
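If you would rather script that substitution, the sketch below is one way to do it in Python. The input file name deployment.template.json is an assumption; use whatever name the downloaded manifest has, and paste in your own provisioning token from step 2.

# Minimal sketch: substitute the AVA provisioning token placeholder in the downloaded
# manifest and save the result as deployment.json.
# "deployment.template.json" is an assumed name for the downloaded file.
token = "<your-ava-provisioning-token>"

with open("deployment.template.json") as f:
    manifest = f.read()

manifest = manifest.replace("$AVA_PROVISIONING_TOKEN", token)

with open("deployment.json", "w") as f:
    f.write(manifest)

print("deployment.json written")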

4. You can use the command mentioned below to deploy the deployment manifest file, but before that, make sure you have the Azure CLI installed, and then follow the steps below:

    • First, log in to the Azure CLI by opening a command prompt and entering "az login".
    • It will redirect you to the browser, where you can log in to your Azure account.
    • Use the command below to deploy the deployment manifest file.

az iot edge set-modules --device-id $DEVICE_ID --hub-name $HUB_NAME --content deployment.json --only-show-errors -o table

where you must replace $HUB_NAME with the IoT hub name and $DEVICE_ID with the IoT edge device name.

5. To create and deploy the live pipeline, go to the IoT hub in the Azure portal > select the IoT edge device > under Modules, click on avaedge > select Direct method.

    • To set the pipeline topology, in the method name type "pipelineTopologySet", in the payload paste the JSON from this URL as shown below, and then click on Invoke Method:

Creating and deploying live pipeline

 

    • To set the live pipeline, in the method name type "livePipelineSet", in the payload paste the JSON shown below, and then click on Invoke Method:

Note: If you want to use a custom video, log in to the VM, store the video under the /home/localedgeuser/sample/input/ folder, and then under rtspUrl change the video name to the one you uploaded.

{
  "@apiVersion": "1.0",
  "name": "Sample-Pipeline-1",
  "properties": {
    "topologyName": "CVRHttpExtensionObjectTracking",
    "description": "Sample pipeline description",
    "parameters": [
      {
        "name": "rtspUrl",
        "value": "rtsp://rtspsim:554/media/camera-300s.mkv"
      },
      {
        "name": "rtspUserName",
        "value": "testuser"
      },
      {
        "name": "rtspPassword",
        "value": "testpassword"
      }
    ]
  }
}

Setting live pipeline

 

    • To activate the pipeline, in the method name type "livePipelineActivate", in the payload paste the JSON shown below, and then click on Invoke Method:

{
  "@apiVersion": "1.0",
  "name": "Sample-Pipeline-1"
}
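If you prefer to invoke these direct methods from code instead of the portal, the sketch below uses the azure-iot-hub Python package (pip install azure-iot-hub). The connection string and device ID are placeholders for your own values, and the same pattern works for pipelineTopologySet and livePipelineSet by changing the method name and payload; treat it as an optional sketch, not a required step.

# Minimal sketch: invoke the livePipelineActivate direct method on the avaedge module
# with the azure-iot-hub package. The connection string and device ID are placeholders.
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import CloudToDeviceMethod

IOTHUB_CONNECTION_STRING = "<iot-hub-connection-string>"
DEVICE_ID = "<iot-edge-device-name>"
MODULE_ID = "avaedge"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

payload = {"@apiVersion": "1.0", "name": "Sample-Pipeline-1"}
method = CloudToDeviceMethod(method_name="livePipelineActivate", payload=payload)

# Call the module's direct method and print the result it returns.
response = registry_manager.invoke_device_module_method(DEVICE_ID, MODULE_ID, method)
print(response.status, response.payload)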

Activating live pipeline

 

6. Now, you can see the live video with inferences in the AVA player.

Vehicle detection in real-time using AVA player