Quality assurance in motion detection

How to test and label data for motion detection

Antti Havanko
5 min read · Oct 27, 2020

Background

As you might have noticed from my previous articles, I have an (everlasting) project where I’m building a security camera system for myself. The idea is to use object detection to spot intruders on my property and then notify me on my phone. I’ve written about this project in some of my previous articles.

Object detection is relatively easy nowadays, thanks to all the pre-trained models, but developing the motion detection on top of it turned out to be more challenging than I originally thought. Especially testing it without needing to walk in front of the camera every time you make a change.

This is a personal project with limited resources, so I wanted an easy way to label images from the cameras, giving me more data to test and improve the system.

Next, I’ll describe what I did. It consisted of the following steps:

  1. Perform object detection and motion detection
  2. Collect data for labeling
  3. Label the data
  4. Use labeled data for quality assurance and for improving the system

Let’s dive into each step next!

1. Perform object detection and motion detection

I’m using the MobileNet SSD v2 model for object detection. It has been trained on the COCO dataset and can recognize 90 different objects, which is more than enough for spotting intruders. Or how often have you had a giraffe with bad intentions walking in your backyard? :)
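
For reference, here is a minimal sketch of running such a detection with a TFLite export of the model on a Raspberry Pi. The model file name and score threshold are my assumptions for illustration, and the output tensor order can vary between exports:

    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    # Load the (assumed) TFLite export of MobileNet SSD v2 trained on COCO.
    interpreter = Interpreter(model_path="mobilenet_ssd_v2_coco.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    def detect(image_path, threshold=0.5):
        # Resize the frame to the model's expected input size.
        _, height, width, _ = input_details[0]["shape"]
        image = Image.open(image_path).convert("RGB").resize((width, height))
        interpreter.set_tensor(input_details[0]["index"],
                               np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
        interpreter.invoke()
        # Typical SSD output order: boxes, classes, scores (check your export).
        boxes = interpreter.get_tensor(output_details[0]["index"])[0]
        classes = interpreter.get_tensor(output_details[1]["index"])[0]
        scores = interpreter.get_tensor(output_details[2]["index"])[0]
        return [(int(c), float(s), b) for b, c, s in zip(boxes, classes, scores)
                if s >= threshold]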

Object detection alone is not enough because I want to be notified only when someone is moving in front of the camera. I’m currently making six detections over six seconds and looking at the positions of the detected objects in consecutive detections to filter out static objects, e.g. my car parked in the yard.

Detected objects before and after static object removal
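
Here is a minimal sketch of how such a static-object filter can work, assuming each detection is a (class_id, score, box) tuple with (ymin, xmin, ymax, xmax) coordinates; the IoU threshold is an assumption to tune per camera:

    def iou(a, b):
        # Intersection-over-union of two (ymin, xmin, ymax, xmax) boxes.
        ymin, xmin = max(a[0], b[0]), max(a[1], b[1])
        ymax, xmax = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ymax - ymin) * max(0.0, xmax - xmin)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def moving_objects(frames, static_iou=0.9):
        # Keep detections from the last frame that do NOT overlap a
        # same-class detection in every earlier frame of the sequence.
        if len(frames) < 2:
            return list(frames[-1])
        *earlier, last = frames
        moving = []
        for cls, score, box in last:
            is_static = all(
                any(c == cls and iou(box, b) >= static_iou for c, _, b in frame)
                for frame in earlier
            )
            if not is_static:
                moving.append((cls, score, box))
        return moving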

P.S. I’m also using public cameras for testing so I don’t have to walk back and forth in front of the camera myself. There are many public cameras listed on https://www.insecam.org.

2. Collect data for labeling

I’m running the application on a Raspberry Pi, so I can’t keep a lot of image data on the device itself. I decided to use Google Cloud Storage for this and configured the Raspberry Pi to upload a random set of images, along with the objects detected in them, to a GCS bucket.

The detected objects are saved as a JSON file that contains the number of detected objects in each image of the sequence, along with the decision from the motion detector.
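
A sketch of that upload step using the google-cloud-storage client; the bucket name, object layout, and field names are my assumptions for illustration:

    import json
    from google.cloud import storage

    def upload_sequence(image_paths, objects_per_image, motion_detected,
                        bucket_name="my-camera-data", sequence_id="seq-001"):
        bucket = storage.Client().bucket(bucket_name)
        # Upload the images of the sequence.
        for i, path in enumerate(image_paths):
            bucket.blob(f"{sequence_id}/{i}.jpg").upload_from_filename(path)
        # Upload the metadata: object counts per image plus the motion decision.
        metadata = {
            "sequence_id": sequence_id,
            "objects_per_image": objects_per_image,  # e.g. [0, 1, 1, 2, 1, 1]
            "motion_detected": motion_detected,
        }
        bucket.blob(f"{sequence_id}/detections.json").upload_from_string(
            json.dumps(metadata), content_type="application/json")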

I decided to go with only the number of objects per image, not their positions, because the exact position is not that important for this project. Plus, I knew I wasn’t going to have time to review and correct the exact positions of the objects in each image anyway.

3. Label the data

The most important thing in labeling is to make it as easy as possible. Some kind of UI for displaying the collected data and providing an easy way to correct invalid detections would be perfect, but due to the resource constraints of the project, I decided to go with Google Sheets. I configured BigQuery to load the data from the GCS bucket and then added BigQuery as a data source for a Google spreadsheet.
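
A sketch of the load step with the google-cloud-bigquery client, assuming the detection files are written as newline-delimited JSON and using dataset and table names I made up for illustration:

    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,  # infer the schema from the JSON files
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    # One wildcard is allowed in the GCS URI after the bucket name.
    load_job = client.load_table_from_uri(
        "gs://my-camera-data/*/detections.json",
        "my-project.camera.detections",
        job_config=job_config,
    )
    load_job.result()  # wait for the load job to finish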

This makes it easy to get a random sample by just opening the spreadsheet, and it lets me quickly review and label the data. The link in each row points to the image sequence stored in Cloud Storage, but you could even show the image directly in the spreadsheet by using the IMAGE function.

Image sequence to be reviewed

The final spreadsheet looks like this:

Data for labeling

4. Use labeled data for quality assurance and for improving the system

The labeled data is exported as a CSV file from the Google spreadsheet and used for quality assurance. I wrote a standard integration test which loads the data from the CSV and uses it to verify the behavior. The test runs the image sequences through the system and measures recall and precision for both the motion detector and the object detector.
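
A sketch of such a test, assuming a CSV with sequence_id and motion_label columns; run_motion_detector() is a hypothetical helper that replays a stored image sequence through the system, and the thresholds are assumptions:

    import csv

    def test_motion_detector_quality():
        tp = fp = fn = tn = 0
        with open("labeled_sequences.csv") as f:
            for row in csv.DictReader(f):
                expected = row["motion_label"] == "motion"
                # Hypothetical helper: replays the stored sequence and
                # returns the motion detector's decision.
                predicted = run_motion_detector(row["sequence_id"])
                if predicted and expected:
                    tp += 1
                elif predicted and not expected:
                    fp += 1
                elif not predicted and expected:
                    fn += 1
                else:
                    tn += 1
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # Assumed quality bar; pick thresholds that fit your use case.
        assert precision >= 0.9 and recall >= 0.9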

I’m using GitLab for CI/CD, so the test also exports the results as an OpenMetrics file, which GitLab supports.
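
The export itself can be as simple as writing the metrics in the Prometheus/OpenMetrics text format (the metric names and file path here are my choices for illustration):

    def export_metrics(precision, recall, path="metrics.txt"):
        # GitLab's metrics report accepts this plain text exposition format.
        with open(path, "w") as f:
            f.write(f"motion_detector_precision {precision}\n")
            f.write(f"motion_detector_recall {recall}\n")
            f.write("# EOF\n")  # OpenMetrics terminator

The file is then declared in .gitlab-ci.yml under artifacts:reports:metrics so GitLab picks it up.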

GitLab will then compare the metrics in your feature branch to the metrics in the master branch and show them directly in the merge request!

A word of warning for GitLab users with a free account: the metrics feature is only available to GitLab Premium users, but you can still see the metrics report even with a free account. It just always states that “the metrics reports didn’t change”, even when there were changes. This was super confusing because I would have assumed the report would be completely hidden if the feature is not included in the free plan.
