MATLAB Answers


How to use vision.PointTracker with ImageLabeler?

Asked by craq on 3 Jul 2018. Latest activity: commented on by craq on 9 Jul 2018.
There is an excellent tutorial on how to use the point tracker with the Ground Truth Labeler app. https://au.mathworks.com/videos/ground-truth-labeler-app-1529300803691.html
Unfortunately, I don't have access to the Automated Driving Toolbox, but I do have access to the Image Processing Toolbox, which includes both the ImageLabeler app and a point tracker algorithm. So I think it should be possible to implement the same functionality. I have tried comparing the vision.PointTracker class to the template that comes up when I click "create new algorithm" in the ImageLabeler app, but I am having trouble understanding how to make them work together. If there is a tutorial I have overlooked, please point me in the right direction; if not, a brief explanation would be much appreciated.

  2 Comments

The image labeler is used to label an image with an ROI. I assume you want to automate that labelling?
If not, it's pointless for you to create a new algorithm in the image labeler. The algorithm you use in this option detects objects in the images you want to label; you can then use the results to train a detector, for example.
Yes I want to automate that labelling, and use those labels as ground truth to train a machine learning algorithm.
I have several series of images which are effectively frames from a movie. It would help me a lot if I could use a point tracker to localise an ROI in one image based on its position in the previous image. Is that what the vision.PointTracker class does?


1 Answer

Answer by Florian Morsch on 5 Jul 2018
Edited by Florian Morsch on 5 Jul 2018

The vision.PointTracker does, as the name implies, track points (using the KLT algorithm).
But to track those points you first have to find them, which is usually done with an object detector. Now, what you are aiming for is to find a specific object in each frame and label it. If it's something simple like people or faces, you could try an already-trained cascade object detector (MATLAB ships some pre-trained variants).
If you want to detect a more unusual object, you are better off if you
a.) write a detection algorithm on your own (if you only want to detect white-coloured objects, for example, you can search for only white pixels; if you want to detect a cube, you can search for it with edge detection), or
b.) label it yourself. Depending on how many pictures you have for training, it might be faster to label them yourself instead of writing the algorithm and then checking each picture to see whether it is labelled correctly.
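As a sketch of option a.), assuming the target is clearly brighter than its background, a simple threshold-based detector could look like this (the file name, threshold, and minimum-area values are placeholders to adjust for your data):

```matlab
% Option a.) sketch: detect near-white objects by thresholding.
% Assumes the target is clearly brighter than the background;
% 'myImage.png' and the numeric values are placeholders.
I = imread('myImage.png');
gray = rgb2gray(I);
bw = imbinarize(gray, 0.9);           % keep only near-white pixels
bw = bwareaopen(bw, 50);              % drop small noise blobs
stats = regionprops(bw, 'BoundingBox');
bboxes = vertcat(stats.BoundingBox);  % one [x y w h] row per candidate
```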
The vision.PointTracker itself can't detect anything; it needs points, which you give it, that it can then track.
Now, if you are able to find your first object and get enough points, the point tracker can follow those points over multiple images. So basically yes, you can use a point tracker to follow points over multiple images. But you have to make sure that you give it enough points to follow (I'd recommend 10 or more), and after you have processed all images you should still check that the labelling was done correctly.
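Put together, a minimal tracking loop outside of the labeler app might look like the following sketch; the frame file names and the initial ROI are placeholders, and it assumes the Computer Vision System Toolbox functions detectMinEigenFeatures and vision.PointTracker:

```matlab
% Sketch: seed a KLT tracker from a hand-labelled ROI in the first
% frame, then propagate it through the rest of the sequence.
% Frame names and the initial ROI are placeholders.
frames = {'frame001.png', 'frame002.png', 'frame003.png'};
I = imread(frames{1});
roi = [100 100 50 50];               % [x y width height], labelled by hand

% Find strong corners inside the ROI so the tracker has enough points
points = detectMinEigenFeatures(rgb2gray(I), 'ROI', roi);

tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, points.Location, I);

for k = 2:numel(frames)
    I = imread(frames{k});
    [pts, validity] = step(tracker, I);   % track into the next frame
    valid = pts(validity, :);
    % Bounding box of the tracked points becomes the new ROI/label
    roi = [min(valid, [], 1), max(valid, [], 1) - min(valid, [], 1)];
end
```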

  3 Comments

Thank you for your suggestions, I am aware of those options, and was hoping I'd be able to do better. In my specific case my object is not simple so there are no pretrained object detectors available. The goal of my work will be to train a detector, but first I need some labelled ground truth data. Detection algorithms using conventional image processing have about a 90% detection rate. If I feed my machine learning only with data labelled by conventional algorithms, it will only ever be as good as those algorithms, right?
I do intend to give PointTracker points to track. I will label the points in the first image of a sequence (or using other terminology, the first frame of a video). Then I would like to automate the labelling for the rest of the sequence.
Is there a tutorial somewhere on how to integrate the PointTracker with the ImageLabeler app? It isn't clear to me how to make them work together.
No, your machine learning algorithm has nothing to do with the algorithm which labelled the images. The machine learner trains itself, so if you have enough positive and negative examples (depending on which detector you want to train) you can get much better results.
Since I never wrote such an algorithm (I labelled the images myself) I can't tell for sure, but I guess the best approach would be to load all images into the labeler app, then choose the first one and set the points for the tracker. After that you give it every following frame and step through them with the active tracker.
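The workflow described above could in principle be wrapped in the "create new algorithm" template mentioned in the question. A rough, untested skeleton of such a class is sketched below; the class/label names are hypothetical, and the exact method signatures and label struct fields are assumptions to verify against the template your release generates:

```matlab
% Rough skeleton of an ImageLabeler automation class that wraps
% vision.PointTracker. Method names follow the automation template;
% treat signatures and label struct fields as assumptions.
classdef PointTrackerAutomation < vision.labeler.AutomationAlgorithm
    properties(Constant)
        Name = 'Point Tracker';
        Description = 'Propagate a hand-drawn ROI using KLT point tracking.';
    end
    properties
        Tracker      % vision.PointTracker instance, created in initialize
        InitialROI   % [x y w h] drawn by hand on the first image
                     % (assumption: filled in from the existing labels)
    end
    methods
        function isValid = checkLabelDefinition(~, labelDef)
            % Only rectangle ROI labels make sense for this tracker
            isValid = (labelDef.Type == labelType.Rectangle);
        end
        function initialize(algObj, I)
            % Seed the tracker with corner points inside the initial ROI
            pts = detectMinEigenFeatures(rgb2gray(I), 'ROI', algObj.InitialROI);
            algObj.Tracker = vision.PointTracker('MaxBidirectionalError', 2);
            initialize(algObj.Tracker, pts.Location, I);
        end
        function autoLabels = run(algObj, I)
            % Track into the current image; emit the bounding box as a label
            [pts, validity] = step(algObj.Tracker, I);
            valid = pts(validity, :);
            bbox = [min(valid, [], 1), max(valid, [], 1) - min(valid, [], 1)];
            autoLabels = struct('Name', 'object', ...
                                'Type', labelType.Rectangle, ...
                                'Position', bbox);
        end
    end
end
```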
This might also help: a coded example of the point tracker, https://de.mathworks.com/help/vision/examples/face-detection-and-tracking-using-the-klt-algorithm.html
And maybe this will interest you as well: https://de.mathworks.com/help/vision/ug/find-corresponding-interest-points-between-pair-of-images.html. Here you can find corresponding points between images, which is also a possible way to achieve your goal.
Thanks for those ideas. I think I will be able to adapt the point tracker or the feature matching to help label my images. It's a shame that it doesn't work with the ImageLabeler app, but at least you've given me an idea of how to get the result I'm looking for.
By the way, conventional algorithms to detect these objects have a low success rate under certain conditions (lighting, background, etc.). I am aware that deep learning networks generalise quite well, but my understanding is that they don't extrapolate well outside their training data. I assume that if I don't feed the network enough examples of the difficult conditions, it will only know about the "easy" examples.
