A quick overview of the progress of the AprilTags detector for the bots in two videos – hello localization in 50 lines or less!
Can’t afford a Vicon camera system? Me neither
With some valuable feedback from a few friends who work a lot more with robotics than I do, I found out there’s a pretty neat way to do localization very cheaply: simple tags/fiducials can be used to estimate location… and there are already great libraries available. AprilTags is a fantastic example.
The whole ARNerve project is based on Python, so I spent the past week or so learning how to wrap C libraries – there is a herd of different options (yes, herd is the official collective noun for C libraries :P) – Cython, SWIG, etc.
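For a flavour of what the SWIG route looks like: the core of a wrapper is a small interface file that tells SWIG which headers to expose to Python. This is just a minimal sketch, not the actual interface file from the project – the module name and header choices here are my assumptions (the AprilTag C library does ship `apriltag.h` and per-family headers like `tag36h11.h`).

```
/* apriltags.i – hypothetical minimal SWIG interface sketch */
%module apriltags
%{
#include "apriltag.h"
#include "tag36h11.h"
%}
%include "apriltag.h"
%include "tag36h11.h"
```

Running `swig -python apriltags.i` then generates the glue code you compile and link against the native library, after which `import apriltags` works from Python.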
By wrapping the AprilTags library with SWIG and pulling the native C libraries into Python, it becomes really easy to decode Kinect2 video data and feed it through an AprilTags detector. Knowing the important camera characteristics, it’s possible to then work out where the bot is.
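The actual position calculation isn’t in the repo yet (see below), but the core idea is plain pinhole-camera geometry: once the detector reports where the tag sits in the image, the camera intrinsics and the known printed tag size give you depth and position. Here’s a minimal sketch of that idea – the function names and numbers are purely illustrative, not code from the project:

```python
# Rough position of a detected tag via the pinhole camera model.
# fx, fy = focal lengths in pixels; cx, cy = principal point in pixels;
# tag_size_m = edge length of the printed tag in metres.

def tag_distance(fx, tag_size_m, tag_width_px):
    """Depth from similar triangles: Z = fx * real width / pixel width."""
    return fx * tag_size_m / tag_width_px

def tag_position(u, v, Z, fx, fy, cx, cy):
    """Back-project the tag centre (u, v) at depth Z into camera coordinates."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# Example: a 16 cm tag appearing 200 px wide through a lens with fx = 1000 px
Z = tag_distance(1000.0, 0.16, 200.0)                      # 0.8 m away
pos = tag_position(700.0, 500.0, Z, 1000.0, 1000.0, 500.0, 500.0)
```

This only recovers translation from a single tag; a full pose (including orientation) comes from solving for the homography over all four tag corners, which the AprilTags library itself can do.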
The AprilTags detector is actually very impressive, with a few lessons learned:
- When you cut out the tags after printing, don’t trim away the white border around the tag (that cost a day of confusion)
- A left/right mirrored image will not be detected (another day of confusion, with a serious *D’oh* when that was noticed)
- Don’t cover the corners of the tags with your thumbs (as in the first video below)
- Motion blur pretty much kills detection
We’ll include some code discussion when writing up the ‘Building ARNerve’ thread, but if you’re interested you can peek at the ‘arnerve_bot’ folder in the linbox1 branch of the GitHub project – it has everything for the detection and the decoding except the actual calculation of the position.
We’re almost at the point of doing the first drop, and over Christmas we’ll start writing up the architecture.