With the advent of cloud, and the race to create new business models, I’d like to join the fray and coin a new term – XaaS! XaaS stands for (Xtremely everything) as a Service! It’s a cloud business model that incorporates PaaS, SaaS, HaaS, TaaS, FaaS, GaaS, SnaaS, MCAaaS etc.
While writing up the next set of articles, we decided to upload the first set to CodeProject – mostly because we’re fans and wanted to contribute.
The first of the series can be found at Building User Interfaces for Robotics with VTK. It contains the first three posts – props to Daniel Gawryjolek for the editing and contributions, and to William Shipman for the WinPython install steps! Feel free to comment and please drop us a rating. If it sucks, let us know and we’ll improve it! However, 5 stars gets you a free beer at the next hackathon, I’m just saying 🙂
Right, on to the next set of articles!
Cue the work montage:
That is all.
Up to now, the focus has been on setting up the system and building the 3D environment. With this all baked in, the remaining posts will add the components that make it a complete system. These will discuss how to overlay 2D PyQt UIs and how to connect the visualization via LCM to a real-world bot. A number of interesting points have also been raised by readers, so I’m including sections on installing on Windows, integrating real-world camera feeds, and using GLSL to build shaders in VTK.
This ‘interlude’ post looks at adding head tracking with a pair of Wrap 920 VR glasses so that you can fly around the scene. I wasn’t quite happy with the interactivity of the first-person user camera in the previous post – it was clunky and difficult to move around – so there is definitely room for improvement, and a proof of concept is demonstrated here. This builds on the work done with ARX; however, there are some technical challenges in using the VR glasses under Linux.
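One practical wrinkle with head tracking is that raw tracker angles jitter, so they usually need smoothing before driving the camera. Here is a minimal sketch of an exponential moving-average filter for that purpose – the class, the `alpha` value, and the simulated readings are my own illustration, not the Wrap 920’s actual interface (which is accessed via the ARX SDK):

```python
class AngleSmoother:
    """Exponential moving average to damp head-tracker jitter.
    Lower alpha = smoother output but more lag behind real head motion."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha  # 0 < alpha <= 1
        self.value = None

    def update(self, raw):
        # Blend the new raw reading with the running estimate.
        if self.value is None:
            self.value = raw
        else:
            self.value = self.alpha * raw + (1 - self.alpha) * self.value
        return self.value


# Simulated tracker samples in degrees (illustrative values only).
yaw = AngleSmoother(alpha=0.5)
for reading in (10.0, 10.0, 30.0):
    smoothed = yaw.update(reading)
print(round(smoothed, 2))  # 20.0 -- the jump to 30 degrees is halved
```

In the real loop, the smoothed yaw/pitch would then be applied to the scene camera each frame.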
The final VTK part of the project – interaction! A big requirement for this project was to allow a user to pilot a bot in the 3D environment. The first step is to build custom camera types, so that we can look at the environment from the correct perspective. This post looks at adding a rendering loop, handling keyboard and mouse input, and setting up a few template cameras.
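To give a flavour of the template cameras, here is a minimal, self-contained sketch of a first-person camera: a position plus yaw/pitch angles, with WASD-style keys mapped to motion. The class, key bindings, and step sizes are my own illustration, not the project’s code; in the actual app, the computed position and focal point would be handed to VTK’s `vtkCamera.SetPosition` / `SetFocalPoint` each frame from the interactor’s key-press observer:

```python
import math


class FirstPersonCamera:
    """First-person camera state: a position plus yaw/pitch angles.
    Yaw rotates about the vertical axis; pitch looks up/down."""

    def __init__(self, position=(0.0, 0.0, 0.0)):
        self.position = list(position)
        self.yaw = 0.0    # radians
        self.pitch = 0.0  # radians

    def direction(self):
        # Unit view direction derived from yaw and pitch
        # (yaw=0, pitch=0 looks down -Z).
        cp = math.cos(self.pitch)
        return (math.sin(self.yaw) * cp,
                math.sin(self.pitch),
                -math.cos(self.yaw) * cp)

    def focal_point(self):
        # The point the camera looks at, one unit along the view direction.
        d = self.direction()
        return tuple(p + di for p, di in zip(self.position, d))

    def move_forward(self, step):
        d = self.direction()
        self.position = [p + step * di for p, di in zip(self.position, d)]

    def handle_key(self, key, step=0.5, turn=0.05):
        # Illustrative key bindings: W/S move, A/D turn.
        if key == "w":
            self.move_forward(step)
        elif key == "s":
            self.move_forward(-step)
        elif key == "a":
            self.yaw -= turn
        elif key == "d":
            self.yaw += turn


cam = FirstPersonCamera()
cam.handle_key("w")  # step forward along the view direction
print(cam.position)  # [0.0, 0.0, -0.5] for yaw=0, pitch=0
```

Wiring this up to VTK then amounts to registering a `KeyPressEvent` observer on the render window interactor that calls `handle_key` and re-renders.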