ROS: The Robot Operating System
With a Raspberry Pi installed in Marty's head providing a substantial boost in computational power, more advanced features can be implemented. One of the best ways to leverage this extra power is the Robot Operating System - ROS. You can take advantage of the vibrant community and ecosystem that exists around it to learn about robots and how to develop cool applications for them.
Okay, great, but what exactly is ROS?
In essence, ROS is a flexible framework that aims to ease the development of robot software. It's a pseudo operating system consisting of a wide range of software tools and libraries that make life a little easier for robotics developers. One of its main strengths is the open-source nature of the framework. This was intentional, designed to promote collaboration between groups, capitalise on their strengths and build on each other's work, with the end result being more robust and effective robot software.
As such, individuals or groups can create their own tools and libraries and publish them for use by the wider ROS community. This modularity makes life a little easier for budding roboticists (which you most likely are if you’re reading this) as it allows you to use as much or as little of ROS as you’d like, picking and choosing what suits your particular application best.
And how does it work?
Let’s cover the basics of ROS, first.
Underpinning ROS is the concept of distributed computing, and ROS itself is a kind of peer-to-peer distributed system. A ROS system - in this case not necessarily a single machine - is a network comprising independent nodes (synonymous with processes) that communicate with each other by passing messages, using a publisher and subscriber model. A node can essentially broadcast its data over a named bus - called a topic - to be picked up by any other node that is interested in that data. For example, a camera node can publish a stream of images, which can be picked up by another node that handles some kind of processing using the images. There is no limit to the number of subscribers a topic can have, and subscribers don't even need to be on the same machine (or architecture!) as the publisher.
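The publish/subscribe pattern at the heart of ROS can be sketched in a few lines of plain Python. To be clear, this is just an illustration of the pattern, not actual ROS code - real ROS nodes use a client library like rospy or roscpp, and the Master handles the plumbing for you:

```python
# A toy illustration of the publish/subscribe pattern ROS is built on.
# Topics are just named buses; publishers don't know who is listening.

subscribers = {}  # maps topic name -> list of callback functions

def subscribe(topic, callback):
    """Register a callback to be invoked for every message on a topic."""
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, message):
    """Deliver a message to every subscriber of a topic (if any)."""
    for callback in subscribers.get(topic, []):
        callback(message)

# A "camera node" publishes images; a "processing node" consumes them.
received = []
subscribe("/marty/camera/image", lambda img: received.append(img))
publish("/marty/camera/image", "frame_0001")
publish("/marty/camera/image", "frame_0002")
print(received)  # → ['frame_0001', 'frame_0002']
```

Note that publishing to a topic with no subscribers is perfectly fine - the message is simply dropped, which is also how real ROS behaves.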
This distributed model is really handy, and can easily span devices and operating systems. Using Marty as an example, the Rick (an Arduino-based board) communicates with the Pi over a serial link, using rosserial.
I think I’m starting to get it…
Don’t worry if this isn’t making complete sense just yet. Let’s visualise it. We’ll use Marty’s face tracker as an example.
It’s a simplified view, but relatively accurate. Let’s break down what’s going on here.
Governing everything is the ROS Master. This node is crucial to the running of a ROS system: its role is to handle the naming and registration of the other nodes in the system, and it keeps an eye on the various publishers, subscribers and services running across the entire system. Once a node registers with the Master, it can then communicate with any other node that has registered with the Master - the Master only brokers these connections, and the actual message data flows directly between nodes.
Next is the raspicam_node. This is the node that gets images from the Raspberry Pi camera and publishes them over the /marty/camera/image topic.

This data is then read by the face_tracker node by subscribing to the /marty/camera/image topic, and this node handles the image processing (such as detecting faces, eyes and smiles). Once this has been completed, the node publishes the processed images (/marty/face_tracking/faces, for example) as well as the co-ordinate data of the detected features within the image frame (/marty/face_tracking/faces_centroid, for example).

Also shown is the method of visualisation, the output of which, in this case, is simply a rectangle drawn over any detected features. This is performed on the laptop used in this ROS system, and is achieved by running an instance of rqt_image_view on the laptop and subscribing to a video output of the face_tracker node. The Terminal here is simply a command line prompt that has been told to subscribe to the /marty/face_tracking/faces_centroid topic and prints the output.
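To make the centroid data a bit more concrete: face detectors (OpenCV's Haar cascades are a common choice for this kind of thing) typically report each detection as an (x, y, width, height) bounding rectangle, and a centroid is just the centre point of that rectangle. Here's a minimal sketch of how a node like face_tracker might derive its centroid message - the function names are illustrative, not Marty's actual source code:

```python
# Illustrative sketch: deriving a face centroid from detector output.
# Detectors such as OpenCV's CascadeClassifier.detectMultiScale return
# bounding rectangles as (x, y, width, height) tuples.

def centroid(rect):
    """Centre point of an (x, y, w, h) bounding rectangle."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def largest_face_centroid(faces):
    """Pick the biggest detection (usually the closest face) and
    return its centroid, or None if nothing was detected."""
    if not faces:
        return None
    biggest = max(faces, key=lambda r: r[2] * r[3])
    return centroid(biggest)

# Two detections in a 640x480 frame; the larger face wins.
faces = [(100, 80, 60, 60), (300, 200, 120, 120)]
print(largest_face_centroid(faces))  # → (360.0, 260.0)
```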
It’s a relatively simple example, but illustrates the structure of a typical ROS application.
Worthy of note is the previously mentioned modularity of ROS. The face_tracker node doesn't necessarily have to receive images from an on-board camera; it could just as easily be a camera elsewhere on the network (so long as the subscribed topic name is the same, of course). Expanding on or utilising data from this node is also straightforward: simply subscribe to the relevant topic in your application and you have the data. For example, you could have a script that subscribes to the co-ordinate data from the /marty/face_tracking/faces_centroid topic and makes Marty move to track your face. Try it and show us, we'd love to see!
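As a starting point, here's a sketch of such a script. The turning logic is kept in a plain function so it's easy to follow and test on its own; the topic name comes from the example above, but the message type (geometry_msgs/Point) and the way you'd actually command Marty to turn are assumptions, so treat the rospy wiring as a template rather than finished Marty code:

```python
def turn_direction(centroid_x, frame_width=640, deadband=40):
    """Decide which way to turn so the face moves toward the frame centre.
    Returns 'left', 'right', or None if the face is already close enough
    to centre (within the deadband, in pixels)."""
    offset = centroid_x - frame_width / 2.0
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return None

def main():
    # Template ROS wiring - requires a ROS install. The Point message
    # type and the topic name are assumptions about the face tracker.
    import rospy
    from geometry_msgs.msg import Point

    def callback(msg):
        direction = turn_direction(msg.x)
        if direction:
            # Here you would send an actual movement command to Marty.
            rospy.loginfo("Turn %s to track the face", direction)

    rospy.init_node("face_follower")
    rospy.Subscriber("/marty/face_tracking/faces_centroid", Point, callback)
    rospy.spin()  # keep the node alive, processing callbacks

# On a machine with ROS installed, call main() to run the node.
```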
Cool! So where do I start?
Now that the basics have been covered, it's time to dive into some practical examples.
The official Marty Pi image already has a version of ROS installed, so you can bypass the pain that is installing ROS from source.
Before you dive in though, it is highly recommended to install ROS on another computer as well. This will speed up your development and just make life that much easier. When it comes to developing robot applications, Linux is king, so using an OS like Ubuntu is virtually a requirement. It's one of the most widespread Linux distros and is officially supported by ROS; here's the official install tutorial. Versions of ROS exist that can "run" on Windows and OSX, but these are still experimental, buggy and not recommended.
Once you have Ubuntu and ROS installed on a computer of your choice, my advice is to get your bearings first and follow the official ROS tutorials. These will get you familiar with the beginner-level stuff that's necessary to get Marty to do cool things.