Neato Tag

Project Description

Our project is Neato Tag: a system of robot vacuums in which one Neato is “it” and chases the others while they run away. If “it” manages to tag another Neato, that Neato becomes “it” and the game continues.
We chose to undertake this project because it aligned with many of the learning goals we set for ourselves at the beginning of the semester: understanding the math behind a complex algorithm, using multiple Neatos and multiple sensors on each Neato, and gaining experience with ROS2. The algorithm we explored was the Kalman Filter, which combines a series of noisy measurements to estimate an unknown quantity. We used the Neato’s camera, LiDAR, and bump sensors throughout the project, and we worked with many ROS2 features, including launch files, satisfying that learning goal as well.
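For readers unfamiliar with the idea, here is a minimal one-dimensional predict/update sketch showing how a Kalman filter fuses a stream of noisy measurements into a single estimate. The noise values and the example readings below are illustrative only, not values from our project.

```python
def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle for a constant scalar state.
    x, p: prior estimate and its variance; z: new measurement;
    q, r: assumed process and measurement noise variances."""
    # Predict: the state is modeled as constant, so only uncertainty grows
    p = p + q
    # Update: blend prediction and measurement by their relative certainty
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Usage: estimate a distance from repeated noisy readings
estimate, variance = 0.0, 1.0
for reading in [1.2, 0.9, 1.1, 1.0]:
    estimate, variance = kalman_step(estimate, variance, reading)
```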
Our game is simple. Each Neato carries a uniquely colored sticky note that identifies it and lets the others estimate how far away it is. One Neato is chosen to be “it” first and begins chasing the others, using its onboard camera to search for any of the target colors. If it sees one, it turns toward it while driving at full speed. When the bump sensor is triggered by a collision, the camera checks what color Neato was just hit. That Neato must then wait five seconds before it begins chasing the others and the game continues. The whole time, every Neato uses its LiDAR to survey the surrounding area and avoid any obstacles it comes near while chasing or running away from the other Neatos.
Here you can see our Neatos playing tag with each other! It was very fun and rewarding to work on this project and to finally see everything come together in the end.

System Architecture

Our system architecture consists of three main nodes:

  • camera_detector: a node running on each Neato to detect the relative positions of other Neatos
  • tagging: a node on each Neato that determines when it has tagged another Neato
  • navigator: the motion control node for each Neato
These nodes combine to let the Neatos play tag. camera_detector’s main goal is to publish a point cloud of all other Neatos that the camera can see. To do this, we placed colorful sticky notes around each Neato to identify it and created a custom color mask for each of these sticky notes. We chose HSV for the color masks because we wanted to key on the color (hue) of the sticky notes rather than their brightness (value). For each mask, we made a binary image from the camera image and used a blob detection algorithm to find the largest concentration of each color. Since we know the actual height of the sticky note, we can combine the camera intrinsics with the pixel height of the blob to estimate the blob’s position in 3D space using some basic linear transformations. We then publish the position found for each color (if one was found) in a ParticleCloud message and send that information to the navigator node.
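As a rough illustration of this pipeline, here is a minimal sketch in Python with OpenCV. The HSV bounds, sticky-note height, and camera intrinsics below are placeholder values rather than the ones tuned for our project, and the helper name locate_note is ours.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for one sticky-note color (a green note here)
HSV_LOWER = np.array([40, 80, 80])
HSV_UPPER = np.array([80, 255, 255])

NOTE_HEIGHT_M = 0.075   # assumed physical height of the sticky note (m)
FX, FY = 600.0, 600.0   # assumed focal lengths in pixels (camera intrinsics)
CX = 320.0              # assumed principal point x

def locate_note(bgr_image):
    """Return an approximate (x, z) position of the largest blob of the
    target color in the camera frame, or None if no blob is found."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOWER, HSV_UPPER)   # binary image for this color
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)        # largest concentration of color
    u, v, w, h = cv2.boundingRect(blob)
    if h == 0:
        return None
    # Pinhole model: depth = focal_length * real_height / pixel_height
    z = FY * NOTE_HEIGHT_M / h
    # Horizontal offset from the optical axis at that depth
    x = (u + w / 2 - CX) * z / FX
    return x, z
```

In a full node, a result like this for each color would be packed into the ParticleCloud message that navigator consumes.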
tagging has the goal of determining when its Neato has tagged another Neato. To do this, it listens to both the camera and the bump sensor. Every time a new camera image is received, the node finds the closest Neato within the image: using the same color masks as camera_detector, it finds the biggest blob in each of the resulting binary images, and if the largest of these blobs is big enough, the corresponding Neato is treated as the closest one. If no blob is big enough, the node records that no Neato is nearby. At the same time, the tagging node listens to the bump sensor. If any part of the bump sensor is hit, the tagging node checks whether it is running on the Neato that is currently “it”, and if so, sends a message to all other Neatos that the closest Neato is now “it” (if no Neato is close, it does not send this message).
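A minimal sketch of that decision logic might look like the following. The color names, area threshold, and the largest_blob_area helper are hypothetical stand-ins for the project’s actual masks and tuning.

```python
MIN_BLOB_AREA = 1500                    # assumed threshold: smaller blobs mean "not close"
COLORS = ["pink", "green", "orange"]    # one sticky-note color per Neato (illustrative)

def closest_neato(image, largest_blob_area):
    """Return the color of the Neato that appears closest, or None.
    largest_blob_area(image, color) is assumed to return the pixel area of
    the biggest blob in that color's binary mask (as in the sketch above)."""
    areas = {color: largest_blob_area(image, color) for color in COLORS}
    color, area = max(areas.items(), key=lambda kv: kv[1])
    return color if area >= MIN_BLOB_AREA else None

def on_bump(bumped, i_am_it, closest):
    """Decide whether to announce a tag when the bump sensor fires."""
    if bumped and i_am_it and closest is not None:
        return closest    # this color Neato becomes "it"
    return None           # otherwise no tag message is sent
```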
The navigator node is the meat of the tagging behavior: it controls the full movement of the Neato. navigator has three states: frozen, tagger, and runner. When a Neato is first tagged, it freezes for five seconds, enough time for the previous tagger to get away. Once the five seconds are up, the Neato transitions into tagger mode. In this mode, the Neato moves forward while performing obstacle avoidance and chasing any Neatos reported by camera_detector. To do this, it evaluates a cost function: every point in the LiDAR scan is penalized, while points reported by camera_detector (where the other Neatos are) are rewarded, with both weights inversely proportional to the square of the distance to the respective point. The Neato computes the gradient of this cost function and steers in the direction that decreases the cost. It stays in this state until the tagging node detects a tag. The Neato then transitions into runner mode, which behaves the same way except that the other Neatos are now negatively weighted, causing the Neato to run from both obstacles and other Neatos. When properly transitioned, these behaviors let the Neatos play tag.
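To make the cost-gradient idea concrete, here is a small sketch of how the steering direction could be computed from 1/d² cost terms. The weights and function names are illustrative, not the project’s tuned values, and constant factors are dropped since only the direction is used.

```python
import numpy as np

def descent_direction(obstacle_pts, neato_pts, chasing,
                      w_obstacle=1.0, w_neato=2.0):
    """Return a unit vector pointing toward decreasing cost, with the robot
    at the origin of its own frame. Points are (x, y) pairs in that frame."""
    step = np.zeros(2)

    def push(points, weight, away):
        nonlocal step
        for p in np.atleast_2d(np.asarray(points, dtype=float)):
            d = np.linalg.norm(p)
            if d < 1e-6:
                continue
            # A 1/d^2 cost term contributes a 1/d^3 pull or push on the robot
            contribution = (weight / d**3) * (p / d)
            step += -contribution if away else contribution

    push(obstacle_pts, w_obstacle, away=True)       # LiDAR points always repel
    push(neato_pts, w_neato, away=not chasing)      # other Neatos attract when "it", repel when running
    norm = np.linalg.norm(step)
    return step / norm if norm > 1e-9 else step

# Usage: the heading error np.arctan2(step[1], step[0]) could feed a simple
# proportional controller for angular velocity while driving forward.
step = descent_direction([(1.0, 0.5)], [(2.0, -1.0)], chasing=True)
```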

Ethics Statement

The current iteration of the software is functional only on Neato Robotics robot vacuums, and its navigation is based on color recognition. Color recognition is not novel technology, but it could be used in weapons systems to target objects of a certain known color. If our software is used, it must not be adapted to other platforms or hardware.
There are currently three set colors that the software uses to recognize other Neato robots; if a person wears attire in these colors, the Neatos may mistake them for a Neato and flee from or chase them. The robot vacuum on which this software runs does not generally pose a danger to humans over the age of four. We advise users and spectators to avoid wearing the colors used by the navigation software and to keep children under four away from the operating area. Additionally, Neato robot vacuums may pose a tripping hazard; be aware when in an area traversed by Neatos or other robotic vacuums.