In drive tests performed inside an MIT campus building, the robot successfully avoided collisions while keeping up with the average flow of pedestrians.
“Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians,” said Yu Fan “Steven” Chen, who led the work. “For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals.”
In order for a robot to make its way autonomously through a heavily trafficked environment, it must solve four main challenges: localisation (knowing where it is in the world), perception (recognising its surroundings), motion planning (identifying the optimal path to a given destination), and control (physically executing its desired path).
Chen and his colleagues used standard approaches to solve the problems of localisation and perception. For perception, they outfitted the robot with off-the-shelf sensors, such as webcams, a depth sensor, and a high-resolution lidar sensor. For localisation, they used open-source algorithms to map the robot’s environment and determine its position. To control the robot, they employed standard methods for driving autonomous ground vehicles.
“The part of the field that we thought we needed to innovate on was motion planning,” graduate student Michael Everett explained. “Once you figure out where you are in the world, and know how to follow trajectories, which trajectories should you be following?”
This is a particularly hard problem in pedestrian-heavy environments, where individual walking paths are difficult to predict. As one solution, roboticists sometimes take a trajectory-based approach, programming the robot to compute an optimal path that accounts for everyone's desired trajectories. Those trajectories must be inferred from sensor data, since people don't explicitly tell the robot where they are trying to go.
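The article doesn't describe any planner's internals, but the overall shape of a trajectory-based planner can be sketched in a few lines of Python. Everything named here (the candidate paths, the predictor, the cost function) is a hypothetical placeholder rather than anything from the team's work; the sketch only shows why the computation balloons, since every candidate path must be scored against a predicted trajectory for every pedestrian.

```python
def trajectory_based_plan(candidate_paths, pedestrians, predict_path, cost):
    """Hypothetical trajectory-based planner: score every candidate robot
    path against a predicted trajectory for every sensed pedestrian, then
    return the cheapest candidate. Work grows with (candidates x
    pedestrians x path length), which is what makes this slow in a crowd."""
    predicted = [predict_path(p) for p in pedestrians]  # intent inferred from sensor data
    def total_cost(path):
        return sum(cost(path, ped_path) for ped_path in predicted)
    return min(candidate_paths, key=total_cost)
```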
“But this takes forever to compute. Your robot is just going to be parked, figuring out what to do next, and meanwhile the person’s already moved way past it before it decides ‘I should probably go to the right,’” Everett said. “So, that approach is not very realistic, especially if you want to drive faster.”
Others have used faster, ‘reactive-based’ approaches, in which a robot is programmed with a simple model, using geometry or physics, to quickly compute a path that avoids collisions.
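Again the article leaves the details out, but a classic reactive scheme is a potential field: steer toward the goal while each nearby pedestrian pushes the robot away. The sketch below is a generic textbook version with made-up gains and ranges, not the method of any particular team.

```python
import math

def reactive_step(robot_xy, goal_xy, pedestrians_xy, max_speed=1.2):
    """One step of a simple potential-field controller: unit attraction
    toward the goal, repulsion from each pedestrian within a 2 m bubble,
    and a cap on the commanded speed. All constants are illustrative."""
    gx, gy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    dist_to_goal = math.hypot(gx, gy) or 1.0
    vx, vy = gx / dist_to_goal, gy / dist_to_goal      # pull toward the goal
    for px, py in pedestrians_xy:
        dx, dy = robot_xy[0] - px, robot_xy[1] - py
        d = math.hypot(dx, dy)
        if d < 2.0:                                    # repel only when close
            push = (2.0 - d) / (d or 1e-6)
            vx += push * dx
            vy += push * dy
    speed = math.hypot(vx, vy)
    if speed > max_speed:                              # cap the commanded speed
        vx, vy = vx * max_speed / speed, vy * max_speed / speed
    return vx, vy
```

Because each step uses only the current geometry, it is fast to compute, which is exactly the appeal; the weakness Everett describes comes from the model being too simple for how people actually move.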
The problem with reactive-based approaches, Everett says, is the unpredictability of human nature: people rarely stick to a straight, geometric path, but rather weave and wander, veering off to greet a friend or grab a coffee. In such an unpredictable environment, reactive robots tend to collide with people, or end up looking as if they are being pushed around as they swerve excessively to avoid everyone.
The team found a way around such limitations, enabling the robot to adapt to unpredictable pedestrian behaviour while continuously moving with the flow and following typical social codes of pedestrian conduct.
They used reinforcement learning, a machine-learning technique in which they ran computer simulations to train the robot to take certain paths, given the speed and trajectory of other objects in the environment. The team also incorporated social norms into this offline training phase, rewarding the robot in simulation for passing on the right and penalising it for passing on the left.
The advantage of reinforcement learning is that these training scenarios, which take extensive time and computing power, can be run offline. Once the robot is trained in simulation, the researchers can program it to carry out the optimal paths identified in the simulations whenever it recognises a similar scenario in the real world.
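The article doesn't give the actual reward function, but reward shaping of the kind described (rewarding the goal, penalising collisions, and nudging the robot toward right-hand passing) might look roughly like this toy sketch. The numeric values and the inputs, such as a minimum-separation measurement and a passed-on-the-left flag, are illustrative assumptions, not the team's formulation.

```python
def social_reward(reached_goal, collided, min_separation, passed_on_left):
    """Toy per-step reward illustrating social-norm shaping. Terminal
    events dominate; small penalties discourage crowding pedestrians
    and violating the pass-on-the-right convention. Illustrative only."""
    if collided:
        return -0.25                       # hard penalty for a collision
    if reached_goal:
        return 1.0                         # full reward for arriving
    reward = 0.0
    if min_separation < 0.2:               # too close for comfort (metres)
        reward -= 0.1 * (0.2 - min_separation)
    if passed_on_left:                     # the norm-violation penalty
        reward -= 0.05
    return reward
```

A policy trained against such a reward in simulation can then be executed directly at run time, which is what turns the expensive part of the computation into an offline, one-time cost.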
The researchers enabled the robot to assess its environment and adjust its path every one-tenth of a second. In this way, the robot can continue rolling through a hallway at a typical walking speed of 1.2 metres per second, without pausing to reprogram its route.
“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” Everett said. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
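In other words, the deployed system behaves like a control loop running at roughly 10 Hz. A minimal sketch, assuming hypothetical `robot` and `policy` interfaces (none of these method names come from the team's code):

```python
import time

CONTROL_PERIOD = 0.1   # re-decide every tenth of a second
CRUISE_SPEED = 1.2     # typical walking pace, in m/s

def navigate(robot, policy):
    """Sense, choose a velocity with the learned policy, execute it
    briefly, then look again. The path to the goal is never planned
    in full; it emerges from the sequence of short decisions."""
    while not robot.at_goal():
        observation = robot.sense()        # current view of the world
        vx, vy = policy.choose_velocity(observation, max_speed=CRUISE_SPEED)
        robot.command_velocity(vx, vy)     # one short-horizon decision
        time.sleep(CONTROL_PERIOD)         # act, then reassess
```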
The team now plans to explore how robots might navigate crowds in a pedestrian environment.
“Crowds have a different dynamic than individual people, and you may have to learn something totally different if you see five people walking together,” Everett says. “There may be a social rule of, ‘Don’t move through people, don’t split people up, treat them as one mass.’ That’s something we’re looking at in the future.”