Autonomous, collaborative machines set to change the face of robotics
The prospect of 'swarms' of robots behaving autonomously and collaboratively to achieve complex industrial tasks is one that can excite and alarm in equal measure.
On the one hand, of course, the sheer technical achievement is remarkable and the possibilities almost endless. On the other hand, the sight inevitably calls to mind the spectre of various sci-fi dystopias in which robots act independently of their human 'masters'.
These contrasting emotions were doubtless felt by many attending the recent SolidWorks World event in Florida, at which Professor Vijay Kumar of the School of Engineering and Applied Sciences, University of Pennsylvania, gave a fascinating presentation on his work in this area.
Professor Kumar drew a comparison between his robots and Unmanned Aerial Vehicles (UAVs) such as the drones used for military purposes. While the two may be mechanically similar in certain ways, he said, UAVs are not robots in the true sense, because they are remotely operated – often by a team of people. By contrast, Kumar's swarms are made up of autonomous robots capable of acting both independently and collaboratively to achieve a set task.
The other key difference between these quadrotors (so called because they are helicopters with four rotors) and UAVs is their size. UAVs are big – sometimes weighing hundreds of kilos – while the robots developed at the University of Pennsylvania are at most a metre in diameter and at most a couple of kilos in weight. Indeed, the smallest of these robots weighs just 75g and is approximately 21cm in diameter.
The dynamic capabilities of these robots are impressive. For instance, their size makes them incredibly agile – capable of performing flips and double flips in the space of just half a second. This is possible because an onboard computer carefully monitors signals from onboard gyroscopes and accelerometers to stabilise the aircraft. This means it is possible to throw the robot into the air and, no matter from what position, it will recover to a steady hover.
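To make the idea concrete, a much-simplified sketch of such a stabilising loop is given below. It is not Kumar's controller: the complementary filter, the PD gains and the function names are illustrative assumptions, but it shows how gyroscope and accelerometer signals can be blended and fed back to level the vehicle into a hover.

```python
import numpy as np

def hover_attitude_control(roll, pitch, roll_rate, pitch_rate,
                           kp=6.0, kd=1.5):
    """PD attitude controller: drive roll and pitch (and their rates)
    towards zero so the vehicle settles into a level hover.
    Angles in radians, rates in rad/s; gains are illustrative only."""
    torque_roll  = -kp * roll  - kd * roll_rate
    torque_pitch = -kp * pitch - kd * pitch_rate
    return torque_roll, torque_pitch

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyroscope rate (smooth, but drifts) with the
    accelerometer's gravity-based angle estimate (noisy, but drift-free)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: one control step at 500 Hz with made-up sensor readings.
dt = 1.0 / 500.0
roll_est = complementary_filter(angle_prev=0.30, gyro_rate=-1.2,
                                accel_angle=0.28, dt=dt)
print(hover_attitude_control(roll_est, pitch=0.05,
                             roll_rate=-1.2, pitch_rate=0.1))
```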
However, the question of autonomy is less easily explained. While the quadrotors may have pre-programmed flight paths, motion planning is still key – in other words, how the robots work out how to get from point A to point B. Says Professor Kumar: "The dynamics of a robot like this can only be described in a 12-dimensional space and, furthermore, that space is curved. The reason for this is that if you write down the equations of motion, they have all kinds of non-linearities. What we have to do is plan motions for this in real time. So if you consider the problem of getting from Point A to Point B and you want to perform all these computations via onboard processors, this is quite hard.
"So there's a trick that allows us to transform that 12-D space into a flat, 4-D space. To perform this trick, all you have to worry about are the x, y and z positions and the yaw angle. If you can find sufficiently smooth trajectories in this 4-D space – trajectories that avoid obstacles – then it's possible to take that and transform it back to the 12-D space, and that's the trick that our robots perform to plan motions in real time."
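The 'trick' Kumar describes is what roboticists call differential flatness. The sketch below is only an illustration of the standard construction, not code from his lab: given a smooth trajectory in the flat space (position plus yaw), the vehicle's full attitude and thrust can be recovered from the trajectory's acceleration.

```python
import numpy as np

def attitude_from_flat_outputs(accel, yaw, g=9.81):
    """Recover attitude (rotation matrix) and thrust from a smooth
    flat-space trajectory: the quadrotor must point its thrust axis
    along accel + g*z, while the yaw fixes the heading.
    A sketch of the usual differential-flatness construction."""
    thrust_vec = accel + np.array([0.0, 0.0, g])    # required specific thrust
    z_b = thrust_vec / np.linalg.norm(thrust_vec)   # body z-axis
    x_c = np.array([np.cos(yaw), np.sin(yaw), 0.0]) # heading reference
    y_b = np.cross(z_b, x_c)
    y_b /= np.linalg.norm(y_b)
    x_b = np.cross(y_b, z_b)
    R = np.column_stack([x_b, y_b, z_b])            # attitude
    return R, np.linalg.norm(thrust_vec)            # rotation and thrust magnitude

# Example: level flight accelerating forward at 2 m/s^2 with zero yaw.
R, thrust = attitude_from_flat_outputs(np.array([2.0, 0.0, 0.0]), yaw=0.0)
print(np.round(R, 3), round(thrust, 2))
```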
Aggressive Trajectory
Thus, a robot going from A to B via an intermediate waypoint will describe an aggressive trajectory that starts from a hover position. This movement is undertaken autonomously, and feedback comes from a motion capture system overhead in the lab that provides real-time information 100 times per second (with flight-control commands for the rotors calculated 600 times per second). The planning itself is done at 20Hz, so 20 times a second the robot can calculate 'minimum snap trajectories' – smooth paths that minimise snap, the fourth derivative of position.
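A minimum snap segment can be illustrated with a short calculation. Minimising the integral of squared snap, with position, velocity, acceleration and jerk fixed at both ends, forces the eighth derivative to vanish, so the optimum is simply the seventh-order polynomial that matches those eight boundary conditions. The sketch below solves for that polynomial; it illustrates the principle rather than the planner used on the robots.

```python
import numpy as np
from math import factorial

def min_snap_segment(x0, xT, T):
    """Coefficients of a 7th-order polynomial x(t) = sum c_k t^k on [0, T].
    With position, velocity, acceleration and jerk fixed at both ends,
    the minimum-snap solution is the unique degree-7 polynomial meeting
    those eight boundary conditions. x0, xT are (pos, vel, acc, jerk)."""
    def deriv_row(t, d):
        # Constraint row for the d-th derivative evaluated at time t.
        row = np.zeros(8)
        for k in range(d, 8):
            row[k] = factorial(k) / factorial(k - d) * t ** (k - d)
        return row

    A = np.vstack([deriv_row(0.0, d) for d in range(4)] +
                  [deriv_row(T,   d) for d in range(4)])
    b = np.concatenate([np.asarray(x0, float), np.asarray(xT, float)])
    return np.linalg.solve(A, b)   # polynomial coefficients c_0..c_7

# Example: rest-to-rest move of 1 m in 2 s along one axis.
coeffs = min_snap_segment(x0=(0, 0, 0, 0), xT=(1, 0, 0, 0), T=2.0)
print(np.round(coeffs, 4))
```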
Professor Kumar was also able to demonstrate the robots' ability to avoid obstacles – both static and moving. In one highly impressive video, he showed a quadrotor flying through a hoop thrown into the air. In another clip, a robot was confronted by a window just three inches wider than its diameter. Starting in a hover position, it twisted its body onto a vertical axis to fit through and then recovered – having worked all this out by itself.
Says Kumar: "It knows where these obstacles are because of the motion capture system. It plans paths through these obstacles with remarkable precision. It doesn't matter that these obstacles are moving. As long as it can predict the movement of these obstacles, it can plan a path through them in real time."
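As a toy example of what 'predict the movement' means in practice, the snippet below checks a candidate path against an obstacle whose future positions are extrapolated with a constant-velocity model. The model and numbers are assumptions for illustration only, not the method described in the talk.

```python
import numpy as np

def trajectory_is_clear(traj, obstacle_pos, obstacle_vel, dt, radius):
    """Check a candidate trajectory (waypoints sampled every dt seconds)
    against an obstacle whose motion is predicted with a simple
    constant-velocity model. Returns True if clearance is kept throughout."""
    for i, p in enumerate(traj):
        predicted = obstacle_pos + obstacle_vel * (i * dt)  # predicted obstacle position
        if np.linalg.norm(p - predicted) < radius:
            return False
    return True

# Example: a straight-line path versus an obstacle crossing its course.
path = np.linspace([0, 0, 1], [4, 0, 1], num=40)
print(trajectory_is_clear(path, np.array([2.0, 3.0, 1.0]),
                          np.array([0.0, -1.0, 0.0]), dt=0.05, radius=0.5))
```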
While these displays of autonomy are impressive, perhaps more significant is the quadrotors' demonstrated capability to act collaboratively. Again, of course, this requires the robots to be cognisant not merely of their surroundings, but also of their fellow members of the 'swarm'.
To achieve this, said Kumar, his team turned to examples from nature – in particular, the behaviour of flocking starlings and of ants engaged in a common task. Work with biologists, he said, had led the team to identify three aspects of animal behaviour that matter for 'swarms'.
The factors identified, said Kumar, are that "[animals in swarms] operate solely on sensing local information. It is not possible for the individual at one extreme of a group to know what the person at the other extreme is doing. Secondly, the individuals have to act independently. No individual is going to tell you what to do to achieve a particular co-operative task.
"Finally, there has to be a notion of anonymity. An individual has to be agnostic to who their neighbour is. That's very important if you want to perform a collaborative task. The ants have no identities and the robots don't care which robot is next to them."
Appreciating these factors has allowed the quadrotors to be used in a number of impressive collaborative actions. These have included the creation of a 20-robot swarm. Flying in formation, the robots are all aware of their neighbours: they are told by their human operator which formations to create, and they do so safely, keeping track of their fellow robots. The formations can be two- or three-dimensional and can change dynamically to avoid obstacles. Control has to be very precise in these formations, particularly as the individual robots are buffeted by the aerodynamics of neighbouring vehicles and the downwash of their rotors.
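How a single robot might hold its slot in such a formation can be sketched very simply: steer towards an assigned offset from a shared reference, and push away from any neighbour that strays too close. The gains and distances below are illustrative assumptions, not figures from the Pennsylvania lab.

```python
import numpy as np

def formation_command(own_pos, own_offset, reference, neighbour_pos,
                      k_form=1.0, k_avoid=2.0, safe_dist=0.6):
    """Velocity command for one robot in a formation: track the assigned
    slot (reference + own_offset) and add a repulsive term for any
    neighbour inside safe_dist. Gains and distances are illustrative."""
    cmd = k_form * (reference + own_offset - own_pos)     # pull toward the slot
    for q in neighbour_pos:
        gap = own_pos - q
        d = np.linalg.norm(gap)
        if 0 < d < safe_dist:
            cmd += k_avoid * (safe_dist - d) * gap / d    # push apart if too close
    return cmd

# Example: hold a slot 1 m to the right of a formation centre at the origin.
print(formation_command(own_pos=np.array([0.8, 0.1]),
                        own_offset=np.array([1.0, 0.0]),
                        reference=np.array([0.0, 0.0]),
                        neighbour_pos=[np.array([0.9, -0.3])]))
```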
The size of these robots, though, would appear to limit their practical value. One of the penalties of scaling down is payload: a single small robot can carry very little, so many robots will need to collaborate to carry useful loads. Whether the payloads are objects to transport or sensors to map a particular area, collaboration is the key.
Professor Kumar then showed a video of robots collaborating to build simple structures. He said: "The robots decide between them which robot is going to pick up which part and where they're going to assemble it. The high-level specification is very simple, but the robots essentially make all the decisions in order to satisfy it."
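Kumar did not say how the robots divide up the work between them, but one simple, hypothetical way such decisions could be made is a greedy assignment of each part to the nearest still-free robot, as sketched below.

```python
import numpy as np

def greedy_assignment(robot_pos, part_pos):
    """Assign each part to the closest still-free robot, greedily by
    distance. Only an illustration of decentralised task allocation;
    the talk does not specify the method the team actually uses."""
    free_robots = list(range(len(robot_pos)))
    assignment = {}
    for j, part in enumerate(part_pos):
        dists = [(np.linalg.norm(np.asarray(robot_pos[i]) - np.asarray(part)), i)
                 for i in free_robots]
        _, best = min(dists)
        assignment[best] = j          # robot `best` will fetch part `j`
        free_robots.remove(best)
    return assignment

# Example: three robots, three parts to pick up.
print(greedy_assignment(robot_pos=[(0, 0), (5, 0), (2, 4)],
                        part_pos=[(1, 1), (4, 1), (2, 5)]))
```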
However, the great limiting factor would appear to be that these robots still rely on the motion capture system within the lab. But Professor Kumar pointed to an application in which the robot carried a Hokuyo laser scanner and a Microsoft Kinect RGB-D (colour and depth) camera. Using these sensors, the robot can distinguish features in the environment and triangulate against those features.
A video was then shown of a robot taking off outside a building it had never seen before and mapping its features, triangulating its position relative to those features a hundred times a second. The onboard sensors thus provide roughly the same information as the motion capture cameras in the lab – 'roughly' because these robots have no access to a global coordinate system such as GPS or a motion capture rig. Instead, they must triangulate against the features they have captured, which gives them a relative sense of how fast they are moving.
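The triangulation step can be illustrated with a small least-squares example. For simplicity, the sketch below assumes the feature positions are already known – something the real system has to estimate as it flies – and is meant only to show how ranges to a handful of features pin down the robot's position.

```python
import numpy as np

def locate_from_features(feature_pos, ranges, guess, iters=10):
    """Estimate the robot's position from measured distances to features
    picked out of its scans, using Gauss-Newton least squares.
    Feature positions are assumed known purely for illustration."""
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diffs = x - feature_pos                  # vectors from features to robot
        dists = np.linalg.norm(diffs, axis=1)
        residual = dists - ranges                # range errors
        J = diffs / dists[:, None]               # Jacobian of ranges w.r.t. position
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x -= step
    return x

# Example: three features at known spots, ranges measured from (2, 1).
features = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([2.0, 1.0])
measured = np.linalg.norm(features - true_pos, axis=1)
print(np.round(locate_from_features(features, measured, guess=[0.5, 0.5]), 3))
```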
In the instance shown, the robot was completely autonomous; its operator could only direct it on the basis of the map it created as it moved around.