Gaming sensors find real-world engineering applications
While it may be enjoyable, entertaining and a pleasant way to pass an idle hour or so, few people would necessarily ascribe the word 'useful' to video game technology. Indeed, the more puritanical might even argue that the employment of so much creativity and technological brilliance on something so frivolous is wasteful in itself.
However, the transfer of technology from gaming to 'real' engineering is an increasingly well-worn path. And nowhere is this more the case than where the Microsoft Xbox Kinect controller is concerned.
For those who are not au fait with games consoles, the Kinect is a motion-sensing input device developed by Microsoft for the Xbox 360 video game console and Windows PCs. A webcam-style add-on peripheral, it enables users to control and interact with the console without touching a game controller, through a natural user interface based on gestures and spoken commands.
Capable of simultaneously tracking up to six people, the Kinect sensor is a horizontal bar connected to a small base with a motorised pivot. It is designed to be positioned lengthwise above or below the video display and has a practical ranging limit of 1.2–3.5m.
Its motion-sensing capabilities and relatively low cost make the Kinect ideal for adaptation to other purposes, something that became apparent when, in November 2010, Adafruit Industries offered a bounty for an open-source driver for Kinect. Despite initial misgivings, Microsoft went on to embrace third-party development of its technology, even going so far as to launch Microsoft Accelerator in June last year, which acts as a matchmaker between promising start-ups that embed the technology in their products and potential investors.
And there is no shortage of examples of such applications. Last June, for instance, Eureka reported on Surrey Satellite Technology Limited's (SSTL's) use of the Kinect's sensors to provide low-cost nanosatellites with spatial awareness in three axes, thereby allowing them to align and dock. In turn, the inspiration for this project came from a Kinect-based project at MIT, as SSTL's project leader Shaun Kenyon makes clear: "We were really impressed by what MIT had done flying an autonomous model helicopter that used Kinect and asked ourselves 'Why has no-one used this in space?'"
The medical field is another where Kinect has been successfully adapted for a variety of purposes. One recent example comes from the University of Leeds, where it is being used to monitor the rehabilitation of patients recovering from strokes.
A number of technologies currently exist to track patient movements. However, not only can these systems cost upwards of £40,000 per camera, they also require the user to wear markers placed on the skin or clothing, and they are often significantly more accurate than necessary, so the large capital expense buys no extra insight into patients' movements. As such, an easy-to-use, Kinect-based system that can produce similar results at a fraction of the cost offers a significant advantage.
The Leeds team developed a system that records normal camera footage of a patient alongside a full 3D rendering of the patient's skeleton, allowing the operator to rotate and explore the patient's movements. This Virtual Instrument (VI) embeds the skeletal data directly into an .avi file. A further VI allows the video to be reviewed, chopped up and saved, giving cherry-picked footage of a physiotherapy session with a rotatable 3D rendering of the patient's skeleton alongside the raw video footage.
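The article does not detail the file format the Leeds VIs use, but the underlying idea of pairing each video frame with the tracked 3D joint positions, so a session can be replayed and explored later, can be sketched in Python. Everything below (the joint list, the JSON-lines sidecar file, the skeleton_source callable standing in for a Kinect driver) is an illustrative assumption rather than the Leeds implementation.

```python
import json
import time

# Illustrative joint set; the Kinect SDK tracks a set of ~20 named skeleton joints.
JOINTS = ["head", "shoulder_left", "shoulder_right", "elbow_left",
          "elbow_right", "hip_centre", "knee_left", "knee_right"]

def record_session(skeleton_source, sidecar_path, n_frames=300):
    """Write one JSON line per video frame, pairing the frame index and
    timestamp with the tracked 3D joint positions (metres, camera space).
    skeleton_source is any callable returning {joint_name: (x, y, z)},
    e.g. a wrapper around a Kinect driver."""
    with open(sidecar_path, "w") as f:
        for frame_idx in range(n_frames):
            joints = skeleton_source()
            record = {
                "frame": frame_idx,            # lines up with the .avi frame index
                "t": time.time(),
                "joints": {j: list(joints[j]) for j in JOINTS if j in joints},
            }
            f.write(json.dumps(record) + "\n")

# A reviewing tool can then load the sidecar file, seek to the matching video
# frame, and re-render the skeleton from any viewpoint.
```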
The system has two further uses. The first is during laparoscopies, where the Kinect's depth-mapping functionality is used to determine the depth from the camera to the abdomen for each pixel in the camera's range. This information is fed into National Instruments' LabVIEW and processed, using approximations of the geometry of the abdomen, to determine the inflated volume and give surgeons a more accurate picture of the intra-abdominal cavity.
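The article does not spell out the geometric approximation used. One plausible sketch, assuming the Kinect views the abdomen from above, is to integrate the per-pixel change in depth between a deflated and an inflated frame, scaling each pixel by its physical footprint at that depth. The intrinsics and the synthetic example below are assumed values for illustration only.

```python
import numpy as np

# Assumed Kinect-style depth-camera focal lengths, in pixels.
FX, FY = 580.0, 580.0

def inflated_volume(depth_before, depth_after, mask):
    """Rough insufflation volume estimate from two depth maps (metres).

    depth_before / depth_after: HxW arrays of depth before and after
    inflation; mask: boolean HxW array selecting abdomen pixels.
    Each pixel's footprint on the surface grows with depth, so the
    column of volume under a pixel is rise * (z/FX) * (z/FY).
    """
    rise = np.clip(depth_before - depth_after, 0.0, None)  # abdomen moves towards camera
    z = depth_after
    pixel_area = (z / FX) * (z / FY)                        # square metres per pixel at depth z
    return float(np.sum(rise * pixel_area * mask))          # cubic metres

# Synthetic example: a 0.05 m rise over a patch roughly 0.19 m square at ~0.9 m range.
h, w = 480, 640
before = np.full((h, w), 0.95)
after = before.copy()
mask = np.zeros((h, w), dtype=bool)
mask[180:300, 260:380] = True
after[mask] -= 0.05
print(f"estimated volume: {inflated_volume(before, after, mask) * 1000:.1f} litres")  # ~1.7 litres
```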
Gesture-based control of the .stl files then allows surgeons to manipulate CT scans and 3D models wirelessly, without the need to leave the sterile operating environment.
The second is gait analysis which, like stroke rehabilitation, is often undertaken by attaching various sensors to the patient's body, whose locations are then fed into the computer system. This is generally a long and drawn-out affair, and one that may cause the patient discomfort. By using the Kinect to track the patient's skeleton instead, gait can be analysed quickly and the important metrics calculated in LabVIEW and fed back to the operator.
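As a rough illustration of the kind of metric that can be pulled from the tracked skeleton (not the Leeds team's actual calculations, which run in LabVIEW), the sketch below estimates step length and cadence from per-frame ankle positions, treating peaks in the horizontal ankle separation as step events.

```python
import numpy as np

def gait_metrics(left_ankle, right_ankle, fps=30.0):
    """Very simplified gait metrics from Kinect skeleton data (illustrative only).

    left_ankle / right_ankle: (N, 3) arrays of per-frame joint positions in
    metres; x and z are assumed to be the horizontal axes (Kinect camera
    space has y pointing up). Step events are taken as local maxima of the
    horizontal ankle separation; step length and cadence follow from them.
    """
    sep = np.linalg.norm((left_ankle - right_ankle)[:, [0, 2]], axis=1)
    # Local maxima of the separation signal = feet furthest apart = step events.
    peaks = [i for i in range(1, len(sep) - 1)
             if sep[i] > sep[i - 1] and sep[i] >= sep[i + 1] and sep[i] > 0.3]
    if len(peaks) < 2:
        return None
    step_length = float(np.mean(sep[peaks]))                # metres
    cadence = 60.0 * fps / float(np.mean(np.diff(peaks)))   # steps per minute
    return {"step_length_m": step_length, "cadence_spm": cadence}
```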
Another fascinating application of the Kinect sensor came from NSK, a company more usually associated with bearing technology, in the form of a robotic guide dog for the blind that uses Microsoft's device to tackle stairs. The robot converts information gained from the sensor into 3D shape, position and attitude information so that it can recognise the width and number of stairs. This has conventionally been a very difficult challenge, but with a new algorithm the robot can recognise stairs while in motion, allowing a safe and stable ascent and descent of staircases.
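NSK has not published its algorithm, but the basic idea of recognising steps in depth data can be illustrated very crudely: sample a height profile along the robot's heading from the 3D data and count upward jumps that look like plausible stair risers. The thresholds and the synthetic profile below are assumptions for illustration only.

```python
import numpy as np

def count_steps(heights, riser_min=0.12, riser_max=0.25):
    """Crude stair detection from a height profile (illustrative only).

    heights: 1-D array of surface heights (metres) sampled along the robot's
    heading, e.g. taken from a vertical slice of the Kinect point cloud.
    Counts upward jumps whose size falls in a plausible stair-riser range.
    """
    jumps = np.diff(heights)
    risers = jumps[(jumps > riser_min) & (jumps < riser_max)]
    return len(risers), risers

# Synthetic profile: flat floor followed by four 0.17 m steps.
profile = np.concatenate([np.zeros(50)] +
                         [np.full(20, 0.17 * (i + 1)) for i in range(4)])
n_steps, risers = count_steps(profile)
print(n_steps, risers)   # -> 4 steps of ~0.17 m each
```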
Further proof of the Kinect's transition into the mainstream of engineering comes in the form of an app developed by 3D measurement company Faro. Based on its proprietary scan processing software, SCENE, the company has developed a new app that transforms data from popular motion sensors into 3D models suitable for use in a variety of applications.
The SCENECT app takes the colour images captured by the Kinect and, using the depth information supplied by the device, joins them together to form a complete 3D model. Using this technology, everyday objects, rooms and figures can be scanned, and it even works on people; essentially there are no limits to users' creativity in their choice of objects to scan. All three-dimensional, stationary objects can be scanned at a distance of 0.8 to 2.5 metres.
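Faro's own processing is proprietary, but the first stage of any such pipeline, turning each pair of aligned colour and depth frames into a coloured 3D point cloud, follows the standard pinhole back-projection, sketched below with assumed Kinect-style intrinsics. Building the complete model then amounts to registering successive clouds against one another as the sensor moves.

```python
import numpy as np

# Assumed Kinect-style depth-camera intrinsics (pixels); real values vary per device.
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

def depth_to_point_cloud(depth, rgb):
    """Back-project a depth map (metres, HxW) and an aligned colour image
    (HxWx3) into an N x 6 array of [x, y, z, r, g, b] points using the
    standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    valid = z > 0                        # the Kinect reports 0 where depth is unknown
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    xyz = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colours = rgb[valid].reshape(-1, 3)
    return np.hstack([xyz, colours])
```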
Dimensions can be taken directly in the three-dimensional data models, and scans can be exported in various file formats to other software such as CAD programs. This opens up almost unlimited possibilities for the 3D data, for example in applications including video games, graphic design, visual art, model-making and even 3D printing.
Scans are easy to make with the portable motion sensor. First, the sensor is simply connected to a PC or laptop running the SCENECT app. Once the scanning process has started, the Kinect is guided slowly along or around the object; the motion should be constant and steady to ensure that the colour and depth information is captured. The measured points form the basis of a point cloud. During scanning, the SCENECT software continuously displays a video feed on the monitor showing what the Kinect is currently capturing.
On the display, coloured markings indicate the quality of individual scanning points in the field of vision, thereby helping to capture data as effectively as possible. At the same time, the 3D point cloud for the object is generated and is immediately visible in the other half of the window, allowing the progress of the scan to be observed at all times.
An additional status manager documents the scan process and uses a coloured field to indicate the overall quality of the scan, which depends on both the point quality and the scanning speed. In the 3D view available in SCENECT, users can use the mouse and keyboard to move around inside the room and view the object from all sides.
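Faro does not publish the quality metric behind these coloured markings, but the general idea can be illustrated with an assumed heuristic: rate each depth pixel by how noisy its local neighbourhood is and whether a reading exists at all, then map that score onto a red-to-green colour.

```python
import numpy as np

def point_quality_colours(depth, window=5, noise_floor=0.002, noise_ceiling=0.02):
    """Illustrative per-pixel quality colouring for a depth frame (metres, float HxW).

    An assumed heuristic, not Faro's actual metric: quality falls as local
    depth variation rises, and is zero where no depth reading exists.
    Returns an HxWx3 uint8 image (red = poor, green = good).
    """
    h, w = depth.shape
    pad = window // 2
    padded = np.pad(depth, pad, mode="edge")
    # RMS difference between each pixel and its neighbourhood, as a crude noise estimate.
    local_dev = np.zeros_like(depth)
    for dy in range(window):
        for dx in range(window):
            local_dev += (padded[dy:dy + h, dx:dx + w] - depth) ** 2
    local_dev = np.sqrt(local_dev / (window * window))
    quality = np.clip(1.0 - (local_dev - noise_floor) / (noise_ceiling - noise_floor), 0.0, 1.0)
    quality[depth <= 0] = 0.0                                    # no depth reading at all
    colours = np.zeros((h, w, 3), dtype=np.uint8)
    colours[..., 0] = ((1.0 - quality) * 255).astype(np.uint8)   # red channel: poor points
    colours[..., 1] = (quality * 255).astype(np.uint8)           # green channel: good points
    return colours
```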