Kinect for Robotics

29 Nov 2011 9:40 PM

The Kinect sensor is one of the best devices, for its price, to become available for robotics in the last decade. For $150 you get 3D range (distance) data and RGB color (webcam video) data. There is a microphone array thrown in as well. But wait! There’s more. The Kinect can detect people and generate skeleton data and you don’t have to write a single line of code to do the processing – the Kinect for Windows SDK does all the hard work for you.

How it Works

The Kinect uses an infrared (IR) laser to spray out a pseudo-random pattern of dots. An IR camera captures an image of the dots that are reflected off objects (as in the picture below), and the electronics inside the Kinect figure out how much the dot pattern has been distorted. The distortion is a measure of distance from the camera. This approach is commonly called Structured Light. All this happens at 30 frames per second – not bad for a cheap consumer device.

[Image: Kinect IR dot pattern]
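To make this concrete, here is a minimal C# sketch of the underlying triangulation, assuming a simple projector/camera stereo model. The baseline and focal length below are illustrative assumptions, not the Kinect's actual calibration constants.

    // Structured Light triangulation: depth is recovered from how far each
    // projected dot shifts (the disparity) between the known pattern and the
    // IR camera image. A larger shift means a closer object.
    static class StructuredLight
    {
        const double BaselineM = 0.075; // assumed projector-to-camera baseline (~7.5 cm)
        const double FocalPx = 580.0;   // assumed IR camera focal length in pixels

        // Classic triangulation: z = f * b / d, with depth returned in meters.
        public static double DepthFromDisparity(double disparityPx)
        {
            return (FocalPx * BaselineM) / disparityPx;
        }
    }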

To the programmer, the Kinect presents an array of depth values that correspond to the pixels of the RGB image. This unusual coordinate space, consisting of x and y as pixel coordinates and z as a distance in millimeters, is called the Depth Image Space. All of the distances are measured from a virtual plane passing through the Kinect camera. You can convert the data into conventional (x, y, z) coordinates in meters, but this requires additional processing overhead.

[Image: Depth Image Space coordinate system]
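If you do want conventional coordinates, a simple pinhole camera model is enough for a rough conversion. This sketch assumes a 640x480 depth image and derives an approximate focal length from the 57 degree horizontal FOV mentioned below; the SDK's own calibrated mapping should be preferred in real code.

    using System;

    static class DepthSpace
    {
        // Convert a Depth Image Space sample (pixel px, py, depth in mm)
        // to approximate Cartesian (x, y, z) in meters.
        public static void ToMeters(int px, int py, int depthMm,
                                    out double x, out double y, out double z)
        {
            const int Width = 640, Height = 480;
            // Focal length in pixels from the FOV: f = (w / 2) / tan(fov / 2)
            double focalPx = (Width / 2.0) / Math.Tan(57.0 / 2.0 * Math.PI / 180.0);

            z = depthMm / 1000.0;                   // forward distance in meters
            x = (px - Width / 2.0) / focalPx * z;   // to the right of the camera center
            y = -(py - Height / 2.0) / focalPx * z; // up (pixel y grows downward)
        }
    }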

Skeleton Space on the other hand uses conventional (x, y, z) coordinates. A skeleton consists of a set of 20 joints, each with its own 3D coordinates. Detecting gestures is relatively easy. For example, to detect a person waving their arm over their head you just need to compare the height of their wrist to see if it is above their head, and then see if it is moving from side to side.

[Image: Skeleton joints]
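Here is a minimal sketch of that wave test, using a plain 3D point as a hypothetical stand-in for the SDK's skeleton joint types: the wrist must be above the head, and must then cross the head's lateral position from side to side.

    struct Point3 { public double X, Y, Z; } // meters; X is lateral, Y is up

    class WaveDetector
    {
        int lastSide;  // -1 = wrist left of head, +1 = right, 0 = unknown
        int crossings; // side-to-side transitions while the arm is raised

        // Call once per skeleton frame; returns true when a wave is detected.
        public bool Update(Point3 head, Point3 wrist)
        {
            if (wrist.Y <= head.Y) // arm dropped: reset and wait
            {
                lastSide = 0;
                crossings = 0;
                return false;
            }

            int side = wrist.X < head.X ? -1 : 1;
            if (lastSide != 0 && side != lastSide) crossings++;
            lastSide = side;

            return crossings >= 2; // two crossings is roughly one full wave
        }
    }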

Comparison to Laser Range Finders

Laser Range Finders, or LIDAR (Light Detection and Ranging) devices, have been on the market for a long time. An LRF works by sending out pulses of infrared laser light and timing the return signal, which is why they are known as Time of Flight devices.

The German SICK brand of LRFs has long been the workhorse of the research community, but these cost thousands of dollars. More recently, Hokuyo in Japan has been selling a cheaper range of LRFs that are approaching the $1,000 barrier. However, for this amount of money you can buy six Kinect sensors, although you would need enough USB ports to plug them all in and the necessary processing power to make use of all the 3D data.

The primary differences between an LRF and a Kinect are the Field of View (FOV), Maximum Range and Resolution. A conventional LRF has a FOV of 180 degrees, and some go up to 270 degrees. In contrast, the Kinect only has a 57 degree FOV.

LRFs are available in a variety of ranges from as little as 2 meters out to well over 100 meters. The Kinect, because it was designed for use in a living room, has a maximum range of 4 meters.

Even with a very large maximum range, the resolution of an LRF is from millimeters to centimeters (depending on how expensive it is), and the accuracy is constant across its entire range. Distance data from the Kinect varies in accuracy, from sub-centimeter up close to as much as a 5cm error at its maximum range. This is a consequence of how it measures distance.
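The reason is the triangulation geometry: depth is inversely proportional to disparity, so a fixed quantization step in disparity turns into a depth error that grows roughly with the square of the distance. A back-of-the-envelope sketch, reusing the illustrative constants from the earlier sketch (not calibration values):

    static class DepthError
    {
        const double BaselineM = 0.075; // assumed baseline
        const double FocalPx = 580.0;   // assumed focal length in pixels
        const double StepPx = 0.1;      // assumed disparity quantization step

        // From z = f * b / d it follows that dz ~ z^2 / (f * b) * dd.
        public static double AtRange(double zMeters)
        {
            return zMeters * zMeters / (FocalPx * BaselineM) * StepPx;
        }
    }

With these numbers the error works out to about 1.5mm at 0.8 meters and about 3.7cm at 4 meters, the same trend as the sub-centimeter to 5cm figures above.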

One downside of an LRF is that it operates in 2D, effectively taking a horizontal slice through the environment around the robot. In order to capture 3D data, the LRF must be mounted on a tilt mechanism and tilted up and down or mounted sideways and panned from side to side. This makes capturing 3D data relatively slow.

The Future for Kinect

Currently the Kinect has a limited range of 80cm to 4 meters. This means that the Kinect cannot see objects that are right in front of the robot, so you still need traditional obstacle sensors such as sonar (which is why they are included on the RDS Reference Platform). Next year, when the Kinect for Windows Hardware is released, there will be a new “near mode” that will range from 50cm to 3 meters. Although this helps with detecting nearby obstacles, a robot should still have other sensors for redundancy.

Going beyond next year, Microsoft will continue researching even better Kinect hardware. This means that 3D depth data is now here to stay, so sharpen up your 3D geometry skills and get cracking on applications that take full advantage of these new devices.

  • Gunter Wendel
    30 Nov 2011 12:36 PM

    "so you still need traditional obstacle sensors"

    If so, then it would be helpful if services for Hokuyo laser rangefinders were available in the final version of RDS 4, especially for the URG-04LX-UG01. This model is a good obstacle finder, is "low cost" (in comparison to other models) and is used by many research institutions and even in the hobbyist arena. Hokuyo abandoned the C# drivers, which makes it difficult to use the Hokuyo rangefinders with MSRDS. But Hokuyo is also named in the partner list of MSRDS, so it should be possible to make such a service available.

    Christmas time is near, so I want to add a few other things to my wishlist:

    2) It is not quite clear to me how I can add servos to the reference platform (e.g. Parallax Eddie) to build a gripper or a pan/tilt mechanism for the Kinect. There should be an easy way to do this with MSRDS and the reference platform.

    3) The Kinect has an accelerometer, but in contrast to the free Kinect drivers (OpenKinect), the MS Kinect SDK does not provide the accelerometer data. If I want to use RDS (and I do), I have to install the MS Kinect SDK. Result: no accelerometer data. If I use libfreenect: no RDS. It would be helpful if the MS Kinect SDK added the accelerometer data to the data stream. On my robot the Kinect is installed 1.4m above the ground, looking downwards at an angle of 66°. This installation is very sensitive to angular deviation, and the accelerometer data could compensate for these deviations. Then it would be easier to detect the floor plane and to distinguish small obstacles on the floor.

    4) SLAM (Kinect based)!!!

    Although there are some wishes left, I will close my list so as not to be too impertinent.

    Best regards

    Gunter Wendel

  • 30 Nov 2011 8:32 PM

    Thanks for the feedback.

    Yes, the Hokuyo URG-04LX-UG01 is a great LRF. I can't promise anything, but we are looking into it.

    Using servos on the Parallax IO controller requires some additional firmware to support PWM on some of the spare digital I/O pins. This is not difficult, but it is not there "out of the box".

    Servos can also be added to the Reference Platform using other servo controllers such as the one available from Phidgets. In early prototypes, we used a combination of a Phidgets servo controller and HB-25 H-bridges to drive an Eddie platform. It worked well.

    We know about the issue with the Accelerometer in the Kinect. So do the people developing the Kinect for Windows SDK, but it has not bubbled to the top of their To Do list. It's up to them to implement support - not us. I understand the issue of small angular deviations when it comes to locating the floor. The new Obstacle Avoidance service in RDS 4 does floor subtraction.

    SLAM... Well, what can I say? It's the holy grail of robotics and it's still not something you can just get "off the shelf". We are well aware of this.

    Trevor

  • Piero Cabassa
    7 Dec 2011 5:18 AM

    Hello, I have a simple question about the Kinect and robots. How can I connect the sensor to a battery? If I want to build a robot, I need a portable solution.

    Thanks!

  • 7 Dec 2011 11:21 AM

    I can't give you the exact technical information (I'm not allowed to), but I can give you a couple of tips. Firstly, you can try searching the web because several people have done this already.

    The power pack, or "wall wart", that you plug into the wall has a lead coming out of it. Read the specs on the power pack. I also suggest that you use a voltage regulator to make sure that you do not put the wrong voltage into the Kinect. Batteries often have a higher voltage when they are freshly charged and drop below their nominal voltage near the end of their discharge cycle. You want to make sure that the power to the Kinect remains constant.

    I have to caution you that any modifications you make to your Kinect will void your warranty. And of course, be careful playing with electricity.

  • Piero Cabassa
    7 Dec 2011 9:43 PM

    I've found a lot of information on Google, but I am looking for a way to connect the Kinect without opening up the cable. With the new robot (with Kinect) on your site, it appears to be possible.

    Thanks for the response!

  • John
    15 Dec 2011 7:17 PM

    "Next year, when the Kinect for Windows Hardware is released, there will be a new “near mode” that will range from 50cm to 3 meters"

    That would be very useful for me. Is the close range capability due to a software or hardware modification? My understanding is the depth image is generated on board the Kinect.

  • 21 Aug 2014 10:38 AM

    What is the latest with Microsoft Robotics? I have not found any new posts past the release of Microsoft Robotics Studio 4. Will Microsoft continue developing and supporting this?

    Thanks - Bijan Gofranian
