We use the **[[Point Cloud Library PCL]]** ([[http://pointclouds.org/documentation/]]) to detect obstacles with the Kinect.

====== Objective ======

We capture the depth data and transform it into a point cloud. We then filter out the points that lie above the ground and project the result to create a 2D image. From this image we can send back the reference position of each obstacle.

  * You can also take a look at the tutorial: [[http://wiki.ros.org/pcl/Tutorials]]

Calibration parameters for computing 3D coordinates from the raw image measurements include **the height of the Kinect**. (In addition, we found that the y-axis of the point cloud corresponds to the height, i.e. the z-axis of a tf transform of the Kinect sensor.)

===== Package =====

The package: {{:pcl_detector.tar.gz|}}

1. Launch the Kinect sensor
  * //roslaunch openni_launch openni.launch//
2. Run the detector node
  * //rosrun pcl_detector pcl_detector//

==== Simulation ====

You can visualize the result using rviz:
  * //rosrun rviz rviz//

Choose the "output" topic with the display type "PointCloud2".
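
For reference, here is a minimal sketch of the kind of processing the pcl_detector node performs: subscribe to the Kinect point cloud, keep only the points above the ground with a PassThrough filter on the y-axis (the height, as noted above), and republish the result on the "output" topic. The input topic ///camera/depth/points//, the mounting height of 1.0 m, and the ground margin are assumptions for illustration, not values taken from the packaged node.

<code cpp>
// Sketch of a ground-removal node (assumed values, not the packaged pcl_detector source).
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

ros::Publisher pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> cloud, obstacles;
  pcl::fromROSMsg(*msg, cloud);

  // In the Kinect optical frame the y-axis points downward, so "height above
  // the ground" is expressed as a range on y. kinect_height is the measured
  // mounting height of the sensor (assumption: 1.0 m).
  const double kinect_height = 1.0;
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(cloud.makeShared());
  pass.setFilterFieldName("y");
  pass.setFilterLimits(-kinect_height, kinect_height - 0.05);  // drop the ground plane

  pass.filter(obstacles);

  sensor_msgs::PointCloud2 out;
  pcl::toROSMsg(obstacles, out);
  out.header = msg->header;
  pub.publish(out);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pcl_detector_sketch");
  ros::NodeHandle nh;
  pub = nh.advertise<sensor_msgs::PointCloud2>("output", 1);
  ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
  ros::spin();
  return 0;
}
</code>

The filtered cloud published on "output" is what you select in rviz as a PointCloud2 display.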