

We use the Point Cloud Library (PCL, http://pointclouds.org/documentation/) to detect obstacles with the Kinect. We capture the depth data and transform it into a point cloud. We then filter the cloud, keeping only the points that lie above the ground. This result is projected to create a 2D image, from which we can send back the reference position of each obstacle. You can also take a look at the tutorial: http://wiki.ros.org/pcl/Tutorials
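A minimal sketch of such a node, assuming the default openni_launch depth topic /camera/depth/points and the “output” topic used below (the filter limits correspond to an assumed mounting height of about 0.5 m and are illustrative, not the values of the actual package):

  #include <ros/ros.h>
  #include <sensor_msgs/PointCloud2.h>
  #include <pcl_conversions/pcl_conversions.h>
  #include <pcl/point_cloud.h>
  #include <pcl/point_types.h>
  #include <pcl/filters/passthrough.h>

  ros::Publisher pub;

  void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& input)
  {
    // Convert the ROS message into a PCL point cloud.
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::fromROSMsg(*input, *cloud);

    // Keep only the points above the ground. In the camera optical frame the
    // y-axis points downwards, so ground points sit near y = kinect_height and
    // smaller y values are higher up.
    pcl::PointCloud<pcl::PointXYZ>::Ptr obstacles(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(cloud);
    pass.setFilterFieldName("y");
    pass.setFilterLimits(-1.0, 0.45);  // placeholder limits for a ~0.5 m mounting height
    pass.filter(*obstacles);

    // The remaining points can be projected onto the ground plane (drop y) to
    // build the 2D obstacle image; here they are simply republished.
    sensor_msgs::PointCloud2 output;
    pcl::toROSMsg(*obstacles, output);
    output.header = input->header;
    pub.publish(output);
  }

  int main(int argc, char** argv)
  {
    ros::init(argc, argv, "pcl_detector");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    pub = nh.advertise<sensor_msgs::PointCloud2>("output", 1);
    ros::spin();
    return 0;
  }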

Calibration parameters for the calculation of 3D coordinates from the raw image measurements include:

The height of the Kinect. (In addition, we found that the y-axis of the point cloud corresponds to the height, i.e. the z-axis of a tf transform of the Kinect sensor.)
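For example, the mounting height can be published as a static tf so that other nodes see it on the z-axis; the frame names, the 0.50 m height and the zero rotation below are placeholders (a real setup also needs the camera-to-optical-frame rotation):

$ rosrun tf static_transform_publisher 0 0 0.50 0 0 0 base_link camera_link 100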

You can visualize the result using rviz:


$ rosrun rviz rviz
In rviz, add a display of type “PointCloud2” and choose the output topic “output”.

TODO: add blob finding (FindingBlob) to improve the results; a possible approach is sketched below.
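A possible sketch of such a blob-finding step, using PCL's Euclidean cluster extraction on the filtered obstacle cloud (the function name findBlobs and the tolerance/size values are placeholders):

  #include <vector>
  #include <pcl/point_cloud.h>
  #include <pcl/point_types.h>
  #include <pcl/search/kdtree.h>
  #include <pcl/segmentation/extract_clusters.h>

  // Group the obstacle points into blobs; each cluster is one obstacle candidate.
  std::vector<pcl::PointIndices> findBlobs(
      const pcl::PointCloud<pcl::PointXYZ>::Ptr& obstacles)
  {
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(obstacles);

    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.05);   // points closer than 5 cm belong to the same blob
    ec.setMinClusterSize(50);       // discard tiny blobs (sensor noise)
    ec.setMaxClusterSize(25000);
    ec.setSearchMethod(tree);
    ec.setInputCloud(obstacles);
    ec.extract(clusters);
    return clusters;
  }

Each returned cluster can then be projected into the 2D image and reported as one obstacle position.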

The package: pcl_detector.tar.gz

To use it:
  1. Launch the Kinect sensor: $ roslaunch openni_launch openni.launch
  2. Run the detector: $ rosrun pcl_detector pcl_detector
