A short post showing the capabilities of the Asus Xtion Pro Live within the ROS framework.
First of all, an image showing the full RGB-D power: a 3D pointcloud with RGB information in RViz.
In the middle of the 3D pointcloud you can see the simulated 2D laser scan extracted from the depth image.
The second image is the 3D pointcloud colorized according to the distance of each point along the X axis.
The third image shows a flat pointcloud used to highlight the 2D laser scan.
Finally, the full laser scan.
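The distance-based coloring in the second image is what RViz's "AxisColor" color transformer produces automatically. For readers who want to reproduce it outside RViz, here is a minimal sketch in plain Python, assuming the cloud is a list of (x, y, z) tuples; the red-to-blue gradient is an arbitrary choice for illustration:

```python
def colorize_by_axis(points, axis=0):
    """Map each point's coordinate along `axis` to an RGB color in [0, 1].

    Near points come out red, far points blue, with a linear blend in
    between -- a simple stand-in for RViz's "AxisColor" transformer.
    `points` is a list of (x, y, z) tuples; axis=0 colors along X.
    """
    coords = [p[axis] for p in points]
    lo, hi = min(coords), max(coords)
    span = hi - lo
    colors = []
    for c in coords:
        t = (c - lo) / span if span > 0 else 0.0  # normalize to [0, 1]
        colors.append((1.0 - t, 0.0, t))          # red -> blue gradient
    return colors
```

For example, three points at X = 0.5 m, 1.0 m, and 1.5 m map to pure red, an even red/blue mix, and pure blue respectively.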
To extract the 2D laser scan I installed a useful ROS package, "depthimage_to_laserscan", and used this launch script:
<!-- this code originates from https://github.com/turtlebot/turtlebot/blob/hydro/turtlebot_bringup/launch/3dsensor.launch -->
<launch>
  <!-- "camera" should uniquely identify the device. All topics are pushed down
       into the "camera" namespace, and it is prepended to tf frame ids. -->
  <arg name="camera" default="camera"/>
  <arg name="publish_tf" default="true"/>

  <!-- Factory-calibrated depth registration -->
  <arg name="depth_registration" default="true"/>
  <arg if="$(arg depth_registration)" name="depth" value="depth_registered" />
  <arg unless="$(arg depth_registration)" name="depth" value="depth" />

  <!-- Driver parameters -->
  <arg name="color_depth_synchronization" default="false" />
  <arg name="auto_exposure" default="true" />
  <arg name="auto_white_balance" default="true" />

  <!-- Processing Modules -->
  <arg name="rgb_processing" default="true"/>
  <arg name="ir_processing" default="true"/>
  <arg name="depth_processing" default="true"/>
  <arg name="depth_registered_processing" default="true"/>
  <arg name="disparity_processing" default="true"/>
  <arg name="disparity_registered_processing" default="true"/>
  <arg name="scan_processing" default="true"/>

  <!-- Worker threads for the nodelet manager -->
  <arg name="num_worker_threads" default="4" />

  <!-- Laserscan topic -->
  <arg name="scan_topic" default="scan"/>

  <include file="$(find openni2_launch)/launch/openni2.launch">
    <arg name="camera" value="$(arg camera)"/>
    <arg name="publish_tf" value="$(arg publish_tf)"/>
    <arg name="depth_registration" value="$(arg depth_registration)"/>
    <arg name="num_worker_threads" value="$(arg num_worker_threads)" />

    <!-- Processing Modules -->
    <arg name="rgb_processing" value="$(arg rgb_processing)"/>
    <arg name="ir_processing" value="$(arg ir_processing)"/>
    <arg name="depth_processing" value="$(arg depth_processing)"/>
    <arg name="depth_registered_processing" value="$(arg depth_registered_processing)"/>
    <arg name="disparity_processing" value="$(arg disparity_processing)"/>
    <arg name="disparity_registered_processing" value="$(arg disparity_registered_processing)"/>
  </include>

  <!-- Laserscan
       This uses lazy subscribing, so will not activate until scan is requested. -->
  <group if="$(arg scan_processing)">
    <node pkg="nodelet" type="nodelet" name="depthimage_to_laserscan"
          args="load depthimage_to_laserscan/DepthImageToLaserScanNodelet $(arg camera)/$(arg camera)_nodelet_manager">
      <!-- Pixel rows to use to generate the laserscan. For each column, the scan will
           return the minimum value for those pixels centered vertically in the image. -->
      <param name="scan_height" value="10"/>
      <param name="output_frame_id" value="/$(arg camera)_depth_frame"/>
      <param name="range_min" value="0.45"/>
      <remap from="image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="scan" to="$(arg scan_topic)"/>
      <remap from="$(arg camera)/image" to="$(arg camera)/$(arg depth)/image_raw"/>
      <remap from="$(arg camera)/scan" to="$(arg scan_topic)"/>
    </node>
  </group>
</launch>
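To make the scan_height and range_min parameters above concrete, here is a rough sketch, in plain Python rather than the actual C++ nodelet, of the core idea behind depthimage_to_laserscan: for each image column, take the minimum valid depth over a band of scan_height rows centered vertically, then project that point into the scan plane with the pinhole camera model. The function name and the toy image below are made up for illustration; the real nodelet reads the focal length f_x and principal point c_x from the camera_info topic:

```python
import math

def depth_band_to_scan(depth_image, scan_height, f_x, c_x, range_min=0.45):
    """Collapse a band of depth-image rows into (angle, range) pairs.

    depth_image: rows of depths in meters. For each column u, take the
    minimum valid depth (>= range_min) within the scan_height rows
    centered vertically, project it into the scan plane with the
    pinhole model (x = (u - c_x) * z / f_x), and report bearing
    atan2(x, z) and range hypot(x, z).
    """
    height = len(depth_image)
    top = height // 2 - scan_height // 2
    band = depth_image[top:top + scan_height]
    scan = []
    for u in range(len(depth_image[0])):
        depths = [row[u] for row in band if row[u] >= range_min]
        if not depths:
            # No valid return in this column.
            scan.append((math.atan2(u - c_x, f_x), float('inf')))
            continue
        z = min(depths)                  # closest obstacle in the band
        x = (u - c_x) * z / f_x          # lateral offset in the scan plane
        scan.append((math.atan2(x, z), math.hypot(x, z)))
    return scan
```

Once the launch file is running, the resulting scan is published on the topic set by the scan_topic argument (/scan by default), ready for tools like gmapping or amcl.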
The script is available on GitHub in the repository "ros_myzharbot_robot":