Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Difference between depth image types

I am trying to understand the difference between depth image types and how to convert between them. I have two bag files, and I am displaying their depth image topics below. In each row, the image on the left is the RGB image, the middle image is generated via cv::imshow, and the right image is shown in rviz. The top sequence produces a point cloud correctly, but the bottom one does not and is corrupt. Why are they different, and how can I convert one into the other? Can anyone give me some insight on this? ![C:\fakepath\Screenshot from 2017-12-09 17-23-19.png](/upfiles/15128590367690248.png) ![C:\fakepath\Screenshot from 2017-12-09 17-24-26.png](/upfiles/15128590619909724.png)
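
For context, depth topics are usually published either as `16UC1` (millimetres) or `32FC1` (metres), and cv::imshow scales neither sensibly, which is one common reason the OpenCV and rviz views look different. Below is a minimal Python sketch that displays either encoding; the topic name and the 5 m display range are assumptions, not taken from the bags above:

    #!/usr/bin/env python
    # Minimal sketch: normalize a depth image (16UC1 or 32FC1) for display.
    # The topic name is an assumption; adjust it to match the bag being played.
    import rospy
    import cv2
    import numpy as np
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def callback(msg):
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        if msg.encoding == "16UC1":          # unsigned 16-bit, usually millimetres
            depth_m = depth.astype(np.float32) / 1000.0
        else:                                # assume 32FC1, already metres
            depth_m = np.nan_to_num(depth)
        vis = np.clip(depth_m / 5.0, 0.0, 1.0)   # 5 m chosen arbitrarily as the far limit
        cv2.imshow("depth", (vis * 255).astype(np.uint8))
        cv2.waitKey(1)

    rospy.init_node("depth_view_sketch")
    rospy.Subscriber("/camera/depth/image_raw", Image, callback)
    rospy.spin()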

What exactly does "depth" mean?

I am really confused about the meaning of the "depth" acquired by an RGB-D camera. I can think of two possible meanings: 1) the distance from a point in the 3-D world to the infrared camera center, or 2) the Z value of the 3-D world point in the camera frame. Which is right? If the second meaning is right, the depth values of all points on a wall parallel to the RGB-D camera should be equal, but in practice they are not. However, many 3-D vision books indicate that the second meaning is correct. I am really confused. Who can help me?
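
For reference, the two quantities are related through the pinhole model; here is a minimal worked sketch showing how a Z-style depth value at pixel (u, v) maps to the Euclidean distance to the camera center (the intrinsics and pixel values below are placeholders, not from any particular camera):

    # Minimal sketch: relation between Z-depth and Euclidean (ray) distance
    # at pixel (u, v). Intrinsics below are placeholders.
    import numpy as np

    fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics
    u, v, Z = 100, 200, 1.5                       # example pixel and Z-depth [m]

    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    ray_distance = np.sqrt(X**2 + Y**2 + Z**2)    # distance to the camera center

    print(Z, ray_distance)   # equal only at the principal point (u = cx, v = cy)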

Is it possible to get odometry only with a depth map and/or IR image?

I am about to buy an Intel RealSense D410 OEM camera module, which is identical to the D415 except that it is missing the RGB camera. Will I be able to calculate visual odometry with it? Which ROS packages should I use for that? Is it possible to register the RGB image from an external color camera to the depth image of the D410? Thank you for your help!

Can't visualize Pepper's PointCloud using PCL viewer

I have written a piece of code which subscribes to my Pepper robot's depth_registered/points topic and converts the sensor_msgs::PointCloud2 message to a pcl::PointCloud using pcl::fromROSMsg(). When I try to visualize the PCL point cloud via the PCL viewer I get a green/red/black screen (refer to [this](https://github.com/ros-naoqi/pepper_robot/issues/46)), although I am able to see the point cloud in RViz. I've tried converting the sensor_msgs::PointCloud2 message to both pcl::PointCloud&lt;pcl::PointXYZ&gt; and pcl::PointCloud&lt;pcl::PointXYZRGB&gt;; both give the same output. Also, the depth coordinates from Pepper that I'm printing are very small, on the order of 10^-35. Is this expected? ROS Kinetic, Ubuntu 16.04. Any help?
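
As a side note for sanity-checking values of that magnitude, the point coordinates can also be inspected straight from the PointCloud2 message without PCL; a minimal sketch (the topic name is an assumption based on the description above and may need Pepper's namespace prepended):

    #!/usr/bin/env python
    # Minimal sketch: print a few XYZ values from a PointCloud2 to check
    # their magnitude. The topic name is an assumption.
    import rospy
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs import point_cloud2

    def callback(msg):
        points = point_cloud2.read_points(msg, field_names=("x", "y", "z"),
                                          skip_nans=True)
        for i, (x, y, z) in enumerate(points):
            print(x, y, z)
            if i >= 9:          # only the first ten points
                break

    rospy.init_node("cloud_probe_sketch")
    rospy.Subscriber("/camera/depth_registered/points", PointCloud2, callback)
    rospy.spin()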

How to use depth_image_proc

I am running Gazebo 7, Ubuntu 16.04, and ROS Kinetic. I've attached a Kinect depth camera to the end effector of a UR5. I want to be able to take the data from image_raw and convert it into readable distances. I've read that depth_image_proc can do this, but I'm not sure how to use it. If anyone could provide a tutorial or any information on how to use depth_image_proc from scratch, that would be greatly appreciated.
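
For reference, depth_image_proc is a collection of nodelets (e.g. `depth_image_proc/convert_metric`, `depth_image_proc/point_cloud_xyz`) that are loaded into a nodelet manager rather than run as plain nodes. If the immediate goal is just readable per-pixel distances, a minimal Python sketch is shown below; the topic name is an assumption, and the encoding handling assumes the simulated camera publishes 32FC1 metres while a real Kinect publishes 16UC1 millimetres:

    #!/usr/bin/env python
    # Minimal sketch: read per-pixel distances from a depth topic.
    # The topic name is an assumption; adjust it to the camera on the UR5.
    import rospy
    import numpy as np
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def callback(msg):
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        if msg.encoding == "16UC1":
            depth = depth.astype(np.float32) / 1000.0   # millimetres -> metres
        h, w = depth.shape
        rospy.loginfo("distance at image centre: %.3f m", depth[h // 2, w // 2])

    rospy.init_node("depth_to_distance_sketch")
    rospy.Subscriber("/camera/depth/image_raw", Image, callback)
    rospy.spin()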

performance collapses upon subscribing to depth/image topic(s) on turtlebot

I have two PCs on a TurtleBot 2. One is running the turtlebot nav stack and openni (on Hydro). The other is running cmvision_3d to do 3D blob detection (on Jade); it's a Python package that combines cmvision with the depth info. It subscribes to the camera and depth topics from the Hydro PC (my launch file is below, so one can see which topics). These devices communicate with each other and with my client PC over a dedicated 5 GHz network. The problem is that performance is slow. When I start the ROS nav stack, `rostopic hz` shows the relevant camera topics running at about 30 Hz, but when I subscribe, the rate drops to < 1 Hz over the course of a minute or two. The drop-off is steep and immediate and makes the system unusable for the blob tracking I'm trying to do. `htop` on both machines shows no high CPU utilization, and `nload` shows no high network I/O (about 50% when subscribed). Looking for answers, I came across the discussion about setting `queue_size` (required) and `buff_size` in the Python subscriber call. cmvision_3d does not set these values, so I tried it, putting in high values (queue_size of 30, and buff_size of 30*2400000) to cover the size of the depth-registered image frames coming over the wire. (There are three subscriber calls in the cmvision_3d code.) This had absolutely no effect. I note that the cmvision code does not set buff_size either, but it's in C++ and I'm not sure whether that is a problem; after I post this I may try that. But I figured that even if I had borked the buff_size values I would have seen some kind of impact. I'm running out of things to try. Any ideas on what might be the issue? Thank you in advance. ----------- cmvision_3d launch file
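
For reference (and separate from the cmvision_3d launch file mentioned above), this is what setting `queue_size` and `buff_size` on a rospy subscriber looks like; a minimal sketch where the topic name is an assumption and the sizes mirror the values tried above:

    # Minimal sketch: rospy subscriber with queue_size and buff_size set.
    # The topic name is an assumption; the sizes mirror the values tried above.
    import rospy
    from sensor_msgs.msg import Image

    def callback(msg):
        pass  # blob-detection work would go here

    rospy.init_node("buff_size_sketch")
    rospy.Subscriber("/camera/depth_registered/image_raw", Image, callback,
                     queue_size=30, buff_size=30 * 2400000)
    rospy.spin()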

Create PointCloud2 in Matlab

I am looking to create a PointCloud2 message in MATLAB and publish it back to ROS via the Robotics System Toolbox. I am receiving a PointCloud2 message from my Intel RealSense camera and processing the RGB data for a lane detection algorithm. I then use the distance data to know how far away the line is, and also to integrate it into my local costmap for use in the navigation stack. Is there a quick and dirty way to create this message in MATLAB? There are functions called readRGB and readXYZ in MATLAB to extract a color image and the XYZ coordinates from the RGB-D PointCloud2, but I cannot get that information back into a PointCloud2 message. I am not very familiar with using point clouds and would appreciate any help!

Turtlebot depth image not showing in image_view

Hello, I want to try some OpenCV algorithms on depth images from the TurtleBot 1 (Create base). I want to save the images for later use, to avoid the overhead of modifying and remaking a node each time. We start our TurtleBot with turtlebot_bringup minimal.launch and turtlebot_bringup 3dsensor.launch. RViz is currently showing */camera/depth_registered/image_raw* (which doesn't look as detailed as I imagined after seeing the IR image), but not any */camera/depth/* topic; I tried `rostopic hz /camera/depth/image_raw` but it is not publishing. `rosrun image_view image_view image:=/camera/depth_registered/image_raw` says: > Unable to convert '16UC1' image to bgr8: '[16UC1] is not a color format. but [bgr8] is. The conversion does not make sense' Can I somehow easily save these images for later processing?
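
For reference, 16UC1 frames can be saved losslessly for later OpenCV work by keeping the passthrough encoding and writing 16-bit PNGs; a minimal sketch (the topic name follows the question, the output directory is an assumption):

    #!/usr/bin/env python
    # Minimal sketch: save 16UC1 depth frames as 16-bit PNGs for later processing.
    # The output directory is an assumption; adjust as needed.
    import rospy
    import cv2
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()
    counter = [0]

    def callback(msg):
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")  # uint16
        cv2.imwrite("/tmp/depth_%06d.png" % counter[0], depth)             # lossless 16-bit PNG
        counter[0] += 1

    rospy.init_node("depth_saver_sketch")
    rospy.Subscriber("/camera/depth_registered/image_raw", Image, callback)
    rospy.spin()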

different encodings of depth image for real & simulated kinect

Hi all, I've got a Gazebo simulation of a Kinect camera, where I'm subscribing to the ROS topic `/camera/depth/image_raw`, which is of type [sensor_msgs/Image](http://docs.ros.org/kinetic/api/sensor_msgs/html/msg/Image.html). I also have a real Kinect for Xbox 360 camera connected to my PC via USB, where I can also get the `/camera/depth/image_raw` topic. Unfortunately, these two messages are not exactly the same.

Message from the real camera:

    header:
      seq: 0
      stamp:
        secs: 1527861646
        nsecs: 997300811
      frame_id: "camera_depth_optical_frame"
    height: 480
    width: 640
    encoding: "16UC1"
    is_bigendian: 0
    step: 1280
    data: [0, 0, ... ]        # array is 614'400 long, i.e. 2 entries per pixel

Message from the simulated camera:

    header:
      seq: 0
      stamp:
        secs: 27
        nsecs: 854000000
      frame_id: "camera_link"
    height: 480
    width: 640
    encoding: "32FC1"
    is_bigendian: 0
    step: 2560
    data: [9, 51, 243, ... ]  # array is 1'228'800 long, i.e. 4 entries per pixel

The important difference is the encoding, which results in different lengths of the data array. I don't know why this is the case, but nevertheless I would like to have the same encoding for both the real and the simulated depth image (I don't really care which one). **How can I convert one encoding into the other?** Also, is it possible to convert the data array so that it contains height * width entries, where each entry corresponds to the distance for that pixel in mm (or metres)? I'm using the [freenect](https://github.com/OpenKinect/libfreenect) drivers for the real Kinect camera (`sudo apt-get install libfreenect-dev`, `sudo apt-get install ros-indigo-freenect-launch`), because the [Openni](http://wiki.ros.org/openni_camera) drivers didn't work for me, as I'm on an Ubuntu VM, which OpenNI has problems with. Thanks & Cheers!
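
For reference, one way to make the two pipelines match is to republish the simulated 32FC1 image (metres) with the real camera's 16UC1 layout (millimetres); a minimal sketch, where the topic names are assumptions and NaN pixels are mapped to 0:

    #!/usr/bin/env python
    # Minimal sketch: republish a 32FC1 depth image (metres) as 16UC1 (millimetres),
    # matching the encoding of the real Kinect. Topic names are assumptions.
    import rospy
    import numpy as np
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()
    rospy.init_node("depth_encoding_converter_sketch")
    pub = rospy.Publisher("/camera/depth/image_raw_16uc1", Image, queue_size=1)

    def callback(msg):
        depth_m = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")  # float32, metres
        depth_mm = np.nan_to_num(depth_m) * 1000.0                           # NaN -> 0, metres -> mm
        out = bridge.cv2_to_imgmsg(depth_mm.astype(np.uint16), encoding="16UC1")
        out.header = msg.header
        pub.publish(out)

    rospy.Subscriber("/camera/depth/image_raw", Image, callback)
    rospy.spin()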

Octomap voxel occupancy probabilities getting stuck

I am using octomap to maintain an occupancy grid built from depth camera point cloud measurements, and I've noticed that some nodes seem to get stuck in an occupied or free state and never get updated by subsequent point cloud measurements. I'm calling insertPointCloud to integrate my sensor measurements, which should integrate free space along the rays together with a hit at the end of each ray:

    _octree->insertPointCloud(pointCloud,
        octomap::point3d(_currentMAVROSPose.translation().x(),
                         _currentMAVROSPose.translation().y(),
                         _currentMAVROSPose.translation().z()));

where I have provided the position of the sensor to ensure that the rays traced by octomap are correct. I've made sure my parameters obey

    probMiss < occupancy threshold < probHit

with the values 0.2, 0.5, and 0.8. My understanding is that the log-odds probability of occupancy should be incremented with every subsequent measurement, so that cells that are marked as free or occupied in enough subsequent measurements eventually cross the threshold. But no matter how many point cloud measurements I insert, in some cases the voxel occupancy probability at the inserted point just stays constant. With some debugging, I found that it is neither below nor above the clamping range, yet it is not being updated. I do not have a BBX set for this octree, and there's no max range, so the point can't be out of range. Apparently the voxel containing the point may not be the endpoint of the ray due to discretization error, but no nearby voxels are updated either; it seems the rays are being ignored, leaving holes or dangling voxels in the map.
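
For reference on the expected behaviour, here is a minimal sketch of the per-measurement log-odds update octomap applies, using the probHit/probMiss/threshold values quoted above and clamping limits assumed (not taken from the setup above) to be roughly 0.12 and 0.97; each hit should move the value until it saturates at the clamping limit:

    # Minimal sketch: per-measurement log-odds occupancy update with clamping,
    # using the values from the question. Clamping limits are assumptions.
    import math

    def logodds(p):
        return math.log(p / (1.0 - p))

    L_HIT, L_MISS = logodds(0.8), logodds(0.2)      # probHit, probMiss
    L_MIN, L_MAX = logodds(0.12), logodds(0.97)     # assumed clamping thresholds
    L_OCC = logodds(0.5)                            # occupancy threshold

    L = 0.0                                         # unknown voxel
    for _ in range(5):                              # five consecutive "hit" measurements
        L = min(max(L + L_HIT, L_MIN), L_MAX)
        print("logodds %.2f -> occupied: %s" % (L, L > L_OCC))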

Localization using 3D OctoMap structure

Hello there, Firstly, I am quite a newbie in ROS and OctoMap, and even more so in PCL, and as I searched around ROS Answers most of the related questions were quite old. So, I have built a 3D OctoMap and now I am trying to do some localization with it! My task is to localize using depth and RGB cameras. My input for the depth data is a `sensor_msgs::PointCloud2`. I have read in `https://answers.ros.org/question/30821/octomap-for-collision-avoidance-using-point-clouds/` that the point clouds used for the 3D OctoMap's construction are not saved. Hence there is no way, as far as I can tell, to do feature extraction on the OctoMap and then match features between the OctoMap and the input data. So my question is: what would be the best approach to leverage the 3D OctoMap library to match my input against the OctoMap structure and hence do some form of localization? Any help would be greatly appreciated. Thanks

Depth image from rectified stereo images or disparity image

I need a depth image for my project (a MoveIt robotic arm with CV). The stereo_image_proc package provides rectified stereo images and a disparity image; I can view them in RViz and rqt_image_view. What I have tried so far:

1) As referenced in https://answers.ros.org/question/165836/depth-image-from-a-disparity-image/, I've tried using the disparity_image_proc package. With my launch file I can launch the nodelet and it doesn't produce any errors, but the /depth_image topic is empty.

2) I have tried writing my own Python node that subscribes to the pair of rectified images and outputs a depth image. Here's the code I've come up with so far:

    #!/usr/bin/env python
    from __future__ import print_function

    import roslib
    roslib.load_manifest('depth')
    import sys
    import rospy
    import cv2
    from std_msgs.msg import String
    from sensor_msgs.msg import Image
    import message_filters
    from cv_bridge import CvBridge, CvBridgeError

    class image_converter:

        def __init__(self):
            self.image_pub = rospy.Publisher("depth", Image, queue_size=5)
            self.bridge = CvBridge()
            print("Create subscribers for each topic")
            self.left = message_filters.Subscriber("/elp/left/image_rect_color", Image)
            self.right = message_filters.Subscriber("/elp/right/image_rect_color", Image)
            print("Create sync filter. Use exact or approximate as appropriate.")
            self.ts = message_filters.ApproximateTimeSynchronizer([self.left, self.right],
                                                                  queue_size=5, slop=0.1)
            print("Registering callback")
            self.ts.registerCallback(self.callback)

        def callback(self, left, right):
            try:
                cv_image_left = self.bridge.imgmsg_to_cv2(left, "rgb8")
                cv_image_right = self.bridge.imgmsg_to_cv2(right, "rgb8")
                cv_image_left_new = cv2.cvtColor(cv_image_left, cv2.COLOR_BGR2GRAY)
                cv_image_right_new = cv2.cvtColor(cv_image_right, cv2.COLOR_BGR2GRAY)
            except CvBridgeError as e:
                print(e)

            print("creating stereo image")
            stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
            depth = stereo.compute(cv_image_left_new, cv_image_right_new)
            print("showing depth image")
            cv2.imshow("Image window", depth)
            cv2.waitKey(3)

            try:
                self.image_pub.publish(self.bridge.cv2_to_imgmsg(depth, "16SC1"))
            except CvBridgeError as e:
                print(e)

    def main(args):
        rospy.init_node('image_converter', anonymous=True)
        ic = image_converter()
        try:
            rospy.spin()
        except KeyboardInterrupt:
            print("Shutting down")
        cv2.destroyAllWindows()

    if __name__ == '__main__':
        main(sys.argv)

It runs without errors, but there is still nothing on the /depth topic, and the image_view window hangs after a while. For this one I am not sure whether something is fundamentally wrong with the approach or I just used the wrong image formats. Any help and feedback would be very much appreciated. Another solution to this problem (obtaining a depth image from the stereo images OR the disparity image) would also be very welcome.

Update 1: Here's my rqt_graph with the camera driver (elp camera driver), stereo_image_proc and disparity_image_proc running: ![image description](https://i.imgur.com/8YYQlU8.png)
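
For reference, once stereo_image_proc is publishing a DisparityImage, depth can be computed directly from it as Z = f * T / d (focal length in pixels times baseline in metres over disparity in pixels); a minimal sketch, where the topic names follow the /elp stereo namespace used above but are otherwise assumptions:

    #!/usr/bin/env python
    # Minimal sketch: compute a 32FC1 depth image from a stereo_image_proc
    # DisparityImage using Z = f * T / d. Topic names are assumptions.
    import rospy
    import numpy as np
    from stereo_msgs.msg import DisparityImage
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()
    rospy.init_node("disparity_to_depth_sketch")
    pub = rospy.Publisher("/elp/depth", Image, queue_size=1)

    def callback(msg):
        disp = bridge.imgmsg_to_cv2(msg.image, desired_encoding="passthrough")  # float32 disparity
        with np.errstate(divide="ignore"):
            depth = msg.f * msg.T / disp          # focal length [px] * baseline [m] / disparity [px]
        depth[disp <= 0.0] = 0.0                  # zero/negative disparity has no valid depth
        out = bridge.cv2_to_imgmsg(depth.astype(np.float32), encoding="32FC1")
        out.header = msg.header
        pub.publish(out)

    rospy.Subscriber("/elp/disparity", DisparityImage, callback)
    rospy.spin()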

rviz not showing 16 bits-per-pixel, single channel depth images

Hi, I am playing a bag of 16 bits-per-pixel depth images and 8-bit RGB images. While playing the bag, I can extract both the RGB and the depth images into files, verifying that the data is OK (I used the thread [here](https://answers.ros.org/question/313164/create-a-bag-out-of-16-bit-images/) to create and extract the depth images). The depth images are stored in the bag by a Python script that uses the PIL package to read a PGM image from file into a PIL object, serialize the object data into a byte array (using io.BytesIO()), and save the byte array in PPM format. **In rviz I can view the rgb image, but I cannot view the depth image.** rviz issues an error message:

    [ WARN] [1548110467.614542827]: OGRE EXCEPTION(2:InvalidParametersException): Stream size does not match calculated image size in Image::loadRawData at /build/ogre-1.9-B6QkmW/ogre-1.9-1.9.0+dfsg1/OgreMain/src/OgreImage.cpp (line 283)
    [ERROR] [1548110467.614620078]: Error loading image: OGRE EXCEPTION(2:InvalidParametersException): Stream size does not match calculated image size in Image::loadRawData at /build/ogre-1.9-B6QkmW/ogre-1.9-1.9.0+dfsg1/OgreMain/src/OgreImage.cpp (line 283)

The Transport Hint in rviz is set to "raw" and I cannot change it. Thanks, Avner

rtabmap fails to run a bag of depth and rgb images

Hi, I am playing a bag of 16 bits-per-pixel depth images and 8-bit RGB images. While playing the bag, I can extract both the RGB and the depth images into files, to verify that the data is OK (I used the thread [here](https://answers.ros.org/question/313164/create-a-bag-out-of-16-bit-images/) to create and extract the depth images). The depth images are stored in the bag by a Python script that uses the PIL package to read a PGM image from file into a PIL object, serialize the object data into a byte array (using io.BytesIO()), and save the byte array in PPM format. To verify the data, I wanted to view the images first, before running rtabmap. In rviz I can view the rgb image, but I cannot view the depth image. According to [here](https://answers.ros.org/question/313323/rviz-not-showing-16-bits-per-pixel-single-channel-depth-images/):

> "RVIZ cannot view 16 bit images natively. You'll have to convert them to an 8 bit format."

So I assume that the images are correct and move on to rtabmap. I launch rtabmap using the launch file below:

    roslaunch ~/avner/PGR/vision/trunk/ros/catkin_ws/src/bpc_slam_rtabmap/launch/rgbd_slam.rgbd_dataset4.launch

But I cannot see the 3D map in rviz, and I do see error messages from rgbd_odometry:

    ...
    BEG getCvType
    terminate called after throwing an instance of 'cv_bridge::Exception'
      what():  Image is wrongly formed: height * step != size  or  385 * 1280 != 492817
    Aborted

The depth file is stored in the bag in PPM format:

- width: 640
- height: 385
- bytes per pixel: 2
- step: 640 * 2 = 1280
- data payload: 385 * 1280 = 492800
- file size: 492817, i.e. the payload plus the header

So the numbers in the error message are accurate: height * step (492800) really is not equal to size (492817), because the stored data still includes the header. Can someone tell me what I am doing wrong and how to satisfy rgbd_odometry? Thanks, Avner
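
For reference, the mismatch disappears if only the raw pixel payload (exactly height * step bytes) is copied into the Image message, rather than the whole file including its PGM/PPM text header; a minimal sketch, where the file name and endianness handling are assumptions and the geometry matches the numbers above:

    # Minimal sketch: fill a sensor_msgs/Image with exactly height * step bytes,
    # dropping the PGM/PPM text header. File name and endianness are assumptions.
    from sensor_msgs.msg import Image

    width, height, bytes_per_pixel = 640, 385, 2
    step = width * bytes_per_pixel

    with open("depth_frame.pgm", "rb") as f:
        raw = f.read()
    payload = raw[-height * step:]          # keep only the trailing pixel data

    msg = Image()
    msg.width, msg.height, msg.step = width, height, step
    msg.encoding = "16UC1"
    msg.is_bigendian = 0                    # binary PGM stores 16-bit samples big-endian; verify before use
    msg.data = payload
    assert len(msg.data) == msg.height * msg.step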

Running rtabmap on bag of rgb and depth images

Hi, I want to run rtabmap on a bag of rgb and depth images, with the following topics:

    rostopic list
    /camera/depth/camera_info
    /camera/depth/image
    /camera/rgb/camera_info
    /camera/rgb/image_color

I am using the well-known canned bag rgbd_dataset_freiburg1_xyz as a reference example. This bag has the following topics:

    rosbag info /mnt/hdd3/avnerm/avner/slam/data/rgbd_dataset_freiburg1_xyz/rgbd_dataset_freiburg1_xyz.bag
    ...
    topics: /camera/depth/camera_info
            /camera/depth/image
            /camera/rgb/camera_info
            /camera/rgb/image_color
            /cortex_marker_array
            /imu
            /tf

In my bag I don't have the /imu and /tf topics. To get things running, I added static transforms with 0 values for x, y, z, roll, pitch, yaw (i.e. no effect), except for the yaw argument, which I set to align the orientation of the 3D points with the image. If I just play my bag, I can see the images (rgb, depth) and the 3D points (via /voxel_cloud) in rviz, but in local frame coordinates. With these tf nodes rtabmap runs, but /rtabmap/mapData comes out incorrect: the 3D points are just laid on top of each other without forming a proper 3D model. I also created a bag rgbd_dataset_freiburg1_xyz_no_tf.bag without the topics /cortex_marker_array, /imu, /tf (replaced with static transforms). With this setting, rtabmap creates mapData incorrectly as well. So it looks like the launch file that I am using from [here](https://github.com/introlab/rtabmap_ros/blob/master/launch/tests/rgbdslam_datasets.launch) requires the topics that were filtered out above. My questions:

- Can rtabmap handle a bag with just rgb and depth images?
- Is there an example canned bag that only uses rgb and depth images?

Thanks, Avner

create point cloud2 from depth image

Hi, I have an Image message containing a depth image. I need to create a PointCloud2 from that message; how can I do it? I don't have the camera info. I couldn't find any documentation for this. Please help me.
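
For reference, going from a depth image to a point cloud requires the camera intrinsics (fx, fy, cx, cy), either from a camera_info topic or from the camera's calibration/datasheet; a minimal sketch with placeholder intrinsics (all numeric values and topic names below are assumptions):

    #!/usr/bin/env python
    # Minimal sketch: build a PointCloud2 from a 16UC1 depth image using the
    # pinhole model. Intrinsics and topic names below are placeholders.
    import rospy
    import numpy as np
    from sensor_msgs.msg import Image, PointCloud2
    from sensor_msgs import point_cloud2
    from cv_bridge import CvBridge

    FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed intrinsics

    bridge = CvBridge()
    rospy.init_node("depth_to_cloud_sketch")
    pub = rospy.Publisher("/cloud_from_depth", PointCloud2, queue_size=1)

    def callback(msg):
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        z = depth.astype(np.float32) / 1000.0          # assume millimetres -> metres
        v, u = np.indices(z.shape)
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        valid = z > 0
        points = np.column_stack((x[valid], y[valid], z[valid]))
        pub.publish(point_cloud2.create_cloud_xyz32(msg.header, points))

    rospy.Subscriber("/camera/depth/image_raw", Image, callback)
    rospy.spin()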

depth data retrieved by image_transport has confusing format

- OS: Ubuntu 16.04 LTS
- ROS distro: Kinetic

Hey, I am reading depth data from a Microsoft Kinect using the freenect package. My node:

    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <sensor_msgs/Image.h>
    #include <iostream>
    #include <typeinfo>

    void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
    {
      sensor_msgs::ImagePtr msg_c(new sensor_msgs::Image);
      msg_c->data = msg->data;
      msg_c->encoding = msg->encoding;
      msg_c->header = msg->header;
      msg_c->height = msg->height;
      msg_c->width = msg->width;
      msg_c->is_bigendian = msg->is_bigendian;
      msg_c->step = msg->step;

      std::cout << msg_c->encoding << std::endl;
      std::cout << msg_c->height << std::endl;
      std::cout << msg_c->width << std::endl;
      std::cout << msg_c->step << std::endl;
      std::cout << msg_c->data.size() << std::endl;
      std::cout << typeid(msg_c->data).name() << std::endl;
    }

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "depth");
      ros::NodeHandle nh;
      image_transport::ImageTransport it(nh);
      image_transport::Subscriber sub = it.subscribe("/camera/depth_registered/image_raw/", 1,
                                                     depthCallback,
                                                     image_transport::TransportHints("compressed"));
      //image_transport::Subscriber sub = it.subscribe("/camera/depth_registered/image_raw/", 1, depthCallback);
      ros::spin();
      return 0;
    }

- Encoding is **16UC1**, as expected from [http://wiki.ros.org/freenect_camera](http://wiki.ros.org/freenect_camera).
- Height and width are **480** and **640**, as expected.
- Step returns **640**.
- data.size() returns only **307200**; I expected **614400**, since 2 bytes should encode one value per pixel.
- typeid(msg_c->data).name() returns **St6vectorIhSaIhEE**. It being a vector is confirmed by trying to access an index > 307199 with .at(), which throws an 'out of bounds' exception.

**Question:** What is the content of my data field representing at this moment? Because it doesn't seem to be 16UC1-coded values.

How to get the correct X Y distance from depth camera

Hi, I would like to create a TF transform for an object detected in OpenCV using a depth camera. When I look at the code from the book "ROS Robotics By Example", they take the X, Y coordinates detected in the picture and put them directly into the TF transform. I am confused here: the X, Y coordinates are just pixel positions in the photo, so why can they be put into the TF directly? Thanks!

    # Find the circumcircle of the green ball and draw a blue outline around it
    (self.cf_u, self.cf_v), radius = cv2.minEnclosingCircle(ball_image)
    ball_center = (int(self.cf_u), int(self.cf_v))

    # This function builds the Crazyflie base_link tf transform and publishes it.
    def update_cf_transform(self, x, y, z):
        # send position as transform from the parent "kinect2_ir_optical_frame" to the
        # child "crazyflie/base_link" (described by crazyflie.urdf.xacro)
        self.pub_tf.sendTransform((x, y, z),
                                  tf.transformations.quaternion_from_euler(self.r, self.p, self.y),
                                  rospy.Time.now(),
                                  "crazyflie/base_link",
                                  "kinect2_ir_optical_frame")

[https://github.com/PacktPublishing/ROS-Robotics-By-Example/blob/master/Chapter_9_code/crazyflie_autonomous/scripts/detect_crazyflie.py](https://github.com/PacktPublishing/ROS-Robotics-By-Example/blob/master/Chapter_9_code/crazyflie_autonomous/scripts/detect_crazyflie.py)
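
For reference, pixel coordinates only become metric X/Y once they are back-projected through the camera intrinsics together with the depth value at that pixel; a minimal sketch of that step (the topic names, example pixel, and millimetre assumption are placeholders, not taken from the book's code):

    # Minimal sketch: back-project a detected pixel (u, v) plus its depth value
    # into metric X, Y, Z using the camera intrinsics. Topic names are assumptions.
    import rospy
    import numpy as np
    from sensor_msgs.msg import CameraInfo, Image
    from cv_bridge import CvBridge

    bridge = CvBridge()
    K = {}

    def info_cb(msg):
        K["fx"], K["fy"], K["cx"], K["cy"] = msg.K[0], msg.K[4], msg.K[2], msg.K[5]

    def depth_cb(msg):
        if not K:
            return
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        u, v = 256, 212                              # example: pixel from the blob detector
        Z = float(depth[v, u]) / 1000.0              # assume 16UC1 millimetres
        X = (u - K["cx"]) * Z / K["fx"]
        Y = (v - K["cy"]) * Z / K["fy"]
        rospy.loginfo("object at X=%.3f Y=%.3f Z=%.3f m", X, Y, Z)

    rospy.init_node("pixel_to_metric_sketch")
    rospy.Subscriber("/kinect2/sd/camera_info", CameraInfo, info_cb)
    rospy.Subscriber("/kinect2/sd/image_depth_rect", Image, depth_cb)
    rospy.spin()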

Are all the 16 bits of Realsense D415 depth message valid?

Hi! I am using an Intel RealSense D415 camera and I have got the depth message from a ROS topic. The depth value for each pixel in the RGB image is in the range 0~65535, so the depth value is obviously 16-bit. But I wonder whether all 16 bits are valid? I mean, how can I get the true depth value from the 16-bit value? Are some bits unused if I want to extract true depth values?
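
For reference, a minimal sketch of interpreting such a 16-bit pixel, assuming the common RealSense depth scale of 0.001 m per unit (this scale is an assumption and should be read from the device/driver rather than hard-coded):

    # Minimal sketch: interpret a 16UC1 RealSense depth pixel. The whole 16-bit
    # value is a depth count; multiplying by the depth scale gives metres.
    import numpy as np

    DEPTH_SCALE = 0.001                      # assumption; query the device for the real value

    raw = np.uint16(1523)                    # example raw pixel value from the depth image
    depth_m = float(raw) * DEPTH_SCALE
    print("%.3f m" % depth_m)                # 1.523 m; a value of 0 means "no valid depth"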

How to calculate the distance to object with orbbec astra camera

Hello, I am using an Orbbec Astra camera in order to get the distance to an already detected object. I have detected the object using OpenCV and the /brg/image_raw topic from the Astra camera. Now I want to estimate the distance to the object using the depth image, in order to send a command (move forward or stay) to the robot while I track the object. Could someone help me with that, or point me to material about distance estimation with the Orbbec camera?
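
For reference, one simple pattern is to read the depth under the detected object's pixel (a small median window helps with noise) and threshold it to decide whether to move or stay; a minimal sketch, where the topic names, example pixel, threshold, and millimetre encoding are all assumptions:

    # Minimal sketch: read the depth under the detected object's pixel and decide
    # whether to move forward or stay. Topic names, pixel, and threshold are assumptions.
    import rospy
    import numpy as np
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import Twist
    from cv_bridge import CvBridge

    bridge = CvBridge()
    rospy.init_node("follow_distance_sketch")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    STOP_DISTANCE = 0.8                     # metres; assumed threshold

    def depth_cb(msg):
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        u, v = 320, 240                     # example: centre pixel of the tracked object
        window = depth[v - 2:v + 3, u - 2:u + 3].astype(np.float32)
        window = window[window > 0]         # ignore invalid (zero) readings
        if window.size == 0:
            return
        dist = np.median(window) / 1000.0   # assume 16UC1 millimetres
        cmd = Twist()
        cmd.linear.x = 0.2 if dist > STOP_DISTANCE else 0.0
        cmd_pub.publish(cmd)

    rospy.Subscriber("/camera/depth/image_raw", Image, depth_cb)
    rospy.spin()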