Hi,
I want to publish disparity from a depth topic of type sensor_msgs/Image and a camera info topic of type sensor_msgs/CameraInfo, using the depth_image_proc/disparity nodelet. I then want to save the result as a PNG file.
When I launch the depth_image_proc/disparity nodelet and disparity_view, I get only an empty window. Here is the launch file:
The depth and camera info topics look good.
Any idea why I don't see anything in disparity_view? I want to save the disparity as a PNG file and use it with OpenCV in Python.
Thanks!
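For the saving step, here is a minimal C++ sketch (the PNG can then be loaded from Python with cv2.imread). It assumes the nodelet's output is a stereo_msgs/DisparityImage and that the topic name below is only illustrative (use whatever your launch file remaps it to); the 32-bit float disparities are scaled to 8 bits before writing, since PNG cannot store raw floats:

#include <ros/ros.h>
#include <stereo_msgs/DisparityImage.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

// Topic name is an assumption; adjust to your remapping.
void disparityCallback(const stereo_msgs::DisparityImageConstPtr& msg)
{
  // The actual disparity values live in msg->image (encoding 32FC1).
  cv::Mat disp = cv_bridge::toCvCopy(msg->image)->image;

  double minVal, maxVal;
  cv::minMaxLoc(disp, &minVal, &maxVal);
  if (maxVal <= minVal)
    return;  // nothing useful to save yet

  // Scale to 0..255 for an 8-bit PNG.
  cv::Mat disp8;
  disp.convertTo(disp8, CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));

  cv::imwrite("disparity.png", disp8);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "disparity_saver");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth/disparity", 1, disparityCallback);
  ros::spin();
  return 0;
}

If disparity_view stays empty, running `rostopic hz` on the disparity topic first tells you whether the nodelet is publishing at all.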
↧
publish disparity from depth - Kinect
↧
why RGB and depth Image synchronization not working?
Why does the first piece of code below work, but not the second one further down?
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>
#include <boost/bind.hpp>
#include <string>
//names that will appear at the top of each window
static const std::string windowName = "Original Image";
using namespace std;
using namespace sensor_msgs;
using namespace message_filters;
void callback(const ImageConstPtr& image_rgb, const CameraInfoConstPtr& cam_info)
{
// Solve all of perception here...
cv::Mat image_color = cv_bridge::toCvCopy(image_rgb)->image;
cv::imshow(windowName,image_color);
cv::waitKey(3);
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "vision_node");
ros::NodeHandle nh;
message_filters::Subscriber<Image> rgb_sub(nh, "/camera/rgb/image_color", 1);
message_filters::Subscriber<CameraInfo> info_color_sub(nh, "/camera/rgb/camera_info", 1);
TimeSynchronizer<Image, CameraInfo> sync(rgb_sub, info_color_sub, 10);
sync.registerCallback(boost::bind(&callback, _1, _2));
ros::spin();
return 0;
}
With the code above I am able to see the RGB image.
BUT
the version below gives no result at all.
I'm trying to synchronize both images and then separate them inside the callback, then process the RGB image to find the object. I want to find the depth of a particular pixel (u, v), but it does not work when I try to subscribe to both the RGB and depth images.
void callback(const ImageConstPtr& image_rgb, const ImageConstPtr& image_depth_source)
{
// Solve all of perception here...
cv::Mat image_color = cv_bridge::toCvCopy(image_rgb)->image;
cv::Mat image_depth = cv_bridge::toCvCopy(image_depth_source)->image;
cv::imshow(windowName1,image_color);
cv::imshow(windowName2,image_depth);
cv::waitKey(3);
}
int main(int argc, char** argv) {
ros::init(argc, argv, "vision_node");
ros::NodeHandle nh;
message_filters::Subscriber<Image> rgb_sub(nh, "/camera/rgb/image_color", 1);
message_filters::Subscriber<Image> depth_sub(nh, "/camera/depth_registered/image_raw", 1);
TimeSynchronizer<Image, Image> sync(rgb_sub, depth_sub, 10);
sync.registerCallback(boost::bind(&callback, _1, _2));
ros::spin();
return 0;
}
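For what it is worth, RGB and depth frames from the OpenNI driver rarely carry exactly identical timestamps, so an exact-time TimeSynchronizer may simply never fire for that pair. Below is a minimal sketch of the usual workaround with message_filters' approximate-time policy, reusing the topic names from the snippet above:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <boost/bind.hpp>

using namespace sensor_msgs;

typedef message_filters::sync_policies::ApproximateTime<Image, Image> SyncPolicy;

void callback(const ImageConstPtr& image_rgb, const ImageConstPtr& image_depth)
{
  // A matched RGB/depth pair arrives here even if the stamps differ slightly.
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "vision_node");
  ros::NodeHandle nh;
  message_filters::Subscriber<Image> rgb_sub(nh, "/camera/rgb/image_color", 1);
  message_filters::Subscriber<Image> depth_sub(nh, "/camera/depth_registered/image_raw", 1);
  // Queue size 10; the policy pairs messages whose stamps are close, not identical.
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), rgb_sub, depth_sub);
  sync.registerCallback(boost::bind(&callback, _1, _2));
  ros::spin();
  return 0;
}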
↧
↧
Any suggestions for combining scans of different frequencies?
I'm combining two scans, one extracted from a depth camera and the other from a 2D LIDAR, but I'm getting some jitter from the depth camera's contribution. I suppose it's to be expected when combining a 30 Hz depth scan with a 40 Hz LIDAR, but I was wondering if anything could be done better. Below is a qualitative example of the jitter artifact in RViz: red is the LIDAR, green the depth camera, yellow the combined scan. Note how the combined scan lags with the camera, while the LIDAR remains quite static.

The example you see here is a Turtlebot simulation in Gazebo, with an Asus pointing forward and a gpu_lidar pointing backwards, using this small PR of ira_laser_tools: [#7](https://github.com/iralabdisco/ira_laser_tools/pull/7). I opened issue [#8](https://github.com/iralabdisco/ira_laser_tools/issues/8) about this, but thought it could be general enough for a good old approach question here.
↧
Intel RealSense depth camera on Ubuntu Arm
Hi all,
I'm going to develop a package with the [Intel RealSense depth camera](http://click.intel.com/intelrrealsensetm-developer-kit-featuring-sr300.html), but before I get the hardware I would like to know whether it is possible to use it with the Ubuntu ARM version of ROS.
Thank you so much!!
↧
Kinect accuracy in depth
Hi,
I use a Kinect for Xbox 360. What is the depth accuracy at the different ranges?
Thanks for any reply.
↧
↧
Openni or Freenect?
Hi guys,
I'm quite new to ROS and would like to know your general opinion about which one you would use, or which one you are using nowadays and why. I would like to use a Kinect to do SLAM and subscribe to topics related to the measured depth.
Thanks everyone.
↧
rviz does not display the depth cloud
I have a bag file recorded with a Kinect camera. RViz displays the RGB correctly, but it no longer displays the depth cloud. It used to work (with the same bag file) using openni_camera as the fixed frame and the raw configuration, but it is not working any more.
↧
Turtlebot depth image not showing in image_view
Hello,
I want to try some OpenCV algorithms with depth images from the Turtlebot 1 (Create base). I want to save them for later use, because of the complexity of modifying and remaking a node.
We start our Turtlebot with turtlebot_bringup minimal.launch and turtlebot_bringup 3dsensor.launch. RViz currently shows */camera/depth_registered/image_raw* (which doesn't look as detailed as I imagined from the IR image), but not any */camera/depth/* topic. I tried `rostopic hz /camera/depth/image_raw`, but nothing is being published.
`rosrun image_view image_view image:=/camera/depth_registered/image_raw` says:
> Unable to convert '16UC1' image to bgr8: '[16UC1] is not a color format. but [bgr8] is. The conversion does not make sense'
Can I somehow easily save these images for later processing?
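On the saving question, one option (a sketch, not Turtlebot-specific; the topic name is the one above) is to copy the image with cv_bridge in its native 16UC1 encoding and write a 16-bit PNG, which keeps the raw millimetre values for later OpenCV processing instead of the bgr8 conversion image_view attempts:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

static int counter = 0;

void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Copy using the image's own encoding: keeps the original 16UC1 depth in millimetres.
  cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(msg, msg->encoding);

  char name[64];
  std::sprintf(name, "depth_%05d.png", counter++);
  cv::imwrite(name, cv_ptr->image);  // PNG supports 16-bit single-channel images
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_saver");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth_registered/image_raw", 1, depthCallback);
  ros::spin();
  return 0;
}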
↧
subscribing rgb and depth simutaneously very slow
I am subscribing to the RGB and depth images published by openni_launch at the same time.
However, my program becomes very slow and can only run at 15 Hz.
How can I solve this problem?
↧
↧
get a depth image using a kinect and openCV.
I have code in C++ that stores a Kinect depth image. I wonder if there is any way to get an image in which I can limit the depth of the things captured; for example, I only want the things that are at most 1 meter away from the Kinect sensor.
I found the code in a thread here and edited it to try to get the image; the code is below.
----------
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/image_encodings.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <vector>
#include <cstdio>
using namespace std;
unsigned int cnt = 0;
void RetornoImagen(const sensor_msgs::ImageConstPtr& msg_depth )
{
// convert to an OpenCV image type
cv_bridge::CvImagePtr img_ptr_depth;
try{
img_ptr_depth = cv_bridge::toCvCopy(*msg_depth, sensor_msgs::image_encodings::TYPE_32FC1);
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
double minVal, maxVal;
cv::Mat &mat = img_ptr_depth->image;
cv::Mat img2;
cv::imshow("windowName", mat);
// find minimum and maximum depth values in the image
cv::minMaxLoc(mat, &minVal, &maxVal, NULL, NULL);
// scale to the full 16-bit range so the result can be written as a 16-bit PNG
// (cv::imwrite cannot store 32-bit float images)
mat.convertTo(img2, CV_16U, 65535.0 / (maxVal - minVal), -minVal * 65535.0 / (maxVal - minVal));
cv::imshow("ventana2", img2);
cv::waitKey(3);
char file1[100];
cnt++;
sprintf(file1, "%04d_depth.png", cnt);
// PNG compression parameters (0 = no compression)
std::vector<int> pp;
pp.push_back(CV_IMWRITE_PNG_COMPRESSION);
pp.push_back(0);
// save the image to disk, in the directory where the node is run
cv::imwrite(file1, img2, pp);
ROS_INFO_STREAM("Minimum depth: " << minVal);
ROS_INFO_STREAM("Maximum depth: " << maxVal);
}
int main(int argc, char **argv)
{
char a;
//initialize the ROS system and become a node.
ros::init(argc, argv, "guardar_imagen");
ros::NodeHandle nh;
//image_transport::ImageTransport it(nh);
ros::Subscriber image_sub_;
ros::Rate rate(1);
// create windows to display the content
cvNamedWindow("windowName", CV_WINDOW_AUTOSIZE);
cvNamedWindow("ventana2", CV_WINDOW_AUTOSIZE);
cvNamedWindow("ventana3", CV_WINDOW_AUTOSIZE);
image_sub_ = nh.subscribe("/camera/depth_registered/hw_registered/image_rect_raw", 1, &RetornoImagen);
cvStartWindowThread();
while(ros::ok())
{
char c;
ROS_INFO_STREAM("\nIngrese 'a' para guardar\n");
cin.get(c);
cin.ignore();
c = tolower(c);
ROS_INFO_STREAM("You entered " << c << "\n");
if( c == 'a' )
{
unsigned int cnt_init = cnt;
while( cnt - cnt_init < 1 )
{
ros::spinOnce();
}
}
else break;
}
ROS_INFO_STREAM("Cerrando nodo\nClosing node\n");
//Destruimos las ventanas
cvDestroyWindow("windowName");
cvDestroyWindow("ventana2");
cvDestroyWindow("ventana3");
return 0;
}
thank you.
----------
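On the 1-metre part of the question, a small hedged sketch: assuming the depth arrives as 32FC1 in metres (as in the callback above), everything beyond the chosen distance can simply be zeroed out with a mask (on a 16UC1 topic the threshold would be 1000 for millimetres instead):

#include <opencv2/core/core.hpp>

// Returns a copy of a 32FC1 depth image (metres) with everything farther
// than maxDist set to zero; invalid (NaN) readings are zeroed as well.
cv::Mat keepWithin(const cv::Mat& depth, float maxDist = 1.0f)
{
  cv::Mat near = depth.clone();
  cv::patchNaNs(near, 0);          // NaN marks pixels the sensor could not measure
  near.setTo(0, near > maxDist);   // the comparison yields an 8-bit mask
  return near;
}

In the callback above, `cv::imshow("ventana3", keepWithin(mat));` would then display only what lies within one metre.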
↧
Save images kinect - 30fps
I have to record a dataset using the Kinect; it should contain RGB images and depth images.
Using this code:
char filename[80];
int i=0,j=0;
void imageCallbackdepth(const sensor_msgs::ImageConstPtr& msg)
{
// convert message from ROS to openCV
cv_bridge::CvImagePtr cv_ptr;
try
{
cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_32FC1);
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("cv_bridge exception: %s", e.what());
return;
}
sprintf(filename,"depth_%d.png", i++);
cv::imwrite(filename,cv_ptr->image);
}
void imageCallbackrgb(const sensor_msgs::ImageConstPtr& msg)
{
//The same for RGB
}
I have a couple of questions:
How can I make sure that I'm saving images at 30 fps? Am I missing any frames? How can I change this rate?
Does saving RGB and depth at the same time, and adding another sensor (a stereo camera), affect this?
Is my program correct? Is this the right way to do it?
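On the frame-rate question, a small hedged sketch for measuring the rate the callback actually achieves (PNG compression and disk I/O can easily drag it below the camera's 30 fps); it simply counts callbacks per second of wall-clock time, and comparing the result with `rostopic hz` on the camera topic shows whether frames are being dropped:

#include <ros/ros.h>

// Call this at the end of each image callback to report the achieved rate.
void reportRate()
{
  static int frames = 0;
  static ros::WallTime last = ros::WallTime::now();

  ++frames;
  ros::WallTime now = ros::WallTime::now();
  if ((now - last).toSec() >= 1.0)
  {
    ROS_INFO("Saved %d frames in the last %.2f s", frames, (now - last).toSec());
    frames = 0;
    last = now;
  }
}

Calling reportRate() at the end of imageCallbackdepth() (and of the RGB callback) prints the achieved rate roughly once per second.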
↧
Depth map from bumblebee2 camera?
Hi,
I am trying to get the coordinates in space (X,Y,Z) of an object. For this, I am detecting the object using the image from the bumblebee's left camera (from here I can obtain the X and Y coordinates), and would like to obtain the Z-coordinate. I have the code for doing this with an Asus Xtion, but this uses a depth map. Can I get a depth map as well from my bumblebee2 camera? Or should I just use the disparity image I get from stereo_image_proc?
Thanks!
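For reference, the disparity image from stereo_image_proc already contains what is needed: stereo_msgs/DisparityImage carries the focal length f and baseline T, and depth follows as Z = f * T / d for each valid disparity d. A hedged sketch (the field names are from the DisparityImage message; the pixel is whichever one your detector found in the left image):

#include <stereo_msgs/DisparityImage.h>
#include <cv_bridge/cv_bridge.h>

// Depth (metres) at pixel (u, v) from a DisparityImage; returns a negative
// value if the disparity there is invalid (below min_disparity).
double depthAtPixel(const stereo_msgs::DisparityImage& msg, int u, int v)
{
  cv::Mat disp = cv_bridge::toCvCopy(msg.image)->image;  // 32FC1 disparity
  float d = disp.at<float>(v, u);                        // row = v, column = u
  if (d < msg.min_disparity)
    return -1.0;
  return msg.f * msg.T / d;   // Z = focal_length * baseline / disparity
}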
↧
How to obtain the depth coordinate using two widely separated cameras: ROS, Python, Baxter
Hi,
So basically I want to obtain the depth (z) coordinate of a certain object. I figured that since I am using two cameras, some triangulation could be done.
I also looked into the disparity map approach, but my cameras are far apart and the resulting disparity map is rubbish, so I need another method.
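A hedged sketch of the triangulation idea, assuming both cameras are calibrated, their 3x4 projection matrices P1 and P2 (expressed in a common frame, e.g. from the CameraInfo topics plus the known extrinsics between the two Baxter cameras) are available, and the object has been detected at pixel p1 in one image and p2 in the other:

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>

// Triangulate one point seen by two calibrated cameras; returns (X, Y, Z)
// in the frame the projection matrices are expressed in.
cv::Point3d triangulate(const cv::Mat& P1, const cv::Mat& P2,
                        const cv::Point2d& p1, const cv::Point2d& p2)
{
  cv::Mat pts1 = (cv::Mat_<double>(2, 1) << p1.x, p1.y);
  cv::Mat pts2 = (cv::Mat_<double>(2, 1) << p2.x, p2.y);

  cv::Mat point4d;
  cv::triangulatePoints(P1, P2, pts1, pts2, point4d);  // homogeneous 4x1 result
  point4d.convertTo(point4d, CV_64F);                  // be robust to the output depth

  double w = point4d.at<double>(3, 0);
  return cv::Point3d(point4d.at<double>(0, 0) / w,
                     point4d.at<double>(1, 0) / w,
                     point4d.at<double>(2, 0) / w);
}

The same function is available from Python as cv2.triangulatePoints.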
↧
↧
Is this the right way to obtain depth information?
Hello,
I am trying to find the closest object to a zed camera. I am using the package: [zed_wrapper](http://wiki.ros.org/zed-ros-wrapper). I need to find the depth (z value) of each pixel in the scene.
My understanding is that the topic:
> /camera/depth/image_rect_color
will provide me the required information. The command `rostopic type /camera/depth/image_rect_color` returned
sensor_msgs/Image
which I believe is the msg type for depth images. The command `rosmsg show sensor_msgs/Image` returned
std_msgs/Header header
uint32 seq
time stamp
string frame_id
uint32 height
uint32 width
string encoding
uint8 is_bigendian
uint32 step
uint8[] data
I believe that the last field:
> uint8[] data
gives the actual info.
I believe that at this point I should write a subscriber. Do I have enough information to write one? Is there any other way I can get this information without writing a subscriber? Any help is appreciated. Thank you.
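Yes, that is enough to write a subscriber. A minimal hedged sketch, assuming the topic really is 32FC1 with depth in metres, and letting cv_bridge unpack the `data` field instead of decoding it by hand:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>

void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // 32FC1: one float per pixel, depth in metres (NaN where invalid).
  cv::Mat depth = cv_bridge::toCvCopy(msg, "32FC1")->image;

  float center = depth.at<float>(msg->height / 2, msg->width / 2);
  ROS_INFO("Depth at image centre: %.3f m", center);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "zed_depth_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth/image_rect_color", 1, depthCallback);
  ros::spin();
  return 0;
}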
↧
How to interpret data matrix in sensor_msgs/Image?
I am subscribing to the topic `/camera/depth/image_rect_color` in the package [zed-ros-wrapper](http://wiki.ros.org/zed-ros-wrapper) to obtain depth information using a ZED camera. Right now my subscriber looks at a random pixel and reads the depth information. Is there any way to find out which pixel corresponds to which element (index) of the matrix?
Also, if I run `rostopic echo /camera/depth/image_rect_color`, I find that the encoding is `32FC1`, but the data matrix contains integers (I believe `uint8` denotes an integer type), and I haven't seen any value above 255, so the matrix looks like `8UC1`. What am I missing here, and what does a `32FC1` matrix look like?
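On the second part: what `rostopic echo` prints are the raw bytes. In a 32FC1 image each pixel is one 32-bit float stored as four consecutive `uint8` entries of `data`, which is why every printed element is between 0 and 255. A hedged sketch of reassembling the float for pixel (u, v), assuming `is_bigendian` is 0 as it usually is on x86:

#include <cstring>
#include <sensor_msgs/Image.h>

// Depth value at column u, row v of a 32FC1 image.
float depthAt(const sensor_msgs::Image& img, int u, int v)
{
  // Each row occupies `step` bytes; each 32FC1 pixel occupies 4 bytes.
  size_t index = v * img.step + u * 4;

  float depth;
  std::memcpy(&depth, &img.data[index], sizeof(float));  // assumes little-endian data
  return depth;
}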
↧
Image array size is not consistent with the message type documentation
I am trying to interpret data from a message of the type `sensor_msgs/Image`. When I run `rostopic echo ...` for the topic, I get the following, along with the data array:
height: 720
width: 1280
encoding: 32FC1
is_bigendian: 0
step: 5120
The [documentation](http://docs.ros.org/api/sensor_msgs/html/msg/Image.html) says that the size of the array should be `step` times `rows` (i.e. height). This gives 720 × 5120 = 3686400, which makes sense because 4 uint8 elements of the data array are combined to get one 32-bit float value (see the encoding): 4 × 1280 = 5120, which is the step size.
To confirm the size of the data array, I edited my subscriber (added at the end) to read the array elements at indices 3686400 to 4686400. I expected an error at the very first one, but it threw a segmentation fault only after index 3690487. How can this be explained? What am I doing wrong? All the output values at these indices are zero.
Right now this is how my subscriber looks:
#include <iostream>
#include "ros/ros.h"
//#include "std_msgs/String.h"
#include "sensor_msgs/Image.h"
using namespace std;
void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
{
for(int i=3686400;i<4686400;i++)
{
//ROS_INFO("Height Info: [%d]", msg->height);
ROS_INFO("Depth Info: [%d]", msg->data[i]);
cout << "i VALUE: " << i << endl;
}
}
int main(int argc, char **argv)
{
//cout << "Hello world!" << endl;
ros::init(argc, argv, "zed_depth_subscriber_node");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("/camera/depth/image_rect_color",1000, depthCallback);
ros::spin ();
return 0;
}
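A hedged note on the observed behaviour: `msg->data` is a std::vector<uint8_t>, and `operator[]` performs no bounds checking, so reading past the end is undefined behaviour; the process only faults once it wanders off memory the allocator happens to own, which is why the crash came at index 3690487 rather than 3686400. A small sketch that checks the real size and reads one pixel safely (the pixel indices are illustrative):

#include <cstring>
#include "ros/ros.h"
#include "sensor_msgs/Image.h"

void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  // The documented size: one row is `step` bytes, and there are `height` rows.
  ROS_INFO("data.size() = %zu, step * height = %u",
           msg->data.size(), msg->step * msg->height);

  int u = 640, v = 360;                  // column, row (illustrative pixel)
  size_t index = v * msg->step + u * 4;  // 4 bytes per 32FC1 pixel
  if (index + sizeof(float) <= msg->data.size())
  {
    float depth;
    std::memcpy(&depth, &msg->data[index], sizeof(float));
    ROS_INFO("Depth at (%d, %d): %f", u, v, depth);
  }
}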
↧
Get depth from Kinect sensor in gazebo simulator
I'm trying to find a specific pixel's depth from the `/depth/image_raw` topic published by the Kinect sensor mounted on the **Turtlebot** robot.
This is what I get: www.uppic.com/do.php?img=97090 (sorry, I don't have enough karma to upload the image).
This is what I see in **rviz**: www.uppic.com/do.php?img=97092
How can I fix this to get the depth of pixels?
↧
↧
Depth map estimation from monocular camera
Is anyone aware of ROS nodes implementing an algorithm to estimate a depth map from a monocular camera, like described in [this paper](https://papers.nips.cc/paper/2921-learning-depth-from-single-monocular-images.pdf)?
I've found a little published code, such as [this](https://bitbucket.org/fayao/dcnf-fcsp), [this](http://make3d.cs.cornell.edu/code.html), and [this](https://github.com/asousa/DepthPrediction), but it's all buggy, undocumented Matlab code.
↧
Kinect invalid flag less than 0.4 m
Hello,
I learned from [this](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6521938) that the Kinect sets an invalid flag for depth values at distances of less than 40 cm. My question is: using ROS, how can I read this invalid flag within the blind zone (less than 40 cm)?
I wrote a node that reads from /camera/depth/points as a PointCloud2, calculates the obstacle distance, and publishes the result to another topic. But when a moving object suddenly comes within 40 cm of the Kinect, it cannot detect the obstacle.
Can anyone please help me regarding this?
thank you.
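A hedged sketch of what the invalid flag looks like in practice: for returns the sensor cannot measure (including anything closer than the minimum range), the OpenNI driver publishes NaN coordinates in /camera/depth/points, so counting non-finite points shows how much of the view falls in the blind zone:

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);

  int invalid = 0;
  for (size_t i = 0; i < cloud.points.size(); ++i)
  {
    const pcl::PointXYZ& p = cloud.points[i];
    if (p.z != p.z)   // NaN != NaN: marks a return the sensor could not measure
      ++invalid;
  }

  ROS_INFO("Invalid (NaN) points this frame: %d of %zu", invalid, cloud.points.size());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "blind_zone_monitor");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
  ros::spin();
  return 0;
}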
↧
ros indigo / xtion : getting depth from pixel coordinates
Hi,
Given (x, y) pixel coordinates in camera/rgb/image_raw, I am trying to extract the corresponding depth from camera/depth_registered/image_raw.
I start the xtion via "roslaunch openni2_launch openni2"
Running "rosrun rqt_reconfigure rqt_reconfigure" and checking camera/driver confirms depth_registration is active.
Also, displaying the /camera/depth_registered/points in rviz shows something that looks nice.
It seems to me that when using an Xtion Pro Live there is no need for calibration, so I did not do any.
It also seems to me that if registration is done, coordinates in the depth image and in the RGB image are the same, so I do the following in Python:
in the callback for the rgb image (bridge is an instance of CvBridge):
cv_image = bridge.imgmsg_to_cv2(image, image.encoding)
# code for getting x,y coordinate of a pixel of interest in the image
I run some code that displays the image and shows the pixel x,y:
cv2.circle(cv_image,(x,y),2,(255,0,0),-1)
this confirms x,y are correct, in this case at the center of a colored ball in front of the camera
callback for depth image:
image_cv = bridge.imgmsg_to_cv2(depth_image, depth_image.encoding)
depth_image = np.squeeze(np.array(image_cv, dtype=np.float32))
depth = float(depth_image[x][y])
but I just get "0.0" for the depth, no matter where the ball is in the field of vision.
Anything I am doing incorrectly ?
↧