Incorrect generated mesh (due to number of scans less than min_weight parameter) #30
Comments
Have you been able to generate meshes using your system previously? If the preview of the reconstruction looks OK, then that means the whole TSDF integration process is working correctly and that the issue is occurring somewhere between downloading the voxel data from the GPU and saving the result. The marching cubes algorithm is implemented in […]. Based on your screenshot of the intermediate data it looks like you're using very small voxels.
As a side note, the marching cubes implementation is currently CPU-parallelized rather than GPU-parallelized, so it'll take a long time to mesh a higher-resolution volume.
Yes, I have been able to mesh the bunny demo and also a small region of a different hexagonal object during an earlier test.
Suspected as much, but thank you for confirming it. For now I will assume the problem does not lie in the TSDF integration process and focus on the marching cubes step.
I am indeed using small voxels (1 mm). Some early experimentation with this parameter did not seem to have too much of an influence on the computation time or memory usage. I will have a more thorough look at this and its influence.
I will try this now and post an update when I know more.
Ah yes, I saw that when I investigated why the process was killed when I used an even larger TSDF volume (I ran out of memory). The mesh shown above did not feel like it took significantly longer than the bunny demo. I will measure this to make sure.
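A rough back-of-the-envelope sketch of why small voxels get expensive quickly, assuming a dense grid with about 4 bytes per voxel (for example a 2-byte TSDF value plus a 2-byte weight; yak's actual per-voxel layout and default volume dimensions may differ):

```cpp
#include <cstdint>
#include <iostream>

int main()
{
  // Hypothetical volume: 0.5 m per side at 1 mm voxels.
  const double volume_side_m = 0.5;
  const double voxel_size_m = 0.001;
  const std::uint64_t voxels_per_side =
      static_cast<std::uint64_t>(volume_side_m / voxel_size_m + 0.5);
  const std::uint64_t voxel_count =
      voxels_per_side * voxels_per_side * voxels_per_side;

  // Assumed 4 bytes per voxel (2-byte TSDF + 2-byte weight).
  const double bytes_per_voxel = 4.0;
  const double gib = voxel_count * bytes_per_voxel / (1024.0 * 1024.0 * 1024.0);

  std::cout << voxel_count << " voxels, ~" << gib << " GiB of voxel data\n";
  // 500^3 voxels * 4 B is roughly 0.47 GiB; doubling the side length at the
  // same voxel size multiplies that by 8, so the volume can outgrow GPU
  // memory very quickly.
  return 0;
}
```

Under these assumptions, each doubling of the volume's side length at a fixed voxel size multiplies the footprint by eight, which is consistent with the process being killed once the volume was enlarged.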
Changing the TSDF volume size or volume resolution does not change the generated mesh. The only visible changes are due to the larger voxel size itself, resulting in 'pixelated' edges of the mesh.
In trying to understand how the shape of the mesh was formed, I looked at the mesh together with the point cloud data used as input. It does seem the mesh is part of the box and is influenced by how the different pictures overlap. A picture to try and illustrate this: There is no mesh generated in parts where the point clouds do not overlap at all. Could this be a lack of data resulting in low voxel weights? Is there a way to lower this threshold? I will try to improve the extrinsic calibration as well as take more pictures to increase the amount of data.
That's a very helpful picture! Looks like this is caused by the marching cubes function not meshing voxels with low weights. There's a particular piece of code in the marching cubes implementation that skips voxels whose weight falls below a minimum threshold.
Because of this, the only surface that gets meshed in your reconstruction is the area of the box that appears in at least 4 scans. The bunny demo and our previous real-world applications use sensors with higher framerates, so the reconstruction is composed of possibly hundreds of scans and the weights of observed voxels are much greater than this threshold, which is why it wasn't causing obvious problems until now. I'll post an issue on the yak repository.
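A minimal sketch of the kind of weight gate being described (hypothetical code, not the actual yak source; the real check and voxel layout live in the library's marching cubes implementation):

```cpp
#include <cstdint>

// Hypothetical voxel record: a truncated signed distance plus an observation
// weight that grows each time a scan updates the voxel.
struct Voxel
{
  float tsdf;
  std::uint16_t weight;
};

// Illustrative gate applied before a voxel contributes to marching cubes.
// With min_weight = 4, any voxel seen in fewer than 4 scans is treated as
// unobserved, so only surfaces covered by at least 4 scans get meshed.
bool meshable(const Voxel& v, std::uint16_t min_weight = 4)
{
  return v.weight >= min_weight;
}
```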
We also observed what we believe is the result of a bad extrinsic calibration: point clouds / depth images containing 'surfaces' that should all be coplanar are actually at different angles to each other (i.e. only intersecting at certain points rather than lying in the same plane). That probably exacerbates the effect you're describing, as it causes even fewer observations to be counted towards the weight of certain voxels (observations that should land in the same voxel are now spread over a few neighbouring voxels).
The workaround did the trick indeed; here is a screenshot of the resulting mesh with it applied: The extrinsic calibration indeed did not help, so that is still something to address, as its effects do show up in the resulting mesh. Is it still better to pursue more overlapping data, i.e. take a lot more captures? I created this dataset mostly just to have a minimal input for yak, but some better lighting should definitely allow me to make more captures in a reasonable timeframe.
Definitely. Bad extrinsic calibration causes all sorts of problems, even independent of this voxel weight issue. One possible solution is to try to align each new depth image with the reconstructed surface using ICP. Yak technically supports this, but it hasn't worked very well since we tweaked it to primarily use externally-provided transforms. Even when it does work there's still the problem of applying some dubious additional transform to the "authoritative" transform provided by robot forward kinematics. A separate issue in any case.
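For reference, the ICP-based refinement mentioned above could look roughly like the following sketch using PCL; the function name and parameters are illustrative assumptions, not yak's implementation:

```cpp
#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Hypothetical refinement step: register a new scan against a cloud sampled
// from the current reconstruction, seeded with the robot's forward-kinematics
// pose. This is only a sketch of the idea discussed above, not yak's code.
Eigen::Matrix4f refinePose(const pcl::PointCloud<pcl::PointXYZ>::Ptr& new_scan,
                           const pcl::PointCloud<pcl::PointXYZ>::Ptr& surface,
                           const Eigen::Matrix4f& initial_guess)
{
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(new_scan);
  icp.setInputTarget(surface);
  icp.setMaximumIterations(30);

  pcl::PointCloud<pcl::PointXYZ> aligned;
  icp.align(aligned, initial_guess);

  // If ICP diverges, fall back to the externally-provided transform.
  return icp.hasConverged() ? icp.getFinalTransformation() : initial_guess;
}
```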
Excellent! Very nice-looking mesh!
One of the benefits of the TSDF algorithm is that it averages sensor noise across multiple observations, so if you have an opportunity to collect more scans from different perspectives it will produce a smoother surface. Those clusters of noisy data floating out in space will also be removed if you provide more views. In general I think it's desirable to throw as much data as possible into the reconstruction, but there's definitely an application-specific tradeoff between overall process time and quality/completeness of the output mesh.
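The averaging being described is the standard KinectFusion-style running weighted average; a minimal sketch (names and the weight cap are illustrative, not yak's actual code):

```cpp
#include <algorithm>

// Fold one new signed-distance observation into a voxel as a running weighted
// average: each additional observation pulls the stored value toward the mean,
// which is what averages out per-scan sensor noise. The weight cap keeps very
// old data from making the voxel completely insensitive to new scans.
void integrate(float& tsdf, float& weight, float observed_dist,
               float max_weight = 100.0f)
{
  tsdf = (tsdf * weight + observed_dist) / (weight + 1.0f);
  weight = std::min(weight + 1.0f, max_weight);
}
```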
I will make some adjustments to allow for more observations, but this at least gives me a good starting point to continue the rest of the application. Thanks for the assistance and insights, it helped a lot!
Part of those observations is actually correct, as I left some clamping bolts on the table ;).
Yak PR #41 was merged in. @dave992, does that fully resolve this issue?
It does not fully resolve the issue when using the […]. I will make a PR for the changes sometime this week.
Right, I'm getting ahead of myself here. Thank you for the contributions!
I've run into some trouble with generating a mesh using the yak_ros1_node. I am not sure if this is a yak_ros or yak issue, but I hope to get some insight into where to start investigating it. I've tried using both the point cloud and depth image inputs of the yak_ros node, but they both result in the same generated mesh.

This is a picture of the box I want to create a mesh for. It is just a placeholder object, but it gives some insight into what the mesh should roughly look like.

After taking roughly 10 pictures using the Zivid One+ M, feeding them to the yak_ros node, and generating the mesh using the GenerateMesh service (which returns successfully), I get the following mesh. As you can see, this does not look like the box at all:

The point cloud data from the camera does look correct when visualized in RViz:

The intermediate fused data in the yak window looks correct as well:
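For completeness, triggering the meshing step from code could look roughly like the sketch below; std_srvs/Trigger is only a stand-in here, since yak_ros defines its own GenerateMesh service whose actual name and request fields should be taken from the package's srv definitions:

```cpp
#include <ros/ros.h>
#include <std_srvs/Trigger.h>  // stand-in type; yak_ros uses its own srv

int main(int argc, char** argv)
{
  ros::init(argc, argv, "mesh_client");
  ros::NodeHandle nh;

  // Service name and type are assumptions for illustration; the real
  // GenerateMesh service advertised by yak_ros1_node may differ.
  ros::ServiceClient client =
      nh.serviceClient<std_srvs::Trigger>("generate_mesh");

  std_srvs::Trigger srv;
  if (client.call(srv))
    ROS_INFO("meshing service returned: %s", srv.response.message.c_str());
  else
    ROS_ERROR("meshing service call failed");
  return 0;
}
```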