
Incorrect generated mesh (due to number of scans less than min_weight parameter) #30

Open
dave992 opened this issue Jun 18, 2020 · 13 comments
Labels
bug Something isn't working

Comments

@dave992
Contributor

dave992 commented Jun 18, 2020

I've run into some trouble generating a mesh with the yak_ros1_node. I am not sure whether this is a yak_ros or a yak issue, but I hope to get some insight into where to start investigating. I've tried both the point cloud and depth image inputs of the yak_ros node, and they both result in the same generated mesh.

This is a picture of the box I want to create a mesh for. It is just a placeholder object but gives some insight into what the mesh should roughly look like.
[image: real_box]

After taking roughly ten pictures with the Zivid One+ M and feeding them to the yak_ros node, generating the mesh with the GenerateMesh service (which returns successfully) results in the following mesh. As you can see, this does not look like the box at all:
[image: generated_mesh]

The point cloud data from the camera does look correct when visualized in RViz:
[image: rviz]

The intermediate fused data in the yak window looks correct as well:
[image: yak_window]

@schornakj
Contributor

Have you been able to generate meshes using your system previously?

If the preview of the reconstruction looks OK, that means the whole TSDF integration process is working correctly, and the issue is occurring somewhere between downloading the voxel data from the GPU and saving the resulting .ply file to the output directory.

The marching cubes algorithm is implemented in yak, but the mesh is converted and saved in this yak_ros function:

https://github.com/ros-industrial/yak_ros/blob/068339025eaaa63badf2cb4b2c5fe2598c7cae2e/yak_ros/src/yak_ros1_node.cpp#L178-L206
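In rough terms that function runs marching cubes over the downloaded volume, converts the result to a PCL mesh, and writes a .ply to the output directory. A simplified sketch of the flow (the yak helper names below are approximations, not the exact API; see the permalink above for the real code):

  #include <pcl/PolygonMesh.h>
  #include <pcl/io/ply_io.h>

  // Simplified sketch of the meshing/saving flow. The yak calls below are
  // approximations of the real API; see the linked source for the details.
  bool generateAndSaveMesh(const std::string& output_dir)
  {
    // 1. Run (CPU-side) marching cubes over the TSDF volume.
    yak::MarchingCubesParameters mc_params;
    yak::Mesh mesh = yak::marchingCubesCPU(fusion_.downloadTSDF(), mc_params);

    // 2. Convert the result to a PCL mesh and save it as a .ply file.
    pcl::PolygonMesh pcl_mesh = yak::convertToPCLMesh(mesh);
    return pcl::io::savePLYFileBinary(output_dir + "/results_mesh.ply", pcl_mesh) == 0;
  }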

Based on your screenshot of the intermediate data it looks like you're using very small voxels.

  • What are the dimensions of the volume?
  • Does anything different happen if you use fewer, larger voxels instead?

As a side note, the marching cubes implementation is currently CPU-parallelized rather than GPU-parallelized, so it'll take a long time to mesh a higher-resolution volume.

@dave992
Contributor Author

dave992 commented Jun 19, 2020

> Have you been able to generate meshes using your system previously?

Yes, I have been able to mesh the bunny demo and also a small region of a different hexagonal object during an earlier test.

> If the preview of the reconstruction looks OK, that means the whole TSDF integration process is working correctly, and the issue is occurring somewhere between downloading the voxel data from the GPU and saving the resulting .ply file to the output directory.
>
> The marching cubes algorithm is implemented in yak, but the mesh is converted and saved in this yak_ros function:

Suspected as much, but thank you for confirming it. For now I will assume the problem does not lie in the TSDF integration process and focus on the onGenerateMesh function and its contents.

> Based on your screenshot of the intermediate data it looks like you're using very small voxels.
>
>   • What are the dimensions of the volume?

I am indeed using small voxels (1 mm). Some early experimentation with this parameter did not seem to have much influence on computation time or memory usage, but I will take a more thorough look at this and its influence.

>   • Does anything different happen if you use fewer, larger voxels instead?

I will try this now and post an update when I know more.

> As a side note, the marching cubes implementation is currently CPU-parallelized rather than GPU-parallelized, so it'll take a long time to mesh a higher-resolution volume.

Ah yes, I saw that when I investigated why the process was killed when I used an even larger TSDF volume (I ran out of memory). Generating the mesh shown above did not feel like it took significantly longer than the bunny demo, but I will measure it to make sure.

@dave992
Contributor Author

dave992 commented Jun 19, 2020

> Does anything different happen if you use fewer, larger voxels instead?

Changing the TSDF volume size or resolution does not change the generated mesh. The only visible differences are due to the larger voxel size itself, which results in 'pixelated' edges on the mesh.

@dave992
Contributor Author

dave992 commented Jun 19, 2020

In trying to understand how the shape of the mesh was formed, I looked at the mesh together with the point cloud data used as input. It does seem that the mesh is part of the box and is influenced by how the different pictures overlap. A picture to try and illustrate this:

[image: mesh_pattern_in_pointcloud]

There is no mesh generated in the parts where the point clouds do not overlap at all. Could this be a lack of data resulting in low voxel weights? Is there a way to lower this threshold?

I will try to improve the extrinsic calibration as well as try to take more pictures to increase the amount of data.

@schornakj
Contributor

> In trying to understand how the shape of the mesh was formed, I looked at the mesh together with the point cloud data used as input. It does seem that the mesh is part of the box and is influenced by how the different pictures overlap. A picture to try and illustrate this:
>
> ...
>
> There is no mesh generated in the parts where the point clouds do not overlap at all. Could this be a lack of data resulting in low voxel weights? Is there a way to lower this threshold?

That's a very helpful picture! Looks like this is caused by the marching cubes function not meshing voxels with low weights. There's a particular piece of code in marching_cubes.cpp that causes this:

  const static uint16_t min_weight = 4;

  uint16_t w;
  val[0] = read(x + 1, y + 1, z, w);
  if (w < min_weight)
    return {};
...

Because of this, the only surface that gets meshed in your reconstruction is the area of the box that appears in at least 4 scans.

The bunny demo and our previous real-world applications use sensors with higher framerates, so a reconstruction is composed of possibly hundreds of scans and the weights of observed voxels end up far above this threshold. That's why it wasn't causing obvious problems until now.
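For background, a KinectFusion-style TSDF update keeps a running weighted average per voxel, so a voxel's weight grows by roughly one for every scan that observes it:

$$
D_t(v) = \frac{W_{t-1}(v)\,D_{t-1}(v) + w_t(v)\,d_t(v)}{W_{t-1}(v) + w_t(v)},
\qquad
W_t(v) = W_{t-1}(v) + w_t(v)
$$

where $d_t(v)$ is the truncated signed distance computed from the new depth image and $w_t(v)$ is its per-scan weight (often just 1). With only ~10 scans, only the regions seen in at least 4 of them clear the threshold.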

I'll post an issue on yak_ros so we can get this fixed. As an immediate workaround you can set min_weight = 1 at the line I linked above.
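The proper fix is probably to make this threshold configurable rather than hard-coded; roughly something like this (a sketch only, parameter and struct member names are illustrative, not necessarily what will land in yak):

  // Sketch only: expose the threshold through the marching cubes parameters
  // instead of hard-coding it. Names are illustrative.
  struct MarchingCubesParameters
  {
    uint16_t min_weight = 4;  // minimum voxel weight required for meshing
  };

  // ...then in the per-cube extraction step:
  uint16_t w;
  val[0] = read(x + 1, y + 1, z, w);
  if (w < params.min_weight)  // previously: const static uint16_t min_weight = 4
    return {};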

@gavanderhoorn
Member

gavanderhoorn commented Jun 19, 2020

> The bunny demo and our previous real-world applications use sensors with higher framerates, so a reconstruction is composed of possibly hundreds of scans and the weights of observed voxels end up far above this threshold. That's why it wasn't causing obvious problems until now.

We also observed what we believe is the result of a bad extrinsic calibration: point clouds / depth images containing 'surfaces' that should all be coplanar actually end up at different angles to each other (i.e. only intersecting at certain points instead of being parallel).

That probably exacerbates the effect you're describing, as it causes even fewer observations to be counted towards the weight of certain voxels (as observations which should come from "the same voxel" are now spread over a few neighbouring voxels).

@dave992
Contributor Author

dave992 commented Jun 19, 2020

The workaround indeed did the trick; here is a screenshot of the resulting mesh with min_weight = 1:

[image: complete_mesh_after_fix]

The extrinsic calibration indeed does not help; its effects do show up in the resulting mesh, so improving it is still on my to-do list.

Is it still better to try and pursue more overlapping data, i.e. take a lot more captures? I created this dataset mostly just to have a minimal input for yak, but with some better lighting I should definitely be able to make more captures in a reasonable timeframe.

@schornakj
Contributor

> We also observed what we believe is the result of a bad extrinsic calibration: point clouds / depth images containing 'surfaces' that should all be coplanar actually end up at different angles to each other (i.e. only intersecting at certain points instead of being parallel).
>
> That probably exacerbates the effect you're describing, as it causes even fewer observations to be counted towards the weight of certain voxels (as observations which should come from "the same voxel" are now spread over a few neighbouring voxels).

Definitely. Bad extrinsic calibration causes all sorts of problems, even independent of this voxel weight issue. One possible solution is to align each new depth image with the reconstructed surface using ICP. Yak technically supports this, but it hasn't worked very well since we tweaked it to primarily use externally-provided transforms. Even when it does work, there's still the problem of applying some dubious additional transform on top of the "authoritative" transform provided by robot forward kinematics. A separate issue in any case.
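For the record, the basic idea looks something like this in plain PCL (illustrative only, not yak's internal ICP path; the function and variable names are made up):

  #include <pcl/point_cloud.h>
  #include <pcl/point_types.h>
  #include <pcl/registration/icp.h>
  #include <Eigen/Core>

  // Illustrative only: refine a scan's pose by aligning it to the current
  // reconstruction with ICP. Not yak's internal ICP implementation.
  Eigen::Matrix4f refinePose(const pcl::PointCloud<pcl::PointXYZ>::Ptr& scan,
                             const pcl::PointCloud<pcl::PointXYZ>::Ptr& surface)
  {
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(scan);     // latest capture, in the volume frame
    icp.setInputTarget(surface);  // points sampled from the TSDF surface
    icp.setMaximumIterations(50);
    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);
    // Identity if ICP diverged; otherwise a small correction to apply on
    // top of the kinematics-provided transform.
    return icp.hasConverged() ? icp.getFinalTransformation()
                              : Eigen::Matrix4f::Identity();
  }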

@schornakj
Contributor

> The workaround indeed did the trick; here is a screenshot of the resulting mesh with min_weight = 1:
>
> [image: complete_mesh_after_fix]
>
> The extrinsic calibration indeed does not help; its effects do show up in the resulting mesh, so improving it is still on my to-do list.

Excellent! Very nice-looking mesh!

> Is it still better to try and pursue more overlapping data, i.e. take a lot more captures? I created this dataset mostly just to have a minimal input for yak, but with some better lighting I should definitely be able to make more captures in a reasonable timeframe.

One of the benefits of the TSDF algorithm is that it averages sensor noise across multiple observations, so if you have an opportunity to collect more scans from different perspectives it will produce a smoother surface. Those clusters of noisy data floating out in space will also be removed if you provide more views.

In general I think it's desirable to throw as much data as possible into the reconstruction, but there's definitely an application-specific tradeoff between overall process time and the quality/completeness of the output mesh.

@dave992
Contributor Author

dave992 commented Jun 20, 2020

I will make some adjustments to allow for more observations, but this at least gives me a good starting point to continue the rest of the application. Thanks for the assistance and insights, it helped a lot!

> Those clusters of noisy data floating out in space will also be removed if you provide more views.

Some of those observations are actually correct, as I left some clamping bolts on the table ;).

@schornakj
Contributor

Yak PR #41 was merged in. @dave992, does that fully resolve this issue?

@schornakj schornakj changed the title Incorrect generated mesh Incorrect generated mesh (due to number of scans less than min_weight parameter) Jun 25, 2020
@dave992
Contributor Author

dave992 commented Jun 25, 2020

It does not fully resolve the issue when using the yak_ros node, as yak_ros does not yet pass the new parameter through. I am currently working on the changes needed to pass min_voxel_weight to yak_ros (and in turn to yak).

I will make a PR for the changes sometime this week.
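For reference, the rough shape of the change on the yak_ros side (a sketch only; the actual parameter name and plumbing may differ in the PR):

  #include <ros/ros.h>

  // Sketch: read the threshold from the node's private parameters and pass
  // it through to yak when meshing. Plumbing may differ in the actual PR.
  ros::NodeHandle pnh("~");
  int min_weight = 1;
  pnh.param<int>("min_voxel_weight", min_weight, 1);
  // ...later, when setting up the marching cubes parameters:
  // mc_params.min_weight = static_cast<uint16_t>(min_weight);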

@schornakj
Contributor

Right, I'm getting ahead of myself here.

Thank you for the contributions!
