Questions on Object Surface Placement and Floor Height Information #2484
Hey @Eku127 Glad you are finding the platform useful. I'll tackle these questions one at a time, feel free to ask further questions.
I suggest that you use the snap_down util from habitat-lab for this.
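The core idea behind `snap_down` is to cast a ray straight down from the object and settle it on the first support surface hit. A minimal self-contained sketch of that idea follows; `raycast_down` is a plain callable standing in for the simulator's ray cast, not a real habitat API, and the toy floor at the bottom is just for illustration:

```python
def snap_down_sketch(obj_pos, obj_half_height, raycast_down):
    """Settle an object straight down onto the first surface below it.

    obj_pos: (x, y, z) current object center.
    obj_half_height: half the object's vertical extent (from its bbox).
    raycast_down: callable (x, y, z) -> hit height or None; a stand-in
        for the simulator's downward ray cast used by the real snap_down.
    Returns the snapped (x, y, z), or None if there is no support below.
    """
    x, y, z = obj_pos
    hit_y = raycast_down(x, y, z)
    if hit_y is None or hit_y > y:
        return None  # nothing below the object to rest on
    # rest the object's bottom face on the hit point
    return (x, hit_y + obj_half_height, z)

# toy support: a flat floor at height 0.0, visible from any point above it
floor = lambda x, y, z: 0.0 if y > 0.0 else None
print(snap_down_sketch((1.0, 2.5, -3.0), 0.25, floor))  # (1.0, 0.25, -3.0)
```

The real utility additionally validates stability against the object's collision shape, so prefer it over a hand-rolled version when working inside habitat-lab.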
Multi-level handling is not very concrete in the current implementation. The best approaches I've seen involve sampling the navmesh and then clustering the points to get estimates of the floor heights. We are releasing a semantic region annotation format which includes floor and ceiling height extrusions. That may be useful to make this more concrete. In any case, clever use of the navmesh points is the best bet currently.
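The sampling-and-clustering approach can be sketched as follows. The heights would come from repeated calls to `pathfinder.get_random_navigable_point()` (keeping the y component); the bin width and threshold are illustrative, not canonical values:

```python
import numpy as np

def estimate_floor_heights(sample_heights, bin_size=0.2, min_frac=0.1):
    """Estimate floor heights by histogramming navmesh sample heights.

    sample_heights: y-coordinates of many random navigable points, e.g.
        collected via pathfinder.get_random_navigable_point().
    bin_size: vertical bin width in meters (illustrative default).
    min_frac: bins holding at least this fraction of all samples are
        treated as floor candidates; adjacent candidate bins are merged.
    Returns estimated floor heights, lowest first.
    """
    heights = np.asarray(sample_heights, dtype=float)
    lo = heights.min()
    nbins = max(1, int(np.ceil((heights.max() - lo) / bin_size)))
    counts, edges = np.histogram(heights, bins=nbins,
                                 range=(lo, lo + nbins * bin_size))
    centers = (edges[:-1] + edges[1:]) / 2.0
    keep = counts >= min_frac * heights.size
    floors, i = [], 0
    while i < nbins:
        if keep[i]:
            j = i
            while j + 1 < nbins and keep[j + 1]:
                j += 1  # merge adjacent well-populated bins into one floor
            floors.append(float(np.average(centers[i:j + 1],
                                           weights=counts[i:j + 1])))
            i = j + 1
        else:
            i += 1
    return floors

# toy data: samples from two floors, roughly y = 0.0 and y = 3.0
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.0, 0.03, 500),
                          rng.normal(3.0, 0.03, 500)])
print(estimate_floor_heights(samples))  # ≈ [0.0, 3.0]
```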
The mapping code takes a vertical slice with some margin as input and uses navmesh sampling to create an occupancy grid. Once you solve the floor identification problem, you can pass the desired vertical chunks into the mapping code. A naive approach to all of the above would be to take many slices of the navmesh using the vertical bounds and check which one(s) have the maximum number of valid snap points. For example, check is_navigable for each point in the grid and count the total.
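The naive slice-counting approach above can be sketched concretely. Here `is_navigable` is a plain callable standing in for `sim.pathfinder.is_navigable`, the bounds would come from `sim.pathfinder.get_bounds()`, and the toy predicate at the bottom simulates a scene with a single navigable patch:

```python
import numpy as np

def count_navigable_per_slice(is_navigable, xz_bounds, heights, step=0.25):
    """Count navigable grid points at each candidate floor height.

    is_navigable: callable taking an (x, y, z) point and returning bool;
        a stand-in for sim.pathfinder.is_navigable.
    xz_bounds: ((x_min, x_max), (z_min, z_max)), e.g. derived from
        sim.pathfinder.get_bounds().
    heights: candidate y values, one per vertical slice.
    Returns {height: count}; slices with the most hits are likely floors.
    """
    (x_min, x_max), (z_min, z_max) = xz_bounds
    xs = np.arange(x_min, x_max, step)
    zs = np.arange(z_min, z_max, step)
    return {
        y: sum(1 for x in xs for z in zs if is_navigable((x, y, z)))
        for y in heights
    }

# toy scene: a 2 m x 2 m navigable patch that exists only near y = 0
nav = lambda p: abs(p[1]) < 0.5 and 0 <= p[0] <= 2 and 0 <= p[2] <= 2
print(count_navigable_per_slice(nav, ((0, 4), (0, 4)), [0.0, 3.0]))
# {0.0: 81, 3.0: 0}
```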
Thank you for the quick response! I'm aiming to place rigid objects on chairs and beds within the scanned scenes. As you pointed out, since all mesh nodes share the same ID (stage_id), querying the object mesh directly is not straightforward. However, I think I can try using both the bounding box of the object and the top-down navigation map to define the xz plane for sampling, and then use the snap_down util.
The Replica dataset does contain semantic meshes. Once loaded, these will be used for the SemanticSensor and can be queried from the SemanticScene within Simulator. See the flat-shaded multi-color renderings on the main page to get a feeling for the annotations there: https://github.com/facebookresearch/Replica-Dataset
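Once the SemanticScene is available, the query pattern looks roughly like this. The dataclasses below are stand-ins so the sketch runs on its own; in habitat-sim you would iterate `sim.semantic_scene.objects` and read each object's category name and axis-aligned bounding box (exact attribute names can differ between versions):

```python
from dataclasses import dataclass

# Stand-ins for habitat-sim's semantic annotations, so this sketch is
# self-contained. In the real API you would iterate
# sim.semantic_scene.objects instead of building these by hand.
@dataclass
class AABB:
    center: tuple  # (x, y, z)
    sizes: tuple   # full extents along (x, y, z)

@dataclass
class SemanticObject:
    category_name: str
    aabb: AABB

def find_support_surfaces(objects, wanted=("bed", "chair")):
    """Return (category, estimated top-surface height) per matching object.

    The top of the bounding box (center_y + size_y / 2) is a first guess
    at the height where a rigid object could be placed.
    """
    return [
        (o.category_name, o.aabb.center[1] + o.aabb.sizes[1] / 2.0)
        for o in objects
        if o.category_name in wanted
    ]

scene = [SemanticObject("bed", AABB((1.0, 0.4, 2.0), (2.0, 0.8, 1.5))),
         SemanticObject("wall", AABB((0.0, 1.5, 0.0), (0.1, 3.0, 5.0)))]
print(find_support_surfaces(scene))  # [('bed', 0.8)]
```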
You are right! However, I run into problems when I load the Replica room 0 dataset using the following paths:
I modified the configuration accordingly. Further, following issue #2042 (comment), I also changed the relevant settings. All the test code works well on the MP3D and HM3D datasets. How should I load the Replica dataset correctly and place the object?
Habitat-Sim version
v0.3.1
Docs and Tutorials
Did you read the docs? https://aihabitat.org/docs/habitat-sim/
Yes
Did you check out the tutorials? https://aihabitat.org/tutorial/2020/
Yes
❓ Questions and Help
Hi, thank you for your great work on the Habitat simulation platform! I have a couple of questions regarding object placement:
I currently have access to the bounding box of an object, such as a bed from the MP3D or HM3D datasets. My goal is to place an object on the surface of these objects. Following the bounding box sampling tutorial, I need to identify the plane or surface of the object. Is there an API to obtain the height information of the mesh points within the specified bounding box? If this is possible, I could use a histogram to determine the primary height interval (as one would with a point cloud), which should correspond to the object's surface.
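The histogram idea can be sketched with plain NumPy, assuming the mesh vertices can be extracted as an (N, 3) point array; the bin width, threshold, and toy "bed" data below are illustrative, not from any habitat API:

```python
import numpy as np

def surface_height_from_points(points, bbox_min, bbox_max, bin_size=0.05):
    """Estimate an object's top surface height from mesh points.

    points: (N, 3) array of mesh vertex positions (x, y, z).
    bbox_min, bbox_max: corners of the object's axis-aligned bounding box.
    Keeps points inside the box, histograms their heights, and returns the
    center of the highest well-populated bin, which for furniture like a
    bed should be the main horizontal surface.
    """
    pts = np.asarray(points, dtype=float)
    inside = np.all((pts >= bbox_min) & (pts <= bbox_max), axis=1)
    ys = pts[inside, 1]
    nbins = max(1, int(np.ceil((ys.max() - ys.min()) / bin_size)))
    counts, edges = np.histogram(ys, bins=nbins)
    # keep only bins holding a meaningful share of points, then take the top
    good = np.where(counts >= 0.1 * ys.size)[0]
    top = good[-1]
    return float((edges[top] + edges[top + 1]) / 2.0)

# toy "bed": a dense flat top near y = 0.6 plus sparse leg points below
rng = np.random.default_rng(1)
top_pts = np.column_stack([rng.uniform(0, 2, 400),
                           rng.normal(0.6, 0.005, 400),
                           rng.uniform(0, 1.5, 400)])
leg_pts = np.column_stack([rng.uniform(0, 2, 50),
                           rng.uniform(0.0, 0.5, 50),
                           rng.uniform(0, 1.5, 50)])
h = surface_height_from_points(np.vstack([top_pts, leg_pts]),
                               (0, 0, 0), (2, 1, 1.5))
print(round(h, 2))  # ≈ 0.6
```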
Second, what is the best way to retrieve the floor height? Is it possible to query this by level information? I am currently using `height = sim.pathfinder.get_bounds()[0][1]`. Also, how can I get the navigational top-down map of the second floor? For some scenes the level information is missing even though they contain multiple floors. Thank you for your guidance on these questions!