Is it possible to programmatically change the sensor size to fit the scene object #1393
-
Hello. I have a scene with an arbitrary object (mesh/shape) and an orthographic sensor looking at it. I change the orientation of the object and want it to always fit within the sensor's boundaries, i.e. to leave minimal empty space around the object in the rendered image. Is it possible to programmatically determine the width and height of the bounding box of the object's projection onto the sensor's image plane before rendering? I could then take the larger of the two to compute a scaling factor for the sensor. Or is there another solution?
-
Hi,

I found that the `mi.Scene` class has a bounding box attribute. Is it what you are looking for?

```python
import mitsuba as mi

mi.set_variant('cuda_ad_rgb', 'llvm_ad_rgb')
print(f"{mi.__version__ = }")
print(f"{mi.variant() = }")
# mi.__version__ = '3.5.2'
# mi.variant() = 'cuda_ad_rgb'

scene = mi.load_dict(mi.cornell_box())
bbox = scene.bbox()  # axis-aligned bounding box of the whole scene, in world space
print(f"{bbox = }")
print(f"{bbox.min = }")
print(f"{bbox.max = }")
print(f"{bbox.center() = }")
```
Then you can project it onto the image plane as follows:
```python
import numpy as np

sensor = scene.sensors()[0]
cam2world = sensor.world_transform()

# ========== NumPy arrays ==========
pos_min = bbox.min.numpy()                # shape: [3]
pos_max = bbox.max.numpy()                # shape: [3]
pos_mm = np.stack([pos_min, pos_max], 0)  # shape: [2, 3]
# Enumerate the 8 corners of the bounding box
vertices = [[pos_mm[i, 0], pos_mm[j, 1], pos_mm[k, 2]] for i, j, k in np.ndindex(2, 2, 2)]
vertices = np.array(vertices)             # shape: [8, 3]

# ========== Mitsuba `Transform4f @ Point3f` ==========
# Transform the corners from world space into camera space
pos_mm_cam = cam2world.inverse() @ mi.Point3f(vertices)

# ========== NumPy arrays ==========
pos_mm_cam = pos_mm_cam.numpy()           # shape: [8, 3]
# Perspective divide -> normalized coordinates on the image plane
pos_mm_nc = pos_mm_cam[:, :2] / pos_mm_cam[:, 2:]  # shape: [8, 2]
pos_min_nc = pos_mm_nc.min(0)             # shape: [2]
pos_max_nc = pos_mm_nc.max(0)             # shape: [2]
print(f"{pos_min_nc = }")
print(f"{pos_max_nc = }")
# Result:
# pos_min_nc = array([-0.3448276 , -0.34827584], dtype=float32)
# pos_max_nc = array([0.3448276, 0.3448276], dtype=float32)
```
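One caveat for the original question: the divide by `pos_mm_cam[:, 2:]` above is a perspective divide, so it applies to a perspective camera. With an orthographic sensor you can use the camera-space x/y extents directly, and the footprint of the `orthographic` plugin is set by the x/y scale of its `to_world` transform. Below is a minimal sketch of the fitting step under those assumptions; the camera pose (a 5-unit standoff along +z) and the film resolution are made-up values for illustration:

```python
import numpy as np
import mitsuba as mi

mi.set_variant('cuda_ad_rgb', 'llvm_ad_rgb')

scene = mi.load_dict(mi.cornell_box())
bbox = scene.bbox()      # scene.bbox() returns scalar (non-vectorized) types
center = bbox.center()

# Assumed pose: look at the bbox center from 5 units away along +z.
pose = mi.ScalarTransform4f.look_at(
    origin=[center.x, center.y, center.z + 5.0],
    target=[center.x, center.y, center.z],
    up=[0, 1, 0],
)
world2cam = pose.inverse()

# The 8 bbox corners in camera space; no divide by z for an orthographic view.
mm = np.stack([bbox.min.numpy(), bbox.max.numpy()], 0)      # shape: [2, 3]
corners = [world2cam @ mi.ScalarPoint3f(mm[i, 0], mm[j, 1], mm[k, 2])
           for i, j, k in np.ndindex(2, 2, 2)]
cam_xy = np.array([[p.x, p.y] for p in corners])            # shape: [8, 2]

# Largest half-extent over x and y -> uniform scale that just fits the box.
s = float(np.abs(cam_xy).max())

sensor = mi.load_dict({
    'type': 'orthographic',
    # Scaling the x/y axes of `to_world` scales the orthographic film footprint.
    'to_world': pose @ mi.ScalarTransform4f.scale([s, s, 1.0]),
    'film': {'type': 'hdrfilm', 'width': 512, 'height': 512},
})

img = mi.render(scene, sensor=sensor)  # render with the fitted sensor
```

Taking the single largest half-extent keeps the scaling uniform, matching the "largest of them" factor suggested in the question; adding a small margin (e.g. `s *= 1.05`) keeps the object from touching the image border.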