When bringing in keypoint data from the wild, we often have to do a bit of alignment to get it into an initial state that is feasible for optimization.
We should provide some quality-of-life utilities to make this simpler for the end user:
Keypoint-to-site mapping (Automate site-keypoint mapping #17): Many (if not most) experiments will produce different keypoint sets due to differences across experimental setups. 3D mocap data is hard to acquire, so we need to work with whatever set a given experiment provides. Right now the user has to establish 1-to-1 correspondences between mocap keypoints and body-model sites by hand. It would be great to have an easier way to establish this mapping, possibly aided by a visualizer.
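One possible shape for such a utility is an explicit mapping table plus a helper that resolves it to index pairs (all names here are illustrative, not an existing API):

```python
import numpy as np

# Hypothetical 1-to-1 mapping from mocap keypoint names to body-model site
# names. Rig-specific keypoints with no model counterpart map to None and
# are dropped before optimization.
KEYPOINT_TO_SITE = {
    "snout": "site_head",
    "left_ear": "site_ear_l",
    "right_ear": "site_ear_r",
    "tail_base": "site_tail",
    "implant": None,  # marker with no counterpart on the model
}

def paired_indices(keypoint_names, site_names, mapping):
    """Return (keypoint_idx, site_idx) pairs for all mapped keypoints."""
    pairs = []
    for i, kp in enumerate(keypoint_names):
        site = mapping.get(kp)
        if site is not None and site in site_names:
            pairs.append((i, site_names.index(site)))
    return pairs

pairs = paired_indices(
    ["snout", "implant", "tail_base"],
    ["site_head", "site_tail"],
    KEYPOINT_TO_SITE,
)
# pairs -> [(0, 0), (2, 1)]
```

A visualizer could then be a thin layer that edits this table interactively instead of requiring the user to write it out.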
Rotation and origin: The global coordinate system of mocap data is also arbitrary. Right now the user has to fix this entirely by hand on the mocap data itself. At a minimum, we should let the user specify a rotation and origin so that the mocap data doesn't need to be re-generated. A better solution would be to determine them automatically (e.g., via Procrustes) or interactively (e.g., via a visualizer GUI).
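For the automatic route, the rigid part can be recovered with orthogonal Procrustes (the Kabsch algorithm). A minimal NumPy sketch, assuming we already have paired mocap points and target model-site positions for one frame (the function name is illustrative):

```python
import numpy as np

def rigid_align(mocap, reference):
    """Least-squares rotation R and translation t such that
    mocap @ R.T + t best matches reference (Kabsch algorithm)."""
    mu_m, mu_r = mocap.mean(axis=0), reference.mean(axis=0)
    H = (mocap - mu_m).T @ (reference - mu_r)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_r - R @ mu_m
    return R, t

# Synthetic check: recover a known 90-degree rotation about z plus an offset.
rng = np.random.default_rng(0)
pts = rng.standard_normal((5, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
target = pts @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(pts, target)
```

The recovered `R` and `t` could be written back into the config so the raw mocap files stay untouched.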
Scaling: Due to the lack of standardization at the camera-calibration step (and the scale invariance of triangulation), the units of keypoint coordinates might not be metric. We currently handle this by specifying model and mocap scaling in the YAML files (example). It would be very useful to be able to set this interactively or estimate it automatically (e.g., via Procrustes).
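The scale term of a similarity Procrustes fit can be estimated on its own from the spread of the paired point clouds. A sketch of one rotation-free variant, the ratio of RMS spreads about the centroids (function name is illustrative):

```python
import numpy as np

def estimate_scale(mocap, reference):
    """Isotropic scale s such that s * (mocap - centroid) has the same
    RMS spread as (reference - centroid). Rotation-free, so it can run
    before orientation is fixed."""
    m = mocap - mocap.mean(axis=0)
    r = reference - reference.mean(axis=0)
    return float(np.sqrt((r ** 2).sum() / (m ** 2).sum()))

# e.g. mocap accidentally exported in millimetres vs. a metric model:
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
s = estimate_scale(pts * 1000.0, pts)  # -> 0.001
```

The estimated factor could then populate the same scaling fields the YAML files expose today, rather than introducing a new mechanism.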