| [Paper] | [Video] | [Project Page] |
DynaCam contains in-the-wild RGB videos captured by dynamic cameras, with the following annotations:
- 3D human trajectories in world coordinates

For details, please refer to our project page.
The dataset folder is expected to have the following structure:
|-- DynaCam
|   |-- video_frames
|   |   |-- panorama_test
|   |   |-- panorama_train
|   |   |-- panorama_val
|   |   |-- translation_test
|   |   |-- translation_train
|   |   |-- translation_val
|   |-- annotations
|   |   |-- *.npz
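To get a quick look at what each annotation file contains, the minimal sketch below walks the annotations folder and prints the arrays stored in every .npz archive. The dynacam_folder path is a placeholder for your local copy, and the exact keys depend on the release, so the snippet only lists whatever it finds rather than assuming specific names.

```python
import glob
import os

import numpy as np

# Placeholder path to your local copy of the dataset; adjust as needed.
dynacam_folder = "/path/to/DynaCam"

# Walk every annotation archive and report the arrays it stores.
for npz_path in sorted(glob.glob(os.path.join(dynacam_folder, "annotations", "*.npz"))):
    data = np.load(npz_path, allow_pickle=True)
    print(os.path.basename(npz_path))
    for key in data.files:
        value = data[key]
        print(f"  {key}: shape={getattr(value, 'shape', None)}, dtype={getattr(value, 'dtype', type(value).__name__)}")
```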
To visualize each video sequence and its corresponding annotations, such as the 3D human trajectory, please download SMPL_NEUTRAL.pkl, put it into 'assets/', and then run:
sh install.sh
# set dynacam_folder in show_examples.py to the path of your DynaCam folder
python show_examples.py
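Before launching the visualization, a quick sanity check can confirm that the SMPL model and the dataset folders are where show_examples.py expects them. This is only a hedged pre-flight sketch; dynacam_folder here is a placeholder that should mirror the path you set in show_examples.py.

```python
import os

# Placeholder paths; mirror whatever you configure in show_examples.py.
dynacam_folder = "/path/to/DynaCam"
smpl_model_path = os.path.join("assets", "SMPL_NEUTRAL.pkl")

# SMPL_NEUTRAL.pkl must be downloaded separately and placed under assets/.
assert os.path.isfile(smpl_model_path), "Download SMPL_NEUTRAL.pkl and put it into assets/ first."

# The visualization reads frames and annotations from the structure shown above.
for sub in ("video_frames", "annotations"):
    assert os.path.isdir(os.path.join(dynacam_folder, sub)), f"Missing '{sub}/' under {dynacam_folder}"

print("Setup looks complete; run `python show_examples.py`.")
```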
To reproduce all results on DynaCam reported in our paper, please download the predictions and set the path in evaluation.py so that the folder structure looks like:
|-- predictions
|   |-- TRACE
|   |-- GLAMR
|   |-- bev_dpvo
Then run:
sh install.sh
python evaluation.py
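evaluation.py reproduces the metrics reported in the paper. Purely as an illustration of the kind of comparison it performs on world-coordinate trajectories, here is a hedged sketch of a simple trajectory error: the mean per-frame Euclidean distance between predicted and ground-truth root positions after removing the first-frame offset. The (T, 3) array layout and the alignment choice are assumptions made for this example, not the official protocol.

```python
import numpy as np

def trajectory_error(pred_traj: np.ndarray, gt_traj: np.ndarray) -> float:
    """Mean per-frame Euclidean distance (meters) between two (T, 3) world-coordinate
    trajectories after shifting both to start at the origin. Illustrative only; the
    official metrics are implemented in evaluation.py."""
    pred = pred_traj - pred_traj[:1]
    gt = gt_traj - gt_traj[:1]
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy usage with a random 100-frame ground-truth walk and a noisy prediction.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(scale=0.02, size=(100, 3)), axis=0)
pred = gt + rng.normal(scale=0.05, size=(100, 3))
print(f"trajectory error: {trajectory_error(pred, gt):.3f} m")
```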
Please cite our paper if you use DynaCam in your research.
@InProceedings{TRACE,
  author    = {Sun, Yu and Bao, Qian and Liu, Wu and Mei, Tao and Black, Michael J.},
  title     = {{TRACE: 5D Temporal Regression of Avatars with Dynamic Cameras in 3D Environments}},
  booktitle = {IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023}
}