Authors: Erik Gärtner*, Aleksis Pirinen* and Cristian Sminchisescu (* denotes first authorship).
Official implementation of the NeurIPS 2019 paper Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction. This repo contains code for reproducing the results of our proposed ACTOR model and the baselines, as well as training ACTOR on Panoptic. A video overview of the paper is available here.
ACTOR is implemented in Caffe. The experiments are performed in the CMU Panoptic multi-camera framework. Our ACTOR implementation uses OpenPose as its underlying 2D pose estimator. We used a public TensorFlow implementation of OpenPose to pre-compute all pose and deep feature predictions.
If you find this implementation and/or our paper interesting or helpful, please consider citing:
@inproceedings{pirinen2019domes,
  title={Domes to Drones: Self-Supervised Active Triangulation for 3D Human Pose Reconstruction},
  author={Pirinen, Aleksis and G{\"a}rtner, Erik and Sminchisescu, Cristian},
  booktitle={Advances in Neural Information Processing Systems},
  pages={3907--3917},
  year={2019}
}
- Clone the repository
- Read the following documentation on how to set up our system. It covers the prerequisites and how to install our framework.
- See this dataset documentation for how to download and preprocess the Panoptic data, pre-compute OpenPose pose estimates and deep features, and train/download instance features for matching.
Pretrained model weights for ACTOR can be downloaded here.
The Matlab script demo.m contains code to reproduce the visualizations from the main paper. Running the script creates an output folder containing a "recording" of the active view, showing errors, camera choices and reconstructions.
To train the model, run:
run_train_agent('train')
The results and weights will be stored in the location given by CONFIG.output_dir.
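As a sketch, a training run with a custom output location might look like the following. This assumes CONFIG is the repository's global configuration struct; only CONFIG.output_dir and run_train_agent are taken from this README, and the chosen path is a placeholder:

```matlab
% Sketch: point the agent's output at a custom folder, then train.
% CONFIG is assumed to be the global config struct used by the repo.
CONFIG.output_dir = fullfile(pwd, 'results', 'actor_train');  % results and weights land here
run_train_agent('train');                                     % launches training
```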
Given the model weights (either the provided weights or your own):
- Set flag CONFIG.evaluation_mode = 'test';
- Set flag CONFIG.agent_random_init = 0;
- Set flag CONFIG.agent_weights = '<your-weights-path>';
- Set flag CONFIG.training_agent_nbr_eps = 1; (note that this will not update the weights, since they are only updated every 40 episodes)
- Run run_train_agent('train'); the results will be stored in the location given by CONFIG.output_dir.
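For convenience, the evaluation flags above can be collected in one place before launching the run. This is a sketch using only the settings listed in this README; '<your-weights-path>' is a placeholder you must fill in yourself:

```matlab
% Evaluation configuration, per the flags listed above.
CONFIG.evaluation_mode = 'test';               % evaluate on the test split
CONFIG.agent_random_init = 0;                  % load provided weights instead of random init
CONFIG.agent_weights = '<your-weights-path>';  % placeholder: path to pretrained ACTOR weights
CONFIG.training_agent_nbr_eps = 1;             % single episode; weights only update every 40 eps
run_train_agent('train');                      % results are written to CONFIG.output_dir
```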
This work was supported by the European Research Council Consolidator grant SEED, CNCS-UEFISCDI PN-III-P4-ID-PCE-2016-0535 and PCCF-2016-0180, the EU Horizon 2020 Grant DE-ENIGMA, Swedish Foundation for Strategic Research (SSF) Smart Systems Program, as well as the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. We would also like to thank Patrik Persson for support with the drone experiments.