Development kit for the dataset TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild.
Compete in our benchmark by submitting your results to our evaluation server.
For more details, please refer to our paper.
@InProceedings{Muller_2018_ECCV,
author = {Muller, Matthias and Bibi, Adel and Giancola, Silvio and Alsubaihi, Salman and Ghanem, Bernard},
title = {TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}
There are 12 chunks of 2,511 sequences each for training and 1 chunk of 511 sequences for testing.
Each chunk has subfolders for the zipped sequences (`zips`), the extracted frames (`frames`) and, when available, the annotations (`anno`).
The structure of the dataset is the following:

TrackingNet
- Test / Train_X (with X from 0 to 11)
  - zips
  - frames
  - anno (Test: annotation only for the 1st frame)
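The layout above can be sanity-checked with a short script. This is a minimal sketch using hypothetical helper names (`check_chunk_layout`, `check_dataset_layout` are not part of the devkit):

```python
import os

# Sketch: verify that a chunk folder contains the expected "zips",
# "frames" and "anno" subfolders described above.
def check_chunk_layout(chunk_dir):
    """Return the list of expected subfolders missing from chunk_dir."""
    expected = ["zips", "frames", "anno"]
    return [d for d in expected if not os.path.isdir(os.path.join(chunk_dir, d))]

# Report missing subfolders for every chunk present under a TrackingNet root.
def check_dataset_layout(trackingnet_dir):
    chunks = ["Test"] + ["Train_%d" % i for i in range(12)]
    return {c: check_chunk_layout(os.path.join(trackingnet_dir, c))
            for c in chunks
            if os.path.isdir(os.path.join(trackingnet_dir, c))}
```

Running `check_dataset_layout` after downloading should return an empty list for every chunk.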
Tested on Ubuntu 16.04 LTS.

- Create the environment: `conda env create -f environment.yml` or, preferred for other platforms, `conda create -n TrackingNet python=3 requests pandas tqdm numpy`
- Activate the environment: `source activate TrackingNet` (`activate TrackingNet` on Windows)
You can download the whole dataset by running:

`python download_TrackingNet.py --trackingnet_dir <trackingnet_dir>`

- `--trackingnet_dir`: path where to download the TrackingNet dataset
- `--data`: select the data to download (sequences: `--data zips` / annotations: `--data anno`)
- `--chunk`: select the chunks to download (testing set: `--chunk Test` / training set: `--chunk Train` / selected chunks: `--chunk 0,2,4,11`)

Please look at `python download_TrackingNet.py --help` for more details on the optional parameters.
If an error such as `Permission denied: https://drive.google.com/uc?id=<ID>, Maybe you need to change permission over 'Anyone with the link'?` occurs, please check your internet connection and run the script again.
The script will not overwrite sequences that are already completely downloaded.
Note that Google Drive limits the download bandwidth to ~10 TB/day. To ensure a fair share between all users, avoid downloading the dataset several times and prefer sharing it with your colleagues on an old-fashioned HDD.
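The "do not overwrite" behaviour can be sketched as a skip check. This is an illustrative helper, not the devkit's actual code; `should_download` and the size comparison are assumptions:

```python
import os

# Sketch: skip a file that already exists locally with the expected size,
# so re-running the download only fetches what is missing or incomplete.
def should_download(local_path, expected_size):
    """Return True if the file is absent or only partially downloaded."""
    if not os.path.isfile(local_path):
        return True
    return os.path.getsize(local_path) != expected_size
```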
To extract all the zipped sequences for the complete dataset:

`python extract_frame.py --trackingnet_dir <trackingnet_dir>`

- `--trackingnet_dir`: path to the downloaded TrackingNet dataset
- `--chunk`: select the chunks to extract (testing set: `--chunk Test` / training set: `--chunk Train` / selected chunks: `--chunk 0,2,4,11`)

In this step, make sure you don't get any error message. You can run this script several times to make sure all the files are properly extracted. By default, the unzipping script will not overwrite frames that were already properly extracted.
If a zip file is corrupted, the error message `Error: the zip file [zip_file_name] is corrupted` will appear.
In that case, remove the corrupted zip file manually and run the download script again.
By default, the download script will not overwrite zip files that were already downloaded.
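A corrupted archive can also be detected locally before re-downloading. This is a minimal sketch using Python's standard `zipfile` module, not the devkit's actual check:

```python
import zipfile

# Sketch: detect a corrupted zip archive.
def zip_is_corrupted(zip_path):
    """Return True if the archive cannot be opened or fails its CRC check."""
    try:
        with zipfile.ZipFile(zip_path) as zf:
            # testzip() returns the name of the first bad member, or None.
            return zf.testzip() is not None
    except zipfile.BadZipFile:
        return True
```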
This part requires `opencv`: `conda install -c menpo opencv`

To draw the bounding boxes in the frames for the complete dataset:

`python generate_BB_frames.py --output_dir <trackingnet_dir>`

- `--output_dir`: path where to generate the images with bounding boxes
- `--chunk`: select the chunks to process (testing set: `--chunk Test` / training set: `--chunk Train` / selected chunks: `--chunk 0,2,4,11`)
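What the script does per frame can be sketched without OpenCV. Below is a numpy-only illustration (the devkit itself uses OpenCV's `cv2.rectangle`); the box format `[x, y, width, height]` in pixels is an assumption:

```python
import numpy as np

# Sketch: draw a rectangle outline in-place on an HxWx3 uint8 frame,
# with the box given as [x, y, width, height].
def draw_box(frame, box, color=(255, 0, 0), thickness=1):
    x, y, w, h = (int(v) for v in box)
    t = thickness
    frame[y:y + t, x:x + w] = color          # top edge
    frame[y + h - t:y + h, x:x + w] = color  # bottom edge
    frame[y:y + h, x:x + t] = color          # left edge
    frame[y:y + h, x + w - t:x + w] = color  # right edge
    return frame
```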
If you plan to submit results to our evaluation server, you may want to validate your results first.
The evaluation code we are using is available in `metrics.py`, which can be used as follows:

`python metrics.py --GT_zip <GT.zip> --subm_zip <subm.zip>`

A dummy example is provided:

`python metrics.py --GT_zip dummy_GT.zip --subm_zip dummy_subm.zip`
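The core of such an evaluation is comparing each submitted box against the ground truth with an overlap (IoU) measure. The sketch below is an illustration of that measure, not the exact `metrics.py` code; boxes are assumed to be `[x, y, width, height]`:

```python
# Sketch: intersection-over-union between two [x, y, width, height] boxes.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width and height of the intersection rectangle (0 if no overlap).
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

A submission identical to the ground truth scores an IoU of 1.0 on every frame.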