This repo includes the source code of the paper: "Single View Stereo Matching" (CVPR'18 Spotlight) by Yue Luo*, Jimmy Ren*, Mude Lin, Jiahao Pang, Wenxiu Sun, Hongsheng Li, Liang Lin.
Contact: Yue Luo ([email protected])
The code is tested on 64-bit Linux (Ubuntu 14.04 LTS). You should also install Matlab (we have tested R2015a). We have tested our code on a GTX TitanX with CUDA 8.0 + cuDNN v5. Please install all these prerequisites before running our code.
- Get the code.

  ```
  git clone https://github.com/lawy623/SVS.git
  cd SVS
  ```
- Build the code. Please follow the Caffe instructions to install all necessary packages and build it.

  ```
  cd caffe/
  # Modify Makefile.config according to your Caffe installation.
  # Remember to enable CUDA and CUDNN.
  make -j8
  make matcaffe
  ```
- Prepare data. We write all data and labels into `.mat` files.
  - Please go to the directory `data/` and run `get_data.sh` to download the Kitti Stereo 2015 and Kitti Raw datasets.
  - To create the `.mat` files, go to the directory `data/` and run the Matlab scripts `prepareTrain.m` and `prepareTest.m` respectively. It will take some time to prepare the data.
  - If you only want to test our models, you can simply download the Eigen test file at [GoogleDrive|BaiduPan]. Put this test `.mat` file in `data/testing/`.
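If you prefer to run the preparation scripts without the Matlab GUI, the steps above can be sketched as follows. This is only a sketch, assuming `matlab` resolves to your R2015a binary (e.g. `/usr/local/MATLAB/R2015a/bin/matlab` is on your PATH):

```shell
cd data/
sh get_data.sh    # downloads Kitti Stereo 2015 and Kitti Raw; this is large and slow
# Run the preparation scripts headlessly; writing the .mat files takes some time.
matlab -nodisplay -nosplash -r "prepareTrain; prepareTest; exit"
```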
- As described in our paper, we develop our View Synthesis Network based on the Deep3D method. Directly training the final model described in our paper from VGG16 initialization easily falls into a local optimum, so we first keep the BatchNorm layers and train the model with VGG16 initialization. Go to `training/` and run `train_viewSyn.m`. You can also run the Matlab scripts from the terminal in the directory `training/` with the commands below. By default Matlab is installed under `/usr/local/MATLAB/R2015a`; if your Matlab is installed elsewhere, please modify `train_ViewSyn.sh` before running the scripts from the terminal. Download VGG16 at [GoogleDrive|BaiduPan] and put it under `training/prototxt/viewSynthesis_BN/preModel/` before finetuning. We train this BN model for roughly 30k iterations.

  ```
  ## To run the training Matlab scripts from the terminal
  sh prototxt/viewSynthesis/train_ViewSyn.sh  # To train the view synthesis network
  ```
- We further remove the BatchNorm layers and obtain better performance. Rename the trained BN model mentioned above (in `training/prototxt/viewSynthesis_BN/caffemodel`) as `viewSyn_BN.caffemodel`, or directly download ours at [GoogleDrive|BaiduPan] and place it in that location. Change line 11 of `train_viewSyn.m` to `model = param.model(2);`, and run `train_viewSyn.m` again.
- We do not provide training code for the stereo matching network. We follow CRL and use their trained model. Relevant model settings can be found in `training/prototxt/stereo/`.
- To finetune our SVS model, please first download the pretrained models for the two sub-networks. Download the View Synthesis Network at [GoogleDrive|BaiduPan] and put it under `training/prototxt/viewSynthesis/caffemodel/`. For the Stereo Matching Network, you can download the model trained on the FlyingThings synthetic dataset at [GoogleDrive|BaiduPan] and a model further finetuned on Kitti Stereo 2015 at [GoogleDrive|BaiduPan]. Put the downloaded models under `training/prototxt/stereo/caffemodel/`.
- Go to `training/` and run `train_svs.m`. You can also run the Matlab scripts from the terminal in the directory `training/` with the commands below.

  ```
  ## To run the training Matlab scripts from the terminal
  sh prototxt/svs/train_svs.sh  # To train the SVS network
  ```
- Download the Eigen test file at [GoogleDrive|BaiduPan] and put this test `.mat` file in `data/testing/`, or follow the data preparation steps mentioned above. Download the SVS model at [GoogleDrive|BaiduPan] and put it under `training/prototxt/svs/caffemodel/`.
- Go to the directory `testing/`. Run `test_svs.m` to get the result before finetuning. Please make sure you have downloaded the trained View Synthesis Network and Stereo Matching Network. Run `test_svs_end2end.m` to get our state-of-the-art result on monocular depth estimation.
- If you want to see visual results, change line 4 of `test_svs.m` or `test_svs_end2end.m` to `visual = 1;`.
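The testing step can likewise be driven headlessly. A minimal sketch, assuming `matlab` is on your PATH and the models above are already in place:

```shell
cd testing/
# Result before finetuning (needs the View Synthesis and Stereo Matching models).
matlab -nodisplay -nosplash -r "test_svs; exit"
# End-to-end finetuned result (needs the downloaded SVS model).
matlab -nodisplay -nosplash -r "test_svs_end2end; exit"
```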
- Some of our qualitative results are shown here.
Please cite our paper if you find it useful for your work:
```
@InProceedings{Luo2018SVS,
  title     = {Single View Stereo Matching},
  author    = {Yue Luo and Jimmy Ren and Mude Lin and Jiahao Pang and Wenxiu Sun and Hongsheng Li and Liang Lin},
  booktitle = {CVPR},
  year      = {2018},
}
```