
Single View Stereo Matching

This repo includes the source code of the paper: "Single View Stereo Matching" (CVPR'18 Spotlight) by Yue Luo*, Jimmy Ren*, Mude Lin, Jiahao Pang, Wenxiu Sun, Hongsheng Li, Liang Lin.

Contact: Yue Luo ([email protected])

Prerequisites

The code is tested on 64-bit Linux (Ubuntu 14.04 LTS). You will also need MATLAB (we have tested R2015a). We have tested our code on a GTX TitanX GPU with CUDA 8.0 and cuDNN v5. Please install all of these prerequisites before running our code.

Installation

  1. Get the code.

    git clone https://github.com/lawy623/SVS.git
    cd SVS
  2. Build the code. Please follow the Caffe installation instructions to install all necessary packages and build it.

    cd caffe/
    # Modify Makefile.config according to your Caffe installation. Remember to enable CUDA and CUDNN.
    make -j8
    make matcaffe
  3. Prepare data. We write all data and labels into .mat files.

  • Please go to the directory data/ and run get_data.sh to download the KITTI Stereo 2015 and KITTI Raw datasets.
  • To create the .mat files, go to the directory data/ and run the MATLAB scripts prepareTrain.m and prepareTest.m respectively. Preparing the data will take some time.
  • If you only want to test our models, you can simply download the Eigen test file at [GoogleDrive|BaiduPan]. Put this test .mat file in /data/testing/.
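If you prefer to run the preparation scripts without opening the MATLAB desktop, the snippet below is a minimal sketch of a headless invocation. The MATLAB path matches the default install location mentioned in the Training section and is an assumption; override MATLAB_BIN if yours differs.

```shell
# Sketch: run the data-preparation scripts headlessly from data/.
# MATLAB_BIN assumes the default install location named in this README;
# override it if MATLAB lives elsewhere on your machine.
MATLAB_BIN=${MATLAB_BIN:-/usr/local/MATLAB/R2015a/bin/matlab}
PREP_TRAIN_CMD="$MATLAB_BIN -nodisplay -nosplash -r \"run('prepareTrain.m'); exit;\""
PREP_TEST_CMD="$MATLAB_BIN -nodisplay -nosplash -r \"run('prepareTest.m'); exit;\""
# Print the commands here; eval them from the data/ directory to actually run.
echo "$PREP_TRAIN_CMD"
echo "$PREP_TEST_CMD"
```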

Training

View Synthesis Network

  • As described in our paper, we develop our View Synthesis Network based on the Deep3D method. Directly training the final model described in our paper from a VGG16 initialization easily gets stuck in a local optimum. We therefore first keep the BatchNorm layers and train the model from the VGG16 initialization. Go to training/ and run train_viewSyn.m. You can also run the MATLAB scripts from the terminal in the directory training/ with the commands below. By default, MATLAB is installed under /usr/local/MATLAB/R2015a; if your MATLAB is installed elsewhere, please modify train_ViewSyn.sh before running the scripts from the terminal. Download VGG16 at [GoogleDrive|BaiduPan] and put it under training/prototxt/viewSynthesis_BN/preModel/ before finetuning. We train this BN model for roughly 30k iterations.
   ## To run the training MATLAB scripts from the terminal
   sh prototxt/viewSynthesis/train_ViewSyn.sh   # To train the view synthesis network
  • We further remove the BatchNorm layers to obtain better performance. Rename the trained BN model (in training/prototxt/viewSynthesis_BN/caffemodel) mentioned above to viewSyn_BN.caffemodel, or directly download ours at [GoogleDrive|BaiduPan] and place it in the same location. Change line 11 of train_viewSyn.m to 'model = param.model(2);' and run train_viewSyn.m again.
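The line-11 edit is a one-token substitution. The sketch below demonstrates it on a stand-in string, since only that one line of train_viewSyn.m is quoted in this README; the assumed original text 'model = param.model(1);' is an inference from the step above.

```shell
# Demonstrate the substitution for line 11 of train_viewSyn.m on a
# stand-in string (the assumed original reads 'model = param.model(1);').
LINE11='model = param.model(1);'
NEW_LINE11=$(printf '%s' "$LINE11" | sed 's/param\.model(1)/param.model(2)/')
echo "$NEW_LINE11"   # prints: model = param.model(2);
```

To apply it in place from training/, the equivalent would be `sed -i "11s/param.model(1)/param.model(2)/" train_viewSyn.m`.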

Stereo Matching Network

  • We do not provide training code for the stereo matching network. We follow CRL and use their trained model. The relevant model settings can be found in training/prototxt/stereo/.

Single View Stereo Matching - End-to-End Finetuning

  • To finetune our SVS model, please first download the pretrained models for the two sub-networks. Download the View Synthesis Network at [GoogleDrive|BaiduPan] and put it under training/prototxt/viewSynthesis/caffemodel/. For the Stereo Matching Network, you can download the model trained on the FlyingThings synthetic dataset at [GoogleDrive|BaiduPan], and a model further finetuned on KITTI Stereo 2015 at [GoogleDrive|BaiduPan]. Put the downloaded models under training/prototxt/stereo/caffemodel/.
  • Go to training/ and run train_svs.m. You can also run the MATLAB scripts from the terminal in the directory training/ with the commands below.
   ## To run the training MATLAB scripts from the terminal
   sh prototxt/svs/train_svs.sh   # To train the SVS network
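Before launching the finetune, it can save time to confirm that both pretrained sub-networks are actually in place. The sketch below only checks the two caffemodel directories named above; the exact weight file names inside them are not assumed.

```shell
# Sketch: verify the pretrained sub-network weight directories exist and
# contain at least one .caffemodel before starting the end-to-end finetune.
STATUS=""
for dir in training/prototxt/viewSynthesis/caffemodel \
           training/prototxt/stereo/caffemodel; do
  if [ -n "$(ls "$dir"/*.caffemodel 2>/dev/null)" ]; then
    STATUS="$STATUS OK:$dir"
  else
    STATUS="$STATUS MISSING:$dir"
  fi
done
echo "$STATUS"
```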

Testing

  • Download the Eigen test file at [GoogleDrive|BaiduPan] and put this test .mat file in /data/testing/, or follow the data preparation steps mentioned above. Download the SVS model at [GoogleDrive|BaiduPan] and put it under training/prototxt/svs/caffemodel/.
  • Go to the directory testing/. Run test_svs.m to get the result before finetuning; please make sure you have downloaded the trained View Synthesis Network and Stereo Matching Network. Run test_svs_end2end.m to get our state-of-the-art result on monocular depth estimation.
  • If you want to visualize some results, change line 4 of test_svs.m or test_svs_end2end.m to 'visual = 1;'.
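The visual flag can also be flipped from the shell. Since the exact contents of the test scripts are not reproduced here, the sketch below edits a four-line stand-in file rather than the real script, and it assumes line 4 reads exactly 'visual = 0;' before the change.

```shell
# Sketch: toggle the 'visual' flag on line 4 via sed, shown on a
# stand-in file (assumes that line reads 'visual = 0;' beforehand).
printf 'clear;\nclose all;\naddpath(genpath(pwd));\nvisual = 0;\n' > /tmp/test_svs_demo.m
sed -i '4s/visual = 0;/visual = 1;/' /tmp/test_svs_demo.m
grep 'visual' /tmp/test_svs_demo.m   # prints: visual = 1;
```

The same `sed -i '4s/.../...'` form would apply to test_svs.m or test_svs_end2end.m from the testing/ directory (GNU sed syntax, as on the Ubuntu setup this README targets).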

Results

  • Some of our qualitative results are shown here.

Citation

Please cite our paper if you find it useful for your work:

@InProceedings{Luo2018SVS,
    title={Single View Stereo Matching},
    author={Luo, Yue and Ren, Jimmy and Lin, Mude and Pang, Jiahao and Sun, Wenxiu and Li, Hongsheng and Lin, Liang},
    booktitle={CVPR},
    year={2018},
}

