by Yongcheng Liu, Bin Fan*, Lingfeng Wang, Jun Bai, Shiming Xiang, Chunhong Pan.
- The encoder is based on a VGG-Net variant (Chen et al., 2015), which produces finer feature maps (about 1/8 of the input size rather than 1/32).
- On the last layer of the encoder, multi-scale contexts are captured by dilated convolutions with dilation rates of 24, 18, 12 and 6 (see the sketch after this list).
- As a trade-off, only three shallow layers are chosen for refinement. Moreover, no BN layers are used in VGG ScasNet.
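To make the multi-scale context capture concrete, here is a minimal sketch using Caffe's Python NetSpec interface. It only illustrates the parallel dilated 3x3 convolutions with rates 24, 18, 12 and 6; the layer names, 512 output channels, input shape and the final concatenation are illustrative assumptions, and the actual ScasNet aggregates these contexts in its self-cascaded manner as defined in the released prototxt.

```python
# Minimal sketch (not the released prototxt): parallel dilated 3x3 convolutions
# on the encoder's last feature map, with dilation rates 24, 18, 12 and 6.
# Layer names, 512 output channels, the input shape and the Concat fusion are
# illustrative assumptions.
import caffe
from caffe import layers as L


def multiscale_context(n, bottom, num_output=512):
    """Attach four parallel dilated convolutions to `bottom`."""
    branches = []
    for rate in (24, 18, 12, 6):
        conv = L.Convolution(bottom, kernel_size=3, num_output=num_output,
                             pad=rate, dilation=rate)  # pad == dilation keeps the spatial size
        relu = L.ReLU(conv, in_place=True)
        setattr(n, 'ctx_conv_r%d' % rate, conv)
        setattr(n, 'ctx_relu_r%d' % rate, relu)
        branches.append(relu)
    # Stack the multi-scale contexts; ScasNet itself fuses them in a self-cascaded way.
    n.ctx_concat = L.Concat(*branches)
    return n.ctx_concat


n = caffe.NetSpec()
n.feat = L.Input(shape=dict(dim=[1, 512, 32, 32]))  # e.g. a 1/8-resolution feature map
multiscale_context(n, n.feat)
print(n.to_proto())  # emits the corresponding prototxt text
```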
The configuration of ResNet ScasNet is almost the same as that of VGG ScasNet, except for four aspects:
- the encoder is based on a ResNet variant (Zhao et al., 2016);
- four shallow layers are used for refinement;
- seven residual correction schemes are designed for feature fusion (see the sketch after this list);
- BN layers are used.
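What a residual correction after a feature fusion could look like is sketched below, again with NetSpec. The element-wise sum fusion and the conv-ReLU-conv correction branch are assumptions made for illustration; the seven schemes actually used in ResNet ScasNet are defined in the released prototxt and in the paper.

```python
# Hedged sketch of one residual correction applied to a feature fusion.
# The element-wise fusion and the conv-ReLU-conv correction branch are
# illustrative assumptions, not the exact ScasNet design.
import caffe
from caffe import layers as L, params as P


def fuse_with_residual_correction(n, name, feat_a, feat_b, num_output=512):
    """Fuse two feature maps and add a learned correction to the fusion."""
    fuse = L.Eltwise(feat_a, feat_b, operation=P.Eltwise.SUM)
    conv1 = L.Convolution(fuse, kernel_size=3, pad=1, num_output=num_output)
    relu1 = L.ReLU(conv1, in_place=True)
    conv2 = L.Convolution(relu1, kernel_size=3, pad=1, num_output=num_output)
    out = L.Eltwise(fuse, conv2, operation=P.Eltwise.SUM)  # fusion + learned correction
    for suffix, top in (('fuse', fuse), ('conv1', conv1), ('relu1', relu1),
                        ('conv2', conv2), ('out', out)):
        setattr(n, '%s_%s' % (name, suffix), top)
    return out
```

For example, `fuse_with_residual_correction(n, 'refine1', n.shallow_feat, n.deep_feat)` would fuse a shallow and a deep feature map (both names hypothetical) into one corrected output.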
- The encoder in VGG ScasNet is finetuned with VGG-Net_variant_caffemodel (a minimal pycaffe fine-tuning sketch is given below).
- The encoder in ResNet ScasNet is finetuned with ResNet_variant_caffemodel.
- The Caffe version used to train VGG ScasNet is the one released with DeepLab_v2.
- The Caffe version used to train ResNet ScasNet is the one released with PSPNet.
Please follow the instructions of Caffe, DeepLab_v2 and PSPNet.
The code has been tested successfully on Ubuntu 14.04 with CUDA 8.0.
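As a minimal example of fine-tuning from a released caffemodel with pycaffe (the solver and weight file names below are placeholders, not files shipped in this repository):

```python
# Minimal pycaffe fine-tuning sketch; 'solver.prototxt' and the caffemodel
# path are placeholders for the actual files of VGG/ResNet ScasNet.
import caffe

caffe.set_device(0)   # GPU id
caffe.set_mode_gpu()

solver = caffe.SGDSolver('solver.prototxt')            # training configuration
solver.net.copy_from('VGG-Net_variant.caffemodel')     # initialize the encoder weights
solver.solve()                                         # train as specified by the solver
```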
- Chen, L., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A. L., 2015. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: International Conference on Learning Representations.
- Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J., 2016. Pyramid scene parsing network. arXiv preprint arXiv:1612.01105.
If ScasNet is helpful for your research, please consider citing our paper:
@article{liu2018scasnet,
  author  = {Yongcheng Liu and Bin Fan and Lingfeng Wang and Jun Bai and Shiming Xiang and Chunhong Pan},
  title   = {Semantic Labeling in Very High Resolution Images via A Self-Cascaded Convolutional Neural Network},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  volume  = {145},
  pages   = {78--95},
  year    = {2018}
}
If you have any ideas or questions about ScasNet to share with us, please contact [email protected]