Fully Convolutional Networks for Semantic Segmentation of Very High Resolution Remotely Sensed Images
Semantic segmentation of very high resolution (VHR) remotely sensed images assigns a categorical label to each pixel, an important but unsolved problem in remote sensing. In recent years, fully convolutional networks (FCN) have become the state-of-the-art framework for semantic segmentation in computer vision. This work therefore aims to improve the semantic segmentation of VHR images by utilizing FCN. Firstly, we propose a promising framework that achieves the top result (90.6%) on the ISPRS Vaihingen benchmark. Within the framework, the proposed FCN-based network obtains a competitive result (90.1%). In addition, we develop a DSM backend that enhances the FCN result by incorporating complementary information from color images and the digital surface model (DSM). Secondly, we propose a recurrent FCN for modeling the continuous context inherent in VHR images. Experimental results demonstrate that the recurrent FCN significantly boosts the performance of FCN by incorporating local contextual information within patches and global contextual information between patches.
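The defining idea of an FCN, as referenced above, is that the fully connected classifier is replaced by convolutions, so the network scores every spatial location and a per-pixel label map is recovered by upsampling and an argmax. The following is a minimal NumPy sketch of that pipeline under simplifying assumptions (a 1x1 convolution as the classifier, nearest-neighbor instead of learned upsampling, and randomly chosen feature maps and weights); it is an illustration of the general technique, not the network proposed in this thesis.

```python
import numpy as np

def conv1x1(features, weights):
    # features: (H, W, C_in); weights: (C_in, C_out).
    # A 1x1 convolution is a per-pixel linear map: it replaces the
    # fully connected classifier so the network stays convolutional
    # and produces a score for every spatial location.
    return features @ weights  # -> (H, W, C_out)

def upsample_nearest(scores, factor):
    # Bring coarse score maps back to input resolution. Real FCNs use
    # bilinear or learned (transposed-convolution) upsampling;
    # nearest-neighbor keeps this sketch simple.
    return scores.repeat(factor, axis=0).repeat(factor, axis=1)

def segment(features, weights, factor):
    scores = conv1x1(features, weights)
    scores = upsample_nearest(scores, factor)
    return scores.argmax(axis=-1)  # one categorical label per pixel

rng = np.random.default_rng(0)
features = rng.standard_normal((8, 8, 16))  # coarse CNN feature map (assumed)
weights = rng.standard_normal((16, 6))      # 6 classes, e.g. the ISPRS label set
labels = segment(features, weights, factor=4)
print(labels.shape)  # (32, 32): a dense label map at 4x the feature resolution
```

In a full FCN the features would come from a pretrained convolutional backbone, and incorporating the DSM would amount to adding height as an extra input channel or a parallel stream fused with the color features.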
Sun, W. (2018). Fully Convolutional Networks for Semantic Segmentation of Very High Resolution Remotely Sensed Images (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/31837