Image-Based Spatial Change Detection Using Deep Learning

Date

2023-12-08

Authors

Bousias-Alexakis, Evangelos

Journal Title

Journal ISSN

Volume Title

Publisher

Abstract

Image change detection is an invaluable tool for monitoring and understanding the built and physical environments and for supporting decision-making. Many recent approaches to automatic change detection are based on Deep Learning (DL) techniques, particularly variations of Convolutional Neural Network (CNN) architectures. CNNs have achieved notable success thanks to their great representational capacity, straightforward training, and state-of-the-art performance in visual tasks. Nevertheless, CNNs, like most DL approaches, still face limitations: their reliance on extensive labelled datasets, the localization accuracy and level of detail of their predictions, and the notable performance degradation that occurs when the target data have different characteristics from the training data.

This research contributes to the development and evaluation of novel DL methods and algorithms for automated image-based spatial change detection. It investigates novel architectures for enhanced model performance, as well as ways to address the limited availability of labelled data and to improve model generalizability. Two main approaches are investigated: (i) a direct approach, in which both image epochs are fed into a DL model trained to predict the change map directly, and (ii) a post-classification approach, in which change detection is performed on the basis of each epoch’s semantic segmentation predictions. In both cases, novel, enhanced CNN architectures have been proposed that leverage the semantic information of objects’ boundaries to improve the accuracy of the model’s predictions. Furthermore, training frameworks inspired by self-ensembling and the Mean Teacher method were developed for semi-supervised learning and domain adaptation, reducing the models’ reliance on large, labelled training datasets and improving their generalization performance. We evaluated the proposed approaches on multiple datasets using the precision, recall, F1 score, and Intersection over Union (IoU) metrics. The results indicate that both boundary-enhanced approaches yield consistent, albeit marginal, improvements of between 1 and 2%. Notably, the proposed semi-supervised training framework for direct change detection approximately matches the performance of the fully supervised approach while using only 20% of the available training labels. In domain adaptation, our post-classification approach significantly outperforms typical supervised methods, with the most notable gains in recall (>22%) and IoU (12.6%). These findings highlight the effectiveness of the proposed techniques.
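
To make the semi-supervised framework described above more concrete, the sketch below shows how a Mean Teacher-style training step for direct change detection could be structured in PyTorch. It is a minimal illustration under stated assumptions (a change network that accepts two co-registered image epochs, a binary cross-entropy supervised loss, a mean-squared-error consistency loss, and placeholder hyperparameters), not the implementation used in the dissertation.

```python
# Minimal, illustrative sketch of a Mean Teacher-style semi-supervised
# training step for direct change detection (two co-registered image epochs
# in, binary change map out). All names and hyperparameters are assumptions
# made for this example, not the dissertation's actual implementation.
import torch
import torch.nn.functional as F


def ema_update(teacher, student, alpha=0.99):
    """Keep the teacher's weights as an exponential moving average of the student's."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)


def train_step(student, teacher, optimizer,
               labelled_batch, unlabelled_batch, consistency_weight=1.0):
    img_a, img_b, change_mask = labelled_batch      # two epochs + ground-truth change map
    u_img_a, u_img_b = unlabelled_batch             # two epochs, no labels

    # Supervised loss on the (small) labelled subset.
    logits = student(img_a, img_b)
    sup_loss = F.binary_cross_entropy_with_logits(logits, change_mask.float())

    # Consistency loss: the student's predictions on unlabelled pairs should
    # agree with those of the slower-moving EMA teacher.
    with torch.no_grad():
        teacher_prob = torch.sigmoid(teacher(u_img_a, u_img_b))
    student_prob = torch.sigmoid(student(u_img_a, u_img_b))
    cons_loss = F.mse_loss(student_prob, teacher_prob)

    loss = sup_loss + consistency_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In such a setup, the teacher would typically be initialised as a copy of the student with gradients disabled and never updated by backpropagation; only a small labelled subset (such as the 20% of labels mentioned above) drives the supervised term, while the consistency term exploits the remaining unlabelled image pairs.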

Description

Keywords

Remote sensing

Citation