Image-Based Spatial Change Detection Using Deep Learning

dc.contributor.advisor Armenakis, Costas
dc.contributor.author Bousias-Alexakis, Evangelos
dc.date.accessioned 2023-12-08T14:52:13Z
dc.date.available 2023-12-08T14:52:13Z
dc.date.issued 2023-12-08
dc.date.updated 2023-12-08T14:52:12Z
dc.degree.discipline Earth & Space Science
dc.degree.level Doctoral
dc.degree.name PhD - Doctor of Philosophy
dc.description.abstract Image change detection is an invaluable tool in monitoring and understanding the built and physical environments and supporting decision-making. Many recent research approaches for automatic change detection have been based on Deep Learning (DL) techniques and especially on variations of Convolutional Neural Network (CNN) architectures. CNNs have achieved notable success thanks to their great representational capacity, straightforward training, and state-of-the-art performance in visual tasks. Nevertheless, CNNs, like most DL approaches, still face limitations relating to their reliance on extensive labelled datasets, the localization accuracy and detail of the predicted outputs, and the notable model performance degradation when the target data have different characteristics from the training data. This research contributes to the development and evaluation of novel DL methods and algorithms for automated image-based spatial change detection. It investigates novel architectures for enhanced model performance and ways to address the limited availability of labelled data and improve model generalizability. Two main approaches are investigated: (i) a direct approach, where both image instances are fed into a DL model that is trained to directly predict the change map, and (ii) a post-classification approach, where change detection is performed based on each epoch’s semantic segmentation predictions. In both cases, novel, enhanced CNN architectures have been proposed that leverage the semantic information of objects’ boundaries to improve the accuracy of the model’s predictions. Furthermore, training frameworks inspired by self-ensembling and the Mean Teacher method were developed for semi-supervised learning and domain adaptation, attenuating the models’ reliance on large labelled training datasets and improving their generalization performance. We evaluated the proposed approaches on multiple datasets using the precision, recall, F1 score, and Intersection over Union (IoU) metrics. The results indicate that both boundary-enhanced approaches lead to consistent, albeit marginal, benefits of between 1% and 2%. Notably, the proposed semi-supervised training framework for direct change detection approximately matches the performance of the fully supervised approach while using only 20% of the available training labels. In domain adaptation, our post-classification approach significantly outperforms typical supervised methods, with the most notable gains in recall rate (>22%) and IoU (12.6%). These findings highlight the effectiveness of our techniques. (An illustrative sketch of a Mean Teacher training step is given after the metadata fields below.)
dc.identifier.uri https://hdl.handle.net/10315/41799
dc.language en
dc.rights Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject Remote sensing
dc.subject.keywords Change detection
dc.subject.keywords Deep learning
dc.subject.keywords CNN
dc.subject.keywords Semi-supervised learning
dc.subject.keywords Mean Teacher
dc.subject.keywords Domain adaptation
dc.subject.keywords Semantic segmentation
dc.subject.keywords Edge detection
dc.subject.keywords AI
dc.title Image-Based Spatial Change Detection Using Deep Learning
dc.type Electronic Thesis or Dissertation
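
Illustrative sketch: the abstract describes training frameworks inspired by self-ensembling and the Mean Teacher method for the direct change-detection approach. The Python/PyTorch snippet below sketches what a single training step of that kind could look like under common assumptions; it is not taken from the thesis, and all names (student, teacher, img_t1, img_t2, change_gt, u_img_t1, u_img_t2, lam) as well as the EMA decay and loss weighting are hypothetical placeholders.

import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track an exponential moving average of the student weights.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def mean_teacher_step(student, teacher, optimizer, labelled_batch, unlabelled_batch, lam=1.0):
    # labelled_batch: co-registered image pair (epoch 1, epoch 2) plus a binary change map.
    # unlabelled_batch: image pair without labels, used only for the consistency term.
    img_t1, img_t2, change_gt = labelled_batch
    u_img_t1, u_img_t2 = unlabelled_batch

    # Direct approach: both epochs are stacked along the channel axis and the
    # network predicts the change map in a single pass.
    sup_logits = student(torch.cat([img_t1, img_t2], dim=1))
    sup_loss = F.binary_cross_entropy_with_logits(sup_logits, change_gt)

    # Consistency term (self-ensembling): the student is pushed to agree with
    # the slowly updated EMA teacher on the unlabelled pair.
    with torch.no_grad():
        teacher_probs = torch.sigmoid(teacher(torch.cat([u_img_t1, u_img_t2], dim=1)))
    student_probs = torch.sigmoid(student(torch.cat([u_img_t1, u_img_t2], dim=1)))
    cons_loss = F.mse_loss(student_probs, teacher_probs)

    loss = sup_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()

In typical Mean Teacher setups the teacher starts as a copy of the student with gradients disabled, and the consistency weight is ramped up over the first training epochs.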

Files

Original bundle
Name: BousiasAlexakis_Evangelos_2023_PhD.pdf
Size: 11.69 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.87 KB
Format: Plain Text

Name: YorkU_ETDlicense.txt
Size: 3.39 KB
Format: Plain Text