Journal article
MLMT-CNN for object detection and segmentation in multi-layer and multi-spectral images
Machine Vision and Applications, Volume: 33, Issue: 1
Swansea University Authors: Majedaldein Almahasneh, Xianghua Xie
PDF | Version of Record (1.86 MB)
© The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License.
DOI (Published version): 10.1007/s00138-021-01261-y
Published in: Machine Vision and Applications (Springer Science and Business Media LLC)
Precisely localising solar Active Regions (AR) from multi-spectral images is a challenging but important task in understanding solar activity and its influence on space weather. A main challenge comes from each modality capturing a different location of the 3D objects, as opposed to typical multi-spectral imaging scenarios where all image bands observe the same scene. Thus, we refer to this special multi-spectral scenario as multi-layer. We present a multi-task deep learning framework that exploits the dependencies between image bands to produce 3D AR localisation (segmentation and detection), where different image bands (and physical locations) have their own set of results. Furthermore, to address the difficulty of producing dense AR annotations for training supervised machine learning (ML) algorithms, we adapt a training strategy based on weak labels (i.e. bounding boxes) in a recursive manner. We compare our detection and segmentation stages against baseline approaches for solar image analysis (multi-channel coronal hole detection, SPOCA for ARs) and state-of-the-art deep learning methods (Faster RCNN, U-Net). Additionally, both detection and segmentation stages are quantitatively validated on artificially created data of similar spatial configurations made from annotated multi-modal magnetic resonance images. On the artificial dataset, our framework achieves an average of 0.72 IoU (segmentation) and 0.90 F1 score (detection) across all modalities, compared to scores of 0.53 and 0.58, respectively, for the best-performing baseline methods; on the AR detection task, it achieves a 0.84 F1 score compared to a baseline of 0.82. Our segmentation results are qualitatively validated by an expert on real ARs.
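The evaluation metrics quoted in the abstract (IoU for segmentation, F1 for detection) can be illustrated with a minimal sketch. The function names, toy masks, and counts below are our own illustrative assumptions, not code or data from the paper:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Detection F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Toy example: two 4x4 binary masks whose 2x2 foreground squares
# overlap in exactly one pixel (intersection 1, union 7).
pred = np.zeros((4, 4), dtype=bool); pred[0:2, 0:2] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
print(round(iou(pred, gt), 3))        # 1/7 -> 0.143
print(round(f1_score(9, 1, 1), 3))    # precision = recall = 0.9 -> 0.9
```

In the paper's multi-layer setting these metrics would be computed per image band and then averaged across modalities, which is how the reported averages should be read.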
Image segmentation; object detection; deep learning; weakly supervised learning; multi-spectral images; solar image analysis; solar active regions
Faculty of Science and Engineering