Serverless Architecture for Data Processing and Detecting Anomalies with the Mars Express MARSIS Instrument
David Pacios, José Luis Vazquez-Poletti, Beatriz Sánchez-Cano, Rafael Moreno-Vozmediano, Nikolaos Schetakis, Luis Vazquez, and Dmitrij V. Titov
Published 2023 June 16 • © 2023. The Author(s). Published by the American Astronomical Society.
The Astronomical Journal, Volume 166, Number 1
Citation David Pacios et al 2023 AJ 166 19
DOI 10.3847/1538-3881/acd18d
Abstract:
The Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) on board Mars Express has been sampling the topside ionosphere of Mars since mid-2005. The analysis of the main reflection (nadir) of the ionosphere through the ionograms provided by the MARSIS instrument is typically performed manually due to the high noise level at the lower frequencies. This task, which involves pattern recognition, is unfeasible for the >2 million ionograms available at the European Planetary Science Archive. In the present contribution, we propose a modular architecture based on serverless computing (a cloud-based paradigm) for optimal processing of these ionograms. In particular, we apply serverless computing to detect oblique echoes in the ionosphere, which are nonnadir reflections produced when MARSIS is sounding regions above or near crustal magnetic fields, where the ionosphere loses its spherical symmetry. Oblique echoes are typically observed at frequencies similar to the nadir reflections but at different time delays, sometimes even overlaying the nadir reflection. Oblique echoes are difficult to analyze with the standard technique due to their nonconstant and highly variable appearance, but they harbor essential information on the state of the ionosphere over magnetized regions. In this work we compare the proposed serverless architecture with two local alternatives while processing a representative data subset, and finally provide a cost and performance analysis.
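The sketch below illustrates the serverless pattern the abstract describes: one stateless function invoked per ionogram, with many invocations running in parallel. The handler layout, event format, and the simple threshold-based echo test are placeholders for illustration only; they are not the authors' detection algorithm.

```python
# Minimal sketch of a serverless (FaaS-style) ionogram worker, assuming a
# Python runtime such as AWS Lambda. Function names, the event layout, and
# the threshold-based echo test are illustrative placeholders, not the
# detection logic used in the paper.
import json
import numpy as np


def detect_echo_traces(ionogram: np.ndarray, power_threshold: float = 0.6):
    """Return (frequency_bin, delay_bin) pairs whose echo power exceeds a
    simple threshold. Rows = time delay, columns = sounding frequency."""
    span = ionogram.max() - ionogram.min() + 1e-12
    norm = (ionogram - ionogram.min()) / span
    delays, freqs = np.nonzero(norm > power_threshold)
    return list(zip(freqs.tolist(), delays.tolist()))


def handler(event, context=None):
    """Entry point invoked once per ionogram; the serverless model scales by
    running many such invocations concurrently."""
    ionogram = np.asarray(event["ionogram"], dtype=float)
    hits = detect_echo_traces(ionogram)
    return {
        "ionogram_id": event.get("ionogram_id", "unknown"),
        "n_candidate_echoes": len(hits),
        "candidates": hits[:20],  # truncate the response payload
    }


if __name__ == "__main__":
    # Synthetic 2-D ionogram: background noise plus one bright trace.
    rng = np.random.default_rng(0)
    demo = rng.random((80, 160)) * 0.3
    demo[40, 60:90] = 1.0  # fake reflection trace
    result = handler({"ionogram_id": "demo", "ionogram": demo.tolist()})
    print(json.dumps(result, indent=2))
```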
Exploring Deep Learning Models on GPR Data: A Comparative Study of AlexNet and VGG on a Dataset from Archaeological Sites
Merope Manataki, Nikos Papadopoulos, Nikolaos Schetakis, and Alessio Di Iorio
Remote Sens. 2023, 15(12), 3193;
Received: 10 May 2023 / Revised: 8 June 2023 / Accepted: 16 June 2023 / Published: 20 June 2023
(This article belongs to the Special Issue Application of Remote Sensing in Cultural Heritage Research II)
DOI: https://doi.org/10.3390/rs15123193
Abstract:
This comparative study evaluates the performance of three popular deep learning architectures, AlexNet, VGG-16, and VGG-19, on a custom-made dataset of GPR C-scans collected from several archaeological sites. The introduced dataset has 15,000 training images and 3750 test images assigned to three classes: Anomaly, Noise, and Structure. The aim is to assess the performance of the selected architectures applied to the custom dataset and examine the potential gains of using deeper and more complex architectures. Further, this study aims to improve the training dataset using augmentation techniques. For the comparisons, learning curves, confusion matrices, and precision, recall, and F1-score metrics are employed. The Grad-CAM technique is also used to gain insights into the models' learning. The results suggest that using more convolutional layers improves overall performance. Furthermore, augmentation techniques can be used to increase the dataset volume without causing overfitting. In more detail, the best-performing model was trained using the VGG-19 architecture and the modified dataset, in which the number of training samples was raised to 60,000 images through augmentation techniques. This model reached a classification accuracy of 94.12% on an evaluation set of 170 unseen images.
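The following Keras sketch shows the general shape of a VGG-19 classifier with on-the-fly augmentation for the three classes named in the abstract. The input size, augmentation choices, use of ImageNet weights, and hyperparameters are assumptions for illustration; they are not the authors' exact training configuration.

```python
# Illustrative VGG-19 + augmentation pipeline; hyperparameters and the use of
# pretrained ImageNet weights are assumptions, not the study's exact setup.
import tensorflow as tf

NUM_CLASSES = 3  # Anomaly, Noise, Structure


def build_vgg19_classifier(input_shape=(224, 224, 3)):
    # Simple augmentation block applied only at training time.
    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
    ], name="augmentation")

    base = tf.keras.applications.VGG19(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # start by training only the new classification head

    inputs = tf.keras.Input(shape=input_shape)
    x = augment(inputs)
    x = tf.keras.applications.vgg19.preprocess_input(x)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    build_vgg19_classifier().summary()
```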
EYE-Sense: empowering remote sensing with machine learning for socio-economic analysis
Konstantinos Stavrakakis, David Pacios, Napoleon Papoutsakis, Nikolaos Schetakis, Paolo Bonfini, Thomas Papakosmas, Betty Charalampopoulou, José Luis Vázquez-Poletti, Alessio Di Iorio
Proceedings Volume 12786, Ninth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2023); 127860D (2023)
DOI: https://doi.org/10.1117/12.2681739
Abstract:
EYE-Sense is a Web-GIS platform which allows for easy access to valuable socio-economic insights from Earth Observation (EO) data by offering a code-less approach. The platform enables users to access various EO parameters, such as atmospheric and water quality indexes and night-light activity. Moreover, the platform's pre-trained computer vision models (Faster-RCNN, Mask-RCNN, and YOLO) empower users to detect objects such as airplanes, ships, containers, and beach umbrellas to address specific user-based tasks. To provide cost-efficiency, scalability, flexibility, and easy maintenance, EYE-Sense adopts a serverless architecture, leading to up to a 50.4% processing cost reduction when compared to traditional server-based solutions. By bridging the gap between data gathering and processing, EYE-Sense extends the reach of Earth Observation data to a broader audience.
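As a rough illustration of the detection pattern behind such a platform, the sketch below runs torchvision's off-the-shelf Faster R-CNN with generic COCO weights on a single image. EYE-Sense's own models are tuned for EO object classes (airplanes, ships, containers, beach umbrellas) and served behind its serverless backend; neither that training nor that deployment is reproduced here.

```python
# Minimal inference sketch using torchvision's pretrained Faster R-CNN with
# COCO weights; EYE-Sense's EO-specific models are not reproduced here.
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor


def detect_objects(image_path: str, score_threshold: float = 0.5):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]

    keep = prediction["scores"] >= score_threshold
    return {
        "boxes": prediction["boxes"][keep].tolist(),    # [x_min, y_min, x_max, y_max]
        "labels": prediction["labels"][keep].tolist(),  # COCO class indices
        "scores": prediction["scores"][keep].tolist(),
    }


if __name__ == "__main__":
    # "scene.png" is a placeholder path; any RGB image will do.
    print(detect_objects("scene.png"))
```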