Techniques for deep learning with satellite & aerial imagery



Introduction

Deep learning has transformed the way satellite and aerial images are analyzed and interpreted. These images pose unique challenges, such as their large size and diverse object classes, which present opportunities for deep learning researchers. This repository offers a comprehensive overview of various deep learning techniques for analyzing satellite and aerial imagery, including architectures, models, and algorithms for tasks such as classification, segmentation, and object detection. It serves as a valuable resource for researchers, practitioners, and anyone interested in the latest advances in deep learning and its impact on computer vision and remote sensing.

How to use this repository: if you know exactly what you are looking for (e.g. you have the paper name) you can Control+F to search for it on this page. Note that the BEGINNER tag identifies material suitable for beginners getting started with a topic

Techniques

  1. Classification
  2. Segmentation
  3. Instance segmentation
  4. Object detection
  5. Object counting
  6. Regression
  7. Cloud detection & removal
  8. Change detection
  9. Time series
  10. Crop classification
  11. Crop yield
  12. Wealth and economic activity
  13. Disaster response
  14. Super-resolution
  15. Pansharpening
  16. Image-to-image translation
  17. Data fusion
  18. Generative Adversarial Networks (GANs)
  19. Autoencoders, dimensionality reduction, image embeddings & similarity search
  20. Image retrieval
  21. Image Captioning
  22. Visual Question Answering
  23. Mixed data learning
  24. Few-shot learning
  25. Self-supervised, unsupervised & contrastive learning
  26. Weakly & semi-supervised learning
  27. Active learning
  28. Image registration
  29. Terrain mapping, Disparity Estimation, Lidar, DEMs & NeRF
  30. Thermal Infrared
  31. SAR
  32. General image quality
  33. Synthetic data

1. Classification


The UC Merced dataset is a well-known classification dataset.

Classification is a fundamental task in remote sensing data analysis, where the goal is to assign a semantic label to each image, such as 'urban', 'forest', 'agricultural land', etc. The process of assigning labels to an image is known as image-level classification. However, in some cases, a single image might contain multiple different land cover types, such as a forest with a river running through it, or a city with both residential and commercial areas. In these cases, image-level classification becomes more complex and involves assigning multiple labels to a single image. This can be accomplished using a combination of feature extraction and machine learning algorithms to accurately identify the different land cover types.

It is important to note that image-level classification should not be confused with pixel-level classification, also known as semantic segmentation. While image-level classification assigns a single label to an entire image, semantic segmentation assigns a label to each individual pixel in an image, resulting in a highly detailed and accurate representation of the land cover types in an image. Read A brief introduction to satellite image classification with neural networks
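To make the multi-label case concrete, below is a minimal PyTorch sketch of multi-label image-level classification: an ImageNet-pretrained ResNet is given one logit per class and trained with binary cross-entropy, so a single chip can carry several labels at once. The 17-class setup mirrors the multi-label Merced dataset mentioned below; the tensors are placeholders standing in for a real dataloader, so treat this as a sketch rather than a reference implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 17  # assumption: mirrors the 17-class multi-label Merced dataset

# Start from an ImageNet-pretrained backbone and replace the single-label
# head with one independent logit per class.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# BCEWithLogitsLoss treats every class as its own binary problem, so a chip
# can be simultaneously 'forest' and 'river' (unlike softmax classification).
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: 8 RGB chips with multi-hot label vectors.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, NUM_CLASSES)).float()

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# At inference, threshold per-class probabilities rather than taking an argmax.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(images))
predicted_labels = probs > 0.5  # boolean multi-hot predictions
```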

1.1. Land classification on Sentinel 2 data using a simple sklearn cluster algorithm or deep learning CNN BEGINNER

1.2. Land Use Classification on Merced dataset using CNN in Keras or fastai. Also checkout Multi-label Land Cover Classification using the redesigned multi-label Merced dataset with 17 land cover classes BEGINNER

1.3. Multi-Label Classification of Satellite Photos of the Amazon Rainforest using keras or FastAI BEGINNER

1.4. EuroSat-Satellite-CNN-and-ResNet -> Classifying custom image datasets by creating Convolutional Neural Networks and Residual Networks from scratch with PyTorch BEGINNER

1.5. Detecting Informal Settlements from Satellite Imagery using fine-tuning of ResNet-50 classifier with repo

1.6. Land-Cover-Classification-using-Sentinel-2-Dataset -> well written Medium article accompanying this repo but using the EuroSAT dataset

1.7. Land Cover Classification of Satellite Imagery using Convolutional Neural Networks using Keras and a multi spectral dataset captured over vineyard fields of Salinas Valley, California

1.8. Detecting deforestation from satellite images -> using FastAI and ResNet50, with repo fsdl_deforestation_detection

1.9. Neural Network for Satellite Data Classification Using Tensorflow in Python -> A step-by-step guide for Landsat 5 multispectral data classification for binary built-up/non-built-up class prediction, with repo

1.10. Slums mapping from pretrained CNN network on VHR (Pleiades: 0.5m) and MR (Sentinel: 10m) imagery

1.11. Comparing urban environments using satellite imagery and convolutional neural networks -> includes interesting study of the image embedding features extracted for each image on the Urban Atlas dataset. Accompanying paper

1.12. RSI-CB -> A Large Scale Remote Sensing Image Classification Benchmark via Crowdsource Data. See also Remote-sensing-image-classification

1.13. NAIP_PoolDetection -> modelled as an object recognition problem, a CNN is used to identify images as being swimming pools or something else - specifically a street, rooftop, or lawn

1.14. Land Use and Land Cover Classification using a ResNet Deep Learning Architecture -> uses fastai and the EuroSAT dataset

1.15. Vision Transformers Use Case: Satellite Image Classification without CNNs

1.16. WaterNet -> a CNN that identifies water in satellite images

1.17. Road-Network-Classification -> Road network classification model using ResNet-34, road classes: organic, gridiron, radial and no pattern

1.18. Scaling AI to map every school on the planet

1.19. Landsat classification CNN tutorial with repo

1.20. satellite-crosswalk-classification

1.21. Understanding the Amazon Rainforest with Multi-Label Classification + VGG-19, Inceptionv3, AlexNet & Transfer Learning

1.22. Implementation of the 3D-CNN model for land cover classification -> uses the Sundarbans dataset, with repo. Also read Land cover classification of Sundarbans satellite imagery using K-Nearest Neighbor(K-NNC), Support Vector Machine (SVM), and Gradient Boosting classification algorithms which is by the same author and shares the repo

1.23. SSTN -> PyTorch Implementation of SSTNs for hyperspectral image classifications from the IEEE T-GRS paper "Spectral-Spatial Transformer Network for Hyperspectral Image Classification: A FAS Framework." Demonstrates a novel spectral-spatial transformer network (SSTN), which consists of spatial attention and spectral association modules, to overcome the constraints of convolution kernels

1.24. SatellitePollutionCNN -> A novel algorithm to predict air pollution levels with state-of-art accuracy using deep learning and GoogleMaps satellite images

1.25. PropertyClassification -> Classifying the type of property given Real Estate, satellite and Street view Images

1.26. remote-sense-quickstart -> classification on a number of datasets, including with attention visualization

1.27. Satellite image classification using multiple machine learning algorithms

1.28. satsense -> a Python library for land use/cover classification using classical features including HoG & NDVI

1.29. PyTorch_UCMerced_LandUse -> simple pytorch implementation, fine-tuning a ResNet with basic augmentations

1.30. EuroSAT-image-classification -> simple pytorch implementation, fine-tuning a ResNet

1.31. landcover_classification -> using fast.ai on EuroSAT

1.32. IGARSS2020_BWMS -> Band-Wise Multi-Scale CNN Architecture for Remote Sensing Image Scene Classification with a novel CNN architecture for the feature embedding of high-dimensional RS images

1.33. image.classification.on.EuroSAT -> solution in pure pytorch

1.34. hurricane_damage -> Post-hurricane structure damage assessment based on aerial imagery with CNN

1.35. openai-drivendata-challenge -> Using deep learning to classify the building material of rooftops (aerial imagery from South America)

1.36. is-it-abandoned -> Can we tell if a house is abandoned based on aerial LIDAR imagery?

1.37. BoulderAreaDetector -> CNN to classify whether a satellite image shows an area would be a good rock climbing spot or not

1.38. ISPRS_S2FL -> code for paper: Multimodal Remote Sensing Benchmark Datasets for Land Cover Classification with A Shared and Specific Feature Learning Model. S2FL is capable of decomposing multimodal RS data into modality-shared and modality-specific components, enabling more effective blending of multi-modal information

1.39. Brazilian-Coffee-Detection -> uses Keras with public dataset

1.40. tf-crash-severity -> predict the crash severity for given road features contained within satellite images

1.41. ensemble_LCLU -> code for 2021 paper: Deep neural network ensembles for remote sensing land cover and land use classification

1.42. cerraNet -> contextually classify the types of use and coverage in the Brazilian Cerrado

1.43. Urban-Analysis-Using-Satellite-Imagery -> classify urban area as planned or unplanned using a combination of segmentation and classification

1.44. ChipClassification -> code for 2019 paper: Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery

1.45. DeeplearningClassficationLandsat-tImages -> Water/Ice/Land Classification Using Large-Scale Medium Resolution Landsat Satellite Images

1.46. wildfire-detection-from-satellite-images-ml -> detect whether an image contains a wildfire, with example flask web app

1.47. mining-discovery-with-deep-learning -> code for the 2020 paper: Mining and Tailings Dam Detection in Satellite Imagery Using Deep Learning

1.48. e-Farmerce-platform -> classify crop type

1.49. sentinel2-deep-learning -> Novel Training Methodologies for Land Classification of Sentinel-2 Imagery

1.50. RSSC-transfer -> code for 2021 paper: The Role of Pre-Training in High-Resolution Remote Sensing Scene Classification

1.51. Classifying Geo-Referenced Photos and Satellite Images for Supporting Terrain Classification -> detect floods

1.52. Pay-More-Attention -> code for 2021 paper: Remote Sensing Image Scene Classification Based on an Enhanced Attention Module

1.53. Remote-Sensing-Image-Classification-via-Improved-Cross-Entropy-Loss-and-Transfer-Learning-Strategy -> code for 2019 paper: Remote Sensing Image Classification via Improved Cross-Entropy Loss and Transfer Learning Strategy Based on Deep Convolutional Neural Networks

1.54. DenseNet40-for-HRRSISC -> DenseNet40 for remote sensing image scene classification, uses UC Merced Dataset

1.55. SKAL -> code for 2022 paper: Looking Closer at the Scene: Multiscale Representation Learning for Remote Sensing Image Scene Classification

1.56. potsdam-tensorflow-practice -> image classification of Potsdam dataset using tensorflow

1.57. SAFF -> code for 2021 paper: Self-Attention-Based Deep Feature Fusion for Remote Sensing Scene Classification

1.58. GLNET -> code for 2021 paper: Convolutional Neural Networks Based Remote Sensing Scene Classification under Clear and Cloudy Environments

1.59. Remote-sensing-image-classification -> transfer learning using pytorch to classify remote sensing data into three classes: aircrafts, ships, none

1.60. remote_sensing_pretrained_models -> as an alternative to fine tuning models pretrained on ImageNet, here some CNNs are pretrained on the RSD46-WHU & AID datasets

1.61. CNN_AircraftDetection -> CNN for aircraft detection in satellite images using keras

1.62. OBIC-GCN -> code for 2021 paper: Object-based Classification Framework of Remote Sensing Images with Graph Convolutional Networks

1.63. aitlas-arena -> An open-source benchmark framework for evaluating state-of-the-art deep learning approaches for image classification in Earth Observation (EO)

1.64. droughtwatch -> code for 2020 paper: Satellite-based Prediction of Forage Conditions for Livestock in Northern Kenya

1.65. JSTARS_2020_DPN-HRA -> code for 2020 paper: Deep Prototypical Networks With Hybrid Residual Attention for Hyperspectral Image Classification

1.66. SIGNA -> code for 2022 paper: Semantic Interleaving Global Channel Attention for Multilabel Remote Sensing Image Classification

1.67. Satellite Image Classification using rmldnn and Sentinel 2 data

1.68. PBDL -> code for 2022 paper: Patch-Based Discriminative Learning for Remote Sensing Scene Classification

1.69. EmergencyNet -> identify fire and other emergencies from a drone

1.70. satellite-deforestation -> Using Satellite Imagery to Identify the Leading Indicators of Deforestation, applied to the Kaggle Challenge Understanding the Amazon from Space

1.71. RSMLC -> code for 2023 paper: Deep Network Architectures as Feature Extractors for Multi-Label Classification of Remote Sensing Images

1.72. FireRisk -> A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning

1.73. flood_susceptibility_mapping -> Towards urban flood susceptibility mapping using data-driven models in Berlin, Germany

1.74. tick-tick-bloom -> Winners of the Tick Tick Bloom: Harmful Algal Bloom Detection Challenge. Task was to predict severity of algae bloom, winners used decision trees

2.1. Segmentation


(left) a satellite image and (right) the semantic classes in the image. Image source

Image segmentation is a crucial step in image analysis and computer vision, with the goal of dividing an image into semantically meaningful segments or regions. Image segmentation assigns a class label to each pixel, effectively transforming an image from a 2D grid of pixels into a 2D grid of class labels. One common application of image segmentation is road or building segmentation, where the goal is to identify and separate roads and buildings from other features within an image. To accomplish this task, single class models are often trained to differentiate between roads and background, or buildings and background. These models are designed to recognize specific features, such as color, texture, and shape, that are characteristic of roads or buildings, and use this information to assign class labels to the pixels in an image.

Another common application of image segmentation is land use or crop type classification, where the goal is to identify and map different land cover types within an image. In this case, multi-class models are typically used to recognize and differentiate between multiple classes within an image, such as forests, urban areas, and agricultural land. These models are capable of recognizing complex relationships between different land cover types, allowing for a more comprehensive understanding of the image content. Read A brief introduction to satellite image segmentation with neural networks. Note that many articles referring to 'hyperspectral land classification' are actually describing semantic segmentation.
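As a minimal illustration of pixel-level classification, the sketch below defines a tiny fully convolutional encoder-decoder in PyTorch that outputs one class logit per pixel and trains with per-pixel cross-entropy. The six classes and random tensors are illustrative placeholders, not tied to any dataset listed here; real projects would typically use a U-Net or one of the architectures below.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6  # e.g. building, road, water, forest, agriculture, background

class TinyFCN(nn.Module):
    """A toy fully convolutional net: downsample once, upsample back."""
    def __init__(self, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # downsample 2x
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2),  # upsample back to input size
            nn.ReLU(),
            nn.Conv2d(32, NUM_CLASSES, 1),            # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))          # (B, C, H, W) logits

model = TinyFCN(NUM_CLASSES)
criterion = nn.CrossEntropyLoss()                     # per-pixel cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: 4 RGB chips with one integer class label per pixel.
images = torch.randn(4, 3, 128, 128)
masks = torch.randint(0, NUM_CLASSES, (4, 128, 128))

logits = model(images)                                # (4, 6, 128, 128)
loss = criterion(logits, masks)                       # compared pixel by pixel
loss.backward()
optimizer.step()

pred = logits.argmax(dim=1)                           # (4, 128, 128) label map
```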

2.1.1. awesome-satellite-images-segmentation

2.1.2. Satellite Image Segmentation: a Workflow with U-Net is a decent intro article BEGINNER

2.1.3. mmsegmentation -> Semantic Segmentation Toolbox with support for many remote sensing datasets including LoveDA, Potsdam, Vaihingen & iSAID

2.1.4. segmentation_gym -> A neural gym for training deep learning models to carry out geoscientific image segmentation

2.1.5. How to create a DataBlock for Multispectral Satellite Image Semantic Segmentation using Fastai

2.1.6. Using a U-Net for image segmentation, blending predicted patches smoothly is a must to please the human eye -> python code to blend predicted patches smoothly. See Satellite-Image-Segmentation-with-Smooth-Blending, and the sketch at the end of this subsection

2.1.7. DCA -> code for 2022 paper: Deep Covariance Alignment for Domain Adaptive Remote Sensing Image Segmentation

2.1.8. SCAttNet -> Semantic Segmentation Network with Spatial and Channel Attention Mechanism

2.1.9. unetseg -> A set of classes and CLI tools for training a semantic segmentation model based on the U-Net architecture, using Tensorflow and Keras. This implementation is tuned specifically for satellite imagery and other geospatial raster data

2.1.10. Semantic Segmentation of Satellite Imagery using U-Net & fast.ai -> with repo

2.1.11. clusternet_segmentation -> Unsupervised Segmentation by applying K-Means clustering to the features generated by Neural Network

2.1.12. Collection of different Unet Variant -> demonstrates VggUnet, ResUnet, DenseUnet, Unet, AttUnet, MobileNetUnet, NestedUNet, R2AttUNet, R2UNet, SEUnet, scSEUnet, Unet_Xception_ResNetBlock, in keras

2.1.13. Efficient-Transformer -> code for 2021 paper: Efficient Transformer for Remote Sensing Image Segmentation

2.1.14. weakly_supervised -> code for the 2020 paper: Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery

2.1.15. HRCNet-High-Resolution-Context-Extraction-Network -> code to 2021 paper: High-Resolution Context Extraction Network for Semantic Segmentation of Remote Sensing Images

2.1.16. Semantic segmentation of SAR images using a self supervised technique

2.1.17. satellite-segmentation-pytorch -> explores a wide variety of image augmentations to increase training dataset size

2.1.18. IEEE_TGRS_SpectralFormer -> code for 2021 paper: Spectralformer: Rethinking hyperspectral image classification with transformers

2.1.19. Unsupervised Segmentation of Hyperspectral Remote Sensing Images with Superpixels -> code for 2022 paper

2.1.20. Semantic-Segmentation-with-Sparse-Labels -> codes and data for learning from sparse annotations

2.1.21. SNDF -> code for 2020 paper: Superpixel-enhanced deep neural forest for remote sensing image semantic segmentation

2.1.22. Satellite-Image-Classification -> using random forest or support vector machines (SVM) and sklearn

2.1.23. dynamic-rs-segmentation -> code for 2019 paper: Dynamic Multi-Context Segmentation of Remote Sensing Images based on Convolutional Networks

2.1.24. Remote-sensing-image-semantic-segmentation-tf2 -> remote sensing image semantic segmentation repository based on tf.keras; includes backbone networks such as resnet, densenet and mobilenet, and segmentation networks such as deeplabv3+, pspnet, panet and refinenet

2.1.25. segmentation_models.pytorch -> Segmentation models with pretrained backbones, has been used in multiple winning solutions to remote sensing competitions

2.1.26. SSRN -> code for 2017 paper: Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework

2.1.27. SO-DNN -> code for 2021 paper: Simplified object-based deep neural network for very high resolution remote sensing image classification

2.1.28. SANet -> code for 2019 paper: Scale-Aware Network for Semantic Segmentation of High-Resolution Aerial Images

2.1.29. aerial-segmentation -> code for 2017 paper: Learning Aerial Image Segmentation from Online Maps

2.1.30. IterativeSegmentation -> code for 2016 paper: Recurrent Neural Networks to Correct Satellite Image Classification Maps

2.1.31. Detectron2 FPN + PointRend Model for amazing Satellite Image Segmentation -> 15% increase in accuracy when compared to the U-Net model

2.1.32. HybridSN -> code for 2019 paper: HybridSN: Exploring 3D-2D CNN Feature Hierarchy for Hyperspectral Image Classification. Also a pytorch implementation here

2.1.33. TNNLS_2022_X-GPN -> code for 2022 paper: Semisupervised Cross-scale Graph Prototypical Network for Hyperspectral Image Classification

2.1.34. singleSceneSemSegTgrs2022 -> code for 2022 paper: Unsupervised Single-Scene Semantic Segmentation for Earth Observation

2.1.35. A-Fast-and-Compact-3-D-CNN-for-HSIC -> code for 2020 paper: A Fast and Compact 3-D CNN for Hyperspectral Image Classification

2.1.36. HSNRS -> code for 2017 paper: Hourglass-ShapeNetwork Based Semantic Segmentation for High Resolution Aerial Imagery

2.1.37. GiGCN -> code for 2022 paper: Graph-in-Graph Convolutional Network for Hyperspectral Image Classification

2.1.38. SSAN -> code for 2019 paper: Spectral-Spatial Attention Networks for Hyperspectral Image Classification

2.1.39. drone-images-semantic-segmentation -> Multiclass Semantic Segmentation of Aerial Drone Images Using Deep Learning

2.1.40. Satellite-Image-Segmentation-with-Smooth-Blending -> uses Smoothly-Blend-Image-Patches

2.1.41. BayesianUNet -> Pytorch Bayesian UNet model for segmentation and uncertainty prediction, applied to the Potsdam Dataset

2.1.42. RAANet -> code for 2022 paper: RAANet: A Residual ASPP with Attention Framework for Semantic Segmentation of High-Resolution Remote Sensing Images

2.1.43. wheelRuts_semanticSegmentation -> code for 2022 paper: Mapping wheel-ruts from timber harvesting operations using deep learning techniques in drone imagery

2.1.44. LWN-for-UAVRSI -> Light-Weight Semantic Segmentation Network for UAV Remote Sensing Images, applied to Vaihingen, UAVid and UDD6 datasets

2.1.45. hypernet -> library which implements: accurate hyperspectral image (HSI) segmentation and analysis using deep neural networks; optimization of deep neural network architectures for hyperspectral data segmentation; hyperspectral data augmentation; validation of existent and emerging HSI segmentation algorithms; simulation of multispectral data using HSI

2.1.46. ST-UNet -> code for 2022 paper: Swin Transformer Embedding UNet for Remote Sensing Image Semantic Segmentation

2.1.47. EDFT -> code for 2022 paper: Efficient Depth Fusion Transformer for Aerial Image Semantic Segmentation

2.1.48. WiCoNet -> code for 2022 paper: Looking Outside the Window: Wide-Context Transformer for the Semantic Segmentation of High-Resolution Remote Sensing Images

2.1.49. CRGNet -> code for 2022 paper: Consistency-Regularized Region-Growing Network for Semantic Segmentation of Urban Scenes with Point-Level Annotations

2.1.50. SA-UNet -> code for 2022 paper: Improved U-Net Remote Sensing Classification Algorithm Fusing Attention and Multiscale Features

2.1.51. MANet -> code for 2020 paper: Multi-Attention-Network for Semantic Segmentation of Fine Resolution Remote Sensing Images

2.1.52. BANet -> code for 2021 paper: Transformer Meets Convolution: A Bilateral Awareness Network for Semantic Segmentation of Very Fine Resolution Urban Scene Images

2.1.53. MACU-Net -> code for 2022 paper: MACU-Net for Semantic Segmentation of Fine-Resolution Remotely Sensed Images

2.1.54. DNAS -> code for 2022 paper: DNAS: Decoupling Neural Architecture Search for High-Resolution Remote Sensing Image Semantic Segmentation

2.1.55. A2-FPN -> code for 2021 paper: A2-FPN for Semantic Segmentation of Fine-Resolution Remotely Sensed Images

2.1.56. MAResU-Net -> code for 2020 paper: Multi-stage Attention ResU-Net for Semantic Segmentation of Fine-Resolution Remote Sensing Images

2.1.57. ml_segmentation -> semantic segmentation of buildings using Random Forest, Support Vector Machine (SVM) & Gradient Boosting Classifier (GBC)

2.1.58. RSEN -> code for 2021 paper: Robust Self-Ensembling Network for Hyperspectral Image Classification

2.1.59. MSNet -> code for 2022 paper: MSNet: multispectral semantic segmentation network for remote sensing images

2.1.60. k-textures -> code (R) for 2022 paper: K-textures, a self-supervised hard clustering deep learning algorithm for satellite image segmentation

2.1.61. Swin-Transformer-Semantic-Segmentation -> code for 2021 paper: Satellite Image Semantic Segmentation

2.1.62. UDA_for_RS -> code for 2022 paper: Unsupervised Domain Adaptation for Remote Sensing Semantic Segmentation with Transformer

2.1.63. A-3D-CNN-AM-DSC-model-for-hyperspectral-image-classification -> code for 2022 paper: Attention Mechanism and Depthwise Separable Convolution Aided 3DCNN for Hyperspectral Remote Sensing Image Classification

2.1.64. contrastive-distillation -> code for paper: A Contrastive Distillation Approach for Incremental Semantic Segmentation in Aerial Images

2.1.65. SegForestNet -> code for 2023 paper: SegForestNet: Spatial-Partitioning-Based Aerial Image Segmentation

2.1.66. MFVNet -> code for 2023 paper: MFVNet: Deep Adaptive Fusion Network with Multiple Field-of-Views for Remote Sensing Image Semantic Segmentation

2.1.67. Wildebeest-UNet -> detecting wildebeest and zebras in Serengeti-Mara ecosystem from very-high-resolution satellite imagery

2.1.68. segment-anything-eo -> Earth observation tools for Meta AI Segment Anything (SAM - Segment Anything Model)

2.1.69. HR-Image-classification_SDF2N -> code for 2023 paper: A Shallow-to-Deep Feature Fusion Network for VHR Remote Sensing Image Classification
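Smooth patch blending, referenced in 2.1.6 above, is sketched below: a large image is predicted in overlapping tiles and the per-tile outputs are accumulated under a 2D triangular weight window, so seams between tiles are down-weighted. This is a hedged sketch, not the linked repo's code: predict_tile is a stand-in for any trained model, and the tile size, stride and window shape are illustrative choices.

```python
import numpy as np

TILE, STRIDE = 256, 128  # 50% overlap between neighbouring tiles

def predict_tile(tile):
    # Placeholder for model inference; returns per-pixel class probabilities.
    return np.ones((TILE, TILE, 3)) / 3.0

def blend_window(size):
    # Triangular ramp peaking at the tile centre; outer product gives a 2D window.
    ramp = 1.0 - np.abs(np.linspace(-1, 1, size))
    return np.outer(ramp, ramp)[..., None]  # shape (size, size, 1)

def predict_blended(image, n_classes=3):
    # Assumes image dimensions are multiples of STRIDE, for brevity.
    h, w = image.shape[:2]
    acc = np.zeros((h, w, n_classes))
    weight = np.zeros((h, w, 1))
    win = blend_window(TILE)
    for y in range(0, h - TILE + 1, STRIDE):
        for x in range(0, w - TILE + 1, STRIDE):
            pred = predict_tile(image[y:y + TILE, x:x + TILE])
            acc[y:y + TILE, x:x + TILE] += pred * win
            weight[y:y + TILE, x:x + TILE] += win
    # Outermost pixels get zero weight from the triangular window and stay zero
    # in this sketch; padding the image before tiling avoids that in practice.
    return acc / np.maximum(weight, 1e-8)

result = predict_blended(np.zeros((1024, 1024, 3)))  # (1024, 1024, 3) probabilities
```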

2.2. Segmentation - Land use & land cover

2.2.1. U-Net for Semantic Segmentation on Unbalanced Aerial Imagery -> using the Dubai dataset BEGINNER

2.2.2. Semantic Segmentation of Dubai dataset Using a TensorFlow U-Net Model BEGINNER

2.2.3. nga-deep-learning -> performs semantic segmentation on high resolution GeoTIFF data using a modified U-Net & Keras, published by NASA researchers

2.2.4. Automatic Detection of Landfill Using Deep Learning

2.2.5. SpectralNET -> a 2D wavelet CNN for Hyperspectral Image Classification, uses Salinas Scene dataset & Keras

2.2.6. laika -> The goal of this repo is to research potential sources of satellite image data and to implement various algorithms for satellite image segmentation

2.2.7. PEARL -> a human-in-the-loop AI tool to drastically reduce the time required to produce an accurate Land Use/Land Cover (LULC) map, blog post, uses Microsoft Planetary Computer and ML models run locally in the browser. Code for backend and frontend

2.2.8. Land Cover Classification with U-Net -> Satellite Image Multi-Class Semantic Segmentation Task with PyTorch Implementation of U-Net, uses DeepGlobe Land Cover Segmentation dataset, with code

2.2.9. Multi-class semantic segmentation of satellite images using U-Net using DSTL dataset, tensorflow 1 & python 2.7. Accompanying article

2.2.10. Codebase for multi class land cover classification with U-Net accompanying a masters thesis, uses Keras

2.2.11. dubai-satellite-imagery-segmentation -> due to the small dataset, image augmentation was used

2.2.12. CDL-Segmentation -> code for the 2021 paper: Deep Learning Based Land Cover and Crop Type Classification: A Comparative Study. Compares UNet, SegNet & DeepLabv3+

2.2.13. LoveDA -> code for the 2021 paper: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation

2.2.14. Satellite Imagery Semantic Segmentation with CNN -> 7 different segmentation classes, DeepGlobe Land Cover Classification Challenge dataset, with repo

2.2.15. Aerial Semantic Segmentation using U-Net Deep Learning Model medium article, with repo

2.2.16. UNet-Satellite-Image-Segmentation -> A Tensorflow implementation of a light UNet semantic segmentation framework

2.2.17. DeepGlobe Land Cover Classification Challenge solution

2.2.18. Semantic-segmentation-with-PyTorch-Satellite-Imagery -> predict 25 classes on RGB imagery taken to assess the damage after Hurricane Harvey

2.2.19. Semantic Segmentation With Sentinel-2 Imagery -> uses LandCoverNet dataset and fast.ai

2.2.20. CNN_Enhanced_GCN -> code for 2021 paper: CNN-Enhanced Graph Convolutional Network With Pixel- and Superpixel-Level Feature Fusion for Hyperspectral Image Classification

2.2.21. LULCMapping-WV3images-CORINE-DLMethods -> Land Use and Land Cover Mapping Using Deep Learning Based Segmentation Approaches and VHR Worldview-3 Images

2.2.22. SOLC -> code for 2022 paper: MCANet: A joint semantic segmentation framework of optical and SAR images for land use classification. Uses WHU-OPT-SAR-dataset

2.2.23. MUnet-LUC -> Land Use with mUnet

2.2.24. land-cover -> code for 2021 paper: Model Generalization in Deep Learning Applications for Land Cover Mapping

2.2.25. generalizablersc -> code for 2022 paper: Cross-dataset Learning for Generalizable Land Use Scene Classification

2.2.26. Large-scale-Automatic-Identification-of-Urban-Vacant-Land -> code for 2022 paper: Large-scale automatic identification of urban vacant land using semantic segmentation of high-resolution remote sensing images

2.2.27. SSLTransformerRS -> code for 2022 paper: Self-supervised Vision Transformers for Land-cover Segmentation and Classification

2.2.28. aerial-tile-segmentation -> Large satellite image semantic segmentation into 6 classes using Tensorflow 2.0 and ISPRS benchmark dataset

2.2.29. LULCMapping-WV3images-CORINE-DLMethods -> code for 2022 paper: Land Use and Land Cover Mapping Using Deep Learning Based Segmentation Approaches and VHR Worldview-3 Images

2.2.30. DCSA-Net -> code for 2022 paper: Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images

2.2.31. CHeGCN-CNN_enhanced_Heterogeneous_Graph -> code for 2022 paper: CNN-Enhanced Heterogeneous Graph Convolutional Network: Inferring Land Use from Land Cover with a Case Study of Park Segmentation

2.2.32. TCSVT_2022_DGSSC -> code for the 2022 paper: DGSSC: A Deep Generative Spectral-Spatial Classifier for Imbalanced Hyperspectral Imagery

2.2.33. DeepForest-Wetland-Paper -> code for 2021 paper: Deep Forest classifier for wetland mapping using the combination of Sentinel-1 and Sentinel-2 data, GIScience & Remote Sensing

2.2.34. Wetland_UNet -> UNet models that can delineate wetlands using remote sensing inputs including Sentinel-2 bands, LiDAR and geomorphons. By the Conservation Innovation Center of Chesapeake Conservancy and Defenders of Wildlife

2.2.35. DeepGlobe2018 -> PyTorch U-net for multi-class semantic segmentation

2.3. Segmentation - Vegetation, crops & crop boundaries

2.3.1. Crop field boundary detection: approaches and main challenges -> Medium article, covering historical and modern approaches BEGINNER

2.3.2. kenya-crop-mask -> Annual and in-season crop mapping in Kenya - LSTM classifier to classify pixels as containing crop or not, and a multi-spectral forecaster that provides a 12 month time series given a partial input. Dataset downloaded from GEE and pytorch lightning used for training BEGINNER

2.3.3. What’s growing there? Identify crops from multi-spectral remote sensing data (Sentinel 2) using eo-learn for data pre-processing, cloud detection, NDVI calculation, image augmentation & fastai

2.3.4. Tree species classification from airborne LiDAR and hyperspectral data using 3D convolutional neural networks -> accompanies a research paper and uses fastai

2.3.5. crop-type-classification -> using Sentinel 1 & 2 data with a U-Net + LSTM, more features (i.e. bands) and higher resolution produced better results (article, no code)

2.3.6. Find sports fields using Mask R-CNN and overlay on open-street-map

2.3.7. An LSTM to generate a crop mask for Togo

2.3.8. DeepSatModels -> Code for paper "Context-self contrastive pretraining for crop type semantic segmentation"

2.3.9. farm-pin-crop-detection-challenge -> Using eo-learn and fastai to identify crops from multi-spectral remote sensing data

2.3.10. Detecting Agricultural Croplands from Sentinel-2 Satellite Imagery -> We developed UNet-Agri, a benchmark machine learning model that classifies croplands using open-access Sentinel-2 imagery at 10m spatial resolution

2.3.11. DeepTreeAttention -> Implementation of Hang et al. 2020 "Hyperspectral Image Classification with Attention Aided CNNs" for tree species prediction

2.3.12. Crop-Classification -> crop classification using multi temporal satellite images

2.3.13. ParcelDelineation -> using a French polygons dataset and unet in keras

2.3.14. crop-mask -> End-to-end workflow for generating high resolution cropland maps, uses GEE & LSTM model

2.3.15. DeepCropMapping -> A multi-temporal deep learning approach with improved spatial generalizability for dynamic corn and soybean mapping, uses LSTM

2.3.16. Segment Canopy Cover and Soil using NDVI and Rasterio

2.3.17. Use KMeans clustering to segment satellite imagery by land cover/land use

2.3.18. ResUnet-a -> Implementation of the paper "ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data" in TensorFlow

2.3.19. DSD_paper_2020 -> The code for the paper: Crop Type Classification based on Machine Learning with Multitemporal Sentinel-1 Data

2.3.20. MR-DNN -> extract rice field from Landsat 8 satellite imagery

2.3.21. deep_learning_forest_monitoring -> Estimate vegetation height, code for paper: Forest mapping and monitoring of the African continent using Sentinel-2 data and deep learning

2.3.22. global-cropland-mapping -> global multi-temporal cropland mapping

2.3.23. U-Net for Semantic Segmentation of Soyabean Crop Fields with SAR images

2.3.24. UNet-RemoteSensing -> uses 7 bands of Landsat and keras

2.3.25. Landuse_DL -> delineate landforms due to the thawing of ice-rich permafrost

2.3.26. canopy -> code for 2019 paper: A Convolutional Neural Network Classifier Identifies Tree Species in Mixed-Conifer Forest from Hyperspectral Imagery

2.3.27. RandomForest-Classification -> script for random forest classification of remote sensing multi-band images, used in the 2019 paper: Multisensor data to derive peatland vegetation communities using a fixed-wing unmanned aerial vehicle

2.3.28. forest_change_detection -> forest change segmentation with time-dependent models, including Siamese, UNet-LSTM, UNet-diff, UNet3D models. Code for 2021 paper: Deep Learning for Regular Change Detection in Ukrainian Forest Ecosystem With Sentinel-2

2.3.29. cultionet -> segmentation of cultivated land, built on PyTorch Geometric and PyTorch Lightning

2.3.30. sentinel-tree-cover -> code for 2020 paper: A global method to identify trees outside of closed-canopy forests with medium-resolution satellite imagery

2.3.31. crop-type-detection-ICLR-2020 -> Winning Solutions from Crop Type Detection Competition at CV4A workshop, ICLR 2020

2.3.32. Crop identification using satellite imagery -> Medium article, introduction to crop identification

2.3.33. S4A-Models -> Various experiments on the Sen4AgriNet dataset

2.3.34. attention-mechanism-unet -> code for 2022 paper: An attention-based U-Net for detecting deforestation within satellite sensor imagery

2.3.35. Cocoa_plantations_detection -> Detecting cocoa plantation in Ivory Coast using Sentinel-2 remote sensing data using KNN, SVM, Random Forest and MLP

2.3.36. SummerCrop_Deeplearning -> code for 2022 paper: A Transferable Learning Classification Model and Carbon Sequestration Estimation of Crops in Farmland Ecosystem

2.3.37. DeepForest is a python package for training and predicting individual tree crowns from airborne RGB imagery

2.3.38. Official repository for the "Identifying trees on satellite images" challenge from Omdena

2.3.39. Counting-Trees-using-Satellite-Images -> create an inventory of incoming and outgoing trees for annual tree inspections, uses keras & semantic segmentation

2.3.40. 2020 Nature paper - An unexpectedly large count of trees in the West African Sahara and Sahel -> tree detection framework based on U-Net & tensorflow 2 with code here

2.3.41. TreeDetection -> A color-based classifier to detect the trees in google image data along with tree visual localization and crown size calculations via OpenCV

2.3.42. PTDM -> code for 2022 paper: Pomelo Tree Detection Method Based on Attention Mechanism and Cross-Layer Feature Fusion

2.3.43. urban-tree-detection -> code for 2022 paper: Individual Tree Detection in Large-Scale Urban Environments using High-Resolution Multispectral Imagery. With dataset

2.3.44. BioMassters_baseline -> a basic pytorch lightning baseline using a UNet for getting started with the BioMassters challenge (biomass estimation)

2.3.45. Biomassters winners -> top 3 solutions

2.3.46. kbrodt biomassters solution -> 1st place solution

2.3.47. quqixun biomassters solution

2.3.48. biomass-estimation -> from Azavea, applied to Sentinel 1 & 2

2.3.49. 3DUNetGSFormer -> code for 2022 paper: 3DUNetGSFormer: A deep learning pipeline for complex wetland mapping using generative adversarial networks and Swin transformer

2.3.50. SEANet_torch -> code for 2023 paper: Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images

2.3.51. arborizer -> Tree crowns segmentation and classification

2.3.52. ReUse -> UNet to estimate carbon absorbed by forests, using Biomass & Sentinel-2 imagery. Code for paper: ReUse: REgressive Unet for Carbon Storage and Above-Ground Biomass Estimation

2.3.53. unet-sentinel -> UNet to handle Sentinel-1 SAR images to identify deforestation

2.4. Segmentation - Water, coastlines & floods

2.4.1. pytorch-waterbody-segmentation -> UNET model trained on the Satellite Images of Water Bodies dataset from Kaggle. The model is deployed on Hugging Face Spaces BEGINNER

2.4.2. Flood Detection and Analysis using UNET with ResNet-34 as the backbone uses fastai BEGINNER

2.4.3. Automatic Flood Detection from Satellite Images Using Deep Learning BEGINNER

2.4.4. UNSOAT used fastai to train a Unet to perform semantic segmentation on satellite imagery to detect water - paper + notebook, accuracy 0.97, precision 0.91, recall 0.92

2.4.5. Semi-Supervised Classification and Segmentation on High Resolution Aerial Images - Solving the FloodNet problem

2.4.6. Houston_flooding -> labeling each pixel as either flooded or not using data from Hurricane Harvey. Dataset consisted of pre and post flood images, and a ground truth floodwater mask was created using unsupervised clustering (with DBScan) of image pixels with human cluster verification/adjustment

2.4.7. ml4floods -> An ecosystem of data, models and code pipelines to tackle flooding with ML

2.4.8. A comprehensive guide to getting started with the ETCI Flood Detection competition -> using Sentinel-1 SAR & pytorch

2.4.9. Map Floodwater of SAR Imagery with SageMaker -> applied to Sentinel-1 dataset

2.4.10. 1st place solution for STAC Overflow: Map Floodwater from Radar Imagery hosted by Microsoft AI for Earth -> combines Unet with Catboostclassifier, taking their maxima, not the average

2.4.11. hydra-floods -> an open source Python application for downloading, processing, and delivering surface water maps derived from remote sensing data

2.4.12. CoastSat -> tool for mapping coastlines which has an extension CoastSeg using segmentation models

2.4.13. Satellite_Flood_Segmentation_of_Harvey -> explores both deep learning and traditional kmeans

2.4.14. Flood Event Detection Utilizing Satellite Images

2.4.15. ETCI-2021-Competition-on-Flood-Detection -> Experiments on Flood Segmentation on Sentinel-1 SAR Imagery with Cyclical Pseudo Labeling and Noisy Student Training, with arxiv paper

2.4.16. FDSI -> Flood Detection in Satellite Images - 2017 Multimedia Satellite Task

2.4.17. deepwatermap -> a deep model that segments water on multispectral images

2.4.18. rivamap -> an automated river analysis and mapping engine

2.4.19. deep-water -> track changes in water level

2.4.20. WatNet -> A deep ConvNet for surface water mapping based on Sentinel-2 image, uses the Earth Surface Water Dataset

2.4.21. A-U-Net-for-Flood-Extent-Mapping -> in keras

2.4.22. floatingobjects -> code for the paper: Towards Detecting Floating Objects on a Global Scale with Learned Spatial Features using Sentinel 2. Uses U-Net & pytorch

2.4.23. SpaceNet8 -> baseline Unet solution to detect flooded roads and buildings

2.4.24. dlsim -> code for 2020 paper: Breaking the Limits of Remote Sensing by Simulation and Deep Learning for Flood and Debris Flow Mapping

2.4.25. Water-HRNet -> HRNet trained on Sentinel 2

2.4.26. semantic segmentation model to identify newly developed or flooded land using NAIP imagery provided by the Chesapeake Conservancy, training on MS Azure

2.4.27. BandNet -> code for 2022 paper: Analysis and application of multispectral data for water segmentation using machine learning. Uses Sentinel-2 data

2.4.28. mmflood -> code for 2022 paper: MMFlood: A Multimodal Dataset for Flood Delineation From Satellite Imagery (Sentinel 1 SAR)

2.4.29. Urban_flooding -> Towards transferable data-driven models to predict urban pluvial flood water depth in Berlin, Germany

2.4.30. Flood-Mapping-Using-Satellite-Images -> masters thesis comparing Random Forest & Unet

2.5. Segmentation - Fire, smoke & burn areas

2.5.1. SatelliteVu-AWS-Disaster-Response-Hackathon -> fire spread prediction using classical ML & deep learning BEGINNER

2.5.2. Wild Fire Detection using U-Net trained on Databricks & Keras, semantic segmentation

2.5.3. A Practical Method for High-Resolution Burned Area Monitoring Using Sentinel-2 and VIIRS with code. Dataset created on Google Earth Engine, downloaded to local machine for model training using fastai. The BA-Net model used is much smaller than U-Net, resulting in lower memory requirements and a faster computation

2.5.4. AI Geospatial Wildfire Risk Prediction -> A predictive model using geospatial raster data to assess wildfire hazard potential over the contiguous United States using Unet

2.5.5. IndustrialSmokePlumeDetection -> using Sentinel-2 & a modified ResNet-50

2.5.6. burned-area-detection -> uses Sentinel-2

2.5.7. rescue -> code for the paper: Attention to fires: multi-channel deep-learning models for wildfire severity prediction

2.5.8. smoke_segmentation -> Segmenting smoke plumes and predicting density from GOES imagery

2.5.9. wildfire-detection -> Using Vision Transformers for enhanced wildfire detection in satellite images

2.5.10. Burned_Area_Detection -> Detecting Burned Areas with Sentinel-2 data

2.5.11. burned-area-baseline -> baseline unet model accompanying the Satellite Burned Area Dataset (Sentinel 1 & 2)

2.6. Segmentation - Landslides

2.6.1. landslide4sense -> a competition focused on landslide detection using globally distributed multi-source satellite imagery. The baseline solution is a U-Net BEGINNER

2.6.2. landslide-mapping-with-cnn -> code for 2021 paper: A new strategy to map landslides with a generalized convolutional neural network

2.6.3. Relict_landslides_CNN_kmeans -> code for 2022 paper: Relict landslide detection in rainforest areas using a combination of k-means clustering algorithm and Deep-Learning semantic segmentation models

2.6.4. Landslide-mapping-on-SAR-data-by-Attention-U-Net -> code for 2022 paper: Rapid Mapping of landslide on SAR data by Attention U-net

2.6.5. SAR-landslide-detection-pretraining -> code for the 2022 paper: SAR-based landslide classification pretraining leads to better segmentation

2.6.6. landslide-sar-unet -> code for 2022 paper: Deep Learning for Rapid Landslide Detection using Synthetic Aperture Radar (SAR) Datacubes

2.7. Segmentation - Glaciers

2.7.1. HED-UNet -> a model for simultaneous semantic segmentation and edge detection, examples provided are glacier fronts and building footprints using the Inria Aerial Image Labeling dataset

2.7.2. glacier_mapping -> Mapping glaciers in the Hindu Kush Himalaya, Landsat 7 images, Shapefile labels of the glaciers, Unet with dropout

2.7.3. glacier-detect-ML -> a simple logistic regression model to identify a glacier in Landsat satellite imagery

2.7.4. GlacierSemanticSegmentation -> uses unet

2.7.5. Antarctic-fracture-detection -> uses UNet with the MODIS Mosaic of Antarctica to detect surface fractures (paper)

2.8. Segmentation - Other environmental

2.8.1. Detection of Open Landfills -> uses Sentinel-2 to detect large changes in the Normalized Burn Ratio (NBR)

2.8.2. sea_ice_remote_sensing -> Sea Ice Concentration classification

2.8.3. Methane-detection-from-hyperspectral-imagery -> code for 2020 paper: Deep Remote Sensing Methods for Methane Detection in Overhead Hyperspectral Imagery

2.8.4. EddyNet -> A Deep Neural Network For Pixel-Wise Classification of Oceanic Eddies

2.8.5. schisto-vegetation -> code for 2022 paper: Deep Learning Segmentation of Satellite Imagery Identifies Aquatic Vegetation Associated with Snail Intermediate Hosts of Schistosomiasis in Senegal, Africa

2.8.6. earth-forecasting-transformer -> code for 2022 paper: Earthformer: exploring space-time transformers for earth system forecasting

2.8.7. weather4cast-2022 -> Unet-3D baseline model for Weather4cast Rain Movie Prediction competition

2.8.8. WeatherFusionNet -> code for paper: WeatherFusionNet: Predicting Precipitation from Satellite Data. weather4cast-2022 1st place solution

2.9. Segmentation - Roads

Extracting roads is challenging due to occlusions caused by other objects and the complex traffic environment.

2.9.1. Road detection using semantic segmentation and albumentations for data augmentation using the Massachusetts Roads Dataset, U-net & Keras. With code BEGINNER

2.9.2. ML_EPFL_Project_2 -> U-Net in Pytorch to perform semantic segmentation of roads on satellite images BEGINNER

2.9.3. Semantic Segmentation of roads using U-net Keras, OSM data, project summary article by student, no code

2.9.4. Winning Solutions from SpaceNet Road Detection and Routing Challenge

2.9.5. RoadVecNet -> Road-Network-Segmentation-and-Vectorization in keras with dataset and paper

2.9.6. Detecting road and road types jupyter notebook

2.9.7. awesome-deep-map -> A curated list of resources dedicated to deep learning / computer vision algorithms for mapping. The mapping problems include road network inference, building footprint extraction, etc.

2.9.8. RoadTracer: Automatic Extraction of Road Networks from Aerial Images -> uses an iterative search process guided by a CNN-based decision function to derive the road network graph directly from the output of the CNN

2.9.9. road_detection_mtl -> Road Detection using a multi-task Learning technique to improve the performance of the road detection task by incorporating prior knowledge constraints, uses the SpaceNet Roads Dataset

2.9.10. road_connectivity -> Improved Road Connectivity by Joint Learning of Orientation and Segmentation (CVPR2019)

2.9.11. Road-Network-Extraction using classical Image processing -> blur & canny edge detection

2.9.12. SPIN_RoadMapper -> Extracting Roads from Aerial Images via Spatial and Interaction Space Graph Reasoning for Autonomous Driving

2.9.13. road_extraction_remote_sensing -> pytorch implementation, CVPR2018 DeepGlobe Road Extraction Challenge submission. See also DeepGlobe-Road-Extraction-Challenge

2.9.14. RoadDetections dataset by Microsoft

2.9.15. CoANet -> Connectivity Attention Network for Road Extraction From Satellite Imagery. The CoA module incorporates graphical information to ensure the connectivity of roads is better preserved. With paper

2.9.16. Satellite Imagery Road Segmentation -> intro article on Medium using the kaggle Massachusetts Roads Dataset

2.9.17. Label-Pixels -> for semantic segmentation of roads and other features

2.9.18. Satellite-image-road-extraction -> code for 2018 paper: Road Extraction by Deep Residual U-Net

2.9.19. road_building_extraction -> Pytorch implementation of U-Net architecture for road and building extraction

2.9.20. Satellite-Imagery-Road-Extraction -> research project in keras

2.9.21. SGCN -> code for 2021 paper: Split Depth-Wise Separable Graph-Convolution Network for Road Extraction in Complex Environments From High-Resolution Remote-Sensing Images

2.9.22. ASPN -> code for 2020 paper: Road Segmentation for Remote Sensing Images using Adversarial Spatial Pyramid Networks

2.9.23. FCNs-for-road-extraction-keras -> Road extraction of high-resolution remote sensing images based on various semantic segmentation networks

2.9.24. cresi -> Road network extraction from satellite imagery, with speed and travel time estimates

2.9.25. road-extraction-d-linknet -> code for 2018 paper: D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction

2.9.26. Sat2Graph -> code for 2020 paper: Road Graph Extraction through Graph-Tensor Encoding

2.9.27. Image-Segmentation -> using the Massachusetts Roads dataset and fast.ai

2.9.28. RoadTracer-M -> code for 2019 paper: Road Network Extraction from Satellite Images Using CNN Based Segmentation and Tracing

2.9.29. ScRoadExtractor -> code for 2020 paper: Scribble-based Weakly Supervised Deep Learning for Road Surface Extraction from Remote Sensing Images

2.9.30. RoadDA -> code for 2021 paper: Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images

2.9.31. DeepSegmentor -> A Pytorch implementation of DeepCrack and RoadNet projects

2.9.32. Cascade_Residual_Attention_Enhanced_for_Refinement_Road_Extraction -> code for 2021 paper: Cascaded Residual Attention Enhanced Road Extraction from Remote Sensing Images

2.9.33. nia-road-baseline -> code for 2020 paper: NL-LinkNet: Toward Lighter but More Accurate Road Extraction with Non-Local Operations

2.9.34. IRSR-net -> code for 2022 paper: Lightweight Remote Sensing Road Detection Network

2.9.35. hironex -> A python tool for automatic, fully unsupervised extraction of historical road networks from historical maps

2.9.36. Road_detection_model -> code for 2022 paper: Mapping Roads in the Brazilian Amazon with Artificial Intelligence and Sentinel-2

2.9.37. DTnet -> code for 2022 paper: Road detection via a dual-task network based on cross-layer graph fusion modules

2.9.38. Automatic-Road-Extraction-from-Historical-Maps-using-Deep-Learning-Techniques -> code for the paper: Automatic Road Extraction from Historical Maps using Deep Learning Techniques: A Regional Case Study of Turkey in a German World War II map

2.9.39. Istanbul_Dataset -> segmentation on the Istanbul, Inria and Massachusetts datasets

2.9.40. Road-Segmentation -> Road segmentation on Satellite Images using CNN (U-Nets and FCN8) and Logistic Regression

2.9.41. D-LinkNet -> 1st place solution in DeepGlobe Road Extraction Challenge

2.9.42. PaRK-Detect -> code for 2023 paper: PaRK-Detect: Towards Efficient Multi-Task Satellite Imagery Road Extraction via Patch-Wise Keypoints Detection

2.9.43. tile2net -> code for 2023 paper: Mapping the walk: A scalable computer vision approach for generating sidewalk network datasets from aerial imagery

2.10. Segmentation - Buildings & rooftops

2.10.1. Road and Building Semantic Segmentation in Satellite Imagery uses U-Net on the Massachusetts Roads Dataset & keras BEGINNER

2.10.2. find-unauthorized-constructions-using-aerial-photography -> semantic segmentation using U-Net with custom_f1 metric & Keras. The creation of the dataset is described in this article BEGINNER

2.10.3. Semantic Segmentation on Aerial Images using fastai uses U-Net on the Inria Aerial Image Labeling Dataset of urban settlements in Europe and the United States, labelled with building and not-building classes (no repo) BEGINNER

2.10.4. Building footprint detection with fastai on the challenging SpaceNet7 dataset uses U-Net & fastai BEGINNER

2.10.5. Pix2Pix-for-Semantic-Segmentation-of-Satellite-Images -> using Pix2Pix GAN network to segment the building footprint from Satellite Images, uses tensorflow

2.10.6. SpaceNetUnet -> Baseline model is U-net like, applied to SpaceNet Vegas data, using Keras

2.10.7. automated-building-detection -> Input: very-high-resolution (<= 0.5 m/pixel) RGB satellite images. Output: buildings in vector format (geojson), to be used in digital map products. Built on top of robosat and robosat.pink.

2.10.8. project_sunroof_india -> Analyzed Google Satellite images to generate a report on individual house rooftop's solar power potential, uses a range of classical computer vision techniques (e.g. Canny edge detection) to segment the roofs

2.10.9. JointNet-A-Common-Neural-Network-for-Road-and-Building-Extraction

2.10.10. Mapping Africa’s Buildings with Satellite Imagery: Google AI blog post. See the open-buildings dataset

2.10.11. nz_convnet -> A U-net based ConvNet for New Zealand imagery to classify building outlines

2.10.12. polycnn -> End-to-End Learning of Polygons for Remote Sensing Image Classification

2.10.13. spacenet_building_detection solution by motokimura using Unet

2.10.14. How to extract building footprints from satellite images using deep learning

2.10.15. Vec2Instance -> applied to the SpaceNet challenge AOI 2 (Vegas) building footprint dataset, tensorflow v1.12

2.10.16. EarthquakeDamageDetection -> Buildings segmentation from satellite imagery and damage classification for each building, using Keras

2.10.17. Semantic-segmentation repo by fuweifu-vtoo -> uses pytorch and the Massachusetts Buildings & Roads Datasets

2.10.18. Extracting buildings and roads from AWS Open Data using Amazon SageMaker -> uses merged RGB (SpaceNet) and LiDAR (USGS 3DEP) datasets with Unet to reproduce the winning algorithm from SpaceNet challenge 4 by XD_XD. With repo

2.10.19. TF-SegNet -> AirNet is a segmentation network based on SegNet, but with some modifications

2.10.20. rgb-footprint-extract -> a Semantic Segmentation Network for Urban-Scale Building Footprint Extraction Using RGB Satellite Imagery, DeepLabV3+ module with a Dilated ResNet C42 backbone

2.10.21. SpaceNetExploration -> A sample project demonstrating how to extract building footprints from satellite images using a semantic segmentation model. Data from the SpaceNet Challenge

2.10.22. Rooftop-Instance-Segmentation -> VGG-16, Instance Segmentation, uses the Airs dataset

2.10.23. solar-farms-mapping -> An Artificial Intelligence Dataset for Solar Energy Locations in India

2.10.24. poultry-cafos -> This repo contains code for detecting poultry barns from high-resolution aerial imagery and an accompanying dataset of predicted barns over the United States

2.10.25. ssai-cnn -> This is an implementation of Volodymyr Mnih's dissertation methods on his Massachusetts road & building dataset

2.10.26. Remote-sensing-building-extraction-to-3D-model-using-Paddle-and-Grasshopper

2.10.27. segmentation-enhanced-resunet -> Urban building extraction in Daejeon region using Modified Residual U-Net (Modified ResUnet) and applying post-processing

2.10.28. Mask RCNN for Spacenet Off Nadir Building Detection

2.10.29. GRSL_BFE_MA -> Deep Learning-based Building Footprint Extraction with Missing Annotations using a novel loss function

2.10.30. FER-CNN -> Detection, Classification and Boundary Regularization of Buildings in Satellite Imagery Using Faster Edge Region Convolutional Neural Networks, with paper

2.10.31. UNET-Image-Segmentation-Satellite-Picture -> Unet to predict roof tops on the CrowdAI Mapping dataset, uses keras

2.10.32. Vector-Map-Generation-from-Aerial-Imagery-using-Deep-Learning-GeoSpatial-UNET -> applied to geo-referenced images of very large size (> 10k x 10k pixels)

2.10.33. building-footprint-segmentation -> pip installable library to train building footprint segmentation on satellite and aerial imagery, applied to Massachusetts Buildings Dataset and Inria Aerial Image Labeling Dataset

2.10.34. SemSegBuildings -> Project using fast.ai framework for semantic segmentation on Inria building segmentation dataset

2.10.35. FCNN-example -> overfit to a given single image to detect houses

2.10.36. SAT2LOD2 -> an open-source, python-based GUI-enabled software that takes the satellite images as inputs and returns LoD2 building models as outputs, with paper

2.10.37. SatFootprint -> building segmentation on the Spacenet 7 dataset

2.10.38. Building-Detection -> code for running a Raster Vision experiment to train a model to detect buildings from satellite imagery in three cities in Latin America

2.10.39. Multi-building-tracker -> code for paper: Multi-target building tracker for satellite images using deep learning

2.10.40. Boundary Enhancement Semantic Segmentation for Building Extraction

2.10.41. UNet_keras_for_RSimage -> keras code for binary semantic segmentation

2.10.42. Spacenet-Building-Detection -> uses keras

2.10.43. LGPNet-BCD -> code for 2021 paper: Building Change Detection for VHR Remote Sensing Images via Local-Global Pyramid Network and Cross-Task Transfer Learning Strategy

2.10.44. MTL_homoscedastic_SRB -> code for 2021 paper: A Multi-Task Deep Learning Framework for Building Footprint Segmentation

2.10.45. UNet_CNN -> UNet model to segment building coverage in Boston using Remote sensing data, uses keras

2.10.46. FDANet -> code for 2021 paper: Full-Level Domain Adaptation for Building Extraction in Very-High-Resolution Optical Remote-Sensing Images

2.10.47. CBRNet -> code for 2022 paper: A Coarse-to-fine Boundary Refinement Network for Building Extraction from Remote Sensing Imagery

2.10.48. ASLNet -> code for 2021 paper: Adversarial Shape Learning for Building Extraction in VHR Remote Sensing Images

2.10.49. BRRNet -> implementation of Modified U-Net from 2020 paper: BRRNet: A Fully Convolutional Neural Network for Automatic Building Extraction From High-Resolution Remote Sensing Images

2.10.50. Multi-Scale-Filtering-Building-Index -> Python implementation of the building extraction index proposed in the 2019 paper: A Multi-Scale Filtering Building Index for Building Extraction in Very High-Resolution Satellite Imagery

2.10.51. Models for Remote Sensing -> long list of unets etc applied to building detection

2.10.52. boundary_loss_for_remote_sensing -> code for 2019 paper: Boundary Loss for Remote Sensing Imagery Semantic Segmentation

2.10.53. Open Cities AI Challenge -> Segmenting Buildings for Disaster Resilience. Winning solutions on Github

2.10.54. MAPNet -> code for 2020 paper: Multi Attending Path Neural Network for Building Footprint Extraction from Remote Sensed Imagery

2.10.55. dual-hrnet -> localizing buildings and classifying their damage level

2.10.56. ESFNet -> code for 2019 paper: Efficient Network for Building Extraction from High-Resolution Aerial Images

2.10.57. rooftop-detection-python -> Detect Rooftops from low resolution satellite images and calculate area for cultivation and solar panel installment using classical computer vision techniques

2.10.58. keras_segmentation_models -> code for 2022 paper: Using Open Vector-Based Spatial Data to Create Semantic Datasets for Building Segmentation for Raster Data

2.10.59. CVCMFFNet -> code for 2021 paper: Complex-Valued Convolutional and Multifeature Fusion Network for Building Semantic Segmentation of InSAR Images

2.10.60. STEB-UNet -> code for 2022 paper: A Swin Transformer-Based Encoding Booster Integrated in U-Shaped Network for Building Extraction

2.10.61. dfc2020_baseline -> Baseline solution for the IEEE GRSS Data Fusion Contest 2020. Predict land cover labels from Sentinel-1 and Sentinel-2 imagery. Code for 2020 paper: Weakly Supervised Semantic Segmentation of Satellite Images for Land Cover Mapping

2.10.62. Fusing multiple segmentation models based on different datasets into a single edge-deployable model -> roof, car & road segmentation

2.10.63. ground-truth-gan-segmentation -> use Pix2Pix to segment the footprint of a building. The dataset used is AIRS

2.10.64. UNICEF-Giga_Sudan -> Detecting school lots from satellite imagery in Southern Sudan using a UNET segmentation model

2.10.65. building_footprint_extraction -> The project retrieves satellite imagery from Google and performs building footprint extraction using a U-Net.

2.10.66. projectRegularization -> code for 2019 paper: Regularization of building boundaries in satellite images using adversarial and regularized losses

2.10.67. PolyWorldPretrainedNetwork -> code for 2021 paper: Polygonal Building Extraction with Graph Neural Networks in Satellite Images

2.10.68. dl_image_segmentation -> code for 2022 paper: Uncertainty-Aware Interpretable Deep Learning for Slum Mapping and Monitoring. Uses SHAP

2.10.69. UBC-dataset -> a dataset for building detection and classification from very high-resolution satellite imagery with the focus on object-level interpretation of individual buildings

2.10.70. GeoSeg -> code for 2022 paper: UNetFormer: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery

2.10.71. BESNet -> code for 2022 paper: BES-Net: Boundary Enhancing Semantic Context Network for High-Resolution Image Semantic Segmentation. Applied to Vaihingen and Potsdam datasets

2.10.72. CVNet -> code for 2022 paper: CVNet: Contour Vibration Network for Building Extraction

2.10.73. CFENet -> code for 2022 paper: A Context Feature Enhancement Network for Building Extraction from High-Resolution Remote Sensing Imagery

2.10.74. HiSup -> code for 2022 paper: Accurate Polygonal Mapping of Buildings in Satellite Imagery

2.10.75. BuildingExtraction -> code for 2021 paper: Building Extraction from Remote Sensing Images with Sparse Token Transformers

2.10.76. coseg_building -> code for the 2022 paper: CrossGeoNet: A Framework for Building Footprint Generation of Label-Scarce Geographical Regions

2.10.77. AFM_building -> code for 2021 paper: Building Footprint Generation Through Convolutional Neural Networks With Attraction Field Representation

2.10.78. ramp-code -> code for the RAMP (Replicable AI for MicroPlanning) project, which enables building detection in low and middle income countries

2.10.79. Building-instance-segmentation -> code for 2022 paper: Multi-Modal Feature Fusion Network with Adaptive Center Point Detector for Building Instance Extraction

2.10.80. CGSANet -> code for the 2021 paper: CGSANet: A Contour-Guided and Local Structure-Aware Encoder–Decoder Network for Accurate Building Extraction From Very High-Resolution Remote Sensing Imagery

2.10.81. building-footprints-update -> code for 2022 paper: Learning Color Distributions from Bitemporal Remote Sensing Images to Update Existing Building Footprints

2.10.82. Istanbul_Dataset -> this repo contains weights of a Unet++ model with SE-ResNeXt101 encoder trained on the Istanbul, Inria and Massachusetts datasets separately. Accompanies the paper: Comparative analysis of deep learning based building extraction methods with the new VHR Istanbul dataset

2.10.83. RAMP -> model and buildings dataset to support a wide variety of humanitarian use cases

2.10.84. Thesis_Semantic_Image_Segmentation_on_Satellite_Imagery_using_UNets -> This master's thesis performs semantic segmentation of buildings on satellite images from the SpaceNet challenge 1 dataset using the U-Net architecture

2.11. Segmentation - Solar panels

2.11.1. DeepSolar -> A Machine Learning Framework to Efficiently Construct a Solar Deployment Database in the United States. Dataset on kaggle; the model actually uses a CNN for classification, with segmentation obtained by applying a threshold to the activation map. The original code is tf1, but tf2/keras and pytorch implementations are available. Also check out Visualizations and in-depth analysis .. of the factors that can explain the adoption of solar energy in .. Virginia and DeepSolar tracker: towards unsupervised assessment with open-source data of the accuracy of deep learning-based distributed PV mapping

2.11.2. hyperion_solar_net -> trained classification & segmentation models on RGB imagery from Google Maps. Provides an app for viewing predictions, and has an arxiv paper

2.11.3. 3D-PV-Locator -> Large-scale detection of rooftop-mounted photovoltaic systems in 3D

2.11.4. PV_Pipeline -> PyTorch models and pipeline developed for "DeepSolar for Germany"

2.11.5. solar-panels-detection -> using SegNet, Fast SCNN & ResNet

2.11.6. predict_pv_yield -> Using optical flow & machine learning to predict PV yield

2.11.7. Large-scale-solar-plant-monitoring -> code for the paper "Remote Sensing for Monitoring of Photovoltaic Power Plants in Brazil Using Deep Semantic Segmentation"

2.11.8. Panel-Segmentation -> Determine the presence of a solar array in the satellite image (boolean True/False), using a VGG16 classification model

2.11.9. Roofpedia -> an open registry of green roofs and solar roofs across the globe identified by Roofpedia through deep learning

2.11.10. Predicting the Solar Potential of Rooftops using Image Segmentation and Structured Data -> Medium article, using 20cm imagery & Unet

2.11.11. solar-pv-global-inventory -> code from the Nature paper of Kruitwagen et al, used to produce a global inventory of utility-scale solar photovoltaic generating stations

2.11.12. remote-sensing-solar-pv -> A repository for sharing progress on the automated detection of solar PV arrays in sentinel-2 remote sensing imagery

2.11.13. solar-panel-segmentation -> Finding solar panels using USGS satellite imagery

2.11.14. solar_seg -> Solar segmentation of PV modules (sub elements of panels) using drone images and fast.ai

2.11.15. solar_plant_detection -> boundary extraction of Photovoltaic (PV) plants using Mask RCNN and Amir dataset

2.11.16. SolarDetection -> unet on satellite imagery from the USA and France

2.11.17. adopptrs -> Automatic Detection Of Photovoltaic Panels Through Remote Sensing using unet & pytorch

2.11.18. solar-panel-locator -> the number of solar panel pixels was only ~0.2% of the total pixels in the dataset, so solar panel data was upsampled to account for the class imbalance

2.11.19. projects-solar-panel-detection -> List of projects to detect solar panels from aerial/satellite images

2.11.20. Satellite_ComputerVision -> UNET to detect solar arrays from Sentinel-2 data, using Google Earth Engine and Tensorflow. Also covers parking lot detection

2.11.21. photovoltaic-detection -> Detecting available rooftop area from satellite images to install photovoltaic panels

2.11.22. Solar_UNet -> U-Net models delineating solar arrays in Sentinel-2 imagery

2.12. Segmentation - Other manmade

2.12.1. Aarsh2001/ML_Challenge_NRSC -> Electrical Substation detection

2.12.2. electrical_substation_detection -> using UNet, Albumentations for image augmentation, and OpenCV for computer vision tasks

2.12.3. PLGAN-for-Power-Line-Segmentation -> code for 2022 paper: PLGAN: Generative Adversarial Networks for Power-Line Segmentation in Aerial Images

2.12.4. MCAN-OilSpillDetection -> Oil Spill Detection with A Multiscale Conditional Adversarial Network under Small Data Training, with paper. A multiscale conditional adversarial network (MCAN) trained with four oil spill observation images accurately detects oil spills in new images.

2.12.5. plastics -> Detecting and Monitoring Plastic Waste Aggregations in Sentinel-2 Imagery for globalplasticwatch.org

2.12.6. mining-detector -> detection of artisanal gold mines in Sentinel-2 satellite imagery for Amazon Mining Watch. Also covers clandestine airstrips

2.12.7. EG-UNet -> code for 2023 paper: Deep Feature Enhancement Method for Land Cover With Irregular and Sparse Spatial Distribution Features: A Case Study on Open-Pit Mining

2.13. Panoptic segmentation

2.13.1. Things and stuff or how remote sensing could benefit from panoptic segmentation

2.13.2. Panoptic Segmentation Meets Remote Sensing (paper)

2.13.3. pastis-benchmark

2.13.4. Panoptic-Generator -> This module converts GIS data into panoptic segmentation tiles

2.13.5. BSB-Aerial-Dataset -> an example on how to use Detectron2's Panoptic-FPN in the BSB Aerial Dataset

2.13.6. utae-paps -> PyTorch implementation of U-TAE and PaPs for satellite image time series panoptic segmentation

Instance segmentation

In instance segmentation, each individual 'instance' of a segmented area is given a unique label. This can be a good approach for detecting very small objects, but it can struggle to separate individual objects that are closely spaced.
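
Below is a minimal inference sketch using torchvision's pretrained Mask R-CNN, an architecture referenced by several repositories in this document. The COCO weights and the 0.5 thresholds are illustrative assumptions; in practice the model would be fine-tuned on a remote sensing dataset such as SpaceNet.

```python
# Minimal instance segmentation sketch with torchvision's Mask R-CNN.
# COCO-pretrained weights are a placeholder: fine-tune on remote sensing data.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)  # stand-in for a normalised RGB satellite tile

with torch.no_grad():
    output = model([image])[0]   # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.5         # drop low-confidence instances (illustrative)
masks = output["masks"][keep] > 0.5   # (N, 1, H, W) boolean masks, one per instance
print(f"{masks.shape[0]} instances detected")
```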

Object detection


Image showing the suitability of rotated bounding boxes in remote sensing.

Object detection in remote sensing involves locating and surrounding objects of interest with bounding boxes. Due to the large size of remote sensing images and the fact that objects may only comprise a few pixels, object detection can be challenging in this context. The imbalance between the area of the objects to be detected and the background, combined with the potential for objects to be easily confused with random features in the background, further complicates the task. Object detection generally performs better on larger objects, but becomes increasingly difficult as the objects become smaller and more densely packed. The accuracy of object detection models can also degrade rapidly as image resolution decreases, which is why it is common to use high resolution imagery, such as 30cm RGB, for object detection in remote sensing. A unique characteristic of aerial images is that objects can be oriented in any direction. To effectively extract measurements of the length and width of an object, it can be crucial to use rotated bounding boxes that align with the orientation of the object. This approach enables more accurate and meaningful analysis of the objects within the image. Image source

Object detection with rotated bounding boxes

Oriented bounding boxes (OBB) are polygons representing rotated rectangles. For datasets, check out DOTA & HRSC2016
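
Since OBB parameterisations vary between datasets, a common first step is converting a parameterised box to its four corner points. Below is a small sketch assuming a (cx, cy, w, h, angle) format with the angle in radians, counter-clockwise; note that datasets such as DOTA instead store the corner coordinates directly.

```python
# Convert an oriented bounding box (cx, cy, w, h, angle) to its 4 corners.
# Angle convention (radians, counter-clockwise) is an assumption; check your dataset.
import numpy as np

def obb_to_corners(cx, cy, w, h, angle):
    c, s = np.cos(angle), np.sin(angle)
    rotation = np.array([[c, -s], [s, c]])
    # corners of the axis-aligned box, centred at the origin
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ rotation.T + np.array([cx, cy])  # (4, 2) corner coordinates

corners = obb_to_corners(cx=100, cy=50, w=40, h=10, angle=np.pi / 6)
```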

  • mmrotate -> Rotated Object Detection Benchmark, with pretrained models and function for inferencing on very large images
  • OBBDetection -> an oriented object detection library, which is based on MMdetection
  • rotate-yolov3 -> Rotation object detection implemented with yolov3. Also see yolov3-polygon
  • DRBox -> for detection tasks where the objects are orientated arbitrarily, e.g. vehicles, ships and airplanes
  • s2anet -> Official code of the paper 'Align Deep Features for Oriented Object Detection'
  • CFC-Net -> Official implementation of "CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images"
  • ReDet -> Official code of the paper "ReDet: A Rotation-equivariant Detector for Aerial Object Detection"
  • BBAVectors-Oriented-Object-Detection -> Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors
  • CSL_RetinaNet_Tensorflow -> Code for ECCV 2020 paper: Arbitrary-Oriented Object Detection with Circular Smooth Label
  • r3det-on-mmdetection -> R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object
  • R-DFPN_FPN_Tensorflow -> Rotation Dense Feature Pyramid Networks (Tensorflow)
  • R2CNN_Faster-RCNN_Tensorflow -> Rotational region detection based on Faster-RCNN
  • Rotated-RetinaNet -> implemented in pytorch, it supports the following datasets: DOTA, HRSC2016, ICDAR2013, ICDAR2015, UCAS-AOD, NWPU VHR-10, VOC2007
  • OBBDet_Swin -> The sixth place winning solution in 2021 Gaofen Challenge
  • CG-Net -> Learning Calibrated-Guidance for Object Detection in Aerial Images. With paper
  • OrientedRepPoints_DOTA -> Oriented RepPoints + Swin Transformer/ReResNet
  • yolov5_obb -> yolov5 + Oriented Object Detection
  • How to Train YOLOv5 OBB -> YOLOv5 OBB tutorial and YOLOv5 OBB notebook
  • OHDet_Tensorflow -> can be applied to rotation detection and object heading detection
  • Seodore -> framework maintaining recent updates of mmdetection
  • Rotation-RetinaNet-PyTorch -> oriented detector Rotation-RetinaNet implementation on Optical and SAR ship dataset
  • AIDet -> an open source object detection in aerial image toolbox based on MMDetection
  • rotation-yolov5 -> rotation detection based on yolov5
  • ShipDetection -> Ship Detection in HR Optical Remote Sensing Images via Rotated Bounding Box, based on Faster R-CNN and ORN, uses caffe
  • SLRDet -> project based on mmdetection to reimplement RRPN and use the model Faster R-CNN OBB
  • AxisLearning -> code for 2020 paper: Axis Learning for Orientated Objects Detection in Aerial Images
  • Detection_and_Recognition_in_Remote_Sensing_Image -> uses PaNet for detection and recognition in remote sensing images, implemented in MXNet
  • DrBox-v2-tensorflow -> tensorflow implementation of DrBox-v2 which is an improved detector with rotatable boxes for target detection in remote sensing images
  • Rotation-EfficientDet-D0 -> A PyTorch Implementation Rotation Detector based EfficientDet Detector, applied to custom rotation vehicle datasets
  • DODet -> Dual alignment for oriented object detection, uses DOTA dataset. With paper
  • GF-CSL -> code for 2022 paper: Gaussian Focal Loss: Learning Distribution Polarized Angle Prediction for Rotated Object Detection in Aerial Images
  • simplified_rbox_cnn -> code for 2018 paper: RBox-CNN: rotated bounding box based CNN for ship detection in remote sensing image. Uses Tensorflow object detection API
  • Polar-Encodings -> code for 2021 paper: Learning Polar Encodings for Arbitrary-Oriented Ship Detection in SAR Images
  • R-CenterNet -> detector for rotated-object based on CenterNet
  • piou -> Oriented Object Detection; IoU Loss, applied to DOTA dataset
  • DAFNe -> code for 2021 paper: DAFNe: A One-Stage Anchor-Free Approach for Oriented Object Detection
  • AProNet -> code for 2021 paper: AProNet: Detecting objects with precise orientation from aerial images. Applied to datasets DOTA and HRSC2016
  • UCAS-AOD-benchmark -> A benchmark of UCAS-AOD dataset
  • RotateObjectDetection -> based on Ultralytics/yolov5, with adjustments to enable rotate prediction boxes. Also see PolygonObjectDetection
  • AD-Toolbox -> Aerial Detection Toolbox based on MMDetection and MMRotate, with support for more datasets
  • GGHL -> code for 2022 paper: A General Gaussian Heatmap Label Assignment for Arbitrary-Oriented Object Detection
  • NPMMR-Det -> code for 2021 paper: A Novel Nonlocal-Aware Pyramid and Multiscale Multitask Refinement Detector for Object Detection in Remote Sensing Images
  • AOPG -> code for 2022 paper: Anchor-Free Oriented Proposal Generator for Object Detection
  • SE2-Det -> code for 2022 paper: Semantic-Edge-Supervised Single-Stage Detector for Oriented Object Detection in Remote Sensing Imagery
  • OrientedRepPoints -> code for 2021 paper: Oriented RepPoints for Aerial Object Detection
  • TS-Conv -> code for 2022 paper: Task-wise Sampling Convolutions for Arbitrary-Oriented Object Detection in Aerial Images
  • FCOSR -> A Simple Anchor-free Rotated Detector for Aerial Object Detection. This implement is modified from mmdetection. See also TensorRT_Inference
  • OBB_Detection -> Finalist's solution in the track of Oriented Object Detection in Remote Sensing Images, 2022 Guangdong-Hong Kong-Macao Greater Bay Area International Algorithm Competition
  • sam-mmrotate -> SAM (Segment Anything Model) for generating rotated bounding boxes with MMRotate, which is a comparison method of H2RBox-v2
  • mmrotate-dcfl -> code for 2023 paper: Dynamic Coarse-to-Fine Learning for Oriented Tiny Object Detection
  • h2rbox-mmrotate -> code for 2022 paper: H2RBox: Horizontal Box Annotation is All You Need for Oriented Object Detection

Object detection enhanced by super resolution

Salient object detection

Detecting the most noticeable or important object in a scene

  • ACCoNet -> code for 2022 paper: Adjacent Context Coordination Network for Salient Object Detection in Optical Remote Sensing Images
  • MCCNet -> Multi-Content Complementation Network for Salient Object Detection in Optical Remote Sensing Images
  • CorrNet -> Lightweight Salient Object Detection in Optical Remote Sensing Images via Feature Correlation. With paper
  • Reading list for deep learning based Salient Object Detection in Optical Remote Sensing Images
  • ORSSD-dataset -> salient object detection dataset
  • EORSSD-dataset -> Extended Optical Remote Sensing Saliency Detection (EORSSD) Dataset
  • DAFNet_TIP20 -> code for 2020 paper: Dense Attention Fluid Network for Salient Object Detection in Optical Remote Sensing Images
  • EMFINet -> code for 2021 paper: Edge-Aware Multiscale Feature Integration Network for Salient Object Detection in Optical Remote Sensing Images
  • ERPNet -> code for 2022 paper: Edge-guided Recurrent Positioning Network for Salient Object Detection in Optical Remote Sensing Images
  • FSMINet -> code for 2022 paper: Fully Squeezed Multi-Scale Inference Network for Fast and Accurate Saliency Detection in Optical Remote Sensing Images
  • AGNet -> code for 2022 paper: AGNet: Attention Guided Network for Salient Object Detection in Optical Remote Sensing Images
  • MSCNet -> code for 2022 paper: A lightweight multi-scale context network for salient object detection in optical remote sensing images
  • GPnet -> code for 2022 paper: Global Perception Network for Salient Object Detection in Remote Sensing Images
  • SeaNet -> code for 2023 paper: Lightweight Salient Object Detection in Optical Remote Sensing Images via Semantic Matching and Edge Alignment

Object detection - Buildings, rooftops & solar panels

Object detection - Ships & boats

Object detection - Cars, vehicles & trains

Object detection - Planes & aircraft

Object detection - Infrastructure & utilities

Object detection - Oil storage tank detection

Oil is stored in tanks at many points between extraction and sale, and the volume of oil in storage is an important economic indicator.

Object detection - Animals

A variety of techniques can be used to count animals, including object detection and instance segmentation. For convenience they are all listed here:

Object tracking in videos

Object counting

When the object count, but not its shape, is required, a U-Net can be used to treat the task as an image-to-image translation problem, for example by predicting a density map whose sum gives the count.
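
As a sketch of the density-map variant of this idea, the snippet below sums a predicted density map to obtain a count. The segmentation-models-pytorch U-Net is an assumed stand-in; any encoder-decoder with a single output channel would do.

```python
# Density-map counting: a U-Net predicts per-pixel densities, and the sum of
# the map is the object count (the approach used by density-based methods such
# as CSRNet). segmentation_models_pytorch is an assumed dependency.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet18", in_channels=3, classes=1)
model.eval()

tile = torch.rand(1, 3, 256, 256)       # stand-in for a normalised image tile
with torch.no_grad():
    density = torch.relu(model(tile))   # densities must be non-negative
count = density.sum().item()            # integral of the density map = count
```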

  • centroid-unet -> Centroid-UNet is a deep neural network model to detect centroids from satellite images, with paper BEGINNER
  • cownter_strike -> counting cows, located with point-annotations, two models: CSRNet (a density-based method) & LCFCN (a detection-based method)
  • DO-U-Net -> an effective approach for when the size of an object needs to be known, as well as the number of objects in the image, initially created to segment and count Internally Displaced People (IDP) camps in Afghanistan
  • Cassava Crop Counting
  • Counting from Sky -> A Large-scale Dataset for Remote Sensing Object Counting and A Benchmark Method
  • PSGCNet -> code for 2022 paper: PSGCNet: A Pyramidal Scale and Global Context Guided Network for Dense Object Counting in Remote Sensing Images

Regression


Regression prediction of windspeed.

Regression in remote sensing involves predicting continuous variables such as wind speed, tree height, or soil moisture from an image. Both classical machine learning and deep learning approaches can be used to accomplish this task. Classical machine learning utilizes feature engineering to extract numerical values from the input data, which are then used as input for a regression algorithm like linear regression. On the other hand, deep learning typically employs a convolutional neural network (CNN) to process the image data, followed by a fully connected neural network (FCNN) for regression. The FCNN is trained to map the input image to the desired output, providing predictions for the continuous variables of interest. Image source
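
To make the CNN-plus-FCNN pattern concrete, here is a minimal sketch that swaps the classifier of a ResNet backbone for a small regression head; the ResNet-18 choice and layer sizes are illustrative assumptions.

```python
# CNN backbone + fully connected regression head, predicting one continuous
# value (e.g. wind speed) per image. Backbone and layer sizes are illustrative.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Sequential(                 # replace the classifier head
    nn.Linear(backbone.fc.in_features, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                        # continuous output, no activation
)

images = torch.rand(8, 3, 224, 224)
predictions = backbone(images).squeeze(1)                   # shape (8,)
loss = nn.functional.mse_loss(predictions, torch.rand(8))   # typical regression loss
```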

Cloud detection & removal


(left) False colour image and (right) a cloud & shadow mask.

Clouds are a major issue in remote sensing images as they can obscure the underlying ground features. This hinders the accuracy and effectiveness of remote sensing analysis, as the obscured regions cannot be properly interpreted. In order to address this challenge, various techniques have been developed to detect clouds in remote sensing images. Both classical algorithms and deep learning approaches can be employed for cloud detection. Classical algorithms typically use threshold-based techniques and hand-crafted features to identify cloud pixels. However, these techniques can be limited in their accuracy and are sensitive to changes in image appearance and cloud structure. On the other hand, deep learning approaches leverage the power of convolutional neural networks (CNNs) to accurately detect clouds in remote sensing images. These models are trained on large datasets of remote sensing images, allowing them to learn and generalize the unique features and patterns of clouds. The generated cloud mask can be used to identify the cloud pixels and eliminate them from further analysis or, alternatively, cloud inpainting techniques can be used to fill in the gaps left by the clouds. This approach helps to improve the accuracy of remote sensing analysis and provides a clearer view of the ground, even in the presence of clouds. Image adapted from this source
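
The snippet below contrasts the two families of methods in miniature: a crude classical brightness threshold produces a cloud mask, which is then used to exclude pixels from further analysis. The 0.3 reflectance threshold is purely illustrative; operational products use far more sophisticated logic.

```python
# Crude threshold-based cloud masking, then excluding flagged pixels.
# The 0.3 threshold is an illustrative assumption, not an operational value.
import numpy as np

scene = np.random.rand(4, 256, 256)      # stand-in for (bands, H, W) reflectances

brightness = scene.mean(axis=0)          # clouds are bright across visible bands
cloud_mask = brightness > 0.3            # True where cloud is suspected

masked = np.where(cloud_mask, np.nan, scene)  # drop cloudy pixels downstream
clear_fraction = 1.0 - cloud_mask.mean()
```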

Change detection


(left) Initial and (middle) after some development, with (right) the change highlighted.

Change detection is a vital component of remote sensing analysis, enabling the monitoring of landscape changes over time. This technique can be applied to identify a wide range of changes, including land use changes, urban development, coastal erosion, and deforestation. Change detection can be performed on a pair of images taken at different times, or by analyzing multiple images collected over a period of time. It is important to note that while change detection is primarily used to detect changes in the landscape, it can also be influenced by the presence of clouds and shadows. These dynamic elements can alter the appearance of the image, leading to false positives in change detection results. Therefore, it is essential to consider the impact of clouds and shadows on change detection analysis, and to employ appropriate methods to mitigate their influence. Image source
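
Many of the repositories below use a Siamese architecture: both dates pass through a shared encoder and the feature difference is decoded into a change mask. Here is a deliberately tiny sketch of that pattern; the layer sizes are illustrative, and real models (e.g. STANet, SNUNet-CD) are much deeper.

```python
# Minimal Siamese change detection: shared encoder, absolute feature
# difference, per-pixel change logit. Layer sizes are illustrative.
import torch
import torch.nn as nn

class SiameseChange(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, before, after):
        f1, f2 = self.encoder(before), self.encoder(after)  # shared weights
        return self.head(torch.abs(f1 - f2))

model = SiameseChange()
t0, t1 = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
change_map = torch.sigmoid(model(t0, t1))   # per-pixel probability of change
```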

  • awesome-remote-sensing-change-detection lists many datasets and publications
  • Change-Detection-Review -> A review of change detection methods, including code and open data sets for deep learning
  • Change Detection using Siamese Networks -> Medium article BEGINNER
  • STANet -> official implementation of the spatial-temporal attention neural network (STANet) for remote sensing image change detection BEGINNER
  • UNet-based-Unsupervised-Change-Detection -> A convolutional neural network (CNN) and semantic segmentation is implemented to detect the changes between the images, as well as classify the changes into the correct semantic class, with arxiv paper BEGINNER
  • BIT_CD -> Official Pytorch Implementation of Remote Sensing Image Change Detection with Transformers
  • Unstructured-change-detection-using-CNN
  • Siamese neural network to detect changes in aerial images -> uses Keras and VGG16 architecture
  • Change Detection in 3D: Generating Digital Elevation Models from Dove Imagery
  • QGIS plugin for applying change detection algorithms on high resolution satellite imagery
  • LamboiseNet -> Master thesis about change detection in satellite imagery using Deep Learning
  • Fully Convolutional Siamese Networks for Change Detection -> with paper
  • Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks -> with paper, used the Onera Satellite Change Detection (OSCD) dataset
  • IAug_CDNet -> Official Pytorch Implementation of Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images
  • dpm-rnn-public -> Code implementing a damage mapping method combining satellite data with deep learning
  • SenseEarth2020-ChangeDetection -> 1st place solution to the Satellite Image Change Detection Challenge hosted by SenseTime; predictions of five HRNet-based segmentation models are ensembled, serving as pseudo labels of unchanged areas
  • KPCAMNet -> Python implementation of the paper Unsupervised Change Detection in Multi-temporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network
  • CDLab -> benchmarking deep learning-based change detection methods.
  • Siam-NestedUNet -> The pytorch implementation for "SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images"
  • SUNet-change_detection -> Implementation of paper SUNet: Change Detection for Heterogeneous Remote Sensing Images from Satellite and UAV Using a Dual-Channel Fully Convolution Network
  • Self-supervised Change Detection in Multi-view Remote Sensing Images
  • MFPNet -> Remote Sensing Change Detection Based on Multidirectional Adaptive Feature Fusion and Perceptual Similarity
  • GitHub for the DIUx xView Detection Challenge -> The xView2 Challenge focuses on automating the process of assessing building damage after a natural disaster
  • DASNet -> Dual attentive fully convolutional siamese networks for change detection of high-resolution satellite images
  • Self-Attention for Raw Optical Satellite Time Series Classification
  • planet-movement -> Find and process Planet image pairs to highlight object movement
  • temporal-cluster-matching -> detecting change in structure footprints from time series of remotely sensed imagery
  • autoRIFT -> fast and intelligent algorithm for finding the pixel displacement between two images
  • DSAMNet -> Code for “A Deeply Supervised Attention Metric-Based Network and an Open Aerial Image Dataset for Remote Sensing Change Detection”. The main types of changes in the dataset include: (a) newly built urban buildings; (b) suburban dilation; (c) groundwork before construction; (d) change of vegetation; (e) road expansion; (f) sea construction.
  • SRCDNet -> The pytorch implementation for "Super-resolution-based Change Detection Network with Stacked Attention Module for Images with Different Resolutions ". SRCDNet is designed to learn and predict change maps from bi-temporal images with different resolutions
  • Land-Cover-Analysis -> Land Cover Change Detection using Satellite Image Segmentation
  • A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images
  • Satellite-Image-Alignment-Differencing-and-Segmentation -> thesis on change detection
  • Change Detection in Multi-temporal Satellite Images -> uses Principal Component Analysis (PCA) and K-means clustering
  • Unsupervised Change Detection Algorithm using PCA and K-Means Clustering -> in Matlab but has paper
  • ChangeFormer -> A Transformer-Based Siamese Network for Change Detection. Uses transformer architecture to address the limitations of CNN in handling multi-scale long-range details. Demonstrates that ChangeFormer captures much finer details compared to the other SOTA methods, achieving better performance on benchmark datasets
  • Heterogeneous_CD -> Heterogeneous Change Detection in Remote Sensing Images. Accompanies Code-Aligned Autoencoders for Unsupervised Change Detection in Multimodal Remote Sensing Images
  • ChangeDetectionProject -> Trying out Active Learning with deep CNNs for change detection on remote sensing data
  • DSFANet -> Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images
  • siamese-change-detection -> Targeted synthesis of multi-temporal remote sensing images for change detection using siamese neural networks
  • Bi-SRNet -> code for 2022 paper: Bi-Temporal Semantic Reasoning for the Semantic Change Detection in HR Remote Sensing Images
  • SiROC -> Implementation of the paper Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images. Applied to Sentinel-2 and high-resolution Planetscope imagery on four datasets
  • DSMSCN -> Tensorflow implementation for Change Detection in Multi-temporal VHR Images Based on Deep Siamese Multi-scale Convolutional Neural Networks
  • RaVAEn -> a lightweight, unsupervised approach for change detection in satellite data based on Variational Auto-Encoders (VAEs) with the specific purpose of on-board deployment. It flags changed areas to prioritise for downlink, shortening the response time
  • SemiCD -> Code for paper: Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images. Achieves the performance of supervised CD even with access to as little as 10% of the annotated training data
  • FCCDN_pytorch -> code for paper: FCCDN: Feature Constraint Network for VHR Image Change Detection. Uses the LEVIR-CD building change detection dataset
  • INLPG_Python -> code for paper: Structure Consistency based Graph for Unsupervised Change Detection with Homogeneous and Heterogeneous Remote Sensing Images
  • NSPG_Python -> code for paper: Nonlocal patch similarity based heterogeneous remote sensing change detection
  • LGPNet-BCD -> code for 2021 paper: Building Change Detection for VHR Remote Sensing Images via Local-Global Pyramid Network and Cross-Task Transfer Learning Strategy
  • DS_UNet -> code for 2021 paper: Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net, uses Onera Satellite Change Detection dataset
  • SiameseSSL -> code for 2022 paper: Urban change detection with a Dual-Task Siamese network and semi-supervised learning. Uses SpaceNet 7 dataset
  • CD-SOTA-methods -> Remote sensing change detection: State-of-the-art methods and available datasets
  • multimodalCD_ISPRS21 -> code for 2021 paper: Fusing Multi-modal Data for Supervised Change Detection
  • Unsupervised-CD-in-SITS-using-DL-and-Graphs -> code for article: Unsupervised Change Detection Analysis in Satellite Image Time Series using Deep Learning Combined with Graph-Based Approaches
  • LSNet -> code for 2022 paper: Extremely Light-Weight Siamese Network For Change Detection in Remote Sensing Image
  • Change-Detection-in-Remote-Sensing-Images -> using PCA & K-means
  • End-to-end-CD-for-VHR-satellite-image -> code for 2019 paper: End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++
  • Semantic-Change-Detection -> code for 2021 paper: SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery
  • ERCNN-DRS_urban_change_monitoring -> code for 2021 paper: Neural Network-Based Urban Change Monitoring with Deep-Temporal Multispectral and SAR Remote Sensing Data
  • EGRCNN -> code for 2021 paper: Edge-guided Recurrent Convolutional Neural Network for Multi-temporal Remote Sensing Image Building Change Detection
  • Unsupervised-Remote-Sensing-Change-Detection -> code for 2021 paper: An Unsupervised Remote Sensing Change Detection Method Based on Multiscale Graph Convolutional Network and Metric Learning
  • CropLand-CD -> code for 2022 paper: A CNN-transformer Network with Multi-scale Context Aggregation for Fine-grained Cropland Change Detection
  • contrastive-surface-image-pretraining -> code for 2022 paper: Supervising Remote Sensing Change Detection Models with 3D Surface Semantics
  • dcvaVHROptical -> Deep Change Vector Analysis (DCVA) change detection. Code for 2019 paper: Unsupervised Deep Change Vector Analysis for Multiple-Change Detection in VHR Images
  • hyperdimensionalCD -> code for 2021 paper: Change Detection in Hyperdimensional Images Using Untrained Models
  • DSFANet -> code for 2018 paper: Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images
  • FCD-GAN-pytorch -> Fully Convolutional Change Detection Framework with Generative Adversarial Network (FCD-GAN) is a framework for change detection in multi-temporal remote sensing images
  • DARNet-CD -> code for 2022 paper: A Densely Attentive Refinement Network for Change Detection Based on Very-High-Resolution Bitemporal Remote Sensing Images
  • xView2_Vulcan -> Damage assessment using pre and post orthoimagery. Modified + productionized model based on the first-place model from the xView2 challenge.
  • ESCNet -> code for 2021 paper: An End-to-End Superpixel-Enhanced Change Detection Network for Very-High-Resolution Remote Sensing Images
  • ForestCoverChange -> Detecting and Predicting Forest Cover Change in Pakistani Areas Using Remote Sensing Imagery
  • deforestation-detection -> code for 2020 paper: DEEP LEARNING FOR HIGH-FREQUENCY CHANGE DETECTION IN UKRAINIAN FOREST ECOSYSTEM WITH SENTINEL-2
  • forest_change_detection -> forest change segmentation with time-dependent models, including Siamese, UNet-LSTM, UNet-diff, UNet3D models. Code for 2021 paper: Deep Learning for Regular Change Detection in Ukrainian Forest Ecosystem With Sentinel-2
  • SentinelClearcutDetection -> Scripts for deforestation detection on the Sentinel-2 Level-A images
  • clearcut_detection -> research & web-service for clearcut detection
  • CDRL -> code for 2022 paper: Unsupervised Change Detection Based on Image Reconstruction Loss
  • ddpm-cd -> code for 2022 paper: Remote Sensing Change Detection (Segmentation) using Denoising Diffusion Probabilistic Models
  • Remote-sensing-time-series-change-detection -> code for 2022 paper: Graph-based block-level urban change detection using Sentinel-2 time series
  • austin-ml-change-detection-demo -> A change detection demo for the Austin area using a pre-trained PyTorch model scaled with Dask on Planet imagery
  • dfc2021-msd-baseline -> A baseline for the "Multitemporal Semantic Change Detection" track of the 2021 IEEE GRSS Data Fusion Competition
  • CorrFusionNet -> code for 2020 paper: Multi-Temporal Scene Classification and Scene Change Detection with Correlation based Fusion
  • ChangeDetectionPCAKmeans -> MATLAB implementation for Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering.
  • IRCNN -> code for 2022 paper: IRCNN: An Irregular-Time-Distanced Recurrent Convolutional Neural Network for Change Detection in Satellite Time Series
  • UTRNet -> An Unsupervised Time-Distance-Guided Convolutional Recurrent Network for Change Detection in Irregularly Collected Images
  • open-cd -> an open source change detection toolbox based on a series of open source general vision task tools
  • Tiny_model_4_CD -> code for 2022 paper: TINYCD: A (Not So) Deep Learning Model For Change Detection. Uses LEVIR-CD & WHU-CD datasets
  • FHD -> code for 2022 paper: Feature Hierarchical Differentiation for Remote Sensing Image Change Detection
  • Change detection with Raster Vision -> blog post with Colab notebook
  • building-expansion -> code for 2021 paper: Enhancing Environmental Enforcement with Near Real-Time Monitoring: Likelihood-Based Detection of Structural Expansion of Intensive Livestock Farms
  • SaDL_CD -> code for 2022 paper: Semantic-aware Dense Representation Learning for Remote Sensing Image Change Detection
  • EGCTNet_pytorch -> code for 2022 paper: Building Change Detection Based on an Edge-Guided Convolutional Neural Network Combined with a Transformer
  • S2-cGAN -> code for 2020 paper: S2-cGAN: Self-Supervised Adversarial Representation Learning for Binary Change Detection in Multispectral Images
  • A-loss-function-for-change-detection -> code for 2022 paper: UAL: Unchanged Area Loss-Function for Change Detection Networks
  • IEEE_TGRS_SSTFormer -> code for 2022 paper: Spectral–Spatial–Temporal Transformers for Hyperspectral Image Change Detection
  • DMINet -> code for 2023 paper: Change Detection on Remote Sensing Images Using Dual-Branch Multilevel Intertemporal Network
  • AFCF3D-Net -> code for 2023 paper: Adjacent-level Feature Cross-Fusion with 3D CNN for Remote Sensing Image Change Detection
  • DSAHRNet -> code for paper: A Deeply Attentive High-Resolution Network for Change Detection in Remote Sensing Images
  • RDPNet -> code for 2022 paper: RDP-Net: Region Detail Preserving Network for Change Detection
  • BGAAE_CD -> code for 2022 paper: Bipartite Graph Attention Autoencoders for Unsupervised Change Detection Using VHR Remote Sensing Images
  • Unsupervised-Change-Detection -> code for 2009 paper: Unsupervised Change Detection in Satellite Images Using Principal Component Analysis and k-Means Clustering
  • Metric-CD -> code for 2023 paper: Deep Metric Learning for Unsupervised Change Detection in Remote Sensing Images
  • HANet-CD -> code for 2023 paper: HANet: A hierarchical attention network for change detection with bi-temporal very-high-resolution remote sensing images
  • SRGCAE -> code for 2022 paper: Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning
  • change_detection_onera_baselines -> Siamese version of U-Net baseline model
  • SiamCRNN -> code for 2020 paper: Change Detection in Multisource VHR Images via Deep Siamese Convolutional Multiple-Layers Recurrent Neural Network

Time series


Prediction of the next image in a series.

The analysis of time series observations in remote sensing data has numerous applications, including enhancing the accuracy of classification models and forecasting future patterns and events. Image source. Note: since classifying crops and predicting crop yield are such prominent use cases for time series data, these tasks have dedicated sections after this one.
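
A recurring building block in the list below is per-pixel classification of a temporal profile. As a hedged sketch, a 1D CNN over the time axis (in the spirit of temporalCNN) might look like this; the band count, sequence length and class count are assumptions.

```python
# Per-pixel time series classification with a 1D temporal CNN.
# n_bands / n_steps / n_classes are illustrative assumptions.
import torch
import torch.nn as nn

n_bands, n_steps, n_classes = 10, 24, 8

model = nn.Sequential(
    nn.Conv1d(n_bands, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # pool over the time axis
    nn.Linear(32, n_classes),
)

pixels = torch.rand(16, n_bands, n_steps)    # batch of pixel time series
logits = model(pixels)                       # (16, n_classes)
```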

  • LANDSAT Time Series Analysis for Multi-temporal Land Cover Classification using Random Forest
  • temporalCNN -> Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series
  • pytorch-psetae -> code for the paper: Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention
  • satflow -> optical flow models for predicting future satellite images from current and past ones
  • esa-superresolution-forecasting -> Forecasting air pollution using ESA Sentinel-5p data, and an encoder-decoder convolutional LSTM neural network architecture, implemented in Pytorch
  • lightweight-temporal-attention-pytorch -> A PyTorch implementation of the Light Temporal Attention Encoder (L-TAE) for satellite image time series
  • dtwSat -> Time-Weighted Dynamic Time Warping for satellite image time series analysis
  • MTLCC -> code for paper: Multitemporal Land Cover Classification Network. A recurrent neural network approach to encode multi-temporal data for land cover classification
  • PWWB -> Code for the 2021 paper: Real-Time Spatiotemporal Air Pollution Prediction with Deep Convolutional LSTM through Satellite Image Analysis
  • spaceweather -> predicting geomagnetic storms from satellite measurements of the solar wind and solar corona, uses LSTMs
  • Forest_wildfire_spreading_convLSTM -> Modeling of the spreading of forest wildfire using a neural network with ConvLSTM cells. Prediction 3-days forward
  • ConvTimeLSTM -> Extension of ConvLSTM and Time-LSTM for irregularly spaced images, appropriate for Remote Sensing
  • dl-time-series -> Deep Learning algorithms applied to characterization of Remote Sensing time-series
  • tpe -> code for 2022 paper: Generalized Classification of Satellite Image Time Series With Thermal Positional Encoding
  • wildfire_forecasting -> code for 2021 paper: Deep Learning Methods for Daily Wildfire Danger Forecasting. Uses ConvLSTM
  • satellite_image_forecasting -> predict future satellite images from past ones using features such as precipitation and elevation maps. Entry for the EarthNet2021 challenge
  • Deep Learning for Cloud Gap-Filling on Normalized Difference Vegetation Index using Sentinel Time-Series -> A CNN-RNN based model that identifies correlations between optical and SAR data and exports dense Normalized Difference Vegetation Index (NDVI) time series at a fixed 6-day time resolution, which can be used for event detection tasks
  • DeepSatModels -> code for the 2023 paper: ViTs for SITS: Vision Transformers for Satellite Image Time Series

Crop classification


(left) false colour image and (right) the crop map.

Crop classification in remote sensing is the identification and mapping of different crops in images or sequences of images. It aims to provide insight into the distribution and composition of crops in a specific area, with applications that include monitoring crop growth and evaluating crop damage. Both traditional machine learning methods, such as decision trees and support vector machines, and deep learning techniques, such as convolutional neural networks (CNNs), can be used to perform crop classification. The optimal method depends on the size and complexity of the dataset, the desired accuracy, and the available computational resources. However, the success of crop classification relies heavily on the quality and resolution of the input data, as well as the availability of labeled training data. Image source.
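
For the traditional machine learning route mentioned above, a minimal sketch with scikit-learn is shown below: a random forest (standing in for the decision-tree family) classifies each pixel from its flattened multi-temporal band values. The shapes and labels are illustrative assumptions.

```python
# Classical per-pixel crop classification baseline with a random forest.
# Feature layout (10 bands x 24 dates, flattened) is an illustrative assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_pixels, n_features = 10_000, 10 * 24
X = np.random.rand(n_pixels, n_features)     # stand-in for stacked time series
y = np.random.randint(0, 5, size=n_pixels)   # stand-in crop-type labels

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
crop_map = clf.predict(X)                    # per-pixel crop predictions
```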

  • Classification of Crop Fields through Satellite Image Time Series -> using pytorch-psetae & Sentinel-2 data
  • CropDetectionDL -> using GRU-net, First place solution for Crop Detection from Satellite Imagery competition organized by CV4A workshop at ICLR 2020
  • Radiant-Earth-Spot-the-Crop-Challenge -> The main objective of this challenge was to use time-series of Sentinel-2 multi-spectral data to classify crops in the Western Cape of South Africa. The challenge was to build a machine learning model to predict crop type classes for the test dataset
  • Crop-Classification -> crop classification using multi temporal satellite images
  • DeepCropMapping -> A multi-temporal deep learning approach with improved spatial generalizability for dynamic corn and soybean mapping, uses LSTM
  • CropMappingInterpretation -> An interpretation pipeline towards understanding multi-temporal deep learning approaches for crop mapping
  • timematch -> code for 2022 paper: A method to perform unsupervised cross-region adaptation of crop classifiers trained with satellite image time series. The authors also introduce an open-access dataset for cross-region adaptation with SITS from four different regions in Europe
  • elects -> code for 2022 paper: End-to-End Learned Early Classification of Time Series for In-Season Crop Type Mapping

Crop yield


Wheat yield data. Blue vertical lines denote observation dates.

Crop yield is a crucial metric in agriculture, as it determines the productivity and profitability of a farm. It is defined as the amount of crops produced per unit area of land and is influenced by a range of factors including soil fertility, weather conditions, the type of crop grown, and pest and disease control. By utilizing time series of satellite images, it is possible to perform accurate crop type classification and take advantage of the seasonal variations specific to certain crops. This information can be used to optimize crop management practices and ultimately improve crop yield. However, to achieve accurate results, it is essential to consider the quality and resolution of the input data, as well as the availability of labeled training data. Appropriate pre-processing and feature extraction techniques must also be employed. Image source.

Wealth and economic activity


COVID-19 impacts on human and economic activities.

The traditional approach of collecting economic data through ground surveys is a time-consuming and resource-intensive process. However, advancements in satellite technology and machine learning offer an alternative solution. By utilizing satellite imagery and applying machine learning algorithms, it is possible to obtain accurate and current information on economic activity with greater efficiency. This shift towards satellite imagery-based forecasting not only provides cost savings but also offers a wider and more comprehensive perspective of economic activity. As a result, it is poised to become a valuable asset for both policymakers and businesses. Image source.

Disaster response


Detecting buildings destroyed in a disaster.

Remote sensing images are used in disaster response to identify and assess damage to an area. This imagery can be used to detect buildings that are damaged or destroyed, identify roads and road networks that are blocked, determine the size and shape of a disaster area, and identify areas that are at risk of flooding. Remote sensing images can also be used to detect and monitor the spread of forest fires and monitor vegetation health. Also checkout the sections on change detection and water/fire/building segmentation. Image source.

Super-resolution


Super resolution using multiple low resolution images as input.

Super-resolution is a technique aimed at improving the resolution of an imaging system. This process can be applied prior to other image processing steps to increase the visibility of small objects or boundaries. Despite its potential benefits, the use of super-resolution is controversial due to the possibility of introducing artifacts that could be mistaken for real features. Super-resolution techniques are broadly categorized into two groups: single image super-resolution (SISR) and multi-image super-resolution (MISR). SISR focuses on enhancing the resolution of a single image, while MISR utilizes multiple images of the same scene to create a high-resolution output. Each approach has its own advantages and limitations, and the choice of method depends on the specific application and desired outcome. Image source.
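
As a minimal SISR illustration, the sketch below follows the classic SRCNN recipe: bicubic upsampling followed by a small CNN that restores high-frequency detail. The residual connection and the x4 scale factor are assumptions layered on the original design.

```python
# SRCNN-style single image super-resolution: upsample, then refine with a CNN.
# The residual connection and x4 scale are illustrative additions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, lr, scale=4):
        up = F.interpolate(lr, scale_factor=scale, mode="bicubic", align_corners=False)
        return up + self.net(up)              # predict a residual correction

sr = SRCNN()(torch.rand(1, 3, 64, 64))        # output shape (1, 3, 256, 256)
```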

Single image super-resolution (SISR)

Multi image super-resolution (MISR)

Note that nearly all the MISR publications resulted from the PROBA-V Super Resolution competition

  • deepsum -> Deep neural network for Super-resolution of Unregistered Multitemporal images (ESA PROBA-V challenge)
  • 3DWDSRNet -> code to reproduce Satellite Image Multi-Frame Super Resolution (MISR) Using 3D Wide-Activation Neural Networks
  • RAMS -> Official TensorFlow code for paper Multi-Image Super Resolution of Remotely Sensed Images Using Residual Attention Deep Neural Networks
  • TR-MISR -> Transformer-based MISR framework for the PROBA-V super-resolution challenge. With paper
  • HighRes-net -> Pytorch implementation of HighRes-net, a neural network for multi-frame super-resolution, trained and tested on the European Space Agency’s Kelvin competition
  • ProbaVref -> Repurposing the Proba-V challenge for reference-aware super resolution
  • The missing ingredient in deep multi-temporal satellite image super-resolution -> Permutation invariance harnesses the power of ensembles in a single model, with repo piunet
  • MSTT-STVSR -> Space-time Super-resolution for Satellite Video: A Joint Framework Based on Multi-Scale Spatial-Temporal Transformer, JAG, 2022
  • Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites
  • DDRN -> Deep Distillation Recursive Network for Video Satellite Imagery Super-Resolution
  • worldstrat -> SISR and MISR implementations of SRCNN
  • MISR-GRU -> Pytorch implementation of MISR-GRU, a deep neural network for multi image super-resolution (MISR), for ProbaV Super Resolution Competition
  • MSDTGP -> code for 2021 paper: Satellite Video Super-Resolution via Multiscale Deformable Convolution Alignment and Temporal Grouping Projection
  • proba-v-super-resolution-challenge -> Solution to ESA's satellite imagery super resolution challenge
  • PROBA-V-Super-Resolution -> solution using a custom deep learning architecture

Pansharpening


Pansharpening example with a resolution difference of factor 4.

Pansharpening is a data fusion method that merges the high spatial detail from a high-resolution panchromatic image with the rich spectral information from a lower-resolution multispectral image. The result is a single, high-resolution color image that retains both the sharpness of the panchromatic band and the color information of the multispectral bands. This process enhances the spatial resolution while preserving the spectral qualities of the original images. Image source
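
One of the simplest classical formulations is the Brovey transform, sketched below: each multispectral band is rescaled by the ratio of the panchromatic band to an intensity estimate. The inputs are assumed to be co-registered and resampled to the panchromatic grid.

```python
# Brovey-transform pansharpening: inject pan spatial detail into each band.
# Inputs are assumed co-registered at the panchromatic resolution.
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (bands, H, W) multispectral; pan: (H, W) panchromatic."""
    intensity = ms.mean(axis=0)              # crude spectral intensity estimate
    return ms * (pan / (intensity + eps))    # eps avoids division by zero

ms = np.random.rand(3, 512, 512)             # upsampled RGB at pan resolution
pan = np.random.rand(512, 512)
sharpened = brovey_pansharpen(ms, pan)
```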

  • Several algorithms are described in the ArcGIS docs, the simplest being to take the mean of the pan and RGB pixel values.
  • For an intro to classical methods see this notebook and this kaggle kernel
  • rio-pansharpen -> pansharpening Landsat scenes
  • Simple-Pansharpening-Algorithms
  • Working-For-Pansharpening -> long list of pansharpening methods and update of Awesome-Pansharpening
  • PSGAN -> A Generative Adversarial Network for Remote Sensing Image Pan-sharpening, arxiv paper
  • Pansharpening-by-Convolutional-Neural-Network
  • PBR_filter -> Pansharpening by Background Removal algorithm for sharpening RGB images
  • py_pansharpening -> multiple algorithms implemented in python
  • Deep-Learning-PanSharpening -> deep-learning based pan-sharpening code package; reimplemented methods include PNN, MSDCNN, PanNet, TFNet, SRPPNN, and the authors' proposed DIPNet
  • HyperTransformer -> A Textural and Spectral Feature Fusion Transformer for Pansharpening
  • DIP-HyperKite -> Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction
  • D2TNet -> code for 2022 paper: A ConvLSTM Network with Dual-direction Transfer for Pan-sharpening
  • PanColorGAN-VHR-Satellite-Images -> code for 2020 paper: Rethinking CNN-Based Pansharpening: Guided Colorization of Panchromatic Images via GANs
  • MTL_PAN_SEG -> code for 2019 paper: Multi-task deep learning for satellite image pansharpening and segmentation
  • Z-PNN -> code for 2022 paper: Pansharpening by convolutional neural networks in the full resolution framework
  • GTP-PNet -> code for 2021 paper: GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening
  • UDL -> code for 2021 paper: Dynamic Cross Feature Fusion for Remote Sensing Pansharpening
  • PSData -> A Large-Scale General Pan-sharpening DataSet, which contains PSData3 (QB, GF-2, WV-3) and PSData4 (QB, GF-1, GF-2, WV-2).
  • AFPN -> Adaptive Detail Injection-Based Feature Pyramid Network For Pan-sharpening
  • pan-sharpening -> multiple methods demonstrated for multispectral and panchromatic images
  • PSGan-Family -> code for 2020 paper: PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening
  • PanNet-Landsat -> code for 2017 paper: A Deep Network Architecture for Pan-Sharpening
  • DLPan-Toolbox -> code for 2022 paper: Machine Learning in Pansharpening: A Benchmark, from Shallow to Deep Networks
  • LPPN -> code for 2021 paper: Laplacian pyramid networks: A new approach for multispectral pansharpening
  • S2_SSC_CNN -> code for 2020 paper: Zero-shot Sentinel-2 Sharpening Using A Symmetric Skipped Connection Convolutional Neural Network
  • S2S_UCNN -> code for 2021 paper: Sentinel 2 sharpening using a single unsupervised convolutional neural network with MTF-Based degradation model
  • SSE-Net -> code for 2022 paper: Spatial and Spectral Extraction Network With Adaptive Feature Fusion for Pansharpening
  • UCGAN -> code for 2022 paper: Unsupervised Cycle-consistent Generative Adversarial Networks for Pan-sharpening
  • GCPNet -> code for 2022 paper: When Pansharpening Meets Graph Convolution Network and Knowledge Distillation
  • PanFormer -> code for 2022 paper: PanFormer: a Transformer Based Model for Pan-sharpening
  • Pansharpening -> code for 2021 paper: Pansformers: Transformer-Based Self-Attention Network for Pansharpening
  • Sentinel-2 Band Pan-Sharpening
  • PGCU -> code for 2023 paper: Probability-based Global Cross-modal Upsampling for Pansharpening

Image-to-image translation


(left) Sentinel-1 SAR input, (middle) translated to RGB and (right) Sentinel-2 true RGB image for comparison.

Image-to-image translation is a crucial aspect of computer vision that utilizes machine learning models to transform an input image into a new, distinct output image. In the field of remote sensing, it plays a significant role in bridging the gap between different imaging domains, such as converting Synthetic Aperture Radar (SAR) images into RGB (Red Green Blue) images. This technology has a wide range of applications, including improving image quality, filling in missing information, and facilitating cross-domain image analysis and comparison. By leveraging deep learning algorithms, image-to-image translation has become a powerful tool in the arsenal of remote sensing researchers and practitioners. Image source
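
A bare-bones sketch of the SAR-to-RGB case is shown below: an encoder-decoder maps a 2-band SAR input (e.g. Sentinel-1 VV/VH) to 3-band RGB, trained here with only an L1 reconstruction loss. Published methods typically add an adversarial (pix2pix-style) loss; all channel counts are illustrative assumptions.

```python
# Toy SAR -> RGB translation network trained with an L1 loss.
# Real systems usually add a GAN loss; layer and channel sizes are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),   # RGB in [0, 1]
)

sar = torch.rand(4, 2, 128, 128)            # stand-in SAR batch (VV, VH)
rgb_target = torch.rand(4, 3, 128, 128)     # co-registered optical reference
loss = nn.functional.l1_loss(generator(sar), rgb_target)
loss.backward()                             # gradients for one training step
```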

Data fusion


Illustration of a fusion workflow.

Data fusion is a technique for combining information from different sources such as Synthetic Aperture Radar (SAR), optical imagery, and non-imagery data such as Internet of Things (IoT) sensor data. The integration of diverse data sources enables data fusion to overcome the limitations of individual sources, leading to the creation of models that are more accurate and informative than those constructed from a single source. Image source
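
The simplest fusion strategy is early fusion: co-registered inputs are concatenated along the channel axis and fed to a single network. The sketch below assumes a 2-band Sentinel-1 tile and a 4-band Sentinel-2 tile; the task head is illustrative.

```python
# Early fusion of SAR and optical tiles by channel concatenation.
# Band counts and the single-logit task head are illustrative assumptions.
import torch
import torch.nn as nn

s1 = torch.rand(1, 2, 256, 256)              # SAR: VV + VH
s2 = torch.rand(1, 4, 256, 256)              # optical: B, G, R, NIR
fused = torch.cat([s1, s2], dim=1)           # (1, 6, 256, 256)

model = nn.Sequential(
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),                     # e.g. per-pixel urban / non-urban logit
)
prediction = model(fused)
```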

  • Awesome-Data-Fusion-for-Remote-Sensing
  • UDALN_GRSL -> Deep Unsupervised Blind Hyperspectral and Multispectral Data Fusion
  • CropTypeMapping -> Crop type mapping from optical and radar (Sentinel-1&2) time series using attention-based deep learning
  • Multimodal-Remote-Sensing-Toolkit -> uses Hyperspectral and LiDAR Data
  • Aerial-Template-Matching -> development of an algorithm for template Matching on aerial imagery applied to UAV dataset
  • DS_UNet -> code for 2021 paper: Sentinel-1 and Sentinel-2 Data Fusion for Urban Change Detection using a Dual Stream U-Net, uses Onera Satellite Change Detection dataset
  • DDA_UrbanExtraction -> Unsupervised Domain Adaptation for Global Urban Extraction using Sentinel-1 and Sentinel-2 Data
  • swinstfm -> code for paper: Remote Sensing Spatiotemporal Fusion using Swin Transformer
  • LoveCS -> code for 2022 paper: Cross-sensor domain adaptation for high-spatial resolution urban land-cover mapping: from airborne to spaceborne imagery
  • comingdowntoearth -> code for 2021 paper: Implementation of 'Coming Down to Earth: Satellite-to-Street View Synthesis for Geo-Localization'
  • Matching between acoustic and satellite images
  • MapRepair -> Deep Cadastre Maps Alignment and Temporal Inconsistencies Fix in Satellite Images
  • Compressive-Sensing-and-Deep-Learning-Framework -> Compressive Sensing is used as an initial guess to combine data from multiple sources, with LSTM used to refine the result
  • DeepSim -> code for paper: DeepSIM: GPS Spoofing Detection on UAVs using Satellite Imagery Matching
  • MHF-net -> code for 2019 paper: Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net
  • Remote_Sensing_Image_Fusion -> code for 2021 paper: Semi-Supervised Remote Sensing Image Fusion Using Multi-Scale Conditional Generative Adversarial network with Siamese Structure
  • CNNs for Multi-Source Remote Sensing Data Fusion -> code for 2021 paper: Single-stream CNN with Learnable Architecture for Multi-source Remote Sensing Data
  • Deep Generative Reflectance Fusion -> Achieving Landsat-like reflectance at any date by fusing Landsat and MODIS surface reflectance with deep generative models
  • IEEE_TGRS_MDL-RS -> code for 2021 paper: More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification
  • SSRNET -> code for 2022 paper: SSR-NET: Spatial-Spectral Reconstruction Network for Hyperspectral and Multispectral Image Fusion
  • cross-view-image-matching -> code for 2019 paper: Bridging the Domain Gap for Ground-to-Aerial Image Matching
  • CoF-MSMG-PCNN -> code for 2020 paper: Remote Sensing Image Fusion via Boundary Measured Dual-Channel PCNN in Multi-Scale Morphological Gradient Domain
  • robust_matching_network_on_remote_sensing_imagery_pytorch -> code for 2019 paper: A Robust Matching Network for Gradually Estimating Geometric Transformation on Remote Sensing Imagery
  • edcstfn -> code for 2019 paper: An Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion
  • ganstfm -> code for 2021 paper: A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network
  • CMAFF -> code for 2021 paper: Cross-Modality Attentive Feature Fusion for Object Detection in Multispectral Remote Sensing Imagery
  • SOLC -> code for 2022 paper: MCANet: A joint semantic segmentation framework of optical and SAR images for land use classification. Uses WHU-OPT-SAR-dataset
  • MFT -> code for 2022 paper: Multimodal Fusion Transformer for Remote Sensing Image Classification
  • ISPRS_S2FL -> code for 2021 paper: Multimodal Remote Sensing Benchmark Datasets for Land Cover Classification with A Shared and Specific Feature Learning Model
  • HSHT-Satellite-Imagery-Synthesis -> code for thesis - Improving Flood Maps by Increasing the Temporal Resolution of Satellites Using Hybrid Sensor Fusion
  • MDC -> code for 2021 paper: Unsupervised Data Fusion With Deeper Perspective: A Novel Multisensor Deep Clustering Algorithm
  • FusAtNet -> code for 2020 paper: FusAtNet: Dual Attention based SpectroSpatial Multimodal Fusion Network for Hyperspectral and LiDAR Classification
  • AMM-FuseNet -> code for 2022 paper: AMM-FuseNet: Attention-Based Multi-Modal Image Fusion Network for Land Cover Mapping
  • MANet -> code for 2022 paper: MANet: A Network Architecture for Remote Sensing Spatiotemporal Fusion Based on Multiscale and Attention Mechanisms
  • DCSA-Net -> code for 2022 paper: Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images
  • deforestation-from-data-fusion -> Fusing Sentinel-1 and Sentinel-2 images for deforestation detection in the Brazilian Amazon under diverse cloud conditions

Generative Adversarial Networks (GANs)


Example generated images using a GAN.

Generative Adversarial Networks (GANs) are a type of deep learning architecture that leverages the power of competition between two neural networks. The objective of a GAN is to generate new, synthetic data that appears similar to real-world data. This is achieved by training the two networks, the generator and the discriminator, in a zero-sum game, where the generator attempts to produce data that is indistinguishable from the real data, while the discriminator tries to distinguish between the generated data and the real data. In the field of remote sensing, GANs have found numerous applications, particularly in generating synthetic data. This synthetic data can be used for a wide range of purposes, including data augmentation, data imbalance correction, and filling in missing or corrupted data. By generating realistic synthetic data, GANs can improve the performance of remote sensing algorithms and models, leading to more accurate and reliable results. Additionally, GANs can also be used for various other tasks in remote sensing, such as super-resolution, denoising, and inpainting. Image source
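
The zero-sum game described above boils down to alternating two optimisation steps. The sketch below is a deliberately tiny illustration (fully-connected networks, random tensors standing in for real patches), not a production GAN.

```python
# Minimal GAN training step on flattened image patches.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 28 * 28) * 2 - 1  # stand-in for real patches in [-1, 1]
z = torch.randn(16, 64)

# Discriminator step: push real towards 1, generated towards 0
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(G(z).detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator into predicting 1 for fakes
g_loss = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```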

Autoencoders, dimensionality reduction, image embeddings & similarity search


Example of using an autoencoder to create a low dimensional representation of hyperspectral data.

Autoencoders are a type of neural network that aim to simplify the representation of input data by compressing it into a lower dimensional form. This is achieved through a two-step process of encoding and decoding, where the encoding step compresses the data into a lower dimensional representation, and the decoding step restores the data back to its original form. The goal of this process is to reduce the data's dimensionality, making it easier to store and process, while retaining the essential information. Dimensionality reduction, as the name suggests, refers to the process of reducing the number of dimensions in a dataset. This can be achieved through various techniques such as principal component analysis (PCA) or singular value decomposition (SVD). Autoencoders are one type of neural network that can be used for dimensionality reduction. In the field of computer vision, image embeddings are vector representations of images that capture the most important features of the image. These embeddings can then be used to perform similarity searches, where images are compared based on their features to find similar images. This process can be used in a variety of applications, such as image retrieval, where images are searched based on certain criteria like color, texture, or shape. It can also be used to identify duplicate images in a dataset. Image source
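
To make the encode/decode/embed pipeline concrete, here is a minimal sketch (illustrative dimensions, random tensors standing in for image chips): the autoencoder is trained to reconstruct its input, and the encoder output is then reused as an embedding for similarity comparison.

```python
# Sketch: train a small autoencoder, then reuse its encoder output as an
# image embedding for similarity search.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, 16))
decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 32 * 32))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

batch = torch.rand(32, 1, 32, 32)           # stand-in for image chips
z = encoder(batch)                          # 16-d embedding
recon = decoder(z).view(-1, 1, 32, 32)
loss = nn.functional.mse_loss(recon, batch) # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()

# After training, cosine similarity between embeddings ranks similar images
emb = nn.functional.normalize(encoder(batch), dim=1)
sims = emb @ emb.T                          # pairwise similarity matrix
```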

Image retrieval


Illustration of the remote sensing image retrieval process.

Image retrieval is the task of retrieving images from a collection that are similar to a query image. Image retrieval plays a vital role in remote sensing by enabling the efficient and effective search for relevant images from large image archives, and by providing a way to quantify changes in the environment over time. Image source
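
A simple retrieval baseline is nearest-neighbour search over precomputed embeddings. The sketch below uses scikit-learn with random vectors standing in for the outputs of any feature extractor (e.g. a pretrained CNN); archive size and embedding dimension are placeholders.

```python
# Hedged sketch of embedding-based retrieval: index archive embeddings, then
# find the nearest neighbours of a query embedding.
import numpy as np
from sklearn.neighbors import NearestNeighbors

archive = np.random.rand(1000, 256)   # stand-in embeddings for 1000 archive images
query = np.random.rand(1, 256)        # embedding of the query image

index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(archive)
distances, indices = index.kneighbors(query)
print(indices[0])  # ids of the 5 most similar archive images
```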

  • Demo_AHCL_for_TGRS2022 -> code for 2022 paper: Asymmetric Hash Code Learning (AHCL) for remote sensing image retrieval
  • GaLR -> code for 2022 paper: Remote Sensing Cross-Modal Text-Image Retrieval Based on Global and Local Information
  • retrievalSystem -> cross-modal image retrieval system
  • AMFMN -> code for the 2021 paper: Exploring a Fine-grained Multiscale Method for Cross-modal Remote Sensing Image Retrieval
  • Active-Learning-for-Remote-Sensing-Image-Retrieval -> unofficial implementation of paper: A Novel Active Learning Method in Relevance Feedback for Content-Based Remote Sensing Image Retrieval
  • CMIR-NET -> code for 2020 paper: A deep learning based model for cross-modal retrieval in remote sensing
  • Deep-Hash-learning-for-Remote-Sensing-Image-Retrieval -> code for 2020 paper: Deep Hash Learning for Remote Sensing Image Retrieval
  • MHCLN -> code for 2018 paper: Deep Metric and Hash-Code Learning for Content-Based Retrieval of Remote Sensing Images
  • HydroViet_VOR -> Object Retrieval in satellite images with Triplet Network

Image Captioning


Example captioned image.

Image Captioning is the task of automatically generating a textual description of an image. In remote sensing, image captioning can be used to automatically generate captions for satellite or aerial images, which can be useful for a variety of purposes, such as image search and retrieval, data cataloging, and data dissemination. The generated captions can provide valuable information about the content of the images, including the location, the type of terrain or objects present, and the weather conditions, among others. This information can be used to quickly and easily understand the content of the images, without having to manually examine each image. Image source
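
A classic captioning recipe is an encoder-decoder: image features condition a recurrent decoder that emits caption tokens. The sketch below is a toy version with placeholder vocabulary and dimensions, not a state-of-the-art captioner.

```python
# Toy encoder-decoder captioner: pooled CNN features initialise an LSTM that
# predicts caption tokens.
import torch
import torch.nn as nn

class Captioner(nn.Module):
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)
    def forward(self, image, tokens):
        h0 = self.cnn(image).unsqueeze(0)   # image features as initial hidden state
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(hidden)             # logits over the vocabulary per step

model = Captioner()
logits = model(torch.rand(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
```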

Visual Question Answering

Visual Question Answering (VQA) is the task of automatically answering a natural language question about an image. In remote sensing, VQA enables users to interact with the images and retrieve information using natural language questions. For example, a user could ask a VQA system questions such as "What is the type of land cover in this area?", "What is the dominant crop in this region?" or "What is the size of the city in this image?". The system would then analyze the image and generate an answer based on its understanding of the image content.
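
At its simplest, a VQA model fuses an image representation with a question representation and classifies over a fixed answer set. The sketch below is illustrative only (bag-of-words question encoding, placeholder sizes), far simpler than the models in the papers listed.

```python
# Hedged VQA sketch: fuse pooled image features with a question embedding and
# classify over a fixed answer vocabulary.
import torch
import torch.nn as nn

class ToyVQA(nn.Module):
    def __init__(self, vocab=5000, n_answers=100, dim=128):
        super().__init__()
        self.img = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.txt = nn.EmbeddingBag(vocab, dim)   # mean-pooled bag-of-words encoder
        self.head = nn.Linear(dim * 2, n_answers)
    def forward(self, image, question_tokens):
        fused = torch.cat([self.img(image), self.txt(question_tokens)], dim=1)
        return self.head(fused)

model = ToyVQA()
answer_logits = model(torch.rand(2, 3, 64, 64), torch.randint(0, 5000, (2, 8)))
```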

  • VQA-easy2hard -> code for 2022 paper: From Easy to Hard: Learning Language-guided Curriculum for Visual Question Answering on Remote Sensing Data

Mixed data learning

Mixed data learning is the process of learning from datasets that contain a mix of images, textual and numeric data. Learning from multiple sources at once can improve model accuracy, since patterns and correlations that span the different modalities become available to the model.
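
The standard pattern is a multi-branch network: a CNN branch for the imagery and an MLP branch for the tabular features, merged before the prediction head. The sketch below uses placeholder feature counts.

```python
# Sketch of a mixed-data model: CNN branch for imagery, MLP branch for
# tabular/numeric features, concatenated before the head.
import torch
import torch.nn as nn

class MixedModel(nn.Module):
    def __init__(self, n_tabular=5, n_classes=2):
        super().__init__()
        self.image_branch = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tabular_branch = nn.Sequential(nn.Linear(n_tabular, 16), nn.ReLU())
    def forward(self, image, tabular):
        merged = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.head(merged)

model = MixedModel()
model.head = nn.Linear(16 + 16, 2)  # prediction head over the merged features
logits = model(torch.rand(4, 3, 32, 32), torch.rand(4, 5))
```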

Few-shot learning

This is a class of techniques which attempt to make predictions for classes with few, one or even zero examples provided during training. In zero shot learning (ZSL) the model is assisted by the provision of auxiliary information which typically consists of descriptions/semantic attributes/word embeddings for both the seen and unseen classes at train time (ref). These approaches are particularly relevant to remote sensing, where there may be many examples of common classes, but few or even zero examples for other classes of interest.
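
One widely used few-shot approach is the prototypical network: each class prototype is the mean embedding of its few support examples, and queries are classified by distance to the nearest prototype. The sketch below is a minimal episode with random embeddings standing in for the output of a trained encoder.

```python
# Hedged sketch of a prototypical-network episode.
import torch

def prototypical_logits(support, support_labels, queries, n_classes):
    # support: (n_support, d) embeddings; queries: (n_query, d) embeddings
    prototypes = torch.stack([support[support_labels == c].mean(dim=0)
                              for c in range(n_classes)])
    distances = torch.cdist(queries, prototypes)  # (n_query, n_classes)
    return -distances                             # closer prototype -> higher logit

support = torch.randn(10, 32)                     # 5 classes x 2 shots (stand-in)
labels = torch.arange(5).repeat_interleave(2)
queries = torch.randn(7, 32)
print(prototypical_logits(support, labels, queries, n_classes=5).argmax(dim=1))
```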

Self-supervised, unsupervised & contrastive learning

Self-supervised, unsupervised & contrastive learning are all methods of machine learning that use unlabeled data to train algorithms. Self-supervised learning derives its own supervisory signal from the data itself, for example by solving a pretext task such as predicting masked or transformed parts of the input, while unsupervised learning uses only the structure of the data to identify patterns and similarities. Contrastive learning learns representations by pulling different augmented views of the same sample together in embedding space and pushing apart views of different samples; the learned representations are then typically fine-tuned for downstream tasks such as classification.
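
The contrastive objective can be written compactly as an InfoNCE-style loss: for each image, the embedding of one augmented view must identify the embedding of the other view among all other samples in the batch. A minimal sketch, with random tensors standing in for encoder outputs:

```python
# Minimal InfoNCE-style contrastive loss over two augmented views of a batch.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature    # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))  # the matching view sits on the diagonal
    return F.cross_entropy(logits, targets)

z_view1 = torch.randn(16, 64)  # stand-in embeddings of augmentation 1
z_view2 = torch.randn(16, 64)  # stand-in embeddings of augmentation 2
loss = info_nce(z_view1, z_view2)
```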

Weakly & semi-supervised learning

Weakly & semi-supervised learning are two methods of machine learning that reduce the dependence on fully labelled data. Weakly supervised learning uses weak labels, which may be incomplete, inexact or inaccurate, for example image-level tags used to supervise a pixel-level segmentation task. Semi-supervised learning combines a small amount of labelled data with a large pool of unlabelled data, typically via techniques such as pseudo-labelling or consistency regularisation. Both approaches are used when labelled data is scarce and unlabelled data is abundant, and can improve the accuracy of machine learning models by making use of these additional data sources.
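
Pseudo-labelling is one of the simplest semi-supervised recipes: treat confident model predictions on unlabelled images as if they were labels. A hedged sketch (toy model, random tensors standing in for unlabelled imagery):

```python
# Sketch of pseudo-labelling: confident predictions on unlabelled images
# provide an extra training signal.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, unlabelled_batch, threshold=0.9):
    with torch.no_grad():
        probs = F.softmax(model(unlabelled_batch), dim=1)
        confidence, pseudo = probs.max(dim=1)
        mask = confidence > threshold          # keep only confident predictions
    if mask.sum() == 0:
        return torch.tensor(0.0)
    logits = model(unlabelled_batch[mask])
    return F.cross_entropy(logits, pseudo[mask])

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
loss = pseudo_label_loss(model, torch.rand(8, 3, 32, 32))
```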

  • MARE -> self-supervised Multi-Attention REsu-net for semantic segmentation in remote sensing
  • SSGF-for-HRRS-scene-classification -> code for 2018 paper: A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification
  • SFGAN -> code for 2018 paper: Semantic-Fusion Gans for Semi-Supervised Satellite Image Classification
  • SSDAN -> code for 2021 paper: Multi-Source Semi-Supervised Domain Adaptation Network for Remote Sensing Scene Classification
  • HR-S2DML -> code for 2020 paper: High-Rankness Regularized Semi-Supervised Deep Metric Learning for Remote Sensing Imagery
  • Semantic Segmentation of Satellite Images Using Point Supervision
  • fcd -> code for 2021 paper: Fixed-Point GAN for Cloud Detection. A weakly-supervised approach, training with only image-level labels
  • weak-segmentation -> Weakly supervised semantic segmentation for aerial images in pytorch
  • TNNLS_2022_X-GPN -> Code for paper: Semisupervised Cross-scale Graph Prototypical Network for Hyperspectral Image Classification
  • weakly_supervised -> code for the paper Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery. Demonstrates that segmentation can be performed using small datasets comprised of pixel or image labels
  • wan -> Weakly-Supervised Domain Adaptation for Built-up Region Segmentation in Aerial and Satellite Imagery, with arxiv paper
  • sourcerer -> A Bayesian-inspired deep learning method for semi-supervised domain adaptation designed for land cover mapping from satellite image time series (SITS). Paper
  • MSMatch -> Semi-Supervised Multispectral Scene Classification with Few Labels. Includes code to work with both the RGB and the multispectral (MS) versions of EuroSAT dataset and the UC Merced Land Use (UCM) dataset. Paper
  • Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning with arxiv paper
  • Semi-supervised learning in satellite image classification -> experimenting with MixMatch and the EuroSAT data set
  • ScRoadExtractor -> code for 2020 paper: Scribble-based Weakly Supervised Deep Learning for Road Surface Extraction from Remote Sensing Images
  • ICSS -> code for 2022 paper: Weakly-supervised continual learning for class-incremental segmentation
  • es-CP -> code for 2022 paper: Semi-Supervised Hyperspectral Image Classification Using a Probabilistic Pseudo-Label Generation Framework
  • Flood_Mapping_SSL -> code for 2022 paper: Enhancement of Urban Floodwater Mapping From Aerial Imagery With Dense Shadows via Semisupervised Learning
  • MS4D-Net-Building-Damage-Assessment -> code for 2022 paper: MS4D-Net: Multitask-Based Semi-Supervised Semantic Segmentation Framework with Perturbed Dual Mean Teachers for Building Damage Assessment from High-Resolution Remote Sensing Imagery

Active learning

Supervised deep learning techniques typically require a huge number of annotated/labelled examples to provide a training dataset. However, labelling at scale takes significant time, expertise and resources. Active learning techniques aim to reduce the total amount of annotation that needs to be performed by selecting the most useful images to label from a large pool of unlabelled images, thus reducing the time to generate useful training datasets. These processes may be referred to as Human-in-the-Loop Machine Learning.
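
Uncertainty sampling is the classic acquisition strategy: rank the unlabelled pool by predictive entropy and send the most uncertain images to the annotators first. A minimal sketch with a toy model:

```python
# Sketch of uncertainty sampling for active learning.
import torch
import torch.nn.functional as F

def most_uncertain(model, pool, k=10):
    with torch.no_grad():
        probs = F.softmax(model(pool), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(k).indices  # indices of images to label next

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 5))
to_label = most_uncertain(model, torch.rand(100, 3, 32, 32))
```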

Federated learning

Federated learning is an approach to distributed machine learning in which a central server coordinates many clients, each training a model on its own local data. The server aggregates the model updates from all the clients into a global model and sends the updated parameters back to the clients. Because the raw data never leaves the local devices and only model parameters are shared, federated learning protects data privacy. It also makes it possible to train on datasets that are too large or too sensitive to store in a single location, enabling privacy-preserving applications.
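
The canonical aggregation rule is FedAvg: the server simply averages the clients' weights. A hedged sketch (toy linear models standing in for locally trained clients):

```python
# Sketch of FedAvg: average client weights into a global model; raw data
# never leaves the clients.
import copy
import torch

def federated_average(client_models):
    global_state = copy.deepcopy(client_models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]).mean(dim=0)
    return global_state

clients = [torch.nn.Linear(4, 2) for _ in range(3)]  # stand-in local models
# ... each client would train on its own private data here ...
global_model = torch.nn.Linear(4, 2)
global_model.load_state_dict(federated_average(clients))
```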

Transformers

Vision transformers are state-of-the-art models for vision tasks such as image classification and object detection. They differ from CNNs as they use self-attention instead of convolution to learn global relations between all pixels in the image. Vision transformers employ a transformer encoder architecture, composed of multi-layer blocks with multi-head self-attention and feed-forward layers, enabling the capture of rich contextual information for more accurate predictions.
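
The sketch below shows the ViT front end in a few lines (illustrative patch size and dimensions): the image is split into patches, each patch is linearly projected to a token, and the token sequence is processed by a self-attention encoder. Real ViTs add a class token and positional embeddings.

```python
# Sketch of the vision transformer front end: patchify, project, self-attend.
import torch
import torch.nn as nn

patch, dim = 16, 128
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify + project
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)

image = torch.rand(1, 3, 224, 224)
tokens = to_tokens(image).flatten(2).transpose(1, 2)  # (1, 196, 128) token sequence
features = encoder(tokens)                            # contextualised patch features
```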

Adversarial ML

Adversarial attacks use small, carefully crafted perturbations to cause a model to misclassify its input, and defences aim to make models robust against them. This section also covers efforts to detect falsified images & deepfakes.
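
The fast gradient sign method (FGSM) is the textbook attack: perturb the input a small step in the direction of the loss gradient. A minimal sketch with a toy classifier:

```python
# Hedged sketch of the FGSM adversarial attack.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
adversarial = fgsm(model, torch.rand(1, 3, 32, 32), torch.tensor([3]))
```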

  • UAE-RS -> dataset that provides black-box adversarial samples in the remote sensing field
  • PSGAN -> code for paper: Perturbation Seeking Generative Adversarial Networks: A Defense Framework for Remote Sensing Image Scene Classification
  • SACNet -> code for 2021 paper: Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification

Image registration

Image registration is the process of registering one or more images onto another (typically well georeferenced) image. Traditionally this is performed manually by identifying control points (tie-points) in the images, for example using QGIS. This section lists approaches which mostly aim to automate this manual process. There is some overlap with the data fusion section but the distinction I make is that image registration is performed as a prerequisite to downstream processes which will use the registered data as an input.
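
Before reaching for deep learning, a classical feature-based pipeline is often a useful baseline: match keypoints between the two images and estimate a transform with RANSAC. The sketch below uses OpenCV and assumes two hypothetical single-band images, `reference.tif` and `target.tif`, on disk.

```python
# Classical registration baseline: ORB keypoint matching + RANSAC homography.
import cv2
import numpy as np

ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
tgt = cv2.imread("target.tif", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(tgt, None)
kp2, des2 = orb.detectAndCompute(ref, None)
matches = sorted(cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2),
                 key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(tgt, H, (ref.shape[1], ref.shape[0]))
```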

Terrain mapping, Disparity Estimation, Lidar, DEMs & NeRF

Measure surface contours & locate 3D points in space from 2D images. NeRF stands for Neural Radiance Fields and is the term used in deep learning communities to describe a model that generates views of complex 3D scenes based on a partial set of 2D images

Thermal Infrared

Thermal infrared remote sensing is a technique used to detect and measure thermal radiation emitted from the Earth’s surface. This technique can be used to measure the temperature of the ground and any objects on it and can detect the presence of different materials. Thermal infrared remote sensing is used to assess land cover, detect land-use changes, and monitor urban heat islands, as well as to measure the temperature of the ground during nighttime or in areas of limited visibility.

SAR

SAR (synthetic aperture radar) is used to detect and measure the properties of objects and surfaces on the Earth's surface. SAR can be used to detect changes in terrain, features, and objects over time, as well as to measure the size, shape, and composition of objects and surfaces. SAR can also be used to measure moisture levels in soil and vegetation, or to detect and monitor changes in land use.

NDVI - vegetation index

Normalized Difference Vegetation Index (NDVI) is an index used to measure the amount of healthy vegetation in a given area. It is calculated as the difference between the near-infrared (NIR) and red bands of a satellite image divided by their sum: NDVI = (NIR - Red) / (NIR + Red). NDVI can be used to identify areas of healthy vegetation and to assess the health of vegetation in a given area.
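
The formula is a one-liner in numpy. The sketch below uses random arrays standing in for red and NIR reflectance bands (e.g. Sentinel-2 B04 and B08):

```python
# NDVI from red and near-infrared reflectance bands. Values fall in [-1, 1];
# dense healthy vegetation typically scores high positive values.
import numpy as np

red = np.random.rand(256, 256).astype(np.float32)  # stand-in red band
nir = np.random.rand(256, 256).astype(np.float32)  # stand-in NIR band

ndvi = (nir - red) / (nir + red + 1e-8)  # epsilon avoids division by zero
```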

General image quality

Image quality describes the degree of accuracy with which an image can represent the original object. Image quality is typically measured by the amount of detail, sharpness, and contrast that an image contains. Factors that contribute to image quality include the resolution, format, and compression of the image.
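
Two standard full-reference quality metrics, PSNR and SSIM, are available in scikit-image. A minimal sketch with a synthetic degraded image:

```python
# Computing PSNR and SSIM with scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(128, 128)   # stand-in clean image
degraded = reference + 0.05 * np.random.randn(128, 128)

print(peak_signal_noise_ratio(reference, degraded, data_range=1.0))
print(structural_similarity(reference, degraded, data_range=1.0))
```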

  • A convolutional autoencoder network can be employed for image denoising; read about this on the Keras blog
  • jitter-compensation -> Remote Sensing Image Jitter Detection and Compensation Using CNN
  • DeblurGANv2 -> Deblurring (Orders-of-Magnitude) Faster and Better
  • image-quality-assessment -> CNN to predict the aesthetic and technical quality of images
  • Convolutional autoencoder for image denoising -> keras guide
  • piq -> a collection of measures and metrics for image quality assessment
  • FFA-Net -> Feature Fusion Attention Network for Single Image Dehazing
  • DeepCalib -> A Deep Learning Approach for Automatic Intrinsic Calibration of Wide Field-of-View Cameras
  • PerceptualSimilarity -> LPIPS is a perceptual metric which aims to overcome the limitations of traditional metrics such as PSNR & SSIM, to better represent the features the human eye picks up on
  • Optical-RemoteSensing-Image-Resolution -> code for 2018 paper: Deep Memory Connected Neural Network for Optical Remote Sensing Image Restoration. Two applications: Gaussian image denoising and single image super-resolution
  • Hyperspectral-Deblurring-and-Destriping
  • HyDe -> Hyperspectral Denoising algorithm toolbox in Python, with paper
  • HLF-DIP -> code for 2022 paper: Unsupervised Hyperspectral Denoising Based on Deep Image Prior and Least Favorable Distribution
  • RQUNetVAE -> code for 2022 paper: Riesz-Quincunx-UNet Variational Auto-Encoder for Satellite Image Denoising
  • deep-hs-prior -> code for 2019 paper: Deep Hyperspectral Prior: Denoising, Inpainting, Super-Resolution
  • iquaflow -> from Satellogic, an image quality framework that aims to provide a set of tools to assess image quality by using the performance of AI models trained on the images as a proxy.

Synthetic data

Training data can be hard to acquire, particularly for rare events such as change detection after disasters, or imagery of rare classes of objects. In these situations, generating synthetic training data might be the only option. This has become quite sophisticated, with 3D models being rendered in game engines such as Unreal.

