Forest fire detection on-board satellites – CNIA 2022

On Wednesday, June 29, 2022, Houssem Farhat (data scientist), Lionel Daniel (research engineer), Michael Benguigui (research engineer) and Adrien Girard (project leader) presented the article “Model and dataset for multi-spectral detection of forest fires on-board satellites” at CNIA (the French National Conference on Artificial Intelligence). This work is a result of the CIAR project.


ABOUT THE ARTICLE

“Model and dataset for multi-spectral detection of forest fires on-board satellites”

Abstract

The number of wildfires will likely increase by 50% before the end of the century. In this article, we focus on detecting wildfires onboard satellites to raise early warnings directly from space. In this context, we trained a UNetMobileNetV3 neural network to segment multispectral Sentinel-2* images, which we annotated semi-automatically and verified manually. We then deployed this network on a GPU** that can be embedded into a low-orbit satellite.
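
To make this concrete, below is a minimal sketch of how such a segmentation network can be instantiated in PyTorch. It assumes the third-party segmentation_models_pytorch package and an illustrative encoder name; it is not the authors' exact configuration.

  import torch
  import segmentation_models_pytorch as smp

  # Hypothetical sketch: a U-Net decoder on a MobileNetV3 encoder, sized for
  # 13-band Sentinel-2 tiles with a single "active fire" output mask.
  model = smp.Unet(
      encoder_name="timm-mobilenetv3_large_100",  # MobileNetV3 backbone via timm
      encoder_weights=None,   # 13-band input, so no ImageNet pre-training here
      in_channels=13,         # one channel per Sentinel-2 spectral band
      classes=1,              # binary segmentation: fire / no fire
  )

  x = torch.randn(1, 13, 256, 256)   # one 256x256 multispectral tile
  with torch.no_grad():
      logits = model(x)              # shape: (1, 1, 256, 256)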


More about the publication: here

LaTeX/BibTeX citation

@inproceedings{houssemfarhat:hal,
  TITLE = {Mod{\`e}le et jeu de donn{\'e}es pour la d{\'e}tection multi-spectrale de feux de for{\^e}t {\`a} bord de satellites},
  AUTHOR = {Farhat, Houssem and Daniel, Lionel and Benguigui, Michael and Girard, Adrien},
  BOOKTITLE = {PFIA 2022 -- CNIA},
  ADDRESS = {Saint-Etienne, France},
  YEAR = {2022},
  MONTH = Jun,
  KEYWORDS = {System Design ; semantic parsing ; abstract representation ; formalization},
}

*A constellation of Earth-observation satellites | **Graphics processing unit used to accelerate tensor computations for embedded Artificial Intelligence


ABOUT THE S2WDS (SENTINEL-2 WILDFIRE DATASET)

“In this research, we assembled a set of 90 scenes of 13 spectral bands from the Sentinel-2 satellites, with a Ground Sampling Distance from 40 to 80 m, where the active fires were automatically pre-annotated and then manually corrected.”

These Level-1C (L1C) images were downloaded from Sentinel-Hub’s OGC WCS (Web Coverage Service) API, then tiled into 256×256-pixel images. These tiles were split into a trainset, a valset and a testset, used respectively to train, validate and test an Artificial Intelligence that reaches a 94% IoU (Intersection over Union) score.
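
For intuition, here is a minimal NumPy sketch of the two mechanical steps above, cutting a scene into 256×256 tiles and scoring a predicted fire mask with IoU; function and variable names are ours, not the paper's.

  import numpy as np

  def tile_scene(scene, size=256):
      """Cut a (bands, H, W) scene into non-overlapping size x size tiles."""
      _, h, w = scene.shape
      return [scene[:, i:i + size, j:j + size]
              for i in range(0, h - size + 1, size)
              for j in range(0, w - size + 1, size)]

  def iou(pred, truth):
      """Intersection over Union between two binary fire masks."""
      inter = np.logical_and(pred, truth).sum()
      union = np.logical_or(pred, truth).sum()
      return inter / union if union else 1.0  # two empty masks count as a match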


Download the dataset: here
These results are reproducible with the code available: here

Two examples of 256×256-pixel images and their associated annotations (ground truth) indicating where the active fires are. The “False Color” images represent the same scene as the RGB image, but the red, green and blue channels of your screen display the spectral bands B12, B11 and B04 acquired by a Sentinel-2 satellite. Since the B12 and B11 bands are sensitive to short-wave infrared, the active fires appear in orange.
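
Such a false-color composite is easy to reproduce: route B12, B11 and B04 to the red, green and blue channels. A minimal sketch, assuming bands is a dict of 2-D reflectance arrays keyed by band name:

  import numpy as np

  def false_color(bands):
      """Stack B12, B11, B04 as an RGB image; active fires then glow orange."""
      rgb = np.dstack([bands["B12"], bands["B11"], bands["B04"]]).astype(float)
      rgb /= max(rgb.max(), 1e-6)      # crude normalization to [0, 1]
      return np.clip(rgb, 0.0, 1.0)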

ABOUT THE ENVIRONMENTAL IMPACT OF THIS STUDY

“The environmental impact of this study is difficult to measure, but it is important to know its order of magnitude.”

Considering only the carbon impact of the electricity consumed to train their Artificial Intelligence, the authors estimate that they emitted about 2 kg of CO2. This is about 3000 times lower than their annual personal carbon footprint, which will have to be divided by 5 by 2050 to stay within the +2°C target of the Paris Agreement. Although the entire team of authors is aware of the current environmental situation, it should be noted that this project ultimately aims to contribute to the environmental balance of our planet: by quickly detecting wildfires from space, a satellite could directly alert nearby humans and thus save lives, fauna and flora.
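
As a back-of-the-envelope check of these orders of magnitude (our arithmetic, not the paper's): 2 kg CO2 × 3000 = 6 t CO2, which indeed matches the scale of an individual's annual carbon footprint, and dividing it by 5 gives about 1.2 t CO2 per person per year as the 2050 target.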

For more details, consult the publication.


ABOUT CIAR PROJECT

“The CIAR (Autonomous and Reactive Image Chain) project studies technologies to facilitate the deployment of Artificial Intelligence for image processing on embedded systems (satellites, delivery drones, etc.).”

For this, three interdependent challenges are addressed:

  • The definition of the use case and the constitution of image databases;
  • The design of efficient AI adapted to the constraints of embedded systems;
  • The optimization and hardware implementation of the selected algorithms.

For several years, the CIAR team has been collaborating with the ESA (European Space Agency) on the OPS-SAT mission. On March 22, 2021, the team achieved two firsts in space:

  • The remote update, from a ground station, of an artificial neural network embedded on a satellite;
  • The use of an FPGA (Field-Programmable Gate Array) to deploy and run this neural network in orbit.

More about the CIAR project: here

KEY INFORMATION
Key numbers

Duration of the project: 5 years (June 2018 – June 2023)

Budget: €6.5M (partially financed by the PIA)

Members: 9

Industrial members

Activeeon, Avisto, Elsys, Geo4i, MyDataModels, Thales Alenia Space, TwinswHeel

Academic members

Inria, Leat/CNRS

Governmental actors

PIA
