{"id":20515564,"url":"https://github.com/ignf/flair-2","last_synced_at":"2025-09-24T23:31:22.151Z","repository":{"id":169006580,"uuid":"630460573","full_name":"IGNF/FLAIR-2","owner":"IGNF","description":"Semantic segmentation from multi-source optical data (baseline for the FLAIR#2 challenge)","archived":false,"fork":false,"pushed_at":"2024-07-14T09:01:02.000Z","size":16926,"stargazers_count":69,"open_issues_count":0,"forks_count":11,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-01-09T01:37:37.251Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://ignf.github.io/FLAIR/index.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/IGNF.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-04-20T12:35:14.000Z","updated_at":"2025-01-06T08:53:31.000Z","dependencies_parsed_at":"2023-11-30T01:25:58.242Z","dependency_job_id":"4fba4aa7-0344-4d77-9f34-807f52c58223","html_url":"https://github.com/IGNF/FLAIR-2","commit_stats":null,"previous_names":["ignf/flair-2-ai-challenge","ignf/flair-2"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGNF%2FFLAIR-2","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGNF%2FFLAIR-2/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGNF%2FFLAIR-2/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGNF%2FFLAIR-2/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/Git
Hub/owners/IGNF","download_url":"https://codeload.github.com/IGNF/FLAIR-2/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":234138529,"owners_count":18785431,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-15T21:22:51.796Z","updated_at":"2025-09-24T23:31:22.145Z","avatar_url":"https://github.com/IGNF.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\n# FLAIR #2:\n# multi-source optical imagery and semantic segmentation\n\n\n\n\n\n\n![Static Badge](https://img.shields.io/badge/Code%3A-lightgrey?color=lightgrey) [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/IGNF/FLAIR-1-AI-Challenge/blob/master/LICENSE) \u003ca href=\"https://pytorch.org/get-started/locally/\"\u003e\u003cimg alt=\"PyTorch\" src=\"https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch\u0026logoColor=white\"\u003e\u003c/a\u003e\n\u003ca href=\"https://pytorchlightning.ai/\"\u003e\u003cimg alt=\"Lightning\" src=\"https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning\u0026logoColor=white\"\u003e\u003c/a\u003e \u0026emsp; ![Static Badge](https://img.shields.io/badge/Dataset%3A-lightgrey?color=lightgrey) [![license](https://img.shields.io/badge/License-IO%202.0-green.svg)](https://github.com/etalab/licence-ouverte/blob/master/open-licence.md)\n\n\n\nParticipate in obtaining more accurate maps for a more comprehensive description and a better understanding of our environment! 
Come push the limits of state-of-the-art semantic segmentation approaches on a large and challenging dataset. Get in touch at ai-challenge@ign.fr\n\n\n![Alt bandeau FLAIR-IGN](images/flair_bandeau.jpg?raw=true)\n\u003cimg width=\"100%\" src=\"images/flair-2_logos-hd.png\"\u003e\n\u003c/div\u003e\n\n\n\u003cdiv style=\"border-width:1px; border-style:solid; border-color:#d2db8c; padding-left: 1em; padding-right: 1em; \"\u003e\n\n\n\u003cbr\u003e\n\n## 📢 New Repository Available\n\nA more recent FLAIR project is now available here: \u003cbr\u003e\n👉 [FLAIR-HUB](https://github.com/IGNF/FLAIR-HUB)\n\u003cbr\u003e\u003cbr\u003e\n\n\n  \n\u003ch2 style=\"margin-top:5px;\"\u003eLinks\u003c/h2\u003e\n\n\n- **Datapaper: https://arxiv.org/pdf/2305.14467.pdf** \n\n- **Dataset links:** https://ignf.github.io/FLAIR/#FLAIR2\n\n- **Challenge page: https://codalab.lisn.upsaclay.fr/competitions/13447** [🛑 closed!]\n\n\u003c/div\u003e\n\u003cbr\u003e\u003cbr\u003e\n\n\n\n\n\n## Context \u0026 Data\n\nThe FLAIR #2 dataset is sampled countrywide and is composed of over 20 billion annotated pixels of very high resolution aerial imagery at 0.2 m spatial resolution, acquired over three years and different months (spatio-temporal domains). Aerial imagery patches consist of 5 channels (RGB-Near Infrared-Elevation) and have corresponding annotations (with 19 semantic classes, or 13 for the baselines). Furthermore, to integrate broader spatial context and temporal information, high resolution Sentinel-2 1-year time series with 10 spectral bands are also provided. 
More than 50,000 Sentinel-2 acquisitions with 10 m spatial resolution are available.\n\u003cbr\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"40%\" src=\"images/flair-2-spatial.png\"\u003e\n  \u003cbr\u003e\n  \u003cem\u003eSpatial definitions of the FLAIR #2 dataset.\u003c/em\u003e\n\u003c/p\u003e\n\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"85%\" src=\"images/flair-2-patches.png\"\u003e\n  \u003cbr\u003e\n  \u003cem\u003eExample of input data (first three columns are from aerial imagery, fourth from Sentinel-2) and corresponding supervision masks (last column).\u003c/em\u003e\n\u003c/p\u003e\n\n\u003cbr\u003e\u003cbr\u003e\n## Baseline model \n\nThe baseline is a two-branch architecture integrating a U-Net \u003ca href=\"https://github.com/qubvel/segmentation_models.pytorch\"\u003e\u003cimg src=\"https://img.shields.io/badge/Link%20to-SMP-f4dbaa.svg\"/\u003e\u003c/a\u003e with a pre-trained ResNet34 encoder and a U-TAE \u003ca href=\"https://github.com/VSainteuf/utae-paps\"\u003e\u003cimg src=\"https://img.shields.io/badge/Link%20to-U--TAE-f4dbaa.svg\"/\u003e\u003c/a\u003e built around a temporal self-attention encoder. The U-TAE branch learns spatio-temporal embeddings from the high resolution satellite time series, which are then integrated into the U-Net branch exploiting the aerial imagery. The proposed _U-T\u0026T_ model features a fusion module that extends and reshapes the U-TAE embeddings in order to add them to the U-Net branch.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"100%\" src=\"images/flair-2-network.png\"\u003e\n  \u003cbr\u003e\n  \u003cem\u003eOverview of the proposed two-branch architecture.\u003c/em\u003e\n\u003c/p\u003e\n\n\u003cbr\u003e\u003cbr\u003e\n\n## Usage \n\nThe `flair-2-config.yml` file controls paths, hyperparameters and computing resources. 
The file `requirement.txt` lists the libraries used for the baselines.\n\nTo launch a training/inference/metrics computation, you can either:\n\n- run\n  ```\n  main.py --config_file=flair-2-config.yml\n  ```\n\n- or use the `./notebook/flair-2-notebook.ipynb` notebook, which guides you through the data visualization, training and testing steps.\n\nA toy dataset (reduced size) is available to check that your installation and the information in the configuration file are correct.\n\n\u003cbr\u003e\u003cbr\u003e\n\n## Leaderboard\n\nPlease note that participants in the FLAIR #2 challenge on CodaLab must satisfy a certain number of constraints (in particular, inference time). All details are available on the _Overview_ page of the competition.\n\n| Model | Input | mIoU |\n| ------------ | ------------- | ------------- |\n| baseline U-Net (ResNet34) | aerial imagery | 0.5470 |\n| baseline U-Net (ResNet34) + _metadata + augmentation_ | aerial imagery | 0.5593 |\n|||\n| baseline U-T\u0026T | aerial and satellite imagery | 0.5594 |\n| baseline U-T\u0026T + _filter clouds + monthly averages + data augmentation_ | aerial and satellite imagery | 0.5758 |\n\nIf you want to submit a new entry, you can open a new issue.\n\u003cb\u003e Results of the challenge will be reported after it ends in early October! 
\u003c/b\u003e\n\nThe baseline U-T\u0026T + _filter clouds + monthly averages + data augmentation_ obtains the following confusion matrix: \n\n\u003cbr\u003e\u003cbr\u003e\n\u003cp align=\"center\"\u003e\n  \u003cimg width=\"50%\" src=\"images/flair-2-confmat.png\"\u003e\n  \u003cbr\u003e\n  \u003cem\u003eBaseline confusion matrix of the test dataset, normalized by rows.\u003c/em\u003e\n\u003c/p\u003e\n\n\n\u003cbr\u003e\u003cbr\u003e\u003cbr\u003e\n\n## Reference\nPlease cite the following article if you use the FLAIR #2 dataset:\n\n```bibtex\n@inproceedings{ign2023flair2,\n      title={FLAIR: a Country-Scale Land Cover Semantic Segmentation Dataset From Multi-Source Optical Imagery}, \n      author={Anatol Garioud and Nicolas Gonthier and Loic Landrieu and Apolline De Wit and Marion Valette and Marc Poupée and Sébastien Giordano and Boris Wattrelos},\n      year={2023},\n      booktitle={Advances in Neural Information Processing Systems (NeurIPS) 2023},\n      doi={https://doi.org/10.48550/arXiv.2310.13336},\n}\n```\n\n## Acknowledgment\nThis work was performed using HPC/AI resources from GENCI-IDRIS (Grant 2022-A0131013803). This work was supported by the European Union's \"Copernicus / FPCUP\" project, by the French Space Agency (CNES) and by Connect by CNES.\u003cbr\u003e\n\n\n\n\n\n## Dataset license\n\nThe \"OPEN LICENCE 2.0/LICENCE OUVERTE\" is a license created by the French government specifically to facilitate the dissemination of open data by public administrations. 
\nIf you are looking for an English version of this license, you can find it on the [official Etalab GitHub page](https://github.com/etalab/licence-ouverte).\n\nAs stated by the license:\n\n### Applicable legislation\n\nThis licence is governed by French law.\n\n### Compatibility of this licence\n\nThis licence has been designed to be compatible with any free licence that at least requires an acknowledgement of authorship, and specifically with the previous version of this licence as well as with the following licences: United Kingdom’s “Open Government Licence” (OGL), Creative Commons’ “Creative Commons Attribution” (CC-BY) and Open Knowledge Foundation’s “Open Data Commons Attribution” (ODC-BY).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fignf%2Fflair-2","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fignf%2Fflair-2","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fignf%2Fflair-2/lists"}