☀️BRIGHT☀️

BRIGHT: A globally distributed multimodal VHR dataset for all-weather disaster response

Hongruixuan Chen1,2, Jian Song1,2, Olivier Dietrich3, Clifford Broni-Bediako2, Weihao Xuan1,2, Junjue Wang1
Xinlei Shao1, Yimin Wei1,2, Junshi Xia3, Cuiling Lan4, Konrad Schindler3, Naoto Yokoya1,2 *

1 The University of Tokyo, 2 RIKEN AIP, 3 ETH Zurich, 4 Microsoft Research Asia

arXiv Paper | CodaLab Leaderboard | Zenodo Dataset | HuggingFace Dataset

Overview | Start DFC25 | Common Issues | Others

🛎️Updates

  • Jan 13th, 2025: The benchmark code for IEEE GRSS DFC 2025 Track II is now available. Please follow the instructions below to use it!

🔭Overview

  • BRIGHT is the first open-access, globally distributed, event-diverse multimodal dataset specifically curated to support AI-based disaster response. It covers five types of natural disasters and two types of man-made disasters across 12 regions worldwide, with a particular focus on developing countries (9 of the 12 events are used for training and validation in IEEE GRSS DFC 2025).


🗝️Let's Get Started with DFC 2025!

A. Installation

Note that the code in this repo has only been tested under Linux; we have not verified that it works on other operating systems.

Step 1: Clone the repository:

Clone this repository and navigate to the project directory:

git clone https://github.com/ChenHongruixuan/BRIGHT.git
cd BRIGHT

Step 2: Environment Setup:

It is recommended to set up a conda environment and install dependencies via pip. Use the following commands to set up your environment:

Create and activate a new conda environment

conda create -n bright-benchmark
conda activate bright-benchmark

Install dependencies

pip install -r requirements.txt
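
After installing, a quick sanity check can confirm that the core dependencies resolve. The snippet below is a minimal sketch and not part of the repo; the package names torch and numpy are assumptions about what requirements.txt contains, so adjust them to the actual dependency list.

# Minimal environment sanity check (not part of the repo).
# torch and numpy are assumed dependencies; edit as needed.
import importlib

for pkg in ("torch", "numpy"):
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "unknown"), "OK")
    except ImportError:
        print(pkg, "is missing; re-run: pip install -r requirements.txt")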

B. Data Preparation

Please download the BRIGHT dataset from Zenodo or HuggingFace and organize it into the following folder/file structure:

${DATASET_ROOT}   # Dataset root directory, for example: /home/username/data/dfc25_track2_trainval
│
├── train
│    ├── pre-event
│    │    ├──bata-explosion_00000000_pre_disaster.tif
│    │    ├──bata-explosion_00000001_pre_disaster.tif
│    │    ├──bata-explosion_00000002_pre_disaster.tif
│    │   ...
│    │
│    ├── post-event
│    │    ├──bata-explosion_00000000_post_disaster.tif
│    │    ... 
│    │
│    └── target
│         ├──bata-explosion_00000000_building_damage.tif 
│         ...   
│   
└── val
     ├── pre-event
     │    ├──bata-explosion_00000003_pre_disaster.tif
     │   ...
     │
     └── post-event
          ├──bata-explosion_00000003_post_disaster.tif
         ...
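
Training and inference match pre-event, post-event, and target files by their shared tile name, so it can save time to verify the layout before launching a run. The following is a minimal sketch based on the naming scheme shown above; DATASET_ROOT is a placeholder for your own path.

from pathlib import Path

# Placeholder; point this at your own dataset root.
DATASET_ROOT = Path("/home/username/data/dfc25_track2_trainval")
train = DATASET_ROOT / "train"

missing = []
for pre in sorted((train / "pre-event").glob("*_pre_disaster.tif")):
    stem = pre.name.replace("_pre_disaster.tif", "")
    # Every pre-event tile should have a post-event image and a damage target.
    for expected in (train / "post-event" / (stem + "_post_disaster.tif"),
                     train / "target" / (stem + "_building_damage.tif")):
        if not expected.exists():
            missing.append(expected)

print("missing files:", len(missing))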

C. Model Training & Tuning

The following command shows how to train and evaluate UNet on the BRIGHT dataset using our data split in dfc25_benchmark/dataset/splitname:

python script/train_baseline_network.py  --dataset 'BRIGHT' \
                                          --train_batch_size 16 \
                                          --eval_batch_size 4 \
                                          --num_workers 1 \
                                          --crop_size 640 \
                                          --max_iters 800000 \
                                          --learning_rate 1e-4 \
                                          --model_type 'UNet' \
                                          --train_dataset_path '<your dataset path>/train' \
                                          --train_data_list_path '<your project path>/dfc25_benchmark/dataset/splitname/train_setlevel.txt' \
                                          --holdout_dataset_path '<your dataset path>/train' \
                                          --holdout_data_list_path '<your project path>/dfc25_benchmark/dataset/splitname/holdout_setlevel.txt' 
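
If you want to inspect individual tiles outside the provided scripts, you can read one pre/post/target triplet directly. The sketch below assumes the tifffile package, which may not be in requirements.txt (install it separately if needed); the file names are examples taken from the folder structure above.

import numpy as np
import tifffile

# Example tile from the train split; adjust the paths to your dataset root.
pre = tifffile.imread("train/pre-event/bata-explosion_00000000_pre_disaster.tif")
post = tifffile.imread("train/post-event/bata-explosion_00000000_post_disaster.tif")
label = tifffile.imread("train/target/bata-explosion_00000000_building_damage.tif")

# Pre- and post-event images come from different modalities, so their
# shapes, channel counts, and value ranges can differ.
print("pre: ", pre.shape, pre.dtype)
print("post:", post.shape, post.dtype)
print("label classes:", np.unique(label))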

D. Inference & Submission

For the current development phase and the subsequent test phase, you can run the following command to generate both raw and visualized prediction results:

python script/infer_using_baseline_network.py  --val_dataset_path '<your dataset path>/val' \
                                               --val_data_list_path '<your project path>/dfc25_benchmark/dataset/splitname/val_setlevel.txt' \
                                               --existing_weight_path '<your trained model path>' \
                                               --inferece_saved_path '<your inference results saved path>'

Then, you can go to the official Leaderboard on CodaLab to submit your results.

  • Keep each prediction's file name consistent with the corresponding label name, i.e., turkey-earthquake_00000001_building_damage.png, hawaii-wildfire_00000003_building_damage.png, and so on.
  • Submit all png files in a single zip archive. Zip the prediction files directly, without any folder containing them (see the packaging sketch after this list).
  • Submit the raw prediction results instead of the visualized ones.
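
To satisfy the naming and flat-zip requirements above, you can package the raw predictions with a few lines of Python. This is a minimal sketch; the prediction directory is a placeholder for wherever the inference script saved your raw (non-visualized) results.

import zipfile
from pathlib import Path

# Placeholder; point this at the folder holding your raw prediction png files.
pred_dir = Path("<your inference results saved path>")

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for png in sorted(pred_dir.glob("*_building_damage.png")):
        # arcname drops the directory part, so no folder ends up inside the zip.
        zf.write(png, arcname=png.name)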

🤔Common Issues

Based on peers' questions in the issue section, here is a quick navigation list of solutions to some common issues.

Issue: Abnormal accuracy (like 0 or -999999) given by the leaderboard
Solution: Keep the prediction names consistent with the label names, and zip all prediction files directly, not the folder containing them.

📜Reference

If this dataset or code contributes to your research, please consider citing our paper (coming to arXiv soon) and giving this repo a ⭐️ :)

🤝Acknowledgments

The authors would like to give special thanks to Sarah Preston of Capella Space, Capella Space's Open Data Gallery, the Maxar Open Data Program, and Umbra Space's Open Data Program for providing the valuable data.

🙋Q & A

For any questions, please feel free to leave them in the issue section or contact us.
