
Cross-view image geo-localization with Panorama-BEV Co-Retrieval Network


This repository contains the official implementation of the paper: Cross-view image geo-localization with Panorama-BEV Co-Retrieval Network. It is an effective cross-view retrieval framework that adds an additional street-view BEV retrieval branch, and it achieves leading performance on multiple benchmarks, including VIGOR, CVUSA, CVACT, and CVUSA-to-CVACT cross-dataset retrieval.

(Figure: method overview)

📢 News

  • 2024-12: We also published new work on cross-view retrieval based on natural-language text: "Where am I?" CVG-Text, available here.
  • 2024-10: The code for Street View-BEV co-retrieval inference is now available. If any code is missing or behaves abnormally, please report it in an issue.
  • 2024-09: The training and testing code for the BEV branch on CVACT has been released.
  • 2024-08: The source code for the BEV transformation has been released (CVACT/CVUSA).
  • 2024-07: EP-BEV is accepted to ECCV 2024.

Installation

Clone this repo to a local folder:

git clone https://github.com/yejy53/EP-BEV.git
cd EP-BEV

Environment Setup

conda create -n EP-BEV python=3.9 -y
conda activate EP-BEV
pip install -r requirements.txt

If Hugging Face cannot download the weights successfully, you can add export HF_ENDPOINT="https://hf-mirror.com" to the end of your .bashrc and re-source it (e.g. source ~/.bashrc).
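
Alternatively, the mirror endpoint can be set per process before the download runs. The snippet below is only a minimal sketch: it assumes the weights are fetched with huggingface_hub, and the repository id shown is a placeholder rather than the actual one.

import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # must be set before huggingface_hub is imported
from huggingface_hub import snapshot_download
snapshot_download(repo_id="user/ep-bev-weights")  # placeholder repo id; replace with the real one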

Data Preparation

The publicly available datasets used in this paper can be obtained from the following sources:

Preparing CVUSA Dataset. The dataset can be downloaded here.

Preparing CVACT Dataset. The dataset can be downloaded here.

Preparing VIGOR Dataset. The dataset can be downloaded here.

Preparing CVGlobal Dataset. The dataset can be downloaded here.


Data Structure:

├─ CVACT
│  ├── ACT_data.mat
│  ├── ANU_data_small/
│  │  ├── bev/
│  │  ├── satview_polish/
│  │  └── streetview/
│  └── ANU_data_test/

Use our pre-trained model for retrieval

  1. Download a pre-trained model (e.g. CVACT) from Hugging Face and place it in the ckpt folder.
  2. Organize the generated BEV images into the dataset format shown above. You can download the generated BEV images directly from the following Hugging Face link to obtain consistent results, or generate the BEV images yourself and then retrain.
  3. When performing Street View-BEV co-retrieval, you only need to add the similarity obtained with the pure street-view image to the similarity obtained with the BEV image (a minimal sketch of this fusion is given after this list). The weights for street-view retrieval can be obtained from the following Hugging Face link; the corresponding method and weights can also be found in Sample4G.
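
The co-retrieval described in step 3 is a simple late fusion of the two branches' similarity scores. The sketch below assumes each branch has already produced a query-by-reference similarity matrix; the function and variable names are illustrative and not part of the released code.

import numpy as np

def co_retrieval_ranking(sim_street, sim_bev):
    # sim_street, sim_bev: (num_queries, num_references) similarity matrices
    # produced by the street-view branch and the BEV branch respectively.
    sim_fused = sim_street + sim_bev       # sum the two similarities, as in step 3
    return np.argsort(-sim_fused, axis=1)  # reference indices ranked best-first per query

For example, co_retrieval_ranking(sim_street, sim_bev)[:, 0] gives the top-1 reference index for each query under the fused score.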

❤️ Acknowledgements

Our code is built on top of Sample4G and Boosting3DoF. We appreciate these previous open-source works.

🙏 BibTeX

If you have any questions, feel free to contact me!

@inproceedings{ye2025cross,
  title={Cross-view image geo-localization with Panorama-BEV Co-Retrieval Network},
  author={Ye, Junyan and Lv, Zhutao and Li, Weijia and Yu, Jinhua and Yang, Haote and Zhong, Huaping and He, Conghui},
  booktitle={European Conference on Computer Vision},
  pages={74--90},
  year={2025},
  organization={Springer}
}
