Ze Shi* ·
Hao Shi* ·
Kailun Yang ·
Zhe Yin ·
Yining Lin ·
Kaiwei Wang
- [2023-11] ⚙️ Code Release
- [2023-08] 🎉 PanoVPR is accepted to 26th IEEE International Conference on Intelligent Transportation Systems (ITSC-2023).
- [2023-03] 🚧 Initialize repo and release arXiv version
We propose PanoVPR, a Visual Place Recognition framework for retrieving panoramic database images using perspective query images. To achieve this, we adopt a sliding-window approach on the panoramic database images, narrowing the model's observation range over the large field-of-view panoramas. We achieve promising results on the derived dataset Pitts250K-P2E and on YQ360, a real-world scenario dataset.
For more details, please check our arXiv paper.
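To illustrate the sliding-window idea, here is a minimal sketch in Python: an equirectangular panorama is split into `split_nums` overlapping vertical windows that wrap around the 360° seam, each roughly matching a perspective query's field of view. The function and parameter names below are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def sliding_windows(pano: np.ndarray, split_nums: int, window_ratio: float = 0.25):
    """Yield `split_nums` windows from an H x W x C equirectangular panorama.

    `window_ratio` is the window width as a fraction of the panorama width;
    windows wrap around the 360-degree seam, so every window is full width.
    """
    h, w = pano.shape[:2]
    win_w = int(w * window_ratio)
    for i in range(split_nums):
        start = i * w // split_nums
        # Column indices, wrapping past the right edge back to column 0
        cols = [(start + j) % w for j in range(win_w)]
        yield pano[:, cols]

# Example: split a dummy 512 x 2048 panorama into 24 windows,
# mirroring the --split_nums 24 setting used in the commands below.
pano = np.zeros((512, 2048, 3), dtype=np.uint8)
windows = list(sliding_windows(pano, split_nums=24))
```

Each window can then be encoded by the same backbone as the perspective query, and the query is matched against the best-scoring window of each panorama.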
You first need to create an environment from the file `environment.yml` using Conda, and then activate it.
```shell
conda env create -f environment.yml --prefix /path/to/env
conda activate PanoVPR
```
If you want to train the network, you can change the training configuration and the dataset by specifying parameters such as `--backbone`, `--split_nums`, and `--dataset_name` on the command line. Adjust the other parameters to suit your setup. By default, the output results are stored in the `./logs/{save_dir}` folder. Please note that the `--title` parameter must be specified on the command line.
```shell
# Train on Pitts250K
python train.py --title swinTx24 \
                --save_dir clean_branch_test \
                --backbone swin_tiny \
                --split_nums 24 \
                --dataset_name pitts250k \
                --cache_refresh_rate 125 \
                --neg_sample 100 \
                --queries_per_epoch 2000
```
For inference, you need to pass the absolute path to the directory containing `best_model.pth` via the `--resume` parameter.
```shell
# Val and Test on Pitts250K
python test.py --title test_swinTx24 \
               --save_dir clean_branch_test \
               --backbone swin_tiny \
               --split_nums 24 \
               --dataset_name pitts250k \
               --cache_refresh_rate 125 \
               --neg_sample 100 \
               --queries_per_epoch 2000 \
               --resume <absolute path containing best_model.pth>
```