Xingguang Zhong · Liren Jin · Marija Popović · Jens Behley · Cyrill Stachniss
| Bonn | WildGS |
|---|---|
| person.mp4 | umbrella.mp4 |
- Release the SLAM code and weights of π³mos.
- Evaluation scripts for camera tracking.
- Evaluation scripts for moving object segmentation.
- Evaluation scripts for video depth prediction.
- Training code of π³mos.
We tested the code on Ubuntu 22.04 with CUDA 12.1. You may need to adjust the PyTorch version in environment.yml according to your CUDA version. Refer to the PyTorch documentation for compatible versions.
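For example, you can check which CUDA version your machine provides before editing environment.yml (a minimal sketch; the exact package names in environment.yml may differ on your system):
```bash
# Check the locally installed CUDA toolkit and driver versions
nvcc --version    # CUDA toolkit version
nvidia-smi        # driver version and the highest CUDA runtime it supports
```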
Clone the repo
```bash
git clone https://github.com/PRBonn/Pi3MOS-SLAM.git --recursive
cd Pi3MOS-SLAM
```
Create and activate the Anaconda environment
```bash
conda env create -f environment.yml
conda activate pi3mos
```
Download Eigen and install π³mos-SLAM
```bash
wget https://gitlab.com/libeigen/eigen/-/archive/3.4.0/eigen-3.4.0.zip
unzip eigen-3.4.0.zip -d thirdparty
# install... It may take a while.
pip install . --no-build-isolation
```
Download the weights of the DPVO network
```bash
mkdir checkpoints
bash scripts/download_model.sh
```
If download_model.sh doesn't work, you can also download dpvo.pth here and then copy it into the checkpoints folder manually.
Finally, download the weights of the π³mos model here and copy it into the checkpoints folder.
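As a quick sanity check, both weight files should now sit directly in the checkpoints folder (the π³mos filename below is only a placeholder; keep whatever name the download provides):
```bash
ls checkpoints
# dpvo.pth  pi3mos.pth   <- placeholder name for the π³mos weights
```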
Download a demo sequence from the Wild-SLAM Mocap Dataset
```bash
bash scripts/download_wild_slam_mocap_umbrella.sh
```
Then run the demo code with:
```bash
python demo.py --imagedir datasets/Wild_SLAM_Mocap/scene1/umbrella/rgb --calib calib/wildgs/umbrella.txt --viz
```
We downsample the point cloud to reduce the computational burden of visualization. If you run the demo without --calib, the code will automatically estimate the intrinsic parameters from π³mos's depth prediction.
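For reference, calib files in DPVO-style pipelines are usually a single line of pinhole intrinsics; treat the exact format here as an assumption and compare against calib/wildgs/umbrella.txt (the numbers below are placeholders):
```bash
cat calib/wildgs/umbrella.txt
# assumed format: fx fy cx cy (placeholder values)
# 600.0 600.0 320.0 240.0
```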
For camera tracking performance, we evaluate our method on three datasets: Wild-SLAM Mocap Dataset, Bonn RGB-D Dataset, and Sintel Dataset.
Download all sequences from the Wild-SLAM Mocap Dataset. You can use the download scripts from WildGS-SLAM.
```bash
bash scripts/download_wild_slam_mocap_scene1.sh
bash scripts/download_wild_slam_mocap_scene2.sh
```
Then change the dataset_root and run the evaluation:
```bash
python evaluation/evaluate_wild.py --dataset_root /path/to/Wild_SLAM_Mocap
```
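Here, --dataset_root should point at the folder containing the scene directories. Based on the demo path above (datasets/Wild_SLAM_Mocap/scene1/umbrella/rgb), a layout along these lines is assumed:
```
Wild_SLAM_Mocap/
├── scene1/
│   ├── umbrella/
│   │   └── rgb/
│   └── ...            # other scene1 sequences
└── scene2/
    └── ...
```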
Download the data from our website 😉. For consistency with the baselines, we only report results on 8 sequences, which are listed in evaluation/evaluate_bonn.py. Once the data preparation is done, change the path and run:
```bash
python evaluation/evaluate_bonn.py --dataset_root /path/to/rgbd_bonn_dataset
```
Download the data here (the link is from MegaSaM) and unzip it to your desired location, then change the path and run:
```bash
python evaluation/evaluate_sintel.py --dataset_root /path/to/Sintel
```
We tested our code on a single NVIDIA RTX A6000. Different environments and hardware may lead to slightly different results. We welcome testing and discussion 🤝.
We built our system upon π³ and DPVO. Our GUI is modified from MonoGS. We thank the authors for open-sourcing such great projects.
If you have any questions, feel free to contact:
- Xingguang Zhong [email protected]
- Liren Jin [email protected]
If you use π³mos-SLAM for your academic work, please cite:
```bibtex
@article{zhong2025arxiv,
  title   = {{Dynamic Visual SLAM using a General 3D Prior}},
  author  = {Zhong, Xingguang and Jin, Liren and Popovi{\'c}, Marija and Behley, Jens and Stachniss, Cyrill},
  journal = {arXiv preprint},
  volume  = {arXiv:2512.06868},
  year    = {2025}
}
```

