Baowen Zhang, Chenxing Jiang, Heng Li, Shaojie Shen, Ping Tan
The code has been tested on Ubuntu 24.04 with CUDA 13.0 (driver 580.126.09), and on Ubuntu 20.04 with CUDA 12.8 (driver 570.133.07).
git clone https://github.com/HKUST-SAIL/Geometry-Grounded-Gaussian-Splatting.git --recursive
cd Geometry-Grounded-Gaussian-Splatting
conda create -n gggs python=3.12
conda activate gggs
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130
pip install -r requirements.txt
pip install submodules/diff-gaussian-rasterization --no-build-isolation
pip install submodules/warp-patch-ncc --no-build-isolation
pip install submodules/simple-knn --no-build-isolation
pip install git+https://github.com/rahul-goel/fused-ssim/ --no-build-isolation
conda install -y conda-forge::cgal
pip install submodules/tetra_triangulation --no-build-isolation
We train on the preprocessed DTU dataset from 2DGS:
https://surfsplatting.github.io/
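Once the environment is built, a quick import check can confirm that PyTorch and the compiled extensions installed correctly. This is only a sketch: the module names below are assumed from the submodule folder names and may differ in this repository.

```python
# Sanity check (a sketch): report whether each expected module is importable.
# Module names are assumed from the submodule folder names and may differ.
import importlib.util


def check_modules(names):
    """Return {module_name: bool} for whether each module is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}


if __name__ == "__main__":
    status = check_modules([
        "torch",
        "diff_gaussian_rasterization",
        "simple_knn",
        "fused_ssim",
    ])
    for name, ok in status.items():
        print(f"{name}: {'ok' if ok else 'MISSING'}")
```

Anything reported as MISSING usually means the corresponding `pip install` step above failed and should be re-run with its build log inspected.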
For geometry evaluation, download the official DTU point clouds and place them under:
dtu_eval/Offical_DTU_Dataset
DTU dataset page: https://roboimagedata.compute.dtu.dk/?page_id=36
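The DTU geometry metric compares predicted geometry against these point clouds. The core of such a metric is a bidirectional Chamfer distance; the sketch below shows only that core (the official DTUeval pipeline additionally applies observation masks and point downsampling, which are omitted here).

```python
# Core of a DTU-style geometry metric (a sketch, not the official DTUeval
# pipeline): mean distance from predicted points to GT (accuracy), mean
# distance from GT to predicted (completeness), averaged into a Chamfer
# distance. Brute-force, so only suitable for small point sets.
import numpy as np


def chamfer_distance(pred, gt):
    """Bidirectional Chamfer distance between two (N, 3) point arrays."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    accuracy = d.min(axis=1).mean()       # pred -> gt
    completeness = d.min(axis=0).mean()   # gt -> pred
    return 0.5 * (accuracy + completeness)
```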
Please follow PGSR to preprocess the TnT dataset. For evaluation, download the GT point clouds, camera poses, alignments, and crop files from:
https://www.tanksandtemples.org/download/
Expected structure:
GT_TNT_dataset/
Barn/
images/
000001.jpg
000002.jpg
...
sparse/
0/
...
Barn.json
Barn.ply
Barn_COLMAP_SfM.log
Barn_trans.txt
Caterpillar/
...
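A small helper can verify a scene folder matches the tree above before running evaluation. The function name is hypothetical; the expected paths mirror the listing exactly.

```python
# Check a Tanks and Temples scene folder for the files the evaluation
# step expects (a sketch; the layout mirrors the tree shown above).
from pathlib import Path


def check_tnt_scene(root, scene):
    """Return the list of expected paths missing under <root>/<scene>."""
    base = Path(root) / scene
    expected = [
        base / "images",
        base / "sparse" / "0",
        base / f"{scene}.json",
        base / f"{scene}.ply",
        base / f"{scene}_COLMAP_SfM.log",
        base / f"{scene}_trans.txt",
    ]
    return [str(p) for p in expected if not p.exists()]
```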
Below are example commands for training, mesh extraction, rendering, and evaluation.
# Training (DTU)
python train.py -s <path_to_dtu> -m <output_dir> -r 2 --use_decoupled_appearance 3
# Mesh extraction
python mesh_extract.py -m <output_dir>
# Evaluation
python evaluate_dtu_mesh.py -m <output_dir>

# Training (Tanks and Temples)
python train.py -s <path_to_preprocessed_tnt> -m <output_dir> -r 2 --use_decoupled_appearance 3
# Mesh extraction
# Tips:
# - Add --move_cpu to reduce GPU memory usage (if needed).
# - Add --export_color to export the mesh with vertex colors.
python mesh_extract_tetrahedra.py -m <output_dir>
# python mesh_extract_tetrahedra.py -m <output_dir> --move_cpu --export_color
# Evaluation
python eval_tnt/run.py \
--dataset-dir <path_to_gt_tnt> \
--traj-path <path_to_COLMAP_SfM.log> \
--ply-path <output_dir>/recon_post.ply \
--out-dir <output_dir>/mesh

# Training (novel view synthesis)
python train.py -s <path_to_dataset> -m <output_dir> --eval
# Rendering
python render.py -m <output_dir>
# Evaluation
python metrics.py -m <output_dir>

# Training with the spherical Gaussian appearance model
python train.py -s <path_to_dataset> -m <output_dir> --eval --sh_degree 2 --sg_degree 7
python render.py -m <output_dir>
python metrics.py -m <output_dir>

The viewer is based on the original 3D Gaussian Splatting (SIBR) viewer, with minor updates for newer library versions and for loading 3D Gaussian models.
The current version uses embree4 and a newer version of Boost; if you need to build with older library versions, please refer to SIBR_viewers/SIBR_viewers.patch for the required changes.
Build and use it the same way as the original Gaussian Splatting viewer:
https://github.com/graphdeco-inria/gaussian-splatting
This project builds on Gaussian Splatting and RaDe-GS.
We refer to gsplat for its use of warp-level reduction to accelerate the backward pass.
https://github.com/nerfstudio-project/gsplat
We integrate:
- Loss terms from 2DGS and PGSR.
- Densification strategy from GOF: https://github.com/autonomousvision/gaussian-opacity-fields
- Filters from Mip-Splatting: https://github.com/autonomousvision/mip-splatting
- Spherical Gaussian appearance model from RayGauss and RayGaussX.
- Appearance models from Gaussian Splatting, GOF, and PGSR.
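A single spherical Gaussian lobe of the kind used in such appearance models evaluates as a * exp(lambda * (dot(axis, v) - 1)) for a unit view direction v. The sketch below shows that formula only; the exact parameterization and lobe count in this codebase (controlled by `--sg_degree`) may differ.

```python
# Evaluate one spherical Gaussian lobe (a sketch of the general formula,
# not necessarily this codebase's parameterization). The lobe peaks at
# v == axis and decays with the angle between v and axis; larger
# sharpness gives a narrower lobe.
import numpy as np


def spherical_gaussian(v, axis, sharpness, amplitude):
    """a * exp(lambda * (dot(axis, v) - 1)) for unit vectors v and axis."""
    return amplitude * np.exp(sharpness * (np.dot(axis, v) - 1.0))
```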
Evaluation toolkits:
- DTU: https://github.com/jzhangbs/DTUeval-python
- Tanks and Temples: https://github.com/isl-org/TanksAndTemples/tree/master/python_toolbox/evaluation
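The Tanks and Temples toolbox reports precision and recall of point-to-point distances at a distance threshold, combined into an F-score. The sketch below shows that metric in isolation; the official script additionally handles trajectory alignment and crop volumes.

```python
# Tanks-and-Temples-style F-score (a sketch of the metric only): precision
# is the fraction of predicted points within the threshold of GT, recall
# the fraction of GT points within the threshold of the prediction, and
# the F-score their harmonic mean. Brute-force; for small point sets.
import numpy as np


def f_score(pred, gt, threshold):
    """F-score between two (N, 3) point arrays at a distance threshold."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (d.min(axis=1) < threshold).mean()
    recall = (d.min(axis=0) < threshold).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```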
