
Important

🌟 Stay up to date at opendrivelab.com!

OpenScene: Autonomous Grand Challenge Toolkits

Description

OpenScene is a compact redistribution of the large-scale nuPlan dataset. It retains only the relevant annotations and downsamples the sensor data to 2 Hz, which reduces the dataset size by a factor of more than 10. The dataset spans over 120 hours of driving and provides additional occupancy labels collected across several cities: Boston, Pittsburgh, Las Vegas, and Singapore.

OpenScene is also the large-scale dataset used for the End-to-End Driving and Predictive World Model tracks of the CVPR 2024 Autonomous Grand Challenge, and for the NAVSIM-v2 End-to-End Driving track of the CVPR 2025 Autonomous Grand Challenge. Please check the challenge docs for more details.

The statistics of the dataset, compared with prior occupancy benchmarks, are summarized below.

| Dataset                | Original Database            | Sensor Data (hr) | Flow | Semantic Categories |
|------------------------|------------------------------|------------------|------|---------------------|
| MonoScene              | NYUv2 / SemanticKITTI        | 5 / 6            |      | 10 / 19             |
| Occ3D                  | nuScenes / Waymo             | 5.5 / 5.7        |      | 16 / 14             |
| Occupancy-for-nuScenes | nuScenes                     | 5.5              |      | 16                  |
| SurroundOcc            | nuScenes                     | 5.5              |      | 16                  |
| OpenOccupancy          | nuScenes                     | 5.5              |      | 16                  |
| SSCBench               | KITTI-360 / nuScenes / Waymo | 1.8 / 4.7 / 5.6  |      | 19 / 16 / 14        |
| OccNet                 | nuScenes                     | 5.5              |      | 16                  |
| OpenScene              | nuPlan                       | 💥 120           | ✔️   | TODO                |

  • The time span of LiDAR frames accumulated for each occupancy annotation is 20 seconds.
  • Flow: the annotation of motion direction and velocity for each occupancy grid (see the sketch after these notes).
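As a rough illustration of what a per-voxel occupancy-plus-flow annotation could look like, here is a minimal loading sketch in Python. The file name, array keys, shapes, and units below are assumptions for illustration only; they are not specified in this README, so please refer to the dataset documentation for the actual format.

```python
# Hypothetical sketch of reading one occupancy + flow annotation frame.
# The file layout, key names, and shapes are assumptions, not the official format.
import numpy as np

def load_occupancy_frame(path: str):
    """Load a single (hypothetical) occupancy annotation file.

    Assumed contents:
      - 'semantics': (X, Y, Z) uint8 grid of per-voxel semantic category ids
      - 'flow':      (X, Y, Z, 2) float32 per-voxel motion (vx, vy) in m/s
    """
    data = np.load(path)
    semantics = data["semantics"]   # per-voxel semantic label
    flow = data["flow"]             # per-voxel motion direction and velocity
    return semantics, flow

if __name__ == "__main__":
    sem, flow = load_occupancy_frame("occ_annotation_example.npz")  # hypothetical file
    speed = np.linalg.norm(flow, axis=-1)   # speed magnitude per voxel
    print(sem.shape, flow.shape, speed.max())
```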

Getting Started

License and Citation

Our dataset is based on the nuPlan dataset, and we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license and the nuPlan Dataset License Agreement for Non-Commercial Use. You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes. All code within this repository is under the Apache License 2.0.

If this project helps your research, please consider citing our papers with the following BibTeX:

@inproceedings{yang2024vidar,
  title={Visual Point Cloud Forecasting enables Scalable Autonomous Driving},
  author={Yang, Zetong and Chen, Li and Sun, Yanan and Li, Hongyang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@misc{openscene2023,
  title={OpenScene: The Largest Up-to-Date 3D Occupancy Prediction Benchmark in Autonomous Driving},
  author={OpenScene Contributors},
  howpublished={\url{https://github.com/OpenDriveLab/OpenScene}},
  year={2023}
}

@article{sima2023_occnet,
  title={Scene as Occupancy},
  author={Chonghao Sima and Wenwen Tong and Tai Wang and Li Chen and Silei Wu and Hanming Deng and Yi Gu and Lewei Lu and Ping Luo and Dahua Lin and Hongyang Li},
  year={2023},
  eprint={2306.02851},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

(back to top)

Related Resources

Awesome

(back to top)
