nuScenes

The nuScenes dataset is a multi-modal autonomous driving dataset that includes data from cameras, LiDARs, and radars, along with detailed annotations, collected in Boston and Singapore. In total, the dataset contains 1000 driving logs, each 20 seconds long, resulting in roughly 5.5 hours of data. All logs include ego-vehicle data, camera images, LiDAR point clouds, bounding boxes, and map data.

Overview

Papers: "nuScenes: A multimodal dataset for autonomous driving"

Download: nuscenes.org

Code: nuscenes-devkit

License: CC BY-NC-SA 4.0, nuScenes Terms of Use, Apache License 2.0

Available splits

nuscenes_train, nuscenes_val, nuscenes_test, nuscenes-mini_train, nuscenes-mini_val, nuscenes-mini_test

Available Modalities

| Name | Available | Description |
| --- | --- | --- |
| Ego Vehicle | ✓ | State of the ego vehicle, including poses, dynamic state, and vehicle parameters, see EgoStateSE3. |
| Map | (✓) | The HD-Maps are in 2D vector format and defined per location. For more information, see MapAPI. |
| Bounding Boxes | ✓ | The bounding boxes are available with the NuScenesBoxDetectionLabel. For more information, see BoxDetectionWrapper. |
| Traffic Lights | X | |
| Pinhole Cameras | ✓ | nuScenes includes 6x PinholeCamera: CAM_FRONT, CAM_FRONT_LEFT, CAM_FRONT_RIGHT, CAM_BACK, CAM_BACK_LEFT, CAM_BACK_RIGHT. |
| Fisheye Cameras | X | |
| LiDARs | ✓ | nuScenes has one LiDAR of type LIDAR_TOP. |

Dataset Specific
class py123d.conversion.registry.NuScenesBoxDetectionLabel

Semantic labels for nuScenes bounding box detections. [1] https://github.com/nutonomy/nuscenes-devkit/blob/master/docs/instructions_nuscenes.md#labels

VEHICLE_CAR = 0
VEHICLE_TRUCK = 1
VEHICLE_BUS_BENDY = 2
VEHICLE_BUS_RIGID = 3
VEHICLE_CONSTRUCTION = 4
VEHICLE_EMERGENCY_AMBULANCE = 5
VEHICLE_EMERGENCY_POLICE = 6
VEHICLE_TRAILER = 7
VEHICLE_BICYCLE = 8
VEHICLE_MOTORCYCLE = 9
HUMAN_PEDESTRIAN_ADULT = 10
HUMAN_PEDESTRIAN_CHILD = 11
HUMAN_PEDESTRIAN_CONSTRUCTION_WORKER = 12
HUMAN_PEDESTRIAN_PERSONAL_MOBILITY = 13
HUMAN_PEDESTRIAN_POLICE_OFFICER = 14
HUMAN_PEDESTRIAN_STROLLER = 15
HUMAN_PEDESTRIAN_WHEELCHAIR = 16
MOVABLE_OBJECT_TRAFFICCONE = 17
MOVABLE_OBJECT_BARRIER = 18
MOVABLE_OBJECT_PUSHABLE_PULLABLE = 19
MOVABLE_OBJECT_DEBRIS = 20
STATIC_OBJECT_BICYCLE_RACK = 21
ANIMAL = 22
to_default()

Inherited, see superclass.
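
The integer values above suggest a standard Python enum; the following is a minimal usage sketch under that assumption (the exact return type of to_default() depends on py123d's default label set, see the superclass):

from py123d.conversion.registry import NuScenesBoxDetectionLabel

# Look up a nuScenes-specific label and inspect its name and integer value.
label = NuScenesBoxDetectionLabel.VEHICLE_CAR
print(label.name, label.value)   # VEHICLE_CAR 0

# Map the nuScenes label to the dataset-agnostic default label set.
default_label = label.to_default()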

class py123d.conversion.registry.NuScenesLiDARIndex

NuScenes LiDAR Indexing Scheme.

X = 0
Y = 1
Z = 2
INTENSITY = 3
RING = 4
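
A minimal sketch of slicing a decoded LiDAR sweep with these indices, assuming the point cloud is an (N, 5) NumPy array in the column order listed above (the array here is only a placeholder):

import numpy as np
from py123d.conversion.registry import NuScenesLiDARIndex

points = np.zeros((1000, 5), dtype=np.float32)   # placeholder sweep, 5 columns

# Select columns via the index enum's integer values.
xyz = points[:, [NuScenesLiDARIndex.X.value,
                 NuScenesLiDARIndex.Y.value,
                 NuScenesLiDARIndex.Z.value]]
intensity = points[:, NuScenesLiDARIndex.INTENSITY.value]
ring = points[:, NuScenesLiDARIndex.RING.value]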

Download

You need to download the nuScenes dataset from the official website. From there, you need the following parts:

  • CAN bus expansion pack

  • Map expansion pack (v1.3)

  • Full dataset (v1.0)

    • Mini dataset (v1.0-mini) (for quick testing)

    • Train/Val split (v1.0-trainval) (for the complete dataset)

    • Test split (v1.0-test) (for the complete dataset)

The 123D conversion expects the following directory structure:

$NUSCENES_DATA_ROOT
  ├── can_bus/
  │   ├── scene-0001_meta.json
  │   ├── ...
  │   └── scene-1110_zoe_veh_info.json
  ├── maps/
  │   ├── 36092f0b03a857c6a3403e25b4b7aab3.png
  │   ├── ...
  │   ├── 93406b464a165eaba6d9de76ca09f5da.png
  │   ├── basemap/
  │   │   └── ...
  │   ├── expansion/
  │   │   └── ...
  │   └── prediction/
  │       └── ...
  ├── samples/
  │   ├── CAM_BACK/
  │   │   └── ...
  │   ├── ...
  │   └── RADAR_FRONT_RIGHT/
  │       └── ...
  ├── sweeps/
  │   └── ...
  ├── v1.0-mini/
  │   ├── attribute.json
  │   ├── ...
  │   └── visibility.json
  ├── v1.0-test/
  │   ├── attribute.json
  │   ├── ...
  │   └── visibility.json
  └── v1.0-trainval/
      ├── attribute.json
      ├── ...
      └── visibility.json
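
A small, hypothetical sanity check (not part of py123d) can confirm that all required parts were extracted into this layout before starting the conversion:

import os
from pathlib import Path

# NUSCENES_DATA_ROOT is the environment variable described below.
data_root = Path(os.environ["NUSCENES_DATA_ROOT"])

# Top-level folders expected by the 123D conversion (use "v1.0-mini" and/or
# "v1.0-test" instead of "v1.0-trainval" depending on the splits you downloaded).
expected = ["can_bus", "maps", "samples", "sweeps", "v1.0-trainval"]
missing = [name for name in expected if not (data_root / name).is_dir()]
if missing:
    raise FileNotFoundError(f"Missing nuScenes folders under {data_root}: {missing}")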

Lastly, you need to add the following environment variable to your ~/.bashrc according to your installation path:

export NUSCENES_DATA_ROOT=/path/to/nuscenes/data/root

Alternatively, set the path in the config file py123d/script/config/common/default_dataset_paths.yaml.

Installation

nuScenes requires additional dependencies that are included as optional extras in py123d. You can install them via:

pip install py123d[nuscenes]   # install from PyPI
pip install -e .[nuscenes]     # or editable install from a source checkout

Conversion

You can convert the nuScenes dataset (or mini dataset) by running:

py123d-conversion datasets=["nuscenes_dataset"]
# or
py123d-conversion datasets=["nuscenes_mini_dataset"]

Dataset Issues

  • Map: The HD-Maps are only available in 2D.

Citation

If you use nuScenes in your research, please cite:

@inproceedings{Caesar2020CVPR,
  title={nuScenes: A multimodal dataset for autonomous driving},
  author={Caesar, Holger and Bankiti, Varun and Lang, Alex H and Vora, Sourabh and Liong, Venice Erin and Xu, Qiang and Krishnan, Anush and Pan, Yu and Baldan, Giancarlo and Beijbom, Oscar},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2020}
}