Argoverse 2 - Sensor

Argoverse 2 (AV2) is a collection of three datasets. The Sensor Dataset comprises 1000 logs of roughly 15 seconds each and includes multi-view camera imagery, LiDAR point clouds, HD maps, ego-vehicle data, and bounding box annotations. It is intended for training and evaluating 3D perception models for autonomous vehicles.

Overview

Paper: Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting

Download: argoverse.org

Code: argoverse/av2-api

License: CC BY-NC-SA 4.0 (dataset, per the Argoverse Terms of Use); MIT License (av2-api code)

Available splits: av2-sensor_train, av2-sensor_val, av2-sensor_test

Available Modalities

Name            | Available | Description
Ego Vehicle     | ✓         | State of the ego vehicle, including poses and vehicle parameters; see EgoStateSE3.
Map             | (✓)       | The HD maps are in 3D but may contain artifacts from the polyline-to-polygon conversion (see Dataset Issues below). For more information, see MapAPI.
Bounding Boxes  | ✓         | Bounding boxes are provided via the AV2SensorBoxDetectionLabel. For more information, see BoxDetectionWrapper.
Traffic Lights  | X         | n/a
Pinhole Cameras | ✓         | Includes 9 cameras; see PinholeCamera.
Fisheye Cameras | X         | n/a
LiDARs          | ✓         | Includes 2 LiDARs; see LiDAR.

Dataset Specific
class py123d.conversion.registry.AV2SensorBoxDetectionLabel[source]

Argoverse 2 Sensor dataset annotation categories.

ANIMAL = 0
ARTICULATED_BUS = 1
BICYCLE = 2
BICYCLIST = 3
BOLLARD = 4
BOX_TRUCK = 5
BUS = 6
CONSTRUCTION_BARREL = 7
CONSTRUCTION_CONE = 8
DOG = 9
LARGE_VEHICLE = 10
MESSAGE_BOARD_TRAILER = 11
MOBILE_PEDESTRIAN_CROSSING_SIGN = 12
MOTORCYCLE = 13
MOTORCYCLIST = 14
OFFICIAL_SIGNALER = 15
PEDESTRIAN = 16
RAILED_VEHICLE = 17
REGULAR_VEHICLE = 18
SCHOOL_BUS = 19
SIGN = 20
STOP_SIGN = 21
STROLLER = 22
TRAFFIC_LIGHT_TRAILER = 23
TRUCK = 24
TRUCK_CAB = 25
VEHICULAR_TRAILER = 26
WHEELCHAIR = 27
WHEELED_DEVICE = 28
WHEELED_RIDER = 29
to_default()[source]

Inherited, see superclass.

Return type: DefaultBoxDetectionLabel
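
For illustration, here is a minimal sketch (not part of the official API documentation) of how the label enum can be used; it relies only on standard Python Enum semantics and the to_default() method listed above:

from py123d.conversion.registry import AV2SensorBoxDetectionLabel

# Look up the dataset-specific label for a raw AV2 category string
# ("REGULAR_VEHICLE" is one of the members listed above).
label = AV2SensorBoxDetectionLabel["REGULAR_VEHICLE"]

print(label.value)         # 18
print(label.to_default())  # the corresponding dataset-agnostic DefaultBoxDetectionLabel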

class py123d.conversion.registry.AV2SensorLiDARIndex[source]

Argoverse 2 Sensor LiDAR Indexing Scheme.

X = 0
Y = 1
Z = 2
INTENSITY = 3
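
As a quick illustration (a sketch, assuming the index members are integer-valued and that a sweep is loaded as an (N, 4) array in this layout), the indexing scheme can be used to select columns of a LiDAR point cloud:

import numpy as np

from py123d.conversion.registry import AV2SensorLiDARIndex

# Hypothetical (N, 4) point cloud laid out as [x, y, z, intensity].
points = np.random.rand(100, 4).astype(np.float32)

# Select columns via the indexing scheme rather than hard-coded integers.
xyz = points[:, [AV2SensorLiDARIndex.X.value, AV2SensorLiDARIndex.Y.value, AV2SensorLiDARIndex.Z.value]]
intensity = points[:, AV2SensorLiDARIndex.INTENSITY.value]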

Download

You can download the Argoverse 2 Sensor dataset from the Argoverse website, or fetch it directly from the AWS S3 bucket. For the latter, you first need to install s5cmd:

pip install s5cmd

Next, you can run the following bash script to download the dataset:

DATASET_NAME="sensor" # "sensor" "lidar" "motion-forecasting" "tbv"
AV2_SENSOR_ROOT="/path/to/argoverse/sensor"

mkdir -p "$AV2_SENSOR_ROOT"
s5cmd --no-sign-request cp "s3://argoverse/datasets/av2/$DATASET_NAME/*" "$AV2_SENSOR_ROOT"
# or: s5cmd --no-sign-request sync "s3://argoverse/datasets/av2/$DATASET_NAME/*" "$AV2_SENSOR_ROOT"

The downloaded dataset should have the following structure:

$AV2_SENSOR_ROOT
├── train
│   ├── 00a6ffc1-6ce9-3bc3-a060-6006e9893a1a
│   │   ├── annotations.feather
│   │   ├── calibration
│   │   │   ├── egovehicle_SE3_sensor.feather
│   │   │   └── intrinsics.feather
│   │   ├── city_SE3_egovehicle.feather
│   │   ├── map
│   │   │   ├── 00a6ffc1-6ce9-3bc3-a060-6006e9893a1a_ground_height_surface____PIT.npy
│   │   │   ├── 00a6ffc1-6ce9-3bc3-a060-6006e9893a1a___img_Sim2_city.json
│   │   │   └── log_map_archive_00a6ffc1-6ce9-3bc3-a060-6006e9893a1a____PIT_city_31785.json
│   │   └── sensors
│   │       ├── cameras
│   │       │   └──...
│   │       └── lidar
│   │           └──...
│   └── ...
├── test
│   └── ...
└── val
    └── ...
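
As an optional sanity check after downloading (a sketch that is not part of py123d; it assumes pandas with pyarrow is installed), you can count the logs per split and peek at one log's annotations.feather:

from pathlib import Path

import pandas as pd

AV2_SENSOR_ROOT = Path("/path/to/argoverse/sensor")

# Count the downloaded logs per split.
for split in ("train", "val", "test"):
    n_logs = sum(1 for p in (AV2_SENSOR_ROOT / split).iterdir() if p.is_dir())
    print(f"{split}: {n_logs} logs")

# Peek at the annotations of the example log from the tree above.
log_dir = AV2_SENSOR_ROOT / "train" / "00a6ffc1-6ce9-3bc3-a060-6006e9893a1a"
annotations = pd.read_feather(log_dir / "annotations.feather")
print(annotations.columns.tolist())
print(annotations.head())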

Installation

No additional installation steps are required beyond the standard py123d installation.

Conversion

To run the conversion, set either the $AV2_DATA_ROOT or the $AV2_SENSOR_ROOT environment variable. Alternatively, you can override the path directly and run:

py123d-conversion datasets=["av2_sensor_dataset"] \
    dataset_paths.av2_data_root=$AV2_DATA_ROOT  # optional if the environment variable is set
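
If you prefer to drive the conversion from Python, the following sketch (hypothetical; it simply wraps the command-line call shown above) sets the environment variable and invokes the converter via subprocess:

import os
import subprocess

# Point the converter at the downloaded dataset via the environment variable
# (equivalent to passing dataset_paths.av2_data_root on the command line).
env = dict(os.environ, AV2_SENSOR_ROOT="/path/to/argoverse/sensor")

subprocess.run(
    ["py123d-conversion", 'datasets=["av2_sensor_dataset"]'],
    env=env,
    check=True,
)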

Dataset Issues

  • Ego Vehicle: The vehicle parameters are partially estimated and may be subject to inaccuracies.
  • Map: The 3D HD maps may contain artifacts introduced by the polyline-to-polygon conversion.

Citation

If you use this dataset in your research, please cite:

@inproceedings{Wilson2023NEURIPS,
  author = {Benjamin Wilson and William Qi and Tanmay Agarwal and John Lambert and Jagjeet Singh and Siddhesh Khandelwal and Bowen Pan and Ratnesh Kumar and Andrew Hartnett and Jhony Kaesemodel Pontes and Deva Ramanan and Peter Carr and James Hays},
  title = {Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS Datasets and Benchmarks 2021)},
  year = {2021}
}