Yifu Tao, Miguel Ángel Muñoz-Bañón, Lintong Zhang, Jiahao Wang, Lanke Frank Tarimo Fu, Maurice Fallon

Bodleian

We present the Oxford Spires Dataset, captured in and around well-known landmarks in Oxford using a custom-built multi-sensor perception unit as well as a millimetre-accurate map from a terrestrial LiDAR scanner (TLS). The perception unit includes three global shutter colour cameras, an automotive 3D LiDAR scanner, and an inertial sensor — all precisely calibrated.

To download the dataset, please go to the Download section. The code is available on GitHub, and our paper is available on arXiv.

Handheld Perception Unit

Our perception unit, Frontier, carries three cameras, an IMU, and a LiDAR, as shown in the figure below. The three colour fisheye cameras face forward, left, and right, and the LiDAR is mounted on top of the cameras. The first table below lists the specifications of each sensor; the second lists the ROS topics provided for each sensor.

Device

| Sensor | Type | Rate [Hz] | Description |
|---|---|---|---|
| Hesai QT64 | LiDAR | 10 | 64 channels, 60 m max. range, 104° vertical FoV |
| Alphasense Core Development Kit | IMU | 400 | Cellphone-grade, synchronised with the cameras |
| Alphasense Core Development Kit | 3 cameras | 20 | Colour fisheye, 126° × 92.4° FoV, 1440 × 1080 resolution, 36° overlap, synchronised with the IMU |
| Topic | Rate [Hz] | Description |
|---|---|---|
| /hesai/pandar | 10 | LiDAR point clouds |
| /alphasense_driver_ros/imu | 400 | IMU measurements |
| /alphasense_driver_ros/cam0/color/image/compressed | 20 | Front camera images |
| /alphasense_driver_ros/cam1/color/image/compressed | 20 | Left camera images |
| /alphasense_driver_ros/cam2/color/image/compressed | 20 | Right camera images |

Calibration: The camera intrinsics and camera-IMU extrinsics were calibrated with Kalibr (Furgale et al., 2013). The camera-LiDAR extrinsics were calibrated with DiffCal (Fu et al., 2023). All intrinsic and extrinsic sensor calibration parameters are included in the dataset. A LiDAR overlay on the camera images, produced with this calibration, is shown above.
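To illustrate how camera-LiDAR extrinsics are used to produce such an overlay, here is a minimal projection sketch. It assumes a simple pinhole model for clarity (the Frontier cameras are fisheye, so a faithful overlay would use the fisheye intrinsics from the calibration files); the `T_cam_lidar` and `K` values below are hypothetical placeholders, not the dataset's actual calibration.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into pixel coordinates.

    points_lidar: (N, 3) points in the LiDAR frame
    T_cam_lidar:  (4, 4) homogeneous camera-from-LiDAR extrinsic
    K:            (3, 3) camera intrinsic matrix (pinhole approximation)
    """
    ones = np.ones((points_lidar.shape[0], 1))
    p_cam = (T_cam_lidar @ np.hstack([points_lidar, ones]).T).T[:, :3]
    in_front = p_cam[:, 2] > 0          # discard points behind the camera
    uvw = (K @ p_cam[in_front].T).T
    return uvw[:, :2] / uvw[:, 2:3]     # perspective division -> pixels

# Hypothetical calibration: identity extrinsics, 600 px focal length
K = np.array([[600.0, 0.0, 720.0],
              [0.0, 600.0, 540.0],
              [0.0, 0.0, 1.0]])
pixels = project_lidar_to_image(np.array([[0.0, 0.0, 2.0]]), np.eye(4), K)
```

A point on the optical axis projects to the principal point, here (720, 540).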

Dataset Recording

The Frontier device described above was mounted in a backpack (figure below). We carried this backpack through the different sites to record the sequences described in the Sites and Sequences section.

Backpack

Ground Truth

For ground-truthing, we used a Leica RTC360 TLS (figure below, left). It has a maximum range of 130 m and a field of view of 360° × 300°. The final 3D point accuracy is 1.9 mm at 10 m and 5.3 mm at 40 m. The point clouds are coloured using 432-megapixel images captured by three cameras.

We scanned the dataset’s sites from different static locations (figure below, right). From each scan, we obtained a colourised 3D point cloud.

Leica

3D reference model (TLS map): For each site, we provide the individual scans described above (figure above, right). We also provide a merged TLS map per site at 1 cm resolution, in which the scans are registered using Leica's Cyclone REGISTER 360 Plus software. The average cloud-to-cloud registration error across our sites ranges from 3 to 7 mm.
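To make the 1 cm resolution concrete, here is a minimal voxel-downsampling sketch that keeps one point per 1 cm cell. This is only illustrative: Leica's merging software is more sophisticated than keeping the first point per voxel.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.01):
    """Keep one representative point per voxel_size cell (first point wins)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]

# Two points 1 mm apart collapse into one; a point 2 cm away survives
pts = np.array([[0.000, 0.0, 0.0],
                [0.001, 0.0, 0.0],
                [0.020, 0.0, 0.0]])
down = voxel_downsample(pts)
```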

Trajectory GT: The ground truth trajectory is computed by registering each undistorted Frontier LiDAR point cloud to the TLS map described above using ICP, following a similar approach to Newer College (Ramezani et al., 2020b) and Hilti 2022 (Zhang et al., 2022). The accuracy of the ground truth trajectory is approximately 1–2 cm.
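The core of an ICP-style registration, once correspondences are fixed, is the closed-form rigid alignment step. Below is a minimal sketch of that step (Kabsch/SVD with known correspondences); a full ICP pipeline, like the one used for the ground truth, iterates this together with nearest-neighbour matching against the TLS map.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form rigid transform (R, t) minimising ||R @ src_i + t - dst_i||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 90-degree yaw plus translation from noiseless points
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
src = np.random.default_rng(0).normal(size=(100, 3))
dst = src @ R_true.T + t_true
R, t = best_fit_transform(src, dst)
```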

Sites and Sequences

The table below summarises the data recorded at each site: the recording dates, the number of sequences, and the total length of all sequences at that site. It also indicates whether any sequences at the site include indoor sections.

| Site | Date | Sequences | Length (km) | Out-In |
|---|---|---|---|---|
| Bodleian Library | 2024-03-15, 2024-05-20, 2024-10-29 | 2 | 1.29 | Outdoor |
| Blenheim Palace | 2024-03-14 | 5 | 2.18 | Outdoor-indoor |
| Christ Church College | 2024-03-18, 2024-03-20 | 6 | 4.12 | Outdoor-indoor |
| Keble College | 2024-03-12 | 5 | 2.87 | Outdoor-indoor |
| Radcliffe Observatory Quarter | 2024-03-13 | 2 | 0.79 | Outdoor |
| New College | 2024-07-09 | 4 | 1.66 | Outdoor-indoor |

Folder structure

The figure below shows the folder structure of the dataset. The data from the sensors described above is located in the raw folder. In addition to the raw data, we provide processed data from VILENS-SLAM (including the undistorted point clouds) and from COLMAP (including the processed images). The trajectory folder contains these trajectories in TUM format. The ground-truth TLS map and trajectory are marked in red in the figure.
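TUM format stores one pose per line as `timestamp tx ty tz qx qy qz qw`, with `#` marking comment lines. A minimal parser sketch (the sample timestamp below is made up for illustration):

```python
def read_tum_trajectory(lines):
    """Parse TUM-format pose lines into (timestamp, translation, quaternion) tuples."""
    poses = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):    # skip blanks and comments
            continue
        ts, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())
        poses.append((ts, (tx, ty, tz), (qx, qy, qz, qw)))
    return poses

# Example: one identity-orientation pose
sample = [
    "# timestamp tx ty tz qx qy qz qw",
    "1710500000.0 0.1 0.2 0.3 0.0 0.0 0.0 1.0",
]
poses = read_tum_trajectory(sample)
```

To read a file from disk, pass the file object directly, e.g. `read_tum_trajectory(open("trajectory.txt"))`.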

Folder

Code: GitHub

Download

Contact

We encourage you to raise any issues on GitHub Issues, but you can also contact us via email.