
Coordinate Systems

Scenes containing several sensors, such as lidars and camera sequences, need to combine data from different sensors and from different instants in time. This is done by transforming the recordings using the sensor calibrations (see Sensor Calibrations Overview, docid 4mfc9atwcxupflne7v1iz) and the ego motion data. This section describes how this is done in 3D space and summarizes the coordinate systems that different kinds of data are expressed in.

For camera sensors, we also need to be able to map 3D points to 2D pixel coordinates. This is done using the intrinsic parameters of the camera, which vary depending on the type of the camera. Refer to Standard Camera Calibrations (docid 0clkgytfe9bon5xu9zeal) for more information.

The reference coordinate system and calibrations

Each sensor has its own coordinate system in 3D space, determined by its location and orientation on the ego vehicle. Being able to transform measurements between these sensor coordinate systems is important. To do this, a reference coordinate system is defined, which acts as a middleman between the sensor coordinate systems. The reference coordinate system can be chosen arbitrarily relative to the ego vehicle.

By defining a calibration function C_i for sensor i, we can map a point \vec{x}_i to the reference coordinate system in the following way:

\vec{x}_r = C_i(\vec{x}_i)

In the same way, we can map points from all other sensors to the reference coordinate system. Subsequently, we can also map a point from coordinate system i to coordinate system j by applying the inverse of the calibration:

\vec{x}_j = C_j^{-1}(C_i(\vec{x}_i))

The world coordinate system and ego motion data

With this, we can now express points in coordinate systems local to the ego vehicle. Sometimes it is also valuable to express points recorded at different times in the same coordinate system. We call this the world coordinate system, since it is static in time. We can transform a point to the world coordinate system using ego motion data, which describes the location and orientation of the ego vehicle at any given time. With the ego motion data, we can transform a point \vec{x}_t recorded at time t to the world coordinate system with

\vec{x}_w = E_t(\vec{x}_t)

Subsequently, we can also transform a point recorded at time t to the coordinate system at time t' by applying the inverse of the ego transformation function:

\vec{x}_{t'} = E_{t'}^{-1}(E_t(\vec{x}_t))

This can be used to compensate each lidar point for the motion of the ego vehicle, a process also known as motion compensation (docid rhbsetwlntflfo6fmd 1e). It is highly recommended to motion compensate point clouds, since lidar points are recorded at different instants in time. This can be done by providing high-frequency ego motion data (IMU data) when creating a scene.
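To make the transforms above concrete, here is a minimal sketch, assuming calibrations C_i and ego poses E_t are represented as 4x4 homogeneous transformation matrices (a common convention; the matrices, point values, and helper names below are illustrative assumptions, not part of any platform API).

```python
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply(T: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (T @ np.append(point, 1.0))[:3]

# Assumed example calibrations: sensor i sits 1.2 m ahead of the reference
# origin; sensor j is offset 0.5 m to the left and rotated 90 degrees around z.
C_i = rigid_transform(np.eye(3), np.array([1.2, 0.0, 0.0]))
R_z = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 1.0]])
C_j = rigid_transform(R_z, np.array([0.0, 0.5, 0.0]))

x_i = np.array([10.0, 2.0, 0.5])       # point in sensor i's coordinate system
x_r = apply(C_i, x_i)                  # x_r = C_i(x_i)
x_j = apply(np.linalg.inv(C_j), x_r)   # x_j = C_j^{-1}(C_i(x_i))

# Assumed example ego poses: the vehicle moves 1.5 m forward between t and t'.
E_t = rigid_transform(np.eye(3), np.array([100.0, 0.0, 0.0]))
E_t_prime = rigid_transform(np.eye(3), np.array([101.5, 0.0, 0.0]))

x_t = np.array([5.0, 1.0, 0.0])                   # point recorded at time t
x_w = apply(E_t, x_t)                             # x_w = E_t(x_t)
x_t_prime = apply(np.linalg.inv(E_t_prime), x_w)  # x_{t'} = E_{t'}^{-1}(E_t(x_t))
```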
Single lidar case

The image below displays how the different sensors relate to each other in 3D space in the single lidar case. Note that the ego motion data should be expressed in the lidar coordinate system.

Multi-lidar case

In the multi-lidar case (see image below) there are multiple point clouds, each in its own lidar coordinate system. These are merged into one point cloud in the reference coordinate system during scene creation, since it is more efficient to annotate one point cloud rather than several. If IMU data is available, we can also compensate for the ego motion, so that each point is transformed to the reference coordinate system at the frame timestamp. This is done by applying

\vec{x}_w = E_t(C_i(\vec{x}_{i,t}))

\vec{x}_{t'} = E_{t'}^{-1}(\vec{x}_w)

where \vec{x}_{i,t} is the point expressed in the lidar coordinate system of lidar i at time t, and \vec{x}_{t'} is the point expressed in the reference coordinate system at the frame time t' (a code sketch of this chain is shown after the table below). It is recommended to provide IMU data so that motion compensation can be utilized. Since the merged point cloud is expressed in the reference coordinate system, we also expect any ego motion data to be expressed in the reference coordinate system.

Different coordinate systems for different kinds of data

Different kinds of data are expressed in different coordinate systems, depending on whether the setup is single lidar or multi-lidar. This is summarized in the table below, where we can see, for example, that ego motion data should be expressed in the lidar coordinate system in the single lidar case, but in the reference coordinate system in the multi-lidar case.

| Type of data | Single lidar | Multi-lidar |
| --- | --- | --- |
| Lidar point clouds | Lidar | Lidar |
| Ego poses & IMU data | Lidar | Reference |
| OpenLABEL export, 3D geometries | Lidar | Reference |
| OpenLABEL export, 2D geometries | Pixel | Pixel |
| Pre-annotations, 3D geometries | Lidar | Reference |
| Pre-annotations, 2D geometries | Pixel | Pixel |
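The merge-and-compensate chain for the multi-lidar case can be sketched as follows. This is a minimal illustration, assuming 4x4 homogeneous matrices as above; the helper names (merge_motion_compensated, ego_pose, calibrations) are hypothetical, not an actual platform API.

```python
from typing import Callable, Dict, List, Tuple

import numpy as np

def merge_motion_compensated(
    clouds: Dict[str, Tuple[np.ndarray, np.ndarray]],  # lidar id -> (N x 3 points, N timestamps)
    calibrations: Dict[str, np.ndarray],               # lidar id -> C_i, a 4x4 matrix
    ego_pose: Callable[[float], np.ndarray],           # t -> E_t, a 4x4 matrix (e.g. interpolated from IMU data)
    frame_time: float,                                 # the frame timestamp t'
) -> np.ndarray:
    """Merge per-lidar clouds into one cloud in the reference frame at t'."""
    E_frame_inv = np.linalg.inv(ego_pose(frame_time))  # E_{t'}^{-1}
    merged: List[np.ndarray] = []
    for lidar_id, (points, timestamps) in clouds.items():
        C_i = calibrations[lidar_id]
        for point, t in zip(points, timestamps):
            x = np.append(point, 1.0)
            # x_w = E_t(C_i(x_{i,t})), then x_{t'} = E_{t'}^{-1}(x_w)
            x_t_prime = E_frame_inv @ ego_pose(t) @ C_i @ x
            merged.append(x_t_prime[:3])
    return np.asarray(merged)
```

In practice the per-point loop would be vectorized over whole point clouds, but the loop above mirrors the two equations in the multi-lidar section directly.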