Data requirements
The Kognic platform supports multiple types of data and information to enable an efficient annotation process, mainly images, point clouds, calibrations and ego vehicle poses/IMU data. In this section we describe the supported formats for each type.

Images

We currently support the following image formats: png, jpg, jpeg, webp and avif.

Point clouds

Kognic uses a Potree format internally to represent and present point clouds. This means that uploaded point cloud data needs to be converted into this format before it can be used as a scene in the system. We currently support automatic conversion of the following formats: pcd, csv and las. The converter does not, however, exhaustively support every version of these formats; see Supported file formats for details on each format.

A timestamp field must always be present in point clouds, both in single-frame and sequence scenes, but the values are irrelevant if Motion compensation is not enabled. An intensity field may be provided and will be preserved during conversion; if omitted, the intensity of all points will be zero. Color and other auxiliary data that is not used in the platform is currently discarded in the conversion to Potree. A minimal PCD file illustrating these fields is sketched below.

Calibrations

Scenes with 2D and 3D data across various coordinate systems need calibrations to align sensors by location and orientation: both an extrinsic calibration, which maps the sensor's position and rotation in 3D relative to the reference system, and an intrinsic camera calibration, which projects 3D points onto the camera's image plane. All extrinsic calibrations shall represent the transformation from the sensor to the reference system.

Types of calibrations

All calibrations detail a sensor's 3D position and orientation relative to the reference system, and shall map the transformation from the sensor to the reference system. Camera calibrations additionally map 3D points to the camera's image plane. For lidar/radar there is only one type of calibration available. For cameras we support several types of standard camera calibrations, where you provide the intrinsic parameters of the camera. All camera calibrations are implemented using the OpenCV coordinate system. A sketch of how the extrinsic transform and a simple projection fit together is given below.

Unsupported camera model

If your camera model is not supported, you can instead provide a custom camera calibration, where you supply the implementation in the form of a WebAssembly module.

Ego vehicle poses

An ego vehicle pose can optionally be added to each frame, describing the vehicle's relative pose. It is highly recommended for 3D sequence annotations, as it enables more efficient workflows and functions in the Kognic platform, especially for static objects. The pose is represented using a 3D position and a quaternion in the local coordinate system. For a single-lidar input the poses shall be in the lidar coordinate system; for multi-lidar inputs the poses shall be in the reference coordinate system.

| Key | Value |
| --- | --- |
| Rotation quaternion | A RotationQuaternion object: w, x, y, z |
| Position | A Position object: x, y, z |

In addition to the frame poses, there is also the option to upload higher-frequency IMU data to enable motion compensation; more details can be found in Motion compensation. An example pose is sketched below.
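To make the point-cloud requirements concrete, here is a minimal sketch of an ASCII .pcd file carrying the required timestamp field alongside the optional intensity field. The layout follows the standard PCD header convention; treat it as illustrative rather than a guarantee of what the converter accepts, and refer to Supported file formats for the authoritative details.

```
# Minimal ASCII PCD sketch (illustrative values). The timestamp field is
# required; intensity is optional and defaults to zero when omitted.
VERSION 0.7
FIELDS x y z timestamp intensity
SIZE 4 4 4 8 4
TYPE F F F F F
COUNT 1 1 1 1 1
WIDTH 2
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 2
DATA ascii
1.50 0.25 -0.10 0.0 0.42
1.52 0.26 -0.09 0.0 0.40
```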
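As a sketch of the two transformations involved in a calibration, the snippet below applies a sensor-to-reference extrinsic (rotation quaternion plus translation) and then a plain pinhole projection in the OpenCV camera convention (x right, y down, z forward). All numeric values, and the distortion-free pinhole model itself, are illustrative assumptions; the platform's actual camera models and their parameters are described in the camera calibration documentation.

```python
import numpy as np

def quat_to_matrix(w: float, x: float, y: float, z: float) -> np.ndarray:
    """Rotation matrix for a unit quaternion (w, x, y, z)."""
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

# Extrinsic calibration: the transformation FROM the sensor TO the reference
# system, as the platform requires. Rotation and translation are made up.
R = quat_to_matrix(1.0, 0.0, 0.0, 0.0)      # sensor-to-reference rotation
t = np.array([1.2, 0.0, 1.5])               # sensor origin in the reference frame

point_sensor = np.array([10.0, 0.5, -0.2])  # a point measured by the sensor
point_reference = R @ point_sensor + t      # the same point in the reference frame

# Intrinsic calibration (plain pinhole, no distortion): project a point that is
# already expressed in the camera's OpenCV frame. Note that going from the
# reference frame to the camera frame uses the INVERSE of the camera's
# extrinsic calibration, since extrinsics map sensor -> reference.
fx, fy, cx, cy = 1000.0, 1000.0, 960.0, 540.0  # made-up intrinsics
Xc, Yc, Zc = 4.0, -1.0, 12.0
u = fx * Xc / Zc + cx
v = fy * Yc / Zc + cy
print(point_reference, (u, v))
```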
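For illustration, an ego vehicle pose for one frame could look like the following JSON. The key names and values are assumptions made for readability; the exact field names expected by the upload API are defined in the API documentation, but the structure, a Position (x, y, z) and a RotationQuaternion (w, x, y, z), matches the table above.

```json
{
  "position": { "x": 12.30, "y": -4.20, "z": 0.90 },
  "rotation_quaternion": { "w": 0.9962, "x": 0.0, "y": 0.0, "z": 0.0872 }
}
```

The rotation should be a unit quaternion; this one encodes roughly a 10-degree yaw.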
FAQ

How do I check if the calibration is correct?

Follow the instructions in View an uploaded scene to check your calibration. You can also quickly check the orientation of your camera by opening a scene and going to the 3D viewer. At the ego vehicle there is a small yellow circle representing the position, and a red arrow representing the orientation, of the currently selected camera. Check that the arrow points in the direction you expect, for example in front of the ego vehicle.