
Motion compensation

An inherent problem with labeling any lidar setup is that the resulting point cloud is not a snapshot from a single instant in time, but from the time interval over which the lidar sweep was made. This causes a problem during labeling, since objects can move during the sweep: if you label a car with e.g. a 3D box, that box will not represent the actual size of the car.

This issue can be mitigated with motion compensation, where we synchronize the timestamps of all points in the point cloud. By including data from the inertial measurement unit (IMU) of the ego vehicle, we get an exact trajectory of how the car moves during the lidar sweeps. This allows us to perform motion compensation, adjusting the points in the point cloud so that they all represent the same instant in time.

For motion compensation to work, each point in the provided point clouds needs to have a unix timestamp specified (in nanoseconds). The instant in time to motion compensate the points to can be specified with the unix timestamp parameter. If it is not specified, then for each frame the median time of all points in the frame is used instead. Motion compensation is of particular importance when annotation is performed on multiple lidar sweeps at once, e.g. in multi-lidar setups and when point clouds are aggregated across frames.

For motion compensation to work correctly, it is important to use a consistent unit of time; therefore, all unix timestamps must be provided in nanoseconds (see the timestamp sketch at the end of this page). Note that all timestamps, both in the point clouds and the provided unix timestamp values, must be encompassed by the timestamps in the IMU data; otherwise, scene creation will fail (a local pre-check is sketched at the end of this page).

IMU data is provided as a list of IMUData objects in the root of the scene object in the following way:

```python
from kognic.io.model.ego import IMUData
from kognic.io.model.calibration import Position, RotationQuaternion
from kognic.io.model.scene.lidars_and_cameras_sequence import LidarsAndCamerasSequence, Frame
from kognic.io.client import KognicIOClient

imu_data = [
    IMUData(
        position=Position(x=-10.44, y=126.06, z=78.817),
        rotation_quaternion=RotationQuaternion(x=-1.0, y=0.5, z=1, w=0),
        timestamp=1665997200597027072,  # ns
    ),
]

frames = [
    Frame(..., unix_timestamp=1665997358832901120),
    Frame(..., unix_timestamp=1665997503951270144),
]

lidars_and_cam_seq = LidarsAndCamerasSequence(
    ...,
    imu_data=imu_data,
    frames=frames,
)

client = KognicIOClient()
client.lidars_and_cameras_sequence.create(
    lidars_and_cam_seq,
    project="project-ext-id",
    dryrun=True,
)
```

Use dryrun to validate

Setting the dryrun parameter to True in the method call will validate the scene using the API, but not create it.

Enable/disable motion compensation

By default, motion compensation is performed for scenes with lidar point clouds when IMU data is provided. Whether motion compensation is enabled or not is controlled by Scene Feature Flags. By default it is enabled, but it can be disabled by providing an empty feature flag:

```python
from kognic.io.model.scene.feature_flags import FeatureFlags

client.lidars_and_cameras_sequence.create(
    ...,
    feature_flags=FeatureFlags(),
)
```

It may be desirable to disable motion compensation in cases where point clouds are already motion compensated outside of the Kognic platform.
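To make the nanosecond requirement concrete, here is a minimal sketch of timestamp handling. It assumes per-point times arrive as floating-point seconds (a common raw format); the array name and input format are illustrative assumptions, not part of the kognic-io API.

```python
import numpy as np

# Hypothetical per-point timestamps in floating-point seconds.
point_times_s = np.array([1665997358.71, 1665997358.76, 1665997358.83])

# Convert to integer nanoseconds, the unit motion compensation requires.
# Note: float64 cannot represent current unix times at full nanosecond
# precision, so prefer integer arithmetic on the raw values if exact
# nanosecond timestamps matter.
point_times_ns = (point_times_s * 1e9).astype(np.int64)

# If unix_timestamp is omitted on a frame, the platform motion compensates
# to the median time of all points in that frame; computing it locally can
# help when deciding whether to pass an explicit target instant.
median_ns = int(np.median(point_times_ns))
print(median_ns)
```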
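Because scene creation fails if any timestamp falls outside the span covered by the IMU data, it can save an upload round-trip to check this locally first. A minimal sketch, reusing imu_data and frames from the example above; point_times_ns is the hypothetical per-point timestamp array from the previous sketch:

```python
imu_start = min(d.timestamp for d in imu_data)
imu_end = max(d.timestamp for d in imu_data)

# Frame target timestamps plus every per-point timestamp must be encompassed.
timestamps = [f.unix_timestamp for f in frames] + [int(t) for t in point_times_ns]

out_of_range = [t for t in timestamps if not imu_start <= t <= imu_end]
if out_of_range:
    raise ValueError(
        f"{len(out_of_range)} timestamp(s) outside IMU span "
        f"[{imu_start}, {imu_end}]; scene creation would fail"
    )
```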
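The dryrun flag from the example above lends itself to a validate-then-create flow. A sketch, assuming validation problems surface as exceptions from the client (scene object and project ID as above):

```python
# Validate the scene server-side without creating anything.
client.lidars_and_cameras_sequence.create(
    lidars_and_cam_seq, project="project-ext-id", dryrun=True
)

# Validation passed without raising, so create the scene for real.
response = client.lidars_and_cameras_sequence.create(
    lidars_and_cam_seq, project="project-ext-id", dryrun=False
)
```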