# Upload your first scene
When uploading raw data to the Kognic platform, you need to do so in the form of a *scene*. A scene is a collection of data from different sources, such as images, point clouds, and other sensor data. This guide walks you through uploading your first scene, either in 2D (camera only) or 3D (camera and lidar/radar).

## Prerequisites

- You have successfully followed the Quickstart guide (docid\ b6im5u kiwn9ukybzjx6d) and have the kognic-io library installed.
- For users with access to multiple workspaces: you need to follow the Advanced setup guide (docid\ xmgm2k teox9jtn otqbm).

## Code examples

### Uploading a 2D scene

To upload a 2D scene, you need to have the raw images available on your local machine (or, for remote data, see the Overview docid\ yunpnpwuhzlgg9wb9qnk8). It is a two-step process:

1. Build the scene object in Python.
2. Upload the scene object to the Kognic platform.

Below follow examples for a few different cases.

**Single camera**

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.scene.cameras import Cameras, Frame
from kognic.io.model.scene.resources import Image

# 1. Build scene object
scene = Cameras(
    external_id="my_first_scene",
    frame=Frame(images=[Image(filename="path/to/image.jpg")])
)

# 2. Upload scene
client = KognicIOClient()
scene_uuid = client.cameras.create(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

**Multiple cameras**

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.scene.cameras import Cameras, Frame
from kognic.io.model.scene.resources import Image

# 1. Build scene object
scene = Cameras(
    external_id="my_first_scene",
    frame=Frame(
        images=[
            # Sensor names must be unique
            Image(sensor_name="cam1", filename="path/to/image1.jpg"),
            Image(sensor_name="cam2", filename="path/to/image2.jpg"),
        ],
    )
)

# 2. Upload scene
client = KognicIOClient()
scene_uuid = client.cameras.create(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

**Camera sequence**

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.scene.cameras_sequence import CamerasSequence, Frame
from kognic.io.model.scene.resources import Image

# 1. Build scene object
scene = CamerasSequence(
    external_id="my_first_scene",
    frames=[
        # Relative timestamps must be unique and strictly increasing
        Frame(
            relative_timestamp=0,
            frame_id="1",
            images=[Image(filename="path/to/image1.jpg")],
        ),
        Frame(
            relative_timestamp=100,
            frame_id="2",
            images=[Image(filename="path/to/image2.jpg")],
        ),
        Frame(
            relative_timestamp=200,
            frame_id="3",
            images=[Image(filename="path/to/image3.jpg")],
        ),
    ]
)

# 2. Upload scene
client = KognicIOClient()
scene_uuid = client.cameras_sequence.create(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

### Uploading a 2D/3D scene

To upload a 2D/3D scene, you need to have the raw images and point clouds available on your local machine (or, for remote data, see the Overview docid\ yunpnpwuhzlgg9wb9qnk8). In addition, you need to have calibration data available. It is a three-step process:

1. Create a calibration (see Calibrations overview, docid 4mfc9atwcxupflne7v1iz).
2. Build the scene object in Python, referencing the calibration from the previous step.
3. Upload the scene object to the Kognic platform.

Below follow examples for a few different cases.

**Single camera and lidar**

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.calibration import SensorCalibration, PinholeCalibration, LidarCalibration
from kognic.io.model.scene.lidars_and_cameras import LidarsAndCameras, Frame
from kognic.io.model.scene.resources import Image, PointCloud

client = KognicIOClient()

# 1. Create calibration (see the calibration section for more details)
sensor_calibration = SensorCalibration(
    external_id="my_first_calibration",
    calibration={
        "cam": PinholeCalibration(...),
        "lidar": LidarCalibration(...),
    }
)
created_calibration = client.calibration.create_calibration(sensor_calibration)

# 2. Build scene object
scene = LidarsAndCameras(
    external_id="my_first_scene",
    calibration_id=created_calibration.id,
    frame=Frame(
        images=[Image(sensor_name="cam", filename="path/to/image.jpg")],
        point_clouds=[PointCloud(sensor_name="lidar", filename="path/to/pointcloud.pcd")]
    )
)

# 3. Upload scene
scene_uuid = client.lidars_and_cameras.create(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

**Multiple cameras and lidar**

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.calibration import SensorCalibration, PinholeCalibration, LidarCalibration
from kognic.io.model.scene.lidars_and_cameras import LidarsAndCameras, Frame
from kognic.io.model.scene.resources import Image, PointCloud

client = KognicIOClient()

# 1. Create calibration (see the calibration section for more details)
sensor_calibration = SensorCalibration(
    external_id="my_first_calibration",
    calibration={
        "cam1": PinholeCalibration(...),
        "cam2": PinholeCalibration(...),
        "lidar": LidarCalibration(...),
    }
)
created_calibration = client.calibration.create_calibration(sensor_calibration)

# 2. Build scene object
scene = LidarsAndCameras(
    external_id="my_first_scene",
    calibration_id=created_calibration.id,
    frame=Frame(
        images=[
            Image(sensor_name="cam1", filename="path/to/image1.jpg"),
            Image(sensor_name="cam2", filename="path/to/image2.jpg"),
        ],
        point_clouds=[PointCloud(sensor_name="lidar", filename="path/to/pointcloud.pcd")]
    )
)

# 3. Upload scene
scene_uuid = client.lidars_and_cameras.create(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

**Sequence of camera and lidar frames**

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.calibration import SensorCalibration, PinholeCalibration, LidarCalibration
from kognic.io.model.scene.lidars_and_cameras_sequence import LidarsAndCamerasSequence, Frame
from kognic.io.model.scene.resources import Image, PointCloud

client = KognicIOClient()

# 1. Create calibration (see the calibration section for more details)
sensor_calibration = SensorCalibration(
    external_id="my_first_calibration",
    calibration={
        "cam": PinholeCalibration(...),
        "lidar": LidarCalibration(...),
    }
)
created_calibration = client.calibration.create_calibration(sensor_calibration)

# 2. Build scene object
scene = LidarsAndCamerasSequence(
    external_id="my_first_scene",
    calibration_id=created_calibration.id,
    frames=[
        # Relative timestamps must be unique and strictly increasing
        Frame(
            relative_timestamp=0,
            frame_id="1",
            images=[Image(sensor_name="cam", filename="path/to/image1.jpg")],
            point_clouds=[PointCloud(sensor_name="lidar", filename="path/to/pointcloud1.pcd")]
        ),
        Frame(
            relative_timestamp=100,
            frame_id="2",
            images=[Image(sensor_name="cam", filename="path/to/image2.jpg")],
            point_clouds=[PointCloud(sensor_name="lidar", filename="path/to/pointcloud2.pcd")]
        ),
        Frame(
            relative_timestamp=200,
            frame_id="3",
            images=[Image(sensor_name="cam", filename="path/to/image3.jpg")],
            point_clouds=[PointCloud(sensor_name="lidar", filename="path/to/pointcloud3.pcd")]
        ),
    ]
)

# 3. Upload scene
scene_uuid = client.lidars_and_cameras_sequence.create(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

Multiple point clouds are also supported, but not shown in the examples above since that requires a bit more data. See the Motion compensation section (docid\ rhbsetwlntflfo6fmd 1e) for more details.

### Uploading using ZOD data

We have example code and a tutorial for uploading scenes using [Zenseact Open Dataset (ZOD)](https://zod.zenseact.com/) data, including 2D, 3D, and aggregated 3D scenes: see Upload ZOD data (docid\ cz51x 3 lqonbcpftohmq). If you have the ZOD data downloaded and have Kognic API credentials, the examples will run out of the box and create functional scenes!
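The sequence examples above require relative timestamps that are unique and strictly increasing, and sensor names that are unique within each frame. A minimal pre-upload sanity check for these invariants might look like the following; `check_sequence` is a hypothetical helper operating on plain dicts, not part of kognic-io:

```python
def check_sequence(frames):
    """Validate frame dicts of the form
    {"relative_timestamp": int, "sensor_names": [str, ...]}.

    Raises ValueError if relative timestamps are not strictly increasing
    or if a sensor name repeats within a single frame.
    """
    timestamps = [f["relative_timestamp"] for f in frames]
    # Each timestamp must be strictly greater than the previous one
    if any(b <= a for a, b in zip(timestamps, timestamps[1:])):
        raise ValueError(f"Relative timestamps must be strictly increasing, got {timestamps}")
    for f in frames:
        names = f["sensor_names"]
        # A duplicate name collapses the set, so the lengths differ
        if len(names) != len(set(names)):
            raise ValueError(f"Duplicate sensor names in frame: {names}")

frames = [
    {"relative_timestamp": 0, "sensor_names": ["cam1", "cam2"]},
    {"relative_timestamp": 100, "sensor_names": ["cam1", "cam2"]},
    {"relative_timestamp": 200, "sensor_names": ["cam1", "cam2"]},
]
check_sequence(frames)  # valid input passes silently
```

Running such a check before building the scene objects surfaces ordering mistakes locally instead of as a rejected upload.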
To use the below, install kognic-io version >= 2.5.1. The model and method used for creating a scene this way are slightly different from the above: all scenes are considered sequences, and there is no need to use a specific model for different scene types.

### Creating a 2D scene from a bucket

To use this feature, you need to have configured a data integration (docid\ l93 pnieahqpk 6j ouef). If you have your data in a Kognic-supported format in a bucket that you have set up a data integration for, there is no need to download your data locally and then upload it to Kognic. Instead, it is sufficient to point out the files in your bucket.

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.scene import SceneRequest, Frame, ImageResource

scene = SceneRequest(
    workspace_id="<workspace_id>",
    external_id="my_first_scene_from_external_resources",
    frames=[
        Frame(
            frame_id="1",
            timestamp_ns=1742225790,  # Absolute unix timestamp of frame
            images=[
                ImageResource(
                    external_resource_uri="<uri>",  # Matching your data integration, e.g. s3://my_bucket/some_image.jpg
                    sensor_name="my_camera",
                    local_file=None
                )
            ],
        )
    ],
    postpone_external_resource_import=False  # Set True if you want to import at a later point in time
)

client = KognicIOClient()
scene_uuid = client.scene.create_scene(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

### Creating a 2D/3D scene from buckets

As for 2D scenes, this feature requires a configured data integration (docid\ l93 pnieahqpk 6j ouef); you point out the files in your bucket instead of uploading local copies.

```python
from kognic.io.client import KognicIOClient
from kognic.io.model.scene import SceneRequest, Frame, ImageResource, SensorResource, Resource, EgoVehiclePose

scene = SceneRequest(
    workspace_id="<workspace_id>",
    external_id="my_first_scene_from_external_resources",
    frames=[
        Frame(
            frame_id="1",
            timestamp_ns=1742225790,
            pointclouds=[
                SensorResource(
                    external_resource_uri="<uri>",
                    sensor_name="my_lidar",
                    local_file=None,
                )
            ],
            images=[
                ImageResource(
                    external_resource_uri="<uri>",
                    sensor_name="my_camera",
                    start_shutter_timestamp_ns=1742225789,
                    end_shutter_timestamp_ns=1742225790,
                    local_file=None,
                )
            ],
            ego_vehicle_pose=EgoVehiclePose(
                x=0.0, y=0.0, z=0.0,
                rotation_x=0.0, rotation_y=0.0, rotation_z=0.0, rotation_w=1.0
            )
        )
    ],
    calibration_id="<calibration_id>",
    imu_data_resource=Resource(
        external_resource_uri="<uri>",
        local_file=None
    ),
    sensor_specification=None,
    should_motion_compensate=False,
    postpone_external_resource_import=False,  # Set True if you want to import at a later point in time
)

client = KognicIOClient()
scene_uuid = client.scene.create_scene(scene).scene_uuid
print("Scene uploaded, got uuid:", scene_uuid)
```

This model also allows you to upload IMU data, which is expected as a JSON file with the following format:

```json
[
    {
        "position": { "x": 0.0, "y": 0.0, "z": 0.0 },
        "rotationQuaternion": { "w": 0.0, "x": 0.0, "y": 0.0, "z": 0.0 },
        "timestamp": <unix timestamp in nanoseconds>
    },
    ...
]
```
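IMU files in the format described above can be assembled from recorded samples with a few lines of standard-library Python. The sketch below is illustrative, not part of kognic-io; the `imu_record` helper and its argument layout are our own, with field names matching the documented JSON shape (note the camelCase `rotationQuaternion`):

```python
import json

def imu_record(position, quaternion, timestamp_ns):
    """Build one IMU sample in the documented JSON shape.

    position: (x, y, z) in meters; quaternion: (w, x, y, z);
    timestamp_ns: unix timestamp in nanoseconds.
    """
    x, y, z = position
    w, qx, qy, qz = quaternion
    return {
        "position": {"x": x, "y": y, "z": z},
        "rotationQuaternion": {"w": w, "x": qx, "y": qy, "z": qz},
        "timestamp": timestamp_ns,
    }

records = [
    imu_record((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0), 1742225789000000000),
    imu_record((0.1, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0), 1742225790000000000),
]
imu_json = json.dumps(records, indent=2)  # write this string to e.g. imu_data.json
```

The resulting file is the one you reference via `imu_data_resource` in the scene request above.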