
Upload ZOD data

This tutorial will guide you through uploading different scene types using the Zenseact Open Dataset (ZOD): https://zod.zenseact.com/. The purpose of this page is to show you some of the steps that might be needed to convert recordings into Kognic scenes.

Prerequisites & dependencies

1. To follow along in this guide you need to download the data from the Zenseact Open Dataset: https://zod.zenseact.com/download/. The data should be structured like this:

   ```
   zod
   ├── sequences
   │   ├── 000000
   │   ├── 000002
   │   └── ...
   └── trainval-sequences-mini.json
   ```

2. You will also need to install the `zod` Python package from PyPI, which provides some abstractions for reading the data:

   ```
   pip install zod
   ```

3. You need to have a Kognic account and the Kognic Python client installed. If you have not done this yet, read the Quickstart guide.

This guide follows the process of uploading scenes using ZOD data, using the example code from the kognic-io ZOD examples repository (https://github.com/annotell/kognic-io-examples-python/blob/30c725ad38a1e5a163c28f10163022d4d522acc8/examples/zod), which contains the complete source files for all of the snippets on this page. The examples are runnable if you have the data available and have Kognic authentication set up.

Cameras sequence

Our example code initialises a Kognic IO client at the top level, then creates the scene from ZOD data for (potentially) multiple scenes at once using a function:

```python
if __name__ == "__main__":
    client = KognicIOClient()
    upload_cameras_sequence_scenes(
        zod_path=Path("/path/to/zod"),  # Change me
        zod_version="mini",
        client=client,
        max_nr_scenes=1,
        max_nr_frames=10,
        dryrun=False,
    )
```

Lidars and cameras sequence

This example follows the same broad structure as the cameras-only sequence, with the addition of a lidar sensor. That addition brings:

- a calibration (see the Calibrations overview), to allow projection between 2D and 3D coordinate systems (see the Coordinate systems guide)
- conversion of point clouds from ZOD's packed numpy arrays
- conversion of ego poses for each frame

As before, we initialise a Kognic IO client at the top level, then create the scene from ZOD data for (potentially) multiple scenes at once using a function.

Aggregated lidars and cameras sequence

Aggregated lidars and cameras sequences are a special case of lidar + camera sequence scenes where the lidar data is aggregated across frames into a single pointcloud. This gives a dense, static pointcloud that represents the entire scene across all frames. Aggregated scenes may be created by providing a pointcloud on every frame and letting the Kognic platform handle aggregation, or they may be pre-aggregated and uploaded by specifying a pointcloud on the first frame, then nothing on subsequent frames.

In the case of ZOD data we only have per-frame pointclouds, so the example uploads a pointcloud on every frame and leaves aggregation to the platform. As such it is very similar to the lidar and camera sequence example, except that:

1. The scene type is different: `AggregatedLidarsAndCamerasSequence` instead of `LidarsAndCamerasSequence`:

   ```python
   def convert_scene(
       zod_sequence: ZodSequence, external_id: str, max_nr_frames: int
   ) -> AggregatedLidarsAndCamerasSequence:
       frames = convert_frames(zod_sequence, max_nr_frames)
       return AggregatedLidarsAndCamerasSequence(
           external_id=external_id,
           frames=frames,
           calibration_id="<to be set later>",
       )
   ```

2. The frames are of an aggregated-scene-specific type:

   ```python
   frames.append(
       ALCSFrame(
           relative_timestamp=ns_to_ms(frame_ts_ns) - start_ts_ms,
           frame_id=str(frame_ts_ns),
           images=[convert_zod_camera_frame_to_image(camera_frame)],
           point_clouds=[point_cloud],
           ego_vehicle_pose=convert_to_ego_vehicle_pose(ego_pose),
           unix_timestamp=frame_ts_ns,
       )
   )
   ```

3. Ego pose data is required (see the pose-conversion sketch below).
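On that last point: ZOD's geometry data classes generally hand you an ego pose as a 4x4 homogeneous transform matrix, while the Kognic scene model describes an ego vehicle pose as a position plus a rotation quaternion. The sketch below shows only that matrix-splitting math, under those assumptions; `split_pose_matrix` and the use of SciPy are ours, not the repository's `convert_to_ego_vehicle_pose`, and the exact kognic-io ego-pose fields should be checked against the version you have installed.

```python
# A minimal sketch of the pose math only. Wrapping the result in kognic-io's
# ego-pose model is left out here, since the exact model fields may vary
# between kognic-io versions.
import numpy as np
from scipy.spatial.transform import Rotation


def split_pose_matrix(pose: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a 4x4 homogeneous pose into a translation and a unit quaternion.

    Returns (x, y, z) and a scalar-last quaternion (qx, qy, qz, qw).
    """
    assert pose.shape == (4, 4), "expected a homogeneous transform matrix"
    translation = pose[:3, 3]
    quaternion = Rotation.from_matrix(pose[:3, :3]).as_quat()  # x, y, z, w
    return translation, quaternion
```

Note that SciPy returns quaternions in scalar-last order; double-check which order the target model expects before reusing this.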
Otherwise the two approaches are very similar; refer to the Lidars and cameras sequence section above.
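As a companion to the point-cloud conversion step mentioned earlier, here is one way the packed-array-to-file step could look: stack ZOD's per-point arrays into an N x 4 matrix and serialise it as an ASCII .pcd file. `write_pcd` is a hypothetical helper of ours, not part of zod or kognic-io, and you should confirm which point-cloud file formats the Kognic platform currently accepts (and what the real example code writes) rather than treating this as the supported path.

```python
# A hedged sketch: write an N x 3 point array plus per-point intensity as an
# ASCII PCD file. Assumes `points` and `intensity` have already been unpacked
# from ZOD's lidar data; the field layout below is plain PCD, not a
# Kognic-specific format.
import numpy as np


def write_pcd(points: np.ndarray, intensity: np.ndarray, path: str) -> None:
    n = len(points)
    header = "\n".join([
        "VERSION .7",
        "FIELDS x y z intensity",
        "SIZE 4 4 4 4",
        "TYPE F F F F",
        "COUNT 1 1 1 1",
        f"WIDTH {n}",
        "HEIGHT 1",
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {n}",
        "DATA ascii",
    ])
    data = np.column_stack([points, intensity])  # shape (n, 4)
    with open(path, "w") as f:
        f.write(header + "\n")
        np.savetxt(f, data, fmt="%.4f")
```

ASCII output keeps the sketch dependency-free and easy to inspect; at real data volumes a binary encoding would be the obvious choice.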