TEAM MANAGEMENT
Productivity Insights
15 min
This view is available to anyone with the Team Manager role. It surfaces your organization's annotation and review productivity in one place, updating automatically every 10 minutes.

> **Note:** "Annotation points" is a project-level metric. See our Annotation Points & Targets guide (https://docs.kognic.com/annotation-points-and-target) for details.

*A screenshot of the Productivity Insights view, detailing the data produced in a particular project.*

## 2. Annotate Productivity Tab

**Path:** Campus ▶︎ Productivity Insights ▶︎ Annotate Productivity

Track how your annotators perform against targets, with quality checks in one view.

### 2.1 Summary cards

| Card | Description |
| --- | --- |
| Total productivity | Overall annotation productivity (% of target); shown only if review tasks are not selected. |
| Total tasks completed | Sum of all completed tasks in the interval. |
| Team AP | Total annotation points awarded. |
| Total objects | Sum of all objects created. |
| Total shapes | Sum of all shapes (e.g. polygons, boxes) created. |

### 2.2 Annotators table

A pageable table listing each user's metrics.

| Column | Description |
| --- | --- |
| Name | User's name and email. Click to drill into their personal User Productivity page. |
| Ongoing tasks | Number of tasks currently in progress. |
| Productivity (AP) | % of target AP achieved (green if ≥ 100%, red if < 100%); actual AP in parentheses. |
| Tasks completed | Count of tasks finished. |
| Quality | Estimated quality (%), computed as 100 − (feedback items / reviewed objects × 100); shows N/A if no reviewed work in the interval. |
| Tasks reviewed | Number of tasks that received at least one review; higher values increase confidence in the quality metric. |
| Working time | Stopwatch-reported time; falls back to annotation time if no stopwatch data (marked with an asterisk). |
| Time allocation | Ratio of annotation time vs. total working time (colored red/yellow/white to indicate too low/high allocation). |
| Objects produced | Net objects created (can be negative if users remove objects in correct/review tasks). |
| Shapes produced | Net shapes created (can be negative if shapes are removed in correct/review tasks). |

### 2.3 Columns (detailed explanation)

This section describes the columns present in the table in more detail.

**Name**

The name and email of the given user. Click a user's name to open another view with even more insight into their performance.

**Ongoing tasks**

The number of currently ongoing tasks that the user is assigned to.

**Productivity (AP)**

How productive the listed user has been in the selected timespan compared with the expected productivity. The metric is displayed as a percentage, colored green if the user is exceeding expectations and red if not; the actual number of produced annotation points is shown in parentheses. As an example, if a user has worked 2 h in the selected time period, has produced 100 AP in that period, and the target is 100 AP/h, the user will have a productivity of 50%. The productivity metric is computed using the stopwatch time-reporting feature; if this feature is not used, the view will not contain productivity metrics and N/A will be displayed instead.

**Tasks completed**

How many tasks the user has completed during the selected timespan.

**Quality**

To understand whether a user is producing data of acceptable quality (not just at an expected rate), this column estimates the quality of the user's work. The quality estimate is presented as a percentage and is computed by subtracting the ratio of reported feedback items to reviewed objects from 100. For example, if a user has produced 100 objects across 2 tasks, and a reviewer has examined both tasks and reported 20 issues using the feedback tooling, the quality estimate will be 80% (see the sketch below). If none of the tasks the user completed in the selected timespan have been reviewed, no quality metric will be available. This is a crude metric that should not be used for purposes other than ensuring that users are not working at an unsustainable pace. Please note that this column will not contain any valuable data if the users' work is not reviewed in a way that uses the feedback tooling to report issues.
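To make the two formulas above concrete, here is a minimal sketch of the Productivity (AP) and Quality computations. The function names, the inputs, and the use of `None` to represent N/A are illustrative assumptions for this sketch, not part of any Kognic API.

```python
from typing import Optional

def productivity_pct(ap_produced: float, stopwatch_hours: float,
                     target_ap_per_hour: float) -> Optional[float]:
    """Productivity as a percentage of the AP target.

    Returns None (shown as N/A in the view) when no stopwatch time
    was reported, since the expected AP cannot be computed.
    """
    if stopwatch_hours <= 0:
        return None
    expected_ap = stopwatch_hours * target_ap_per_hour
    return ap_produced / expected_ap * 100

def quality_pct(feedback_items: int, reviewed_objects: int) -> Optional[float]:
    """Quality estimate: 100 - (feedback items / reviewed objects * 100).

    Returns None (N/A) when no objects were reviewed in the interval.
    """
    if reviewed_objects == 0:
        return None
    return 100 - (feedback_items / reviewed_objects * 100)

# The worked examples from the text:
assert productivity_pct(100, 2, 100) == 50.0  # 100 AP in 2 h against 100 AP/h
assert quality_pct(20, 100) == 80.0           # 20 issues on 100 reviewed objects
```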
**Tasks reviewed**

The number of tasks the user completed within the selected timespan that have been reviewed. The higher this number, the more trustworthy the quality estimate from the previous section.

**Working time**

How much time has been reported via stopwatch during the selected timespan. If no time has been reported, we fall back to showing the time spent annotating; in that case, a red asterisk indicates that no stopwatch time is available, which also makes it impossible to compute a productivity metric.

**Time allocation**

The ratio of time allocated to annotation to the total time reported during the selected timespan. This metric is included in the table to verify that users are not working too much or too little; if a user does not take any breaks during a workday, this probably indicates a problem. The time allocation is colored according to the following table to indicate whether a value is acceptable, too high, or too low (the sketch after the table expresses the same bands in code):

| Ratio | Indicator |
| --- | --- |
| ≤ 0.70 | 🔴 Too low |
| 0.70–0.75 | 🟡 Border |
| 0.75–0.85 | ⚪️ Optimal |
| 0.85–0.90 | 🟡 Border |
| ≥ 0.90 | 🔴 Too high |
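The banding above can be read as a small classifier. The thresholds are copied from the table; the function name is made up for this sketch, and the handling of the exact boundary values (e.g. whether 0.75 is "border" or "optimal") is an assumption, since the table does not say which band owns its endpoints.

```python
def time_allocation_band(ratio: float) -> str:
    """Classify annotation time / total working time into the table's bands."""
    if ratio <= 0.70:
        return "too low"   # 🔴
    if ratio < 0.75:
        return "border"    # 🟡
    if ratio < 0.85:
        return "optimal"   # ⚪️
    if ratio < 0.90:
        return "border"    # 🟡
    return "too high"      # 🔴 (ratio >= 0.90)

print(time_allocation_band(0.80))  # -> "optimal"
```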
**Objects produced**

How many objects the user has created during the selected timespan. Note that this can be a negative number, for example if:

- the user is doing correction tasks and is removing objects,
- the user is doing review tasks and is removing objects, or
- the user started the day by removing objects from a scene started on a previous day, and the timespan only accounts for today.

**Shapes produced**

How many shapes the user has created during the selected timespan. Note that this can be a negative number, for example if:

- the user is doing correction tasks and is removing shapes,
- the user is doing review tasks and is removing shapes, or
- the user started the day by removing shapes from a scene started on a previous day, and the timespan only accounts for today.

For the distinction between objects and shapes, see Objects and Shapes.

## 3. Review Productivity Tab

**Path:** Campus ▶︎ Productivity Insights ▶︎ Review Productivity

Gauge how your reviewers are keeping up with, and finding errors in, annotation work.

### 3.1 Summary cards

| Card | Description |
| --- | --- |
| Review coverage | % of annotation points that have been reviewed vs. annotated. |
| Total tasks completed | Number of review tasks submitted. |
| Found errors | % of errors detected by reviewers compared to a follow-up reviewer. |
| Total follow-up reviews | Count of second-pass reviews performed. |

*Image showing the reviewer productivity table.*

### 3.2 Reviewers table

| Column | Description |
| --- | --- |
| Name | Reviewer's name and email. |
| Review coverage | % of annotation points reviewed vs. annotation points created. |
| Tasks completed | Number of review tasks finished. |
| Found errors | % of errors the reviewer flagged versus those found by the follow-up review. |
| Follow-up reviews | Count of follow-up reviews on this reviewer's work. |
| Review time | Time spent in the annotation tool performing reviews. |
| Review ratio | Review time / annotation time ratio. |
| Feedback created | Number of feedback items the reviewer submitted. |

## 4. FAQs

**Q: Why is working time not showing correctly?**

A: Make sure the task category filter includes training and onboarding.

**Q: Why is working time still not showing correctly after I've toggled the additional filters?**

A: If working time is not showing correctly and time utilization is > 1.0, the annotator accidentally turned off the stopwatch while working. To get an accurate measurement of annotation time, please check the graphs in User Productivity.

**Q: How often does this view update?**

A: Every 10 minutes; no manual refresh needed.

**Q: Who can see this view?**

A: Only users with the Team Manager role.

**Q: Why is my quality metric N/A?**

A: Quality is based on feedback in reviewed tasks. If none of your completed tasks were reviewed, quality cannot be computed.