Productivity Insights
This view is available to all users with the Team Manager role. It shows the productivity of your organization's annotators, with the productivity of your organization's reviewers shown in a separate tab. The view automatically updates every 10 minutes.
There are several ways to filter which users appear in these overviews and which data is included. You can select the time interval during which the data was produced, as well as which projects and requests to include.
You can also select which types of tasks are included (Task type: Annotate, Review, or Correct; this filter is not available in the Reviewers' tab) and the context in which they were produced (Task category: Production, Training and examination, or Demo).

This view (as well as some other views) uses the term Annotation Points, a metric that describes the value of annotation data.
Annotation Points are configured at the project level: each project determines the value of the shapes, properties, and objects that uploaded data will be annotated with. A project also defines expectations, or annotation point targets: the number of annotation points a user is expected to produce per hour.
This tab details the productivity of the annotators in the selected data.
This section summarizes a few key metrics that are visible for the current selection:
- Total productivity: The total productivity in the current selection; only visible if review tasks are not selected
- Total tasks completed: The total number of tasks completed in the current selection
- Team AP: The total number of Annotation Points awarded in the current selection
- Total object count: The total number of objects produced in the current selection
- Total number of shapes: The total number of shapes produced in the current selection
This section describes the different columns present in the table.
This is the name and email of the given user. Click a user's name to open another view with more detailed insights into their performance.
The number of ongoing tasks that the user is currently assigned to.
This column details how productive the listed users are in the selected timespan compared with the expected productivity. The metric is displayed as a percentage, colored green if the user is exceeding expectations and red if not. Inside the parentheses, the actual number of produced annotation points is displayed. As an example, if a user has worked 2 h in the selected time period, has produced 100 AP in that period, and the target is 100 AP/h, the user will have a productivity of 50%. The productivity metric is computed using the stopwatch time reporting feature; if this feature is not used, this view will not contain productivity metrics and N/A will be displayed instead.
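The productivity percentage can be sketched as follows. This is an illustrative calculation only; the function name and signature are assumptions, not the platform's actual API.

```python
# Hedged sketch: actual annotation points as a share of the expected amount.
# Names and signature are illustrative, not the platform's real API.

def productivity_percent(ap_produced, hours_reported, target_ap_per_hour):
    """Return productivity as a percentage, or None when it cannot be computed."""
    if hours_reported <= 0 or target_ap_per_hour <= 0:
        return None  # mirrors the N/A shown when no stopwatch time exists
    expected_ap = hours_reported * target_ap_per_hour
    return 100 * ap_produced / expected_ap

# The example from the text: 2 h worked, 100 AP produced, target 100 AP/h
print(productivity_percent(100, 2, 100))  # 50.0
```

With 2 h at a 100 AP/h target, 200 AP were expected; producing only 100 AP yields 50%.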
The number of tasks that the user has completed during the selected timespan.
In order to understand whether a user is producing data with acceptable quality (not just at an expected rate), this column estimates the quality of the user's work. The quality estimate is presented as a percentage and is computed by subtracting the ratio of reported feedback items to reviewed objects from 100. For example, if a user has produced 100 objects in 2 tasks, and a reviewer has examined both tasks and reported 20 issues using the feedback tooling, the quality estimate will be 80%. If none of the tasks the user completed in the selected timespan have been reviewed, no quality metric will be available. This is a crude metric that should not be used for purposes other than ensuring that users are not working at an unsustainable pace. Please note that this column will not contain any valuable data if the user's work is not being reviewed in a way that uses the feedback tooling to report issues.
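The quality estimate described above can be sketched like this. The function name is illustrative; only the arithmetic follows the text.

```python
# Hedged sketch: 100 minus the ratio of reported issues to reviewed objects,
# expressed as a percentage. Names are illustrative.

def quality_estimate(feedback_items, reviewed_objects):
    """Return the quality estimate percentage, or None if nothing was reviewed."""
    if reviewed_objects == 0:
        return None  # no reviewed tasks: no quality metric available
    return 100 - 100 * feedback_items / reviewed_objects

# The example from the text: 100 objects reviewed, 20 issues reported
print(quality_estimate(20, 100))  # 80.0
```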
This is the number of tasks the user has completed within the selected timespan that have been reviewed. The higher this number, the more trustworthy the quality estimate from the previous section.
How much time has been reported via Stopwatch during the selected timespan. If no time has been reported, the view falls back to showing the time spent annotating. In this case, a red asterisk indicates that no stopwatch time is available, which also makes it impossible to compute a productivity metric.
The ratio of time allocated to annotation to the total time reported during the selected timespan. This metric is included in the table to verify that users are not working too much or too little. If a user does not take any breaks during a workday, this probably indicates a problem.
The time allocation is colored as follows to indicate whether a value is acceptable, too high, or too low:
- 0.70 or lower: RED
- 0.70 to 0.75: YELLOW
- 0.75 to 0.85: WHITE
- 0.85 to 0.90: YELLOW
- 0.90 or higher: RED
This is how many objects the user has created during the selected timespan. Note that this might be a negative number if:
- The user is doing Correction tasks and is removing objects
- The user is doing Review tasks and is removing objects
- The user started the day by removing objects from a scene started on a previous day, and the timespan only accounts for today.
This is how many shapes the user has created during the selected timespan. Note that this might be a negative number if:
- The user is doing Correction tasks and is removing shapes
- The user is doing Review tasks and is removing shapes
- The user started the day by removing shapes from a scene started on a previous day, and the timespan only accounts for today.
For a distinction between objects and shapes, see Objects and Shapes.
This tab allows you to gauge the productivity and quality of reviewers.

- Review coverage: The percentage of the annotated data that has been reviewed in the current selection
- Total tasks completed: The total number of review tasks that have been submitted in the current selection
- Found errors: Based on follow-up reviews, this percentage details the share of produced errors that were detected by the reviewers in the current selection
- Total follow-up reviews: The number of follow-up reviews that have been performed on the tasks reviewed in this selection
This section describes the columns available in the table.
This number describes the pace of the reviewer during the selected time span. It is based on the number of annotation points that have been reviewed compared to the configured target.
The number of tasks completed in the selected time span.
This column compares the number of errors found by a reviewer with the number of errors found by a follow-up reviewer. If the follow-up reviewer finds no errors, the reviewer is considered to have found all errors that were produced.
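One plausible reading of this comparison is sketched below. The exact formula is an assumption: errors found only in the follow-up review are treated as errors the first reviewer missed.

```python
# Hedged sketch of the "Found errors" metric. The formula is an assumption
# based on the description: follow-up findings count as missed errors.

def found_errors_percent(found_by_reviewer, found_in_follow_up):
    total = found_by_reviewer + found_in_follow_up
    if total == 0:
        return 100.0  # follow-up found nothing: reviewer found all errors
    return 100 * found_by_reviewer / total

print(found_errors_percent(8, 2))  # 80.0
```

If the follow-up reviewer finds no additional errors, the result is 100%, matching the text.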
The number of follow-up reviews that have been performed on a reviewer's work.
The amount of time spent in the annotation tool reviewing produced data.
The amount of time spent producing data compared to the amount of time it took to review the produced data.
The number of feedback items created by the reviewer during the selected time span.