Peer productivity
This page provides an overview of your team's overall performance.
This column displays each user's productivity compared to the expected productivity. The percentage is green if the user exceeds expectations and red if the user does not.
The productivity metric uses stopwatch time; if the stopwatch feature is not used, this view will not contain productivity metrics, and N/A is displayed instead of a percentage.
Inside the parentheses, the actual number of annotation points produced is displayed.
We use annotation points to help you measure the value of your work and ensure that you are performing as expected.
Example one
If a user works 2 hours in the selected time period and produces 100 APs in this period, and the target is 100 AP/h, the user's productivity will be 50% (100 APs produced out of an expected 2 h × 100 AP/h = 200 APs).
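The calculation in example one can be sketched as follows. This is a minimal illustration of the arithmetic described above; the function and parameter names are assumptions, not the app's actual API.

```python
# Productivity: actual annotation points versus the target output
# for the time tracked by the stopwatch. Illustrative sketch only.
def productivity(points_produced: float, hours_worked: float,
                 target_per_hour: float) -> float:
    """Return productivity as a percentage of the expected output."""
    expected = hours_worked * target_per_hour
    return 100.0 * points_produced / expected

# Example one: 100 APs produced in 2 hours against a 100 AP/h target.
print(productivity(100, 2, 100))  # → 50.0
```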
Example two
Let's say you get points for each
- Task: 10p
- Cuboid object: 4p
- 2D Bounding box object: 2p
If you annotate 10 cuboid objects and 7 2D bounding box objects in a task, the annotation points will be 10p × 1 + 4p × 10 + 2p × 7 = 64.
Review tasks do not generate points; in correction tasks, points are generated only when objects are added. This is by design.
To measure your productivity, or that of someone on your team, a target that you are expected to reach is needed.
The target you see in the app is your hourly target.
The hourly target already accounts for the fact that you spend only about 80% of your working time actively annotating. For example, if we think a reasonable hourly target is 300, we set the daily target to 1800 (300 × 6 active hours).
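The relationship between the hourly and daily targets can be expressed as below. This is a sketch of the arithmetic in the example above, assuming roughly 6 active annotating hours per day; the names are illustrative.

```python
# Daily target derived from the hourly target, assuming ~6 active
# annotating hours per working day (as in the example above).
ACTIVE_HOURS_PER_DAY = 6

def daily_target(hourly_target: int) -> int:
    """Return the daily target implied by an hourly target."""
    return hourly_target * ACTIVE_HOURS_PER_DAY

print(daily_target(300))  # → 1800
```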
We know there are differences between tasks (some are trickier than others), and we are most interested in average performance over time.
It is important to remember that productivity needs to be balanced with quality.
We estimate the quality of work to determine whether you're producing data of acceptable quality and not just at an expected rate.
The estimate is presented as a percentage in the Quality column. The higher the number in the Tasks Reviewed column, the more trustworthy the quality estimate is.
Example
If you have produced 100 objects across two tasks, and a reviewer has examined both tasks and reported 20 issues using the feedback tooling, the quality estimate will be 80%.
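The example suggests the estimate is the share of produced objects without reported issues. A sketch of that inferred formula (the function name and exact formula are assumptions based on the example, not a confirmed implementation):

```python
# Quality estimate inferred from the example: the share of produced
# objects that had no issue reported against them. Sketch only.
def quality_estimate(objects_produced: int, issues_reported: int) -> float:
    """Return the quality estimate as a percentage."""
    return 100.0 * (objects_produced - issues_reported) / objects_produced

# 100 objects in two reviewed tasks, 20 issues reported:
print(quality_estimate(100, 20))  # → 80.0
```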
This metric is used only to ensure that you are working at a sustainable pace; it will not be used for any other purpose.
In this column, we display how much time was reported via the stopwatch during the selected timespan.
If no time has been reported, we show the time spent annotating instead. In this case, a red asterisk indicates that no stopwatch time is available, and no productivity metric will be shown.
This metric is included in the table to verify that users are working neither too much nor too little. For example, if a user does not take any breaks during a workday, that indicates a problem.
The time allocation is colored as follows to indicate whether a value is acceptable, too high, or too low:
- 0.70 or lower: RED
- 0.70 to 0.75: YELLOW
- 0.75 to 0.85: WHITE
- 0.85 to 0.90: YELLOW
- 0.90 or higher: RED
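The color bands above can be sketched as a simple threshold function. Note this is an illustration only: the source lists overlapping endpoints (e.g. 0.70 appears in two bands), so the handling of exact boundary values here is an assumption.

```python
# Map a time-allocation ratio to its indicator color, using the
# thresholds listed above. Boundary handling is an assumption,
# since the listed bands share their endpoints.
def allocation_color(ratio: float) -> str:
    """Return RED, YELLOW, or WHITE for a time-allocation ratio."""
    if ratio <= 0.70 or ratio >= 0.90:
        return "RED"
    if ratio < 0.75 or ratio > 0.85:
        return "YELLOW"
    return "WHITE"

print(allocation_color(0.80))  # → WHITE
print(allocation_color(0.72))  # → YELLOW
print(allocation_color(0.95))  # → RED
```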
These columns show how many objects the user created during the selected timespan. This number can be negative if:
- The user is doing Correction tasks and removing objects
- The user is doing Review Tasks and is removing objects
- The user started the day by removing objects from a scene begun on a previous day, and the timespan covers only today.