Review monitoring
In a request where the review phase has been added to the request workflow, you can view the tab Review. Here you can monitor the current progress and the task acceptance/rejection ratio of the review phase.
In this section, you can monitor the current progress of the review phase. The information is intended to help you answer questions such as:
- How many of the inputs have been reviewed at least once?
- How many inputs with annotations have been accepted and are, therefore, delivery ready?
- How many preliminary annotations are currently in correction or re-review?
- How many preliminary annotations are in round 5 of the review?
In this card, you can view a summary of the review phase's current progress. The progress is calculated in relation to the total number of inputs that have left the annotation phase and entered the review phase.
The percentage of annotated inputs (within the review phase) that have been reviewed one or more times. In the progress bar below the percentage, you can see how many of these inputs are accepted, in correction, or in re-review.
Shows how many inputs have been accepted in review and are therefore delivery ready.
Shows how many inputs have been rejected in review and are currently in, or waiting for, a correction.
Shows how many inputs have been rejected in review, have already been corrected, and are currently in, or waiting for, re-review.
Shows how many inputs have not yet been through their first review.
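As a rough sketch, the summary above could be computed from one state per input. The state names (`accepted`, `in_correction`, `in_re_review`, `not_reviewed`) are assumptions for illustration, not the platform's actual identifiers:

```python
from collections import Counter

def progress_summary(states):
    """states: one assumed state string per input that has entered the review phase."""
    counts = Counter(states)
    total = len(states)
    # "Reviewed at least once" covers every input except those awaiting a first review.
    reviewed = total - counts["not_reviewed"]
    return {
        "reviewed_pct": round(100 * reviewed / total, 1) if total else 0.0,
        "accepted": counts["accepted"],
        "in_correction": counts["in_correction"],
        "in_re_review": counts["in_re_review"],
        "not_reviewed": counts["not_reviewed"],
    }

states = ["accepted", "accepted", "in_correction", "in_re_review", "not_reviewed"]
print(progress_summary(states))  # 4 of 5 inputs reviewed -> 80.0 %
```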
In the review phase, an annotated input can pass through multiple review rounds, depending on how many reviews and corrections it takes for it to reach acceptable quality. Each round consists of a cycle of correction and review, with the exception of round 1, which only contains a review step. When an annotated input is rejected, it moves to a new round.
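The round model described above can be sketched as a small state machine. The state names and function are hypothetical and only mirror the text: round 1 is review only, a rejection opens a new round, and a correction leads to re-review within the same round:

```python
def next_state(round_no, state, review_accepted=None):
    """Advance one (round, state) step; assumed states: review, correction, accepted."""
    if state == "review":
        if review_accepted:
            return round_no, "accepted"      # delivery ready
        return round_no + 1, "correction"    # rejected: input moves to a new round
    if state == "correction":
        return round_no, "review"            # corrected: re-review within the same round
    return round_no, state                   # "accepted" is terminal

# An input rejected in round 1, corrected, then accepted on re-review in round 2:
r, s = 1, "review"
r, s = next_state(r, s, review_accepted=False)
r, s = next_state(r, s)
r, s = next_state(r, s, review_accepted=True)
print(r, s)  # prints: 2 accepted
```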
In the card Distribution of inputs in rounds, you can view how the inputs are currently spread across the different states and rounds. For each round, you can see how many annotated inputs are currently in correction, in review (first review or re-review), or have been accepted and are therefore delivery ready.
This section and its chart aim to help you answer questions such as:
- What is the quality of the annotated dataset when it leaves the annotation flow?
- What is the ratio between accepted and rejected reviews in round 3?
- How many annotated inputs were rejected in round 2?
The chart visualizes the absolute number of accepted and rejected review tasks per round as a tornado chart. Next to each bar, you can see the absolute accepted/rejected numbers as well as the percentage ratio.
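The per-round numbers behind such a chart could be aggregated as follows. This is a hedged sketch with assumed input shape, a list of `(round_no, accepted)` pairs, one per completed review task:

```python
from collections import defaultdict

def per_round_ratio(review_tasks):
    """review_tasks: iterable of (round_no, accepted) pairs (assumed shape)."""
    tally = defaultdict(lambda: {"accepted": 0, "rejected": 0})
    for round_no, accepted in review_tasks:
        tally[round_no]["accepted" if accepted else "rejected"] += 1
    result = {}
    for round_no in sorted(tally):
        c = tally[round_no]
        total = c["accepted"] + c["rejected"]
        # Percentage ratio shown next to the bar.
        result[round_no] = {**c, "accepted_pct": round(100 * c["accepted"] / total)}
    return result

tasks = [(1, False), (1, False), (1, True), (2, True), (2, False)]
print(per_round_ratio(tasks))
```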
This section helps you get an aggregated overview of the Correction Requests and answer questions like:
- Which error type has the most correction requests?
- Which are the top sources of errors?
- For Properties errors: which properties are the most affected?
The shown numbers include both resolved and unresolved Correction Requests.
No of Correction Requests
The number of items that were categorized as "Correction Requests". This is the sum of all errors shown in the "Error Type Distribution" to the right.
No of Feedback Items
The number of items that were categorized as (general) Feedback. These are not shown here.
Error Type Distribution
Shows the absolute count and relative share of all feedback items categorized as "Correction Requests", grouped by their error type.
Suggested Properties
For items with the error type Properties, this shows the distribution of the properties that were affected. Each error indicated as Properties has a single property connected to it.
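The four cards above could be derived from the raw feedback items roughly as below. Field names (`kind`, `error_type`, `property`) are assumptions for the sketch, not the platform's actual data model:

```python
from collections import Counter

def aggregate_feedback(items):
    """items: dicts with assumed keys 'kind' ('correction_request' or 'feedback'),
    'error_type', and, for Properties errors, a 'property' name."""
    cr = [i for i in items if i["kind"] == "correction_request"]
    return {
        "n_correction_requests": len(cr),
        "n_feedback_items": len(items) - len(cr),
        # Each error type's count; summing these gives n_correction_requests.
        "error_type_distribution": dict(Counter(i["error_type"] for i in cr)),
        # Each Properties error carries exactly one affected property.
        "affected_properties": dict(
            Counter(i["property"] for i in cr if i["error_type"] == "Properties")
        ),
    }

items = [
    {"kind": "correction_request", "error_type": "Properties", "property": "color"},
    {"kind": "correction_request", "error_type": "MissingObject"},
    {"kind": "feedback", "error_type": None},
]
print(aggregate_feedback(items))
```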
This section helps you to get an overview of all given feedback, to answer questions such as:
- How detailed and critical is the feedback from my fellow reviewers?
- Are the reviewers giving valid feedback given the current guideline?
- Which feedback items are the reviewer and annotator discussing in the comments?
- Which feedback items are of type "MissingObject"?
The items are split up by their feedback type.
In this section you see feedback items given as Correction Requests. These are things that the reviewer wants to be fixed before accepting the review.
You can filter the feedback items by their Resolved status, the Error type, whether a discussion thread exists, or whether the overall Review of the input has been accepted yet.
Status Whether this specific item was marked as "Resolved", "Unresolved" or "Invalid". An item can be "Unresolved" even if the overall Review was accepted, or the other way around.
An item can be marked as "Invalid" by the user if they think it is not accurate with respect to the guidelines of the task. This can be due to mistakes in machine-generated feedback or from human reviewers.
Error Type The type of error that was selected in the Correction Request.
Suggested property If the Error type is "Properties", this shows which property and value was suggested by the reviewer.
Comment Shows the description that the reviewer may have given.
Thread exists Yes if the item has received at least one reply, i.e. a discussion thread has been started in relation to the item.
External Input ID The input ID of the reviewed annotation.
Input's Current Round The review round that the input of this feedback item is currently in. All inputs start in round 1. With each rejected review, they move one round forward.
Accepted Review Whether the overall Review was accepted.
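The filters described above amount to matching on these fields. A minimal sketch, with the field names assumed from the list above rather than taken from the actual API:

```python
def filter_correction_requests(items, status=None, error_type=None,
                               thread_exists=None, review_accepted=None):
    """Keep items matching every filter that is not None (assumed field names)."""
    def keep(i):
        return ((status is None or i["status"] == status)
                and (error_type is None or i["error_type"] == error_type)
                and (thread_exists is None or i["thread_exists"] == thread_exists)
                and (review_accepted is None or i["review_accepted"] == review_accepted))
    return [i for i in items if keep(i)]

items = [
    {"status": "Unresolved", "error_type": "Properties",
     "thread_exists": True, "review_accepted": False},
    {"status": "Resolved", "error_type": "MissingObject",
     "thread_exists": False, "review_accepted": True},
]
# Only the first item is still unresolved:
print(filter_correction_requests(items, status="Unresolved"))
```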
In this section, you see feedback items given as (general) Feedback. As the underlying data has less structure, the table has fewer columns and filtering options, but otherwise looks the same as the one above.