PROJECT MANAGEMENT
Kognic Standard Workflow
Quality Review
22 min
Limited Access

This phase is currently under development and is only available to a limited number of organizations.

The Quality Review (QR) phase is part of the docid\ seukel bw5wxsob2z2tuk , which aims to enable the delivery of sufficient-quality annotations on time and at the expected cost.

Goal

The phase's main goal is to ensure that the quality of the labels produced by annotators aligns with the expectations of the quality managers (QMs) working in the request.

Users and Actions

The QR phase contains two actions, Review and Correct, which should be configured to contain different groups of users.

Annotators: To ensure the phase functions as expected, configure users who annotate data in the Annotate phase with the Correct action in the QR phase. This ensures they are available for monitoring and configuration, and that they receive correction tasks for the inputs they have labeled.

Reviewers: QMs responsible for data quality should be configured with the Review action. They will receive review tasks that allow them to assess the quality of labels created by annotators in the request.

Monitoring

The primary monitoring tool for the QR phase is the annotator sampling table. It contains some basic user information (name, e-mail, organization and workforce) as well as some phase-specific metrics and controls.

Annotator sampling table inside the QR phase

R1 (Round 1) Acceptance: The primary measure of annotator label quality is the percentage of their sampled annotations accepted on first review by a QM during this phase.

Current Review Sample: This shows the percentage and absolute number of an annotator's work that has been reviewed and used to calculate the R1 acceptance ratio. The absolute numbers help you determine whether the sample size is large or small for this specific request.

Alignment Score: The alignment score indicates how well an annotator's work aligns with the established quality standards of the project. These scores help in assessing the annotator's performance and identifying areas for improvement. Read more.

Tooling

Various controls and tools are available to reviewers in the QR phase. These are mainly concerned with modifying how labels are sampled for review, depending on the users' performance and the overall state of the request.

Sampled Review

Label quality is primarily estimated based on feedback items generated in review tasks performed by the QMs. The allocation of review tasks is team-based if possible: if the annotator is part of an annotation team that has a QM present in the request, review tasks will be assigned to that QM. The Team column in the annotator sampling table gives you information on the current allocation situation for that user.

When a review is done, these feedback items are aggregated, and various metrics are calculated and provided for the QMs to evaluate and coach annotators based on their performance.

Sampling Rate

All users undergo continuous performance evaluation through "baseline sampling". This ensures at least one task is sampled for every annotator in the phase, regardless of their performance or other settings, allowing metrics to be calculated for everyone.

The annotator sampling table allows you to set custom sampling rates for individual users. This is particularly useful for more thoroughly evaluating annotators who may be producing lower-quality labels. The increased sampling provides more accurate performance measurements and enables quality managers to give direct feedback. Annotators then receive correction tasks where they can learn from this feedback while improving their labels before they proceed through the pipeline.

You can leverage the alignment score to understand how well an annotator's initial work aligns with the reviewer's expectations, both for the specific phase and for the project in general. A high alignment score indicates that the reviewer does not need to review the annotator's work before accepting it. Read more: docid\ pgncji7eztsd4pekq2pia

Send Non-selected Inputs for Review
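The interplay between baseline sampling and per-user sampling rates described above can be sketched as follows. This is a minimal illustration only: the constant `BASELINE_MIN_TASKS`, the function name, and the use of simple random sampling are assumptions for the sketch, not Kognic's actual implementation.

```python
import random

# Baseline sampling guarantees at least one reviewed task per annotator
# (the exact baseline amount is an assumption here).
BASELINE_MIN_TASKS = 1

def should_sample_for_review(tasks_sampled_so_far: int,
                             custom_sampling_rate: float) -> bool:
    """Decide whether a newly submitted label is sent for review.

    Baseline sampling overrides everything: every annotator gets at least
    one sampled task so that metrics can be calculated for everyone.
    Beyond that, sampling follows the per-user rate (0.0 to 1.0) set in
    the annotator sampling table.
    """
    if tasks_sampled_so_far < BASELINE_MIN_TASKS:
        return True
    return random.random() < custom_sampling_rate
```

Raising the custom rate for a struggling annotator simply makes the second branch fire more often, which is what produces the larger review sample and the more accurate performance measurements mentioned above.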
Depending on the sampling rate configured for an annotator at different times, only a subset of their labels may have been selected for review. In cases where performance is sub-par, it is possible to later create review tasks for all, or a subset, of the labels created by a specific user. This feature is available in the annotator sampling table, under the menu on the right. It is also possible to send individual non-selected inputs for review from the phase inputs table.

Gate

To allow for manual review selection of inputs, the gate is a mechanism for preventing inputs from moving out of the phase before adequate assessments of label quality can be made. Two actions can be taken in the gate controls, and they have different effects on currently waiting inputs and on future inputs arriving in the phase.

Open Gate for Non-sampled Inputs: Opening the gate will forward all currently waiting inputs to the next phase and ensure that any future inputs entering the Quality Review phase are sent forward to the Expert Verification phase without review.

Review Everything: If this option is selected, review tasks will be created for all currently waiting, unselected inputs. Any future inputs entering the Quality Review phase will automatically be selected for review and have tasks created. Once this option has been selected, sampling levels for individual users are no longer relevant, and it will not be possible to change them.

Phase Inputs

In the phase inputs table, you can see all inputs that are currently inside the phase. For each input, you can see when it changed workflow stage, how many tasks have been done on it in the current phase, and the state and type of any current task.

The table includes an Estimated Quality column that shows a conservative quality estimate for each input, based on the lower bound of the confidence interval. This helps reviewers prioritize which inputs to review first. The table is sorted by estimated quality, ascending, by default, so the worst-quality inputs appear at the top. These estimates use the same quality metrics shown in docid\ p gtk2pqoxxgmxhxliflm , surfaced here for convenience.

The actions Quick Accept and Quick Reject are available for inputs with unstarted review tasks. You can read more about them at https://docs.kognic.com/review#fviud

Error Summary

With the error summary, you get insight into which issues reviewers have found and commented on during the phase's review tasks. It helps you understand both the most common and the less frequently identified issues. The error summary insights are based on feedback items written by the phase's reviewers. Absolute numbers represent actual feedback items, not the edits made in response to the feedback.

No. of Correction Requests: The number of feedback items of the type "correction request". This is the sum of all errors shown in the "error type distribution" to the right.

No. of Feedback Items: The number of feedback items of the type "advice". These are excluded from the charts "error type distribution" and "suggested properties".

Error Type Distribution: Shows the absolute count and relative share of all feedback items categorized as "correction requests", grouped by their error type.

Suggested Properties: For items with the error type "properties", this shows the distribution of the properties that were affected. Each error of type "properties" has a single property connected to it.

Individual Feedback Items

This section helps you get an overview of all given feedback, to answer questions such as:

How detailed and critical is the feedback of my colleague reviewers?
Are the reviewers giving valid feedback given the current guideline?
What is feedback where the reviewer and annotator are discussing in the comments?
How does feedback of type "missingobject" look?
What type of feedback is marked as "invalid"?
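The "lower bound of the confidence interval" used for the Estimated Quality column is a standard way to be conservative when only a few reviews exist: an input with 2 of 2 checks passed gets a much lower estimate than one with 50 of 50. One common formula for such a bound is the Wilson score interval, sketched below; Kognic's exact formula is not stated in this article, so treat this purely as an illustration of the idea.

```python
import math

def wilson_lower_bound(accepted: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for an acceptance ratio.

    `z` = 1.96 corresponds to a 95% confidence level. With no data the
    conservative estimate is 0. Small samples are pulled strongly toward
    0, which is why inputs with few reviews sort toward the top of an
    ascending "estimated quality" ordering.
    """
    if total == 0:
        return 0.0
    p = accepted / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom
```

For example, 10 accepted out of 10 yields a bound of roughly 0.72, not 1.0, while 50 out of 100 yields a higher bound than 5 out of 10 despite the same ratio, reflecting the larger sample.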
The items are split up by their feedback type.

Correction Requests

In this section, you see feedback items of the type correction request. These are things that the reviewer wants corrected before accepting the review. You can filter the feedback items by their resolved status, the error type, whether a discussion thread exists, or whether the overall review of the input has been accepted yet.

Below is a description of the information available for each correction request.

Status:
Unresolved: Created by the reviewer and not yet corrected.
Corrected: The annotator has fixed the issue mentioned in the correction request.
Resolved: The reviewer has approved the annotator's fix.
Invalid: An item can be marked as "invalid" by the user if they think it is not accurate with respect to the guidelines of the task; this can be due to mistakes in machine-generated feedback or from human reviewers.

Note that an item can be "unresolved" even if the overall review was accepted, or the other way around.

Error Type: The type of error that was selected in the correction request.
Suggested Property: If the error type is "properties", this column shows which property and value were suggested by the reviewer.
Comment: Shows the description that the reviewer may have given.
Thread Exists: Says "yes" if there was any reply to the item, i.e. a discussion thread has been started in relation to the item.
External Scene ID: The scene ID of the reviewed annotation.
Current Round: The review round in which the input of this feedback item currently is. All inputs start in round 1; with each rejected review, they progress one round forward.
Accepted Review: Whether the overall review was accepted or not.

Feedback

In this section, you see feedback items of the type advice. As the underlying data has less structure, the table has fewer columns and filtering options, but otherwise it looks the same as the table above.
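The correction-request statuses described above form a small lifecycle, which can be summarized as a state machine. The status names come from the table, but the exact transition rules (for instance, whether a reviewer can reopen a "corrected" item back to "unresolved") are assumptions made for this sketch, not documented behavior.

```python
# Hypothetical lifecycle of a correction request. Status names come from
# the documentation; the allowed transitions are assumptions.
ALLOWED_TRANSITIONS = {
    "unresolved": {"corrected", "invalid"},              # annotator fixes it, or item is marked invalid
    "corrected": {"resolved", "unresolved", "invalid"},  # reviewer approves, reopens, or invalidates
    "resolved": set(),                                   # terminal: fix approved
    "invalid": set(),                                    # terminal: item deemed inaccurate
}

def transition(status: str, new_status: str) -> str:
    """Move a feedback item to a new status, rejecting illegal moves."""
    if new_status not in ALLOWED_TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status!r} to {new_status!r}")
    return new_status
```

Note that this lifecycle is independent of whether the overall review was accepted, matching the remark above that an item can stay "unresolved" even on an accepted review.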
