# Alignment Analytics
## Prerequisites

To access Alignment Analytics, you need one of the following:

- A role with access to Request Management in your organization
- Kognic internal user access
- Pilot access for specific projects

You can find the page by navigating to a request in Request Management and clicking **View Alignment Analytics** on the request summary tab.

## Overview

Alignment Analytics shows you how well annotation work aligns with review standards. It gives you a quality picture for a specific request, covering reviewed work, partially reviewed work, and estimates for unreviewed work.

Use this page to:

- Understand the quality level of delivered annotations
- Identify annotators whose work may need additional review
- Track whether review rounds are catching and correcting issues effectively

## How it works

Each time work passes through a review phase, the system compares the version before and after that phase. The reviewer may accept the work as is or make corrections; either way, the system records every difference: shapes that were added, removed, or modified, and properties that were changed. These differences are turned into measurable quality metrics.

Think of it as a before-and-after comparison:

- **Before:** the work as it entered the review phase
- **After:** the work as it left the review phase (after the reviewer accepted or corrected it)

The smaller the difference, the higher the quality.

## Quality metrics

Quality is broken down into four sub-metrics:

| Metric | What it measures |
| --- | --- |
| Precision | How accurately annotators identify the correct objects |
| Recall | How completely all objects in the scene are covered |
| Geometry | How accurately shape boundaries match the ground truth |
| Properties | How accurately object properties are set |

Each metric is displayed as a percentage. A score near 100% means almost no corrections were needed.

## Quality phases

The page shows three quality cards, each measuring quality at a different stage of the review workflow.

### Initial quality

"How much did the first review round need to
correct?"

This compares the annotator's original work to the output after the first review. A low score means the reviewer had to make significant corrections; a high score means the annotation was already close to the desired result.

### Post-review quality

"How good is the work after the first review round?"

This compares the first review round's output to the output after a second review. If this score is near 100%, the first review round produced work that needed almost no further corrections.

### Overall quality

"What is the estimated quality of everything delivered in this request?"

This combines all available data into a single quality picture. It is the best estimate of the actual quality your customer receives. See "Understanding the overall quality estimate" below for how this is calculated.

## Understanding the overall quality estimate

Not every assignment gets reviewed. The overall quality metric provides a complete picture by combining three types of data:

- **Fully reviewed work** (reviewed twice): treated as the ground truth for quality measurement
- **Partially reviewed work** (reviewed once): quality inferred from available review data
- **Unreviewed work**: quality estimated using the annotator's track record

### How unreviewed work is estimated

The system estimates quality for unreviewed assignments by looking at reviewed work from the same annotator:

| Estimation type | How it works |
| --- | --- |
| Request sampling | Uses reviewed work from the same annotator in this request |
| Project sampling | Uses reviewed work from the same annotator in other requests within the project |
| No sampling available | No review data exists for this annotator anywhere in the project; quality cannot be estimated |

If some assignments cannot be estimated, a warning is displayed on the overall quality card.

## Confidence intervals

Every metric comes with a confidence interval that tells you how reliable the number is. The interval gets tighter as more data becomes available.

What to watch for:

- **Wide intervals** (e.g., 60-95%) mean the metric could shift significantly as more reviews come in. Treat
the score as an early indicator, not a definitive result.
- **Narrow intervals** (e.g., 88-92%) mean the metric is stable and reliable.

## Unreviewed inputs

Below the quality cards, the page shows the distribution of unreviewed work:

- **Unreviewed inputs**: total number of inputs not yet reviewed in this request
- **Unreviewed in request**: annotators with no reviews in this request but reviewed elsewhere in the project (number of users and inputs)
- **Unreviewed in project**: annotators never reviewed anywhere in this project; these are flagged with a warning because their quality cannot be estimated

### Unreviewed inputs table

The table lists every unreviewed assignment:

| Column | Description |
| --- | --- |
| Scene ID | The input identifier |
| Name | The annotator who completed the work |
| Estimated quality | The estimated quality score (hover for a sub-metric breakdown) |
| Estimation type | How the score was calculated: request sampling, project sampling, or no sampling available |

You can filter the table by estimation type to focus on specific groups. Right-click any row and select **Open in task view** to see the assignment in the annotation tool.

## FAQ

### Why does the overall quality card show a warning?

Some annotators have no reviewed work anywhere in the project. The system cannot estimate their quality, so those inputs are excluded from the overall score. The warning tells you how many inputs are affected.

### Why is an estimated quality score showing "N/A"?

This means there is no review data available to estimate quality for that annotator, neither in this request nor elsewhere in the project.

### What's the difference between "Unreviewed in request" and "Unreviewed in project"?

"Unreviewed in request" means the annotator has been reviewed in other requests within the same project, so we can still estimate their quality using project sampling. "Unreviewed in project" means the annotator has never been reviewed in this project at all, so we cannot estimate their quality.

### How can I improve the reliability of quality scores?
- Review more assignments. As the number of reviewed samples increases, confidence intervals narrow and estimates become more stable.
- Prioritize reviewing annotators who appear under "Unreviewed in project", since those are your biggest blind spots.

## Related

- [Request Management](https://docs.kognic.com/request-management): where you access Alignment Analytics from
- [Annotation Workflows](https://docs.kognic.com/annotation-workflows): how review phases work
- [Roles and Permissions](https://docs.kognic.com/roles-and-permissions): access control details
- Analyze individual users' alignment (docid: pgncji7eztsd4pekq2pia)
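The before-and-after comparison described under "How it works" can be sketched in a few lines. This is a hypothetical illustration, not Kognic's implementation: it reduces shapes to IDs and derives precision and recall from which shapes survived review, whereas the real system also matches geometries and scores property changes.

```python
# Hypothetical sketch of a before/after review comparison.
# Shape IDs and exact-match logic are illustrative assumptions only.

def precision_recall(before: set[str], after: set[str]) -> tuple[float, float]:
    """Compare shape IDs before a review phase (the annotator's work)
    with the IDs after it (the reviewer-accepted result)."""
    kept = before & after  # shapes the reviewer accepted unchanged
    precision = len(kept) / len(before) if before else 1.0
    recall = len(kept) / len(after) if after else 1.0
    return precision, recall

# Example: the annotator drew shapes a, b, c; the reviewer removed c
# (a false positive) and added d (a missed object).
p, r = precision_recall({"a", "b", "c"}, {"a", "b", "d"})
print(f"precision={p:.0%} recall={r:.0%}")  # precision=67% recall=67%
```

In this toy model, a removed shape lowers precision and an added shape lowers recall, matching the intuition in the metric table above: the smaller the difference between before and after, the closer both scores sit to 100%.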

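The way confidence intervals tighten as more reviews accumulate can be illustrated with a standard normal-approximation interval around a mean quality score. The formula below is a generic statistical sketch for illustration, not the interval calculation the product actually uses.

```python
# Hypothetical sketch: a ~95% confidence interval around a mean quality
# score, showing how the interval narrows as reviewed samples accumulate.
# The normal approximation (mean +/- 1.96 * standard error) is an
# illustrative assumption, not Kognic's formula.
import statistics

def overall_quality(scores: list[float]) -> tuple[float, float, float]:
    """Return (mean, lower bound, upper bound) for per-review scores in [0, 1]."""
    mean = statistics.fmean(scores)
    if len(scores) < 2:
        return mean, 0.0, 1.0  # a single sample gives no usable interval
    sem = statistics.stdev(scores) / len(scores) ** 0.5  # standard error
    return mean, max(0.0, mean - 1.96 * sem), min(1.0, mean + 1.96 * sem)

few = [0.90, 0.70, 0.85]   # few reviews: wide interval
many = few * 10            # same scores, more reviews: tighter interval
for scores in (few, many):
    m, lo, hi = overall_quality(scores)
    print(f"n={len(scores):2d}  mean={m:.0%}  interval=[{lo:.0%}, {hi:.0%}]")
```

Running the sketch shows the same mean with a much narrower interval for the larger sample, which is why reviewing more assignments is the main lever for making the dashboard's scores reliable.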