Quality Alignment
Background

Quality alignment indicates how well annotators' work aligns with expected quality standards. It measures the accuracy of annotation work by comparing it against review corrections and ground truth data.

This page helps quality managers and team leads:
- Monitor individual annotator performance across key quality metrics.
- Identify training opportunities by spotting consistent patterns in errors (missing shapes, incorrect classifications, geometry inaccuracies, property mistakes).
- Track quality trends over time to see whether performance is improving or declining.
- Make data-driven decisions about team assignments, training needs, and quality processes.

How It Works

Quality alignment metrics are calculated by analyzing the edits made during the review phase. When an annotator's work is corrected, these corrections are tracked across four key dimensions:

Precision
Are annotators labeling shapes correctly, or are they frequently mislabeling things?
Formula: (correct identifications) / (total identifications made)

Recall
Are annotators finding all the shapes they should, or are they missing important items?
Formula: (correct identifications) / (total shapes that should have been found)

Geometry accuracy
Are shapes' sizes, positions, and orientations correct?
Formula: 1 − (edited shapes / reviewed shapes)

Property accuracy
Are shape attributes and properties assigned correctly?
Formula: 1 − (edited properties / reviewed properties)

The alignment score provides a single overall quality indicator by taking the minimum of these four metrics. This ensures that a weakness in any dimension is reflected in the overall score. (A minimal calculation sketch is included at the end of this page.)

Shape-Based Calculations

We use a shape-based method instead of an object-based method, evaluating each individual geometric shape independently to provide granular accuracy measurements.

Benefits:
- Higher precision: captures partial errors within multi-shape objects.
- Fairer evaluation: accuracy reflects actual geometric work, not just object-level decisions.
- Better training data: identifies specific geometric issues rather than broad object-level failures.

When to Use This

Use quality alignment when you need to:
- Evaluate annotator readiness for production work.
- Investigate quality issues reported by reviewers or clients.
- Compare performance across team members.
- Track the effectiveness of training programs.
- Identify which types of errors (class confusion, geometry issues, property mistakes) are most common.

Components

Alignment score trend
Trend graph for the alignment score with the currently selected filters. Shows the average value per day and can be further filtered on specific users.

Table
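The Python sketch below illustrates how the four metrics and the alignment score described in How It Works fit together, assuming the review phase yields simple counts of correct, edited, and reviewed items. All names here (ReviewCounts, alignment_metrics, and the field names) are hypothetical and do not reflect the platform's actual data model or API.

from dataclasses import dataclass


@dataclass
class ReviewCounts:
    correct_identifications: int   # shapes the annotator labeled correctly
    total_identifications: int     # all shapes the annotator created
    expected_shapes: int           # shapes that should have been found
    edited_shapes: int             # shapes corrected during review
    reviewed_shapes: int           # shapes examined during review
    edited_properties: int         # properties corrected during review
    reviewed_properties: int       # properties examined during review


def alignment_metrics(c: ReviewCounts) -> dict[str, float]:
    # The four dimensions, following the formulas above.
    precision = c.correct_identifications / c.total_identifications
    recall = c.correct_identifications / c.expected_shapes
    geometry_accuracy = 1 - c.edited_shapes / c.reviewed_shapes
    property_accuracy = 1 - c.edited_properties / c.reviewed_properties
    metrics = {
        "precision": precision,
        "recall": recall,
        "geometry_accuracy": geometry_accuracy,
        "property_accuracy": property_accuracy,
    }
    # The alignment score is the minimum of the four metrics, so a weakness
    # in any single dimension lowers the overall score.
    metrics["alignment_score"] = min(metrics.values())
    return metrics


# Example with made-up counts: 90 of 100 created shapes were correct,
# 110 shapes were expected, 8 of 100 reviewed shapes and 5 of 80 reviewed
# properties were edited during review.
print(alignment_metrics(ReviewCounts(
    correct_identifications=90, total_identifications=100,
    expected_shapes=110, edited_shapes=8, reviewed_shapes=100,
    edited_properties=5, reviewed_properties=80,
)))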