Labeler Evaluation

Calculating Labeler Quality Scores

RedBrick AI allows you to upload a Ground Truth annotation file alongside any image or volume file for the purposes of evaluating labeler quality.

This can be useful when you'd like to have RedBrick AI calculate a score that you can use to compare a specific labeler's performance against a known Ground Truth label set.
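
RedBrick AI calculates and displays this score for you; the exact method is documented on the Agreement calculation page. Purely as a rough, hypothetical illustration of what an overlap-based agreement score between a labeler's segmentation and a Ground Truth baseline can look like, the sketch below computes a Dice coefficient. It is not RedBrick AI's actual agreement calculation.

```python
# Hypothetical illustration only: an overlap-based agreement score between a
# labeler's binary segmentation mask and a Baseline (Ground Truth) mask.
# This is NOT RedBrick AI's agreement calculation; see the Agreement
# calculation documentation for the metric the platform actually uses.
import numpy as np

def dice_score(labeler_mask: np.ndarray, baseline_mask: np.ndarray) -> float:
    """Dice coefficient in [0, 1]; 1.0 means perfect agreement."""
    labeler = labeler_mask.astype(bool)
    baseline = baseline_mask.astype(bool)
    overlap = np.logical_and(labeler, baseline).sum()
    total = labeler.sum() + baseline.sum()
    if total == 0:
        return 1.0  # both masks empty -> treat as full agreement
    return float(2.0 * overlap / total)

# Example: two toy 2D masks that mostly overlap.
a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 1:4] = 1
print(dice_score(a, b))  # 0.8
```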

Blinded vs. Non-blinded Annotations

When evaluating labeler quality with an Evaluation Task, you have the option of allowing your labelers to visually reference the Ground Truth annotations that you are using as a baseline or keeping those annotations hidden from the labeler.

This feature is referred to as either Non-blinded Annotations or Blinded Annotations, respectively.

To create an Evaluation Task in RedBrick AI, you can take the following steps:

  1. Upload your image/volume alongside your Ground Truth annotation file ("Baseline Annotations"). A walkthrough of how to do so can be found in our documentation for importing annotations; a sketch of a programmatic upload follows this list.

  2. After your Task has been created, determine whether you would like your labelers to see the Baseline while working. Navigate to your Project Settings and enable or disable the Show reference annotations toggle.

    1. With the toggle enabled, labelers will be able to see the Baseline Annotations while working. With the toggle disabled, the Baseline Annotations will be invisible to the labeler.

    2. Please note that we also generally recommend disabling Automatic Task Assignment when testing labeler quality.

  3. Assign the Task to your labeler for completion.

  4. After your labeler finalizes the Task, an agreement score will be displayed on the Data Page.
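
The upload in step 1 can also be done programmatically. The snippet below is a minimal sketch, assuming the Python SDK's create_datapoints method and the annotation import format described in the Formats section; the org/project IDs, API key, storage ID, file paths, and the "Lesion" category are placeholders, and the exact field names should be verified against the Importing Annotations Guide before use.

```python
# Minimal sketch (not a verified end-to-end example): importing one volume
# together with its Baseline (Ground Truth) annotation file via the Python SDK.
# All IDs, paths, and the category name below are placeholders.
import redbrick

ORG_ID = "<org-id>"
PROJECT_ID = "<project-id>"
API_KEY = "<api-key>"

project = redbrick.get_project(ORG_ID, PROJECT_ID, API_KEY)

# One datapoint: the image/volume plus the annotation file to evaluate against.
points = [
    {
        "name": "evaluation-task-001",
        "series": [
            {
                "items": ["path/in/storage/volume.nii.gz"],
                "segmentations": "path/in/storage/baseline-annotations.nii.gz",
                "segmentMap": {"1": {"category": "Lesion"}},
            }
        ],
    }
]

# storage_id identifies the storage method configured under Integrations
# (e.g. AWS S3, Azure Blob, GCS).
project.upload.create_datapoints(storage_id="<storage-id>", points=points)
```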

[Image: Sample Flow]
[Image: Relevant toggles in Project Settings]
[Image: A completed Evaluation Task and score]