Programmatic Label & Review


It may be useful to programmatically add labels to your uploaded data or perform a review on queued tasks. This scenario may arise if you have an automated way of generating or reviewing annotations, or if you want to bulk-process tasks.

Please see the detailed reference documentation for put_tasks.

You can only use put_tasks on Tasks assigned to your API key.

Please consult our documentation to learn more about how to assign Tasks to your API key.

First, perform the standard RedBrick AI SDK set-up to create a project object.

import redbrick

project = redbrick.get_project(org_id, project_id, api_key)

Next, you need to get a list of Tasks you want to label/review. You can do this by:

  1. Searching for the task_id through the RedBrick AI UI.

  2. Retrieving the task_id by your file name/custom name from the Items List using search_tasks, as sketched below.

  3. Retrieving tasks assigned to your API key using list_tasks.
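
For instance, the following minimal sketch looks up task IDs by item name. It assumes that search_tasks is available under project.export, that it accepts a name filter, and that each returned task dictionary carries a "taskId" key; check the reference documentation for the exact signature.

# Minimal sketch, assuming project.export.search_tasks accepts a name
# filter and yields task dictionaries containing a "taskId" key.
matching_tasks = project.export.search_tasks(name="study-001.nii.gz")
task_ids = [task["taskId"] for task in matching_tasks]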

Programmatically Label Tasks

Add your annotations within the series field, along with the task_id. Please refer to the reference documentation for the format of the annotations in Series.

The corresponding Task must be queued in the Label Stage and assigned to your API key.

tasks = [
    {
        "taskId": "...",
        "series": [{...}]
    },
]

# Submit tasks with new labels
project.labeling.put_tasks("Label", tasks)

# Save tasks as draft with new labels
project.labeling.put_tasks("Label", tasks, finalize=False)
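
Calling put_tasks without finalize submits the labels and moves the Task forward in the workflow; passing finalize=False instead saves the labels as a draft, leaving the Task queued in the Label Stage.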

Programmatically Review Tasks

Add your review decision in the review_result argument, along with the task_id. The corresponding Task must be queued in the Review stage that you specify in stage_name and must be assigned to your API key.

# Set review_result to True if you want to accept the tasks
project.review.put_tasks("Review_1", [{"taskId": "..."}], review_result=True)

# Set review_result to False if you want to reject the tasks
project.review.put_tasks("Review_1", [{"taskId": "..."}], review_result=False)

# Add labels if you want to accept the tasks with correction
project.review.put_tasks("Review_1", [{"taskId": "...", "series": [{...}]}])
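
For bulk processing, you can build the payload from a list of task IDs (for example, the ones gathered with search_tasks above) and accept or reject them all in one call. A minimal sketch:

# Bulk-accept every task in the list; the tasks must be queued in the
# "Review_1" stage and assigned to your API key.
review_payload = [{"taskId": task_id} for task_id in task_ids]
project.review.put_tasks("Review_1", review_payload, review_result=True)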

Re-annotate Ground Truth Tasks

Once your Task goes through all of the stages in your workflow, it will be stored in the Ground Truth Stage. If you notice issues with one or more of your Ground Truth Tasks, you can either modify them manually within the UI while the Tasks are still in the Ground Truth Stage or send them back to the Label Stage for correction.

First, get a list of the task_ids you want to send back to Label. You can do this by exporting only Ground Truth Tasks and filtering them. Then, use move_tasks_to_start to send them back to Label.

task_ids = ["...", "..."]
project.labeling.move_tasks_to_start(task_ids=task_ids)

All corresponding Tasks need to be in the Ground Truth Stage. This function will not work for Tasks queued in Review.
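
As a sketch of the full flow, you could export your Ground Truth Tasks, filter them with your own logic, and send the flagged ones back to Label. The only_ground_truth flag and the shape of the exported task dictionaries are assumptions here, and needs_correction is a hypothetical helper you would supply; verify both against the export reference documentation.

# Sketch: export Ground Truth tasks, filter them, and send the flagged
# ones back to the Label Stage. only_ground_truth and the "taskId" key
# are assumptions; needs_correction is a hypothetical helper.
ground_truth_tasks = project.export.export_tasks(only_ground_truth=True)
task_ids = [
    task["taskId"]
    for task in ground_truth_tasks
    if needs_correction(task)
]
project.labeling.move_tasks_to_start(task_ids=task_ids)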
