
Micro-tasking for Tissue Image Labelling

Product Designer, User Researcher

Jan 2020 - August 2020

9 Months

Design brief

During my time at PathAI, I was tasked with envisioning the design strategy for the Micro-tasking platform, which collects labels/annotations to support machine learning (ML) model development.

I was the lead designer for the platform, responsible for the end-to-end product design. My responsibilities included requirements gathering, defining the problem space, user research, interaction design, visual design, copy design, and usability testing with board-certified pathologists.

In this case study, I will dive deep into the task dashboard experience that our expert pathologists interact with to label tissue samples. These labels play a vital role in evaluating and improving the ML algorithms that fuel the AI developed by PathAI.

This experience launched in Sep 2020 (~4 years after the platform's first release) and has improved the overall experience for our users. The overarching design problem was –

How might we improve the information hierarchy and layout of the annotator dashboard to increase engagement?


Available Tasks on dashboard - card and list view

What do we mean by annotations?

Every ML model at PathAI needs to be "taught" how to identify biological substances on a pathology slide. Annotations, or labels, are a layer of information on top of the raw data (the microscopic pathology slide) that helps an ML model learn what the tissue sample means in terms of diagnosis, as determined by an expert pathologist.

Annotations are important for the business because, if our ML models were "cars", annotations would be their "fuel" – and for a car to run as desired, the fuel must be of acceptable quality to avoid breakdowns. Similarly, the quality of annotations or labels determines the outcomes of our ML algorithms, making it critical for the business.
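To make this concrete, here is a minimal sketch of what a single annotation record might look like as data. The shape and field names are illustrative assumptions for this case study, not PathAI's actual schema.

// A hypothetical shape for one annotation: a labelled polygon drawn
// by a pathologist on top of a region ("frame") of a slide.
interface Annotation {
  slideId: string;             // the raw pathology slide being annotated
  frameId: string;             // algorithm-identified subsection of the slide
  label: string;               // e.g. "nucleus", as identified by the pathologist
  polygon: [number, number][]; // vertex coordinates outlining the structure
  annotatorId: string;         // the board-certified pathologist who drew it
}

// Example: one annotated nucleus inside a frame.
const nucleus: Annotation = {
  slideId: "slide-001",
  frameId: "frame-17",
  label: "nucleus",
  polygon: [[120, 84], [131, 88], [128, 97], [118, 93]],
  annotatorId: "pathologist-42",
};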


Zoomed-out view of Non-Alcoholic Steatohepatitis (NASH) tissue


Zoomed-in view of an annotated "Frame" (an algorithm-identified subsection of the slide)


Maximum zoom showing the annotated nuclei (red polygons) – the nucleus is a membrane-bound organelle containing genetic material like DNA

Who is the Annotator persona? What are their motivations?

Illustration by Design Intern Chorong Park

Our annotators are board-certified pathologists who lend their expertise to PathAI, one form of which is annotating/labelling our pathology slides.

They have various motivations for working with PathAI, including –

  • The flexible and remote nature of the work

  • Keeping up to date with upcoming technologies in their field

  • Hourly pay comparable to their day jobs

  • Being part of something bigger and transformative in AI-powered pathology


End-to-End Annotation Task Workflow

In this section, I illustrate the complete workflow of the annotation platform and the various personas interacting with the system. In addition to the core contributing pathologist, our ecosystem includes –

  • Scientific Program Manager – Responsible for defining, tracking, and managing scientific ML model development services for pharma clients

  • Internal Pathologist – Internal PathAI pathologists who provide their pathology expertise throughout model development projects

  • Machine Learning Engineers – Responsible for developing algorithms that identify specific biomarkers in pathology slides

  • Community Experience Manager – The liaison between our internal stakeholders and external annotator pathologists. Responsible for the overall health of the relationship between PathAI and our pathologist network

Workflow laying out the various users/stakeholders and their interaction points

In the workflow you can see that there are three key flows –

1. Internal stakeholders define the task based on requirements – create and assign a task

2. The annotator pathologist receives the task – works on it – submits it – receives payment and feedback (if any); this lifecycle is sketched after the list

3. The results of the task are used for ML model development by MLEs and might result in revised versions of the same task or an entirely new task
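As a rough illustration of flow 2, the lifecycle of a single task can be modelled as a small state machine. The states and transitions below are my simplification for this case study, not the platform's actual implementation.

// Hypothetical lifecycle states for one annotation task, following
// flow 2: receive -> work -> submit -> review/payment.
type TaskStatus =
  | "available"    // assigned and waiting for the annotator to start
  | "in_progress"  // the annotator is actively labelling slides
  | "submitted"    // sent back to PathAI for review
  | "paid";        // reviewed, feedback (if any) given, payment processed

// Allowed transitions; anything else is rejected.
const transitions: Record<TaskStatus, TaskStatus[]> = {
  available: ["in_progress"],
  in_progress: ["submitted"],
  submitted: ["paid"],
  paid: [], // terminal, though results may spawn a revised task (flow 3)
};

function advance(current: TaskStatus, next: TaskStatus): TaskStatus {
  if (!transitions[current].includes(next)) {
    throw new Error(`Invalid transition: ${current} -> ${next}`);
  }
  return next;
}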

This case study dives deep into the following key use cases within step 2 –

  • When an annotator pathologist logs into the platform, they have to decide which task to do (if any)

  • When they have completed a task, they want to know the status of its payment

  • They want to know what types of tasks to expect from PathAI in the near future, so they can plan their time accordingly

With this premise, let's revisit our design problem –

How might we improve the information hierarchy and layout of the annotator dashboard to increase engagement?

Previous Dashboard Design

Let's understand why a redesign of the dashboard was important for the user and the business.

The previous dashboard had some known pain points, including –

1. The dashboard was not easy to scan for the most important information. There was scope to improve the layout and information architecture (IA).

2. Not all tasks had the same visual treatment on the dashboard. The distinction between an "Active" task and a "Non-Active" task was confusing and counterintuitive to user behavior – we learnt from research that users might "test drive" a few slides before deciding to complete an entire task.

3. Important information, like payment details, was buried deep in the task instructions. We would often get customer queries regarding payment details before and after an annotation task was completed.

4. Tasks that the user could no longer work on also showed up on the dashboard, taking up precious space.

Design

The design process for this project involved both strategic planning and tactical delivery in the agile sprint cycle. I used some common design patterns from our design system (Anodyne 2.0) for visual alignment with other PathAI products.

The design process was highly collaborative and iterative across the product, engineering, and design teams. I regularly organized and led a remote co-creation design jam with our engineering team to get their design input early in the process.

In addition to internal feedback, I also benchmarked other micro-tasking products like Amazon Mechanical Turk and UserTesting.com, task management products like GitHub, Jira, Monday.com, and Airtable, and data labelling platforms like Centaur Labs.

Some of my key takeaways from secondary research included –

1. The Centaur Labs app uses medical images and tags, which make each card visual and easy to scan.

2. Amazon MTurk shows an overview and earnings on its home page for turkers.

3. UserTesting.com highlights the $ amount and the device the user needs (a prerequisite). It also has screener tasks that can "qualify" you if you meet the criteria.

4. Use color sparingly, only to highlight something of importance.


Secondary research: MTurk, UserTesting.com, Monday.com, Jira, GitHub

Design Brainstorm with the squad

Interaction and Visual Design

One of the most important elements on the task dashboard was the task "card", or task "row" in the table view. Below you will see various explorations for this card and its final design and information layout.

Task card design and explorations. 

Dashboard Screens:

Quick view, filtered by task status

  • Available Tasks: all tasks that are available for the annotator to work on

  • Completed Tasks: tasks completed by the annotator that are either under review or have had payment processed

  • Upcoming Tasks: tasks in the pipeline, shown to help the annotator anticipate upcoming work from PathAI


Available, Completed and Upcoming Tasks


View Task instructions + Contact Support


Tablet Experience

Components, variants and screens

Designing for Scale

One of the key design principles for this dashboard experience was to think about future use cases and scenarios that would help PathAI scale to 10X more annotation tasks.

1. Number of Tasks

The first use case involved thinking about how the task dashboard might evolve as the number of tasks increases and we move toward more micro-tasks with more data.

To address this, I designed the task list as a table that could allow for advanced sorting and filtering in the future. It would also enable the user to see a larger number of tasks above the fold, along with pagination.
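As a rough sketch of that interaction model – with invented field names, not production code – the table view could combine status filtering, column sorting, and pagination like this:

// Hypothetical task row plus the table operations the design calls for:
// filter by status, sort by a column, then paginate above the fold.
interface TaskRow {
  name: string;
  status: "available" | "completed" | "upcoming";
  payment: number; // payout in dollars
}

function viewTasks(
  tasks: TaskRow[],
  status: TaskRow["status"],
  sortBy: "name" | "payment",
  page: number,
  pageSize = 20,
): TaskRow[] {
  return tasks
    .filter((t) => t.status === status) // quick view by task status
    .sort((a, b) =>
      sortBy === "payment"
        ? b.payment - a.payment          // highest payout first
        : a.name.localeCompare(b.name),  // alphabetical by task name
    )
    .slice(page * pageSize, (page + 1) * pageSize); // one page above the fold
}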
 



2. Types of Tasks

One type of task we have been exploring is the "qualification task" – a task sent to an annotator before the actual job to assess their ability to annotate a particular biomarker.


Visual styles for the card exploring the qualification task as a special task


Concept exploring "locked tasks", which can be unlocked after doing qualification tasks

Additional Concept Explorations

Here I explore two concepts that did not make it into the final design for V2 but are worth discussing –

  • User stats and impact: How might we motivate users by showing them their performance so far and how it fits into the bigger picture of model development?

  • Gamification and social performance as motivators: Data labelling can be a highly mundane, repetitive, and laborious task for a pathologist. The hypothesis for this concept was that adding fun elements of gamification might improve user engagement with the product.