2019 in Review: Achieving Quality at Scale


As 2019 comes to a close and we reflect on the events of the year, we realised that “Quality at Scale” was the theme that came to define us. Since we refocused from B2C to B2B in 2017, annotation work has been a key part of our service, which has led us to place a much higher emphasis on delivering high-quality work.

We launched the first version of SupaAnnotator early this year, and with it, we worked on our first annotation project. We initially thought our annotation platform was good enough to execute all types of annotation work with high precision, but it wasn’t. 

We define Quality at Scale as the ability to execute growing volumes of work at speed without sacrificing accuracy.

We understand just how important the quality of our work is to our clients, as Machine Learning and Computer Vision algorithms are heavily dependent on the quality of their training datasets.

But what does Quality at Scale really mean to us at Supahands?

Quality at Scale goes beyond the quality of our work, the projects we execute for our clients, and the results we deliver. It ties in with the performance of our clients’ businesses. At its core, Quality at Scale stems from the empathy we have for our clients and an understanding of how our projects contribute to their growth.

Put simply, at Supahands, our clients’ business success matters to us.

Clients’ success = Our success 

How does the lack of quality affect us as a business?

Every quarter, Min, our Key Account Manager, collects Net Promoter Scores (NPS) from our clients. When we started noticing a decline in our NPS, we knew something was amiss and that we were at risk of losing some of our biggest clients.

Internally, business units were also struggling to stay on the same page in terms of communication and processes. That’s when we realised we needed to change many things internally in order to improve the quality of our work.

How did we achieve Quality at Scale? 

Self-reflection is critical to the success of a business. Hitting these roadblocks led us to take a step back, and we couldn’t help but ask ourselves these questions:

  • How did we get here?
  • What was working and what was not? 
  • Where do we want to be? 
  • How can we help our clients achieve quality? 

We had to shift our focus as a company as a whole, which meant deprioritising client outreach in favour of fixing and improving what we lacked internally so that we could achieve Quality at Scale.

Dedicating a quarter to improving and fixing 

To achieve Quality at Scale, we first needed to fix SupaAnnotator, our in-house annotation platform. We dedicated an entire quarter (Q3) to fixing and improving its features, giving birth to SupaAnnotator v2.0. First, we realised that the accuracy of labelled data is directly tied to how well our SupaAgents use our products, so we made a few UX improvements:

1. Optimised layout and simpler interface

We expanded and optimised the layout and built a simpler interface that makes full use of the “canvas” (i.e. the browser window). We also minimised distractions and created room for larger images, leading to more accurate annotations.

[Images: SupaAnnotator v2 layout compared with the SupaAnnotator v1 layout]

2. Shortcut keys for labels

We added shortcut keys so that our “power users” and most efficient SupaAgents can annotate at speed.

3. Drawing Flow

We added a new crosshair feature which allows SupaAgents to annotate more accurately and smoothly. 

The “click-and-click” drawing mechanism increases annotation speed and lowers the turnaround time for each image.

We also knew that quality data isn’t confined to accurately labelled annotations; Quality Assurance and Quality Control play a significant role as well. At Supahands, we implemented two new Quality Control methods, Ground Truth and Consensus, along with SupaTutorial for Quality Assurance.

Quality Control

Ground Truth

We implemented Ground Truth with Intersection over Union (IoU) in SupaAnnotator as a method for Quality Control. With it, SupaAnnotator delivers tasks with known answers to SupaAgents at intervals while they work on a live project, measuring their accuracy in real time.

IoU is used to evaluate the accuracy of agents’ annotations against the Ground Truth set of annotations.
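As an illustration, here is a minimal Python sketch of how IoU can be computed for two axis-aligned bounding boxes. The function and the (x1, y1, x2, y2) box format are our own illustrative assumptions, not SupaAnnotator’s actual implementation:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter

    return inter / union if union > 0 else 0.0

# An agent's box scored against the Ground Truth box:
print(iou((10, 10, 50, 50), (12, 12, 48, 52)))  # ~0.82
```

A score of 1.0 means the agent’s box matches the Ground Truth exactly; scores closer to 0 indicate a poorly placed annotation.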

Consensus

Another method of Quality Control that we implemented is consensus-based decision making. Consensus works by sending the same task to three or more SupaAgents (our managed remote workforce). By sending one task to multiple SupaAgents, any discrepancies are magnified, allowing us to isolate them for Quality Control. This method is often used in sentiment tagging, simple object detection, and other projects that carry a certain level of subjectivity; a simple sketch of the idea follows below.
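Here is a minimal Python sketch of majority-vote consensus; the function name, agreement threshold, and escalation path are illustrative assumptions, not the actual SupaAnnotator logic:

```python
from collections import Counter

def consensus_label(labels, min_agreement=2):
    """Return the majority label if enough SupaAgents agree, else flag the task.

    min_agreement is an illustrative threshold, not Supahands' actual setting.
    """
    top_label, votes = Counter(labels).most_common(1)[0]
    if votes >= min_agreement:
        return top_label
    return None  # discrepancy: escalate the task to Quality Control

# Three SupaAgents tag the sentiment of the same task:
print(consensus_label(["positive", "positive", "neutral"]))  # positive
print(consensus_label(["positive", "negative", "neutral"]))  # None -> QC review
```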

Quality Assurance

SupaTutorial

As for Quality Assurance, we implemented SupaTutorial, which allows us to evaluate SupaAgents on their project knowledge before they are allowed to start working. SupaTutorial also serves as a playground for SupaAgents to test their knowledge and familiarise themselves with the platform without affecting real projects.
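To make the gating idea concrete, here is a minimal Python sketch of how such a qualification check might work; the pass threshold and data shape are hypothetical assumptions, not SupaTutorial’s actual logic:

```python
PASS_THRESHOLD = 0.9  # hypothetical minimum accuracy required on tutorial tasks

def can_start_live_work(tutorial_results):
    """Gate a SupaAgent onto live projects only after they pass the tutorial.

    tutorial_results pairs each submitted answer with the expected answer.
    """
    correct = sum(1 for answer, expected in tutorial_results if answer == expected)
    return correct / len(tutorial_results) >= PASS_THRESHOLD

# A SupaAgent who got 9 of 10 tutorial tasks right may start live work:
results = [("cat", "cat")] * 9 + [("dog", "cat")]
print(can_start_live_work(results))  # True
```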

What’s to come in 2020?

SupaAnnotator v2.0 was built with our users in mind, and quality ties directly to how well our SupaAgents use our product. In 2020, we plan to conduct more user research trips to better understand our users and to introduce further quality control measures.


Find out how you can use SupaAnnotator for your project!
