
Application of Computer Vision in 21st Century Technology

And Man said, “Let there be sight,” and there was facial recognition, self-driving cars and more!

As the chapter of 2018 unfolds, it’s incredible to look back and see how far artificial intelligence has advanced in just one year. Tesla announced their third autonomous beast-of-a-car, the Model 3, as well as their Autopilot-equipped Semi. Microsoft, meanwhile, introduced ‘Seeing AI’, designed to help the visually impaired by describing handwriting, faces and even color. Although these achievements are cemented in the mainstream, there has also been promising progress in niche areas with broad implications for the wider public.

Machine, Medicine and Our Environment

Beyond the AI topics rooted in mainstream media, there has been promising progress in applying machine learning to medicine and even the preservation of the environment.

Machine learning has found an integral role in medicine, highlighting the breadth of its real-world use cases. Melanomas are primarily diagnosed visually, a task that currently falls to dermatologists, and this is where machine learning can help. A journal article published early last year outlined how a deep convolutional neural network (CNN) could assist with the early detection of melanomas. Using only pixels and disease labels as inputs, the machine could be trained to classify skin cancer with a level of competence comparable to, if not better than, that of board-certified dermatologists.
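
To give a feel for what such a model looks like in code, here is a minimal sketch of a CNN image classifier in Keras. The layer sizes, input resolution and two-class setup (benign versus malignant) are illustrative assumptions, not the architecture used in the study.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2            # assumption: benign vs. malignant
IMAGE_SIZE = (224, 224)    # assumption: a common input resolution

model = models.Sequential([
    layers.Input(shape=IMAGE_SIZE + (3,)),    # RGB input image
    layers.Rescaling(1.0 / 255),              # normalise pixel values
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use a folder of labelled lesion images, for example:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "skin_lesions/", image_size=IMAGE_SIZE, batch_size=32)
# model.fit(train_ds, epochs=10)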

The implications of this are vast given that the five-year survival rate is over 99% when a melanoma is detected in its early stages, a figure which drops drastically to 14% when it is detected late. With the increasing pervasiveness of smartphones, the article goes on to suggest the potential for “low-cost universal access to vital diagnostic care.”

Zooniverse projects

Similarly, Zooniverse, the world’s largest and most popular web platform for citizen science, brings together volunteers from around the world to assist professionals with meaningful research. One of its active projects, ‘Floating Forests’, is an ongoing attempt to track kelp forests, an ecosystem integral to the survival of deep-water herbivores. However, since the Landsat series of satellites was not designed to recognise kelp, the tint of floating kelp can sometimes be mistaken for the sun’s reflection off a passing wave. As such, human volunteers play an important role in manually annotating the kelp, both to keep better track of it and to train the machine.

How much can we really depend on visually equipped AI?

So, to the common question: how much can we really depend on visually equipped AI? Fundamentally, machine learning is the process of creating an artificial intelligence which can act or adapt without being explicitly hand-programmed with instructions to do so; this is made possible through algorithms which allow the machine to learn iteratively from the data it is fed. As such, the answer inevitably circles back to the quantity and, more importantly, the quality of the data from which a machine ‘learns’.
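
As a toy illustration of what ‘learning from data’ means in practice, the sketch below fits a simple classifier to a handful of labelled examples instead of hand-coding rules. The features, labels and values are invented purely for illustration.

# Learning from data rather than hand-coded rules: a classifier is fitted
# to labelled examples and then generalises to new, unseen ones.
from sklearn.linear_model import LogisticRegression

# Toy training data: [weight_kg, ear_length_cm] -> 0 = cat, 1 = dog
X_train = [[4.0, 5.0], [3.5, 4.5], [20.0, 10.0], [25.0, 12.0]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # the "learning" step
print(model.predict([[22.0, 11.0]]))   # -> [1], i.e. dog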

I think we all remember the infamous Hotdog/Not Hotdog scene from HBO’s Silicon Valley as a great example of how important quality training data is for computer vision or any machine learning algorithm in general.

God damnit, Jian Yang!

What kind of data do you feed a machine?

To provide an example: for a machine to accurately discern and identify the pug in the image below, it must already possess data which tells it what a pug looks like.

Image annotation pug

For the average person, identifying the blanket, the MacBook in the left corner, the two dogs and maybe even their breeds is a relatively menial task that takes a few seconds of observation. A machine, on the other hand, does this by measuring the pixels in a given image and cross-checking that information against its internal data, so ultimately its object-recognition capabilities are only as good as its database.
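
For a concrete sense of that pixel-measuring step, the sketch below runs an off-the-shelf ImageNet-pretrained network over an image file and prints its best guesses. The file name is a placeholder, and the network’s answers depend entirely on the data it was trained on.

# Classify an image with a publicly available pretrained network.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")   # downloads pretrained weights

# "pug_on_blanket.jpg" is a placeholder for the image discussed above.
img = tf.keras.utils.load_img("pug_on_blanket.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # e.g. [(..., "pug", 0.87), ...]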

In an interesting study conducted in Japan, researchers found that changing just one pixel in a standard image of a dog could cause the machine to misrecognise it as an airplane, and a single-pixel change was enough to fool the network on nearly two-thirds of the images tested. Going back to our example, this means hundreds of thousands of annotated images of pugs need to be fed to the machine so that it can build as complete a picture as possible of what a pug looks like, and this is where Supahands can help!
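
What does that annotated data actually look like? Below is a hypothetical example of a single annotation record for the pug image. The field names follow a common COCO-style convention and the coordinates are made up, so this is not Supahands’ actual output format.

# One hypothetical annotation record for a single labelled image.
annotation = {
    "image": "pug_on_blanket.jpg",           # placeholder file name
    "width": 1024,
    "height": 768,
    "objects": [
        {"label": "pug",     "bbox": [412, 310, 220, 190]},  # [x, y, w, h]
        {"label": "dog",     "bbox": [120, 280, 260, 240]},
        {"label": "macbook", "bbox": [15, 40, 180, 130]},
        {"label": "blanket", "bbox": [0, 250, 1024, 518]},
    ],
}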

Quality Assurance in Machine Learning

The sheer size of our 2,000-strong workforce easily covers the quantity side, but what about data quality? The quality of the data we provide is an integral part of our service offering and, as such, we take every necessary measure to ensure we deliver a minimum accuracy rate of 90%.

Quality assurance and quality control are fundamental to the design of our service structure.

From our inception, we have recognised that the quality of the outsourced services Supahands offers can only be as good as the workforce providing them. Accordingly, all SupaAgents must pass assessments before they can begin working on tasks in each of the four categories. Our operations team are constantly updating these assessments to mirror real assignments in order to maintain a high standard of accuracy and quality.

Supahands Workplace also helps us manage our SupaAgents

September of last year also saw the introduction of DIANE, our proprietary state-of-the-art auto-routing technology, which helps our project managers build the right team around each client and project. Its algorithms assign tasks to the agents who are most suitable based on their previous experience, qualifications and performance. If a task falls under a new category, however, our project managers can quickly hand-pick a team of suitable, qualified SupaAgents based on their ratings, availability and accuracy rate.
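
DIANE’s internals are proprietary, but as a rough illustration of routing by experience, qualification and performance, here is a toy scoring heuristic. The weights, caps and field names are purely invented and do not represent DIANE’s actual logic.

# Toy example: rank agents for a task by a simple suitability score.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    completed_similar_tasks: int   # previous experience in this category
    passed_assessment: bool        # qualification for the category
    accuracy: float                # historical accuracy, 0.0 - 1.0

def routing_score(agent: Agent) -> float:
    if not agent.passed_assessment:
        return 0.0                                     # unqualified agents are excluded
    experience = min(agent.completed_similar_tasks, 50) / 50  # cap the experience bonus
    return 0.7 * agent.accuracy + 0.3 * experience

agents = [
    Agent("A", completed_similar_tasks=40, passed_assessment=True,  accuracy=0.96),
    Agent("B", completed_similar_tasks=5,  passed_assessment=True,  accuracy=0.91),
    Agent("C", completed_similar_tasks=60, passed_assessment=False, accuracy=0.99),
]
team = sorted(agents, key=routing_score, reverse=True)[:2]  # pick the top two
print([a.name for a in team])  # -> ['A', 'B']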

Real-time tracking is another feature we have implemented to keep our accuracy rates at a stable peak. In contrast to the traditional method of compiling an accuracy rate against expected values after the fact, we insert test data into the working file, which lets us gauge a SupaAgent’s accuracy in real time. If that accuracy falls below an acceptable threshold, our project managers can take immediate action by withdrawing the SupaAgent from the project and evaluating the work done.
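
Conceptually, tracking accuracy against inserted test data (items with known answers mixed into the working file) looks something like the sketch below. The threshold, field names and answers are illustrative assumptions rather than our production code.

THRESHOLD = 0.90  # acceptable accuracy floor (assumption)

def live_accuracy(submissions, gold_answers):
    """Compare an agent's answers on known 'gold' items against the answer key."""
    gold_seen = [s for s in submissions if s["item_id"] in gold_answers]
    if not gold_seen:
        return None  # no gold items answered yet
    correct = sum(1 for s in gold_seen if s["answer"] == gold_answers[s["item_id"]])
    return correct / len(gold_seen)

gold_answers = {"img_007": "pug", "img_042": "not_pug"}
submissions = [
    {"item_id": "img_001", "answer": "pug"},   # ordinary task item
    {"item_id": "img_007", "answer": "pug"},   # gold item, correct
    {"item_id": "img_042", "answer": "pug"},   # gold item, wrong
]

accuracy = live_accuracy(submissions, gold_answers)
if accuracy is not None and accuracy < THRESHOLD:
    print(f"Accuracy {accuracy:.0%} is below threshold - flag the agent for review")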

Tasks are also integrated with Smart Assign, which means the work and test data are sliced into smaller batches and securely distributed to the assigned agents, limiting the chances of error. After the results from the distributed tasks have been collected, they are queued for evaluation through a peer-review system.

Here, our clients also have the liberty to set parameters for accepting or rejecting results according to their preferences: results can be accepted or rejected on a simple majority basis, or the rule can be tightened so that a result is accepted only if every peer agrees it is correct. Through this system, Supahands can currently guarantee a minimum accuracy rate of 94%, which can be raised to 99.9%.
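
The acceptance rules themselves are simple to express. The sketch below shows the two options described above as a small helper function; the function and parameter names are illustrative, not our actual API.

def accept_result(votes, rule="majority"):
    """votes: list of booleans, True meaning a peer judged the result correct."""
    if rule == "unanimous":
        return all(votes)
    if rule == "majority":
        return sum(votes) > len(votes) / 2
    raise ValueError(f"unknown rule: {rule}")

print(accept_result([True, True, False]))                    # True  (2 of 3 peers agree)
print(accept_result([True, True, False], rule="unanimous"))  # False (one peer disagrees)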

Supahands Self-Service Vertical

Our tech team at Supahands are also hard at work on a self-service vertical within our platform, due in the near future. This platform will remove the inefficiencies of miscommunication and time zones: clients will be able to kickstart a project instantly by simply filling out their details on our website and providing a standard operating procedure for SupaAgents to follow. For now, you can start or accelerate your AI project by reaching us through our dedicated webpage, or feel free to drop us an email at hello@supahands.com.

Hopefully this article has managed to highlight, amongst the vast range of use cases, the myriad possibilities that machine learning and image annotation open up. Supahands can help make them a reality by fueling the machine with large volumes of clean, quality data.

If your team is in the process of training an AI, we can help.
