80-80 precision recall with 1500+ classes!

[Image: crazy_titles – some crazy titles we had to deal with!]

What is your job title/role/designation? If I ask this question to everyone in the world, I will probably get a million different answers. But there are not a million different job roles – probably 2,500-3,500 of them, called by various names. We want to classify all these job titles into our internal taxonomy of 1500+ job roles. Why? We want to be able to tell what skills are needed for each of the many jobs listed on the web and provide this information to the labor market. We want to tell jobseekers which jobs in the open market match them based on their AMCAT scores and what skills they need to improve for particular jobs. Similarly, we want to tell companies which candidates are suitable for their open job roles. This will not only lead to more efficient matching, but also illuminate training needs in the labor market.

Well, this becomes a 1500+ class classification problem! For every job title and its job description (which may not exist for every title), we need to come up with one of our 1500+ job roles or just say we cannot do it. This is a challenging problem – a large number of classes means a huge amount of labeled data is needed for supervised learning, and getting accurate expert ratings isn't easy. Imagine doing a Naive Bayes on it – we would need at least 100 points per class, meaning a total requirement of 150K labeled data points. One thus needs to go with a mix of unsupervised and supervised learning to tackle a problem such as this.

We took this challenge up six months back with the usual toolbox of unigram/bigram frequency counts, SVDs, stemming, various distance metrics and so on. Good news! We came back with an 80-80 precision recall on the test set (and improving): we can tell the title for 80% of the jobs, and 80% of the time we are correct. Did our toolbox help? Yes indeed, but only together with a lot of other innovations. The key learning: for real world applications, general machine learning techniques do work, but they usually need to be coupled with a lot of smart engineering and innovation to make it happen – the best algorithm is a mix of rules from human intuition and statistical techniques. One needs to understand the problem domain well – to save oneself from the no free lunch theorem, rather than throwing a generic ML technique at the problem. So let us look at a few of these innovations:

a. Vocabulary filter: Consider the titles “Junior CAD Operator-Northeast Commercial” or “PT M-F Summer Nanny for 4mo twins in Midtown West”. A lot of these words do not tell us about the job role – definitely not ‘Northeast Commercial’ or ‘Midtown West’. So could we just filter them out and save our ML algorithm the effort of figuring out the right ‘features’? Second, we tag the remaining words semantically into two lists: one which tells us about the job function, say “CAD operator”, and the other about the level – junior/senior/manager etc. We compute our distances separately on the function and level lists and then combine them in creative ways to get good results. Works wonders!
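To make this concrete, here is a minimal Python sketch of the idea. The word lists, the Jaccard overlap and the 80/20 weights are illustrative stand-ins, not the actual vocabularies or distances we use:

```python
import re

# Words that say nothing about the job role (locations, schedules, etc.) - toy list.
NOISE_WORDS = {"northeast", "commercial", "midtown", "west", "pt", "m", "f", "summer"}

# Words that indicate seniority/level rather than job function - toy list.
LEVEL_WORDS = {"junior", "senior", "lead", "manager", "head", "chief", "intern"}

def split_title(title):
    """Drop noise words, then split the rest into function and level tokens."""
    tokens = re.findall(r"[a-z0-9]+", title.lower())
    tokens = [t for t in tokens if t not in NOISE_WORDS]
    level = [t for t in tokens if t in LEVEL_WORDS]
    function = [t for t in tokens if t not in LEVEL_WORDS]
    return function, level

def jaccard(a, b):
    """Toy set-overlap similarity, standing in for our projected distances."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def title_similarity(query, internal, w_function=0.8, w_level=0.2):
    """Compare function and level tokens separately, then blend the two scores."""
    qf, ql = split_title(query)
    tf, tl = split_title(internal)
    return w_function * jaccard(qf, tf) + w_level * jaccard(ql, tl)

print(title_similarity("Junior CAD Operator-Northeast Commercial", "CAD Operator"))
# -> 0.8 with these toy weights: the function words match, the level words do not
```

The point is the separation of concerns: location and schedule words never reach the distance computation, and 'junior' never competes with 'CAD'.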

b. Title vs. job description: How do we use the two pieces of information optimally – the job title and the job description? Interestingly, we find that job descriptions save you from large mistakes: they do a good coarse comparison and get you to the right set of jobs to consider. On the other hand, title comparisons can go grossly wrong at times (say, failing to match an attorney to a lawyer, since the words do not overlap), but they do better at pinpointing the right title. A job description match has higher accuracy if we ask whether the right title was among the top 10 predicted titles, but a title match has higher accuracy if we look only at the top predicted title. This makes intuitive sense. But we can combine the two to do better than either!
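One simple way to sketch this combination (with a toy bag-of-words cosine standing in for our real distances, and a shortlist size of 10 as an assumption): let the description similarity build a shortlist, and let the title similarity pick within it.

```python
from collections import Counter
from math import sqrt

def cosine(text_a, text_b):
    """Toy bag-of-words cosine, standing in for our actual distance metrics."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def predict_title(query_title, query_desc, internal_jobs, shortlist_size=10):
    """internal_jobs: list of (title, description) pairs from our taxonomy."""
    # Coarse pass: the description match rarely lands in the wrong neighbourhood.
    shortlist = sorted(internal_jobs,
                       key=lambda job: cosine(query_desc, job[1]),
                       reverse=True)[:shortlist_size]
    # Fine pass: the title match pinpoints the right entry within that shortlist.
    return max(shortlist, key=lambda job: cosine(query_title, job[0]))[0]
```

The design choice is simply to use each signal where it is strong: the description for recall into the top 10, the title for precision at rank 1.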

c. Logical match vs. distance: We get one input from our various fancy projected distances between the queried title and the set of titles we want to map to. Besides this, we also pull out a simple input from logical matching – is the query title a subset of, an exact match with (has all the same words), a superset of, or an overlap with one or more of our internal titles? This is useful information. It gets some cases absolutely right in a simple way and doesn't let our creative statistical distances ruin them! For instance, if the query is an exact match with a single internal title, we just choose it. More importantly, it helps create a decision tree on what to do with each ‘kind’ of query title according to its logical match, using the matching titles as an input for further processing. Furthermore, it provides a guide on when to recall.
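A minimal sketch of the logical-match check, assuming simple whitespace tokenization; the decision tree we build on top of these categories is more involved:

```python
def logical_match(query_title, internal_title):
    """Classify the word-set relation between a query title and an internal title."""
    q = set(query_title.lower().split())
    t = set(internal_title.lower().split())
    if q == t:
        return "exact"      # same words: safe to map directly if the match is unique
    if q < t:
        return "subset"     # query is less specific than the internal title
    if q > t:
        return "superset"   # query carries extra words beyond the internal title
    if q & t:
        return "overlap"
    return "none"           # no shared words: a candidate for not recalling at all

def unique_exact_match(query_title, internal_titles):
    """If exactly one internal title is an exact match, choose it outright."""
    hits = [t for t in internal_titles if logical_match(query_title, t) == "exact"]
    return hits[0] if len(hits) == 1 else None
```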

d. Crowdsourcing: Creating labeled data is tough here – it is subjective and needs some expert oversight. The crowd would not do a good job of selecting the matching title from a list of 1500 titles! Interestingly, we use the crowd, the expert and the ML predictor all feeding into each other, creating a living system which continuously improves itself (see our previous work on using the crowd innovatively). For instance, the ML predictor gives us its top 5 guesses, which we feed to the crowd. They tell us whether the right title is one of these or not. If not, the expert jumps in. This helps us build a system where we continuously create new labeled data, benchmark our performance and improve the ML algorithm.
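Sketched as a loop (with predict_top5, ask_crowd and ask_expert as hypothetical stand-ins for the ML predictor, the crowd task and the expert interface), the cycle looks roughly like this:

```python
def label_new_title(query_title, labeled_data, predict_top5, ask_crowd, ask_expert):
    """One turn of the crowd/expert/ML loop: returns the label for query_title."""
    guesses = predict_top5(query_title)           # ML predictor proposes 5 titles
    crowd_pick = ask_crowd(query_title, guesses)  # crowd picks one, or returns None
    label = crowd_pick if crowd_pick is not None else ask_expert(query_title)
    labeled_data.append((query_title, label))     # new labeled point feeds retraining
    return label
```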

And this is just the tip of the iceberg. One can solve seemingly very hard problems by smartly using machine learning techniques together with human intelligence – knowledge from the problem domain and the crowd. And this can create a lot of value – in our case, in the labor markets: in the USA alone there are almost 4-5 million open positions and 8.5 million unemployed candidates. When jobseekers are surveyed, 81% show a lack of knowledge of the skills needed for particular jobs and do not know the level of their own skills. If we can fill this information gap credibly and automatically, there is hope! How do we map titles to skills scientifically? Wait for another blog post!

-Varun
(Work done together with Vinay Shashidhar and Shashank Srikant)


One Comment

  1. Great work guys. We’ve been investigating some similar issues with multiclass classifiers, top-k predictions, and input from experts to improve the training data, e.g. described here: http://krvarshney.github.io/pubs/VarshneyCFWFM_kdd2014.pdf.

    Our work looks at data internal to an enterprise rather than the broader market workforce. Because of that, one problem we’ve started thinking about is how to calibrate an internal relative measure of expertise depth to an external absolute measure. I’d be interested in hearing your thoughts on that.
