AM Research is a division of Aspiring Minds. Aspiring Minds aspires to build an assessment-driven job marketplace (an SAT/GRE for jobs) to drive accountability in higher education and meritocracy in labor markets. The products developed from our research have impacted more than two million lives, and the resulting data is a source of continuous new research.


A cocktail of assessment, HR, machine learning, data science, education, social impact with two teaspoons of common sense stirred in it.

AI can help you spot the right programmer!

The industry today is on a constant lookout for good programmers. In this new age of digital services and products, programming skill commands a premium. Whenever a friend asks me to refer a good programmer to his company, I tell him – why would I refer her to you? I'd hire her for my own team! But what does having programming skills really mean? What do we look for when we hire programmers? An ability to write functionally correct programs – ones that pass test cases? Nah..

A seasoned interviewer would tell you that there is much more to writing code than passing test cases! For starters, we care more about how well a candidate understands the problem and approaches a solution than about whether she can write functionally correct code. "Did the person get the logic?" is the question generally discussed among interviewers. We are also interested in whether the candidate's logic (algorithm) is efficient, with low time and space complexity. Beyond this, we care about how readable and maintainable the candidate's code is – a very frustrating problem for the industry today is dealing with badly written code that breaks under exceptions and is not amenable to fixes.

So then why do the automated programming assessments in the market base themselves on just the number of test cases passed? Probably because it is believed there is nothing better. If AI can drive cars these days, can it not grade programs the way humans do? It can. In our KDD 2014 paper [1], we showed that by using machine learning we could grade programs on all these parameters as well as humans do. The machine's agreement with human experts was as high as 0.8-0.9 correlation points!

Why is this useful? We looked at a sample of 90,000 candidates who took Automata, our programming evaluation product, in the US. These were seniors graduating with a computer science/engineering degree who were interested in IT jobs. They were scored on four metrics: percent of test cases passed, correctness of logic as detected by our ML algorithm (on a scale of 1-5), run-time efficiency (on a scale of 1-3) and best coding practices used (on a scale of 1-4). We find answers to everything an interviewer cares about –

  • Is the logic/thought process right?
  • Is this going to be an efficient code?
  • Will it be maintainable and scalable?

Fig.1. Distribution of the different score metrics. Around 48% of the candidates who scored 4 on the code logic metric (as detected by our ML algorithm) had passed less than 50% of their test suite.

A clever man commits no minor blunders – Von Goethe

Typically, companies look for candidates who get the code nearly right; say, those who pass 80% of the test cases in a suite. 36% of the seniors made it through such a criterion. In a typical recruiting scenario, the remaining 64% would have been rejected and not considered further. We turned to our machine learning algorithm to see how many of these "left out" candidates had actually written logically correct programs with only silly errors. What do we find? Another good 16% of them were scored 4 or above by our system, which meant they had written code that had 'correct control structures and critical data-dependencies but some silly mistakes'. These folks should have been considered for the remainder of the process! Smart algorithms can spot what test cases totally miss but a human expert would have caught.


Sample codes which fail most of their test suites but are scored high by our ML system

We find that a lot of these candidates' codes pass fewer than 50% of the test cases (see Fig. 1). Why would this happen? We sifted through some examples and found some very interesting ways in which students made errors. For a problem requiring vowels to be removed from a string, one candidate had fallen prey to what even the best in the industry do at times – an incorrect usage of ORs and ANDs with the negation operator! Another had implemented the logic well but messed up on the very last line: he did not know how to convert a character array back to a string in Java. The lack of such specific knowledge is typical of those who haven't spent enough time with Java; but it is easy to learn, and it shouldn't cost them their performance in an assessment or stop them from differentiating themselves from those who couldn't think of the algorithm at all.
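To see how easy that first slip is, here is a minimal Python reconstruction of the OR/AND confusion (our illustration, not the candidate's actual code):

def remove_vowels_buggy(s):
    out = ""
    for ch in s:
        # Buggy: "not a vowel" written with ORs. The condition is true
        # for every character (any ch differs from at least one vowel),
        # so nothing is ever removed and most test cases fail.
        if ch != 'a' or ch != 'e' or ch != 'i' or ch != 'o' or ch != 'u':
            out += ch
    return out

def remove_vowels_correct(s):
    out = ""
    for ch in s:
        # Correct: by De Morgan's law, negating "is a vowel" turns the
        # ORs into ANDs.
        if ch != 'a' and ch != 'e' and ch != 'i' and ch != 'o' and ch != 'u':
            out += ch
    return out

print(remove_vowels_buggy("program"))    # program  (unchanged!)
print(remove_vowels_correct("program"))  # prgrm

The buggy version fails almost every test case, yet its control structure and data dependencies are exactly right – which is what our ML grader picks up on.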

 

Fig.2. Distribution of runtime efficiency scores and program practices scores, by the nature of the attempted problems

The computing scientist’s main challenge is not to get confused by the complexities of his own making – E. W. Dijkstra

We identified those who had written nearly correct code – those who could think of an algorithm and put it into a programming language. However, was this just shabbily written code that no one would want to touch? Further, had they written efficient code, or would it have produced the all-too-familiar "still loading" messages we see in our apps and software? We find that roughly 38% of the candidates thought of an efficient algorithm while writing their code, and 30% wrote code acceptable enough to reside in professional software systems. Together, roughly 20% of the students wrote programs that are both readable and efficient. Thus AI tells us that there are 20% of programmers here whom we should really talk to and hire! If we are ready to train them and run with them a little, there are various other score cuts one could use to make an informed decision.

In all, AI can not only drive cars but also find the programmers who can build a driverless car. Wondering how to find data scientists by using data science? Coming soon.

Interested in using Automata to better understand the programming talent you evaluate? Do you have a different take on this? Tell us by writing to research@aspiringminds.com

Gursimran, Shashank and Varun

References

[1] Srikant, Shashank, and Varun Aggarwal. "A system to grade computer programming skills using machine learning." Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.

80-80 precision recall with 1500+ classes!


Some crazy titles we had to deal with!

What is your job title/role/designation? If I asked this question to everyone in the world, I would probably get a million different answers. But there are not a million different job roles – probably 2,500-3,500 of them, called by various names. We want to classify these many job titles into our internal taxonomy of 1500+ job roles. Why? We want to be able to tell what skills are needed for each of the many jobs listed on the web and provide this information to the labor market. We want to tell jobseekers what jobs in the open market match them based on their AMCAT scores and what skills they need to improve for particular jobs. Similarly, we want to tell companies which candidates are suitable for their open job roles. This will not only lead to more efficient matching but also illuminate training needs in the labor market.

Well, this becomes a 1500+ class classification problem! For every job title and its job description (which may not exist for every title), we need to come up with one of our 1500+ job titles – or just say we cannot. This is a challenging problem: a large number of classes means huge amounts of labeled data for supervised learning, and getting accurate expert ratings isn't easy. Imagine running naive Bayes on it – we would need at least 100 points per class, a total of 150K labeled data points. One thus needs a mix of unsupervised and supervised learning to tackle a problem like this.

We took this challenge up six months back with the usual toolbox of unigram/bigram frequency counts, SVDs, stemming, various distance metrics and so on. Good news! We came back with an 80-80 precision-recall on the test set (and improving): we can tell the title for 80% of the jobs, and 80% of the time we are correct. Did our toolbox help? Yes indeed, but with a lot of other innovations. The key learning: in real-world applications, general machine learning techniques do work, but they usually have to be coupled with a lot of smart engineering and innovation – the best algorithm is a mix of rules from human intuition and statistical techniques. One needs to understand the problem domain well, to save oneself from the no-free-lunch theorem of throwing a generic ML technique at the problem. So let us look at a few of these innovations:

a. Vocabulary filter: Consider the titles "Junior CAD Operator-Northeast Commercial" or "PT M-F Summer Nanny for 4mo twins in Midtown West". A lot of these words tell us nothing about the job role – definitely not 'Northeast Commercial' or 'Midtown West'. So could we just filter them out and save our ML algorithm the effort of figuring out the right 'features'? Second, we could tag the remaining words semantically into two lists: one that tells us about the job function, say "CAD operator", and the other about the level – junior/senior/manager etc. We compute our distances separately on the function and level lists and then combine them in creative ways to get good results (a sketch follows below). Works wonders!
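A minimal Python sketch of the idea – the lexicons here are tiny illustrative stand-ins, not our actual curated lists:

# Illustrative lexicons -- stand-ins for curated word lists.
LEVEL_WORDS = {"junior", "senior", "jr", "sr", "lead", "manager", "head"}
FUNCTION_WORDS = {"cad", "operator", "nanny", "engineer", "developer"}

def split_title(title):
    # Keep only words that signal function or level; drop the rest.
    tokens = title.lower().replace("-", " ").split()
    function = [t for t in tokens if t in FUNCTION_WORDS]
    level = [t for t in tokens if t in LEVEL_WORDS]
    # "northeast", "commercial", "midtown" etc. carry no role signal
    # and are filtered out before any distance is computed.
    return function, level

print(split_title("Junior CAD Operator-Northeast Commercial"))
# (['cad', 'operator'], ['junior'])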

b. Title vs. job description: How do we use the two pieces of information optimally – the job title and the job description? Interestingly, we find that job descriptions save you from large mistakes: they do a good coarse comparison and get you to the right set of jobs to consider. Title comparisons, on the other hand, can go grossly wrong at times (failing to match an attorney to a lawyer) but are better at pinpointing the right title. A job-description match has higher accuracy if we ask whether the right title is among the top 10 predicted titles, while a title match has higher accuracy if we look at the single top prediction. This makes intuitive sense. But we can combine the two to beat either alone – a sketch follows below!
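One simple way to combine the two signals is a coarse-to-fine cascade (a sketch, not necessarily our exact scheme); title_sim and jd_sim stand in for whatever similarity measures one uses over titles and descriptions:

def predict_title(query_title, query_jd, internal_titles, title_sim, jd_sim):
    # Coarse pass: the JD match rarely lets the right title fall outside
    # its top 10, so use it to shortlist candidate titles.
    shortlist = sorted(internal_titles,
                       key=lambda t: jd_sim(query_jd, t),
                       reverse=True)[:10]
    # Fine pass: the title match is best at pinpointing the single
    # right answer within that shortlist.
    return max(shortlist, key=lambda t: title_sim(query_title, t))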

c. Logical match vs. distance: One input comes from our various fancy projected distances between the queried title and the set of titles we want to map to. Besides this, we also pull out a simple input from logical matching – is the query title a subset of, an exact match with (has all the same words), a superset of, or overlapping with one or more titles in our internal list? This is useful information. It gets some things absolutely right in a simple way and doesn't let our creative statistical distances ruin them! For instance, if the query is an exact match with a single internal title, we just choose it. More importantly, it helps create a decision tree on what to do with each 'kind' of query title according to its logical match, using the matching titles as input for further processing. Furthermore, it provides a guide on when to recall. A sketch of this signal follows below.
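A minimal sketch of the logical-match signal, comparing the word sets of the query title and an internal title:

def logical_match(query_title, internal_title):
    q = set(query_title.lower().split())
    t = set(internal_title.lower().split())
    if q == t:
        return "exact"     # e.g. pick this internal title outright
    if q < t:
        return "subset"    # query is more generic than the internal title
    if q > t:
        return "superset"  # query is more specific
    return "overlap" if q & t else "none"  # "none" can guide when not to recall

print(logical_match("software engineer", "senior software engineer"))  # subset
print(logical_match("data analyst", "data analyst"))                   # exact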

d. Crowdsourcing: Creating labeled data is tough here – it is subjective and needs expert oversight. The crowd would not do a good job of selecting the matching title from a list of 1500! Interestingly, we use the crowd, the expert and the ML predictor all feeding into each other, creating a living system which continuously improves itself (see our previous work using the crowd innovatively). For instance, the ML predictor gives us its top 5 guesses, which we feed to the crowd. The crowd tells us whether the right title is among them. If not, the expert jumps in. This helps us build a system where we continuously create new labeled data, benchmark our performance and improve the ML algorithm.

And this is just the tip of the iceberg. One can solve seemingly very hard problems by smartly using machine learning techniques together with human intelligence – knowledge from the problem domain and the crowd. And this can create a lot of value, as in our case in the labor markets: in the USA alone there are 4-5 million open positions and 8.5 million unemployed candidates. When one surveys jobseekers, 81% show a lack of knowledge of the skills needed for particular jobs and do not know the level of their own skills. If we can fill this information gap credibly and automatically, there is hope! How do we map titles to skills scientifically? Wait for another blog post!

-Varun
(Work done together with Vinay Shashidhar and Shashank Srikant)

Data science camp for kids!

It is an open secret that data science is becoming pervasive. What was once the preserve of statisticians and computer scientists – deft at trudging through mountains of data – has found its tools and techniques percolating into every industry and every level. Peer into the crystal ball and you don't need to suspend reality too much to imagine a future in which a factory manager looks at production data to predict which machine might break down soon. A cab operator analyzes his Uber receipts to figure out where he should drive to make the most money. A sales manager looks at which kinds of customers his sales agents are most successful with to decide whom to deploy where. Decidedly, the future belongs to the data scientist. Where will these data scientists come from? Who is going to train them?

The very nature of the subject eschews traditional learning modes. The data scientist must be able to quickly learn the context of the data, build hypotheses, use techniques to confirm them, and then construct predictors or automated systems. It marries technology with knowledge; intuition with scientific rigor. Our education systems will be slow to adapt – they will have to devise new methodologies, develop syllabi and learn to involve multiple teachers simultaneously. In the meanwhile, a whole generation of students might graduate without the skills that industry expects from them in a data-rich environment.

At Aspiring Minds, we’re passionate about helping students reach their full potential. We plan to pursue a series of initiatives to help advance data science education in India and around the world. As a first step, we held a data science camp for elementary school students! The participants continuously surprised us – with their knowledge, their understanding and even their wit. Two things became clear quickly – a. kids seldom confront open-ended problems and it took some getting used-to the idea of there being no one correct, pre-decided answer and b. with some guidance, they learn astonishingly quickly.

Read more about our exciting and rewarding weekend here!

At the end of the camp, the participating kids blogged about their experiences and the plots/analysis that they came up with. Read about them here.

Our team got enthusiastically involved in mentoring the students through the exercise and ended up learning more about their own teaching styles in the process.

We've also put out the exercises and resources we used for the camp so you can replicate it in your school/university/workplace. If the thought of engaging school kids in data science seems absurd to you, snap out of it! It is possible; we tried it and the kids had a fun time picking up these concepts.

Let us know what you thought of our data camp. Please do write to us if you go ahead and try this out with students around you. We’ll eagerly look forward to that!

Samarth Singal
Research Intern, Aspiring Minds
Class of 2017, Computer Science, Harvard

Papers accepted at ICML and KDD!

Some more good news!

Soon after the recent acceptance of our spoken English grading work at ACL, our work on learning models for job selection and personalized feedback has been accepted at the Machine Learning for Education workshop at ICML! Some results from this paper were discussed in one of our previous posts. The tool was built five years ago and has since helped a couple of million students get personalized feedback and helped 200+ companies hire better. I shall also be giving an invited talk at the workshop.

Earlier this month, we also had a paper accepted at KDD, which builds on our previous work in spontaneous speech evaluation. We examine how well we can grade the spontaneous speech of natives of different countries, and analyze the benefits the industry gains from such an evaluation system.

Busy year ahead, it seems – paper presentations in France, Beijing, Australia and finally New Jersey, where we're organizing the second edition of ASSESS, our annual workshop on data mining for educational assessment and feedback. It is being organized at ICDM 2015 this winter; July 20th is the submission deadline. Here is a list of the submissions we saw at last year's edition of the workshop, at KDD. Spread the word!

– Varun

What we learn from patterns in test case statistics of student-written computer programs

Test cases evaluate whether a computer program does what it's supposed to do. There are various ways to generate them: automatically from specifications, say by ensuring code coverage [1], or via subject matter experts (SMEs) who think through conditions based on the problem specification.

We asked ourselves whether there was something we could learn by looking at how student programs respond to test cases. Could this help us design better test cases or find flaws in them? By looking at such responses from a data-driven perspective, we wanted to know whether we could (a) design better test cases, (b) understand whether there exist clusters in the way responses on test cases are obtained, and (c) discover the salient concepts needed to solve a particular programming problem, which would then inform the right pedagogical interventions.


A visualization which shows how our questions cluster by the average test case score received on them. More on this in another post :)

We built a cool tool which helped us look at statistics on over 2,500 test cases spread across more than fifty programming problems, attempted by nearly 18,000 students and job-seekers in a span of four weeks!

We were also able to visualize how these test cases clustered for each problem and how they correlated with each other across candidate responses, and to see what their item response curves looked like. Here are a couple of things we learnt in the process:

One of our problems required students to print comma-separated prime numbers starting from 2 up to a given integer N. When designing test cases for this problem, our SMEs expected certain edge cases (N less than 2) and some stress cases (N very large), while expecting the remaining cases to simply check the output for random values of N without behaving any differently. Or so they thought. :) On clustering the responses obtained on each of the test cases for this problem (0 for failing a case and 1 for passing it), we found two very distinct clusters (see figure below), besides the lone test case which checked the edge condition. A closer look at some of the source codes helped us realize that values of N which were not themselves prime had to be handled differently – a trailing comma remained at the very end of the list, and lots of students were not getting this right! (A sketch of the slip follows after the figure.)


A dendrogram depicting test case clustering for the prime-print problem
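Here is a hedged Python reconstruction of the failure mode (candidates wrote in various languages; this is our illustration, not an actual submission):

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

def print_primes_buggy(n):
    # Suppresses the final comma only when the prime equals N. Fine when
    # N is prime (N=7 -> "2,3,5,7"), but a trailing comma remains when N
    # is not prime (N=10 -> "2,3,5,7,") -- exactly the split the two
    # test-case clusters picked up.
    out = ""
    for p in range(2, n + 1):
        if is_prime(p):
            out += str(p) if p == n else str(p) + ","
    print(out)

def print_primes_correct(n):
    # Collecting the primes first and joining sidesteps the separator
    # logic entirely.
    print(",".join(str(p) for p in range(2, n + 1) if is_prime(p)))

print_primes_buggy(10)    # 2,3,5,7,
print_primes_correct(10)  # 2,3,5,7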

This was interesting! It showed that the problem's hardness was linked not only to the algorithm for producing prime numbers up to a given number, but also to the nuance of printing them in a specific format. Despite getting the former right, a majority of students did not get the latter right. There are several learnings from this. If the problem designer just wants to assess whether students know the algorithm to generate primes up to a number, s/he should drop the requirement to print them as a comma-separated list – it adds an uncalled-for impurity to the assessment objective. On the other hand, if both skills are to be tested, our statistics are a way to confirm the existence of these two different skills – getting one right does not mean the other is doable (say, can this help us figure out the dominant cognitive skills needed in programming?). By separating out the test cases that check the trailing-comma case and reporting a score on them separately, we could give an assessor granular information on what the code is trying to achieve. Contrast this with test cases simply bundled together, where it isn't clear which aspect the person got right.

Moreover, when we designed this problem, the assessment objective was primarily to check the algorithm for generating prime numbers. Unfortunately, the submissions that did not handle the trailing comma went down on their test-case scores in spite of having met our assessment criterion. The good news: our machine learning algorithm [2] niftily picked this up and, by virtue of their semantic features, was able to say they were doing the right job!

We also fit 3-PL models from Item Response Theory (more info) on each test case for some of our problems, and have some interesting observations there on how item parameters relate to test-case design – more on this in a separate post!
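For reference, the standard 3-PL (three-parameter logistic) model expresses the probability that a candidate of ability \theta passes an item – here, a test case – with discrimination a, difficulty b and guessing parameter c:

P(\mathrm{pass} \mid \theta) = c + \frac{1 - c}{1 + e^{-a(\theta - b)}}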

Have ideas on how you could make use of such numbers and derive some interesting information? Write to us, or better, join our research group! :)

Kudos to Nishanth for putting together the neat tool to be able to visualize the clusters! Thanks to Ramakant and Bhavya for spotting this issue in their analysis.

– Shashank and Varun

References

[1] Cadar, Cristian, Daniel Dunbar, and Dawson R. Engler. "KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs." OSDI. Vol. 8. 2008.

[2] Srikant, Shashank, and Varun Aggarwal. "A system to grade computer programming skills using machine learning." Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.
