AI can help you spot the right programmer!

The industry today is on a constant lookout for good programmers. In this new age of digital services and products, programming skills command a premium. Whenever a friend asks me to refer a good programmer to his company, I tell him: why would I refer her to you? I would hire her for my own team! But what does having programming skills really mean? What do we look for when we hire programmers? An ability to write functionally correct programs, ones that pass test cases? Nah.

A seasoned interviewer would tell you that there is much more to writing code than passing test cases! For starters, we care more about how well a candidate understands the problem and approaches a solution than about whether she can write functionally correct code. “Did the person get the logic?” is usually the question discussed among interviewers. We are also interested in whether the candidate’s logic (algorithm) is efficient, with low time and space complexity. Beyond that, we care about how readable and maintainable the candidate’s code is; one of the industry’s most frustrating problems today is dealing with badly written code that breaks under exceptions and does not lend itself to fixes.

So why do the automated programming assessments in the market base themselves on just the number of test cases passed? Probably because it is believed that there is nothing better. If AI can drive cars these days, can it not grade programs the way humans do? It can. In our 2014 KDD paper [1], we showed that by using machine learning we could grade programs on all these parameters as well as humans do. The machine’s agreement with human experts was as high as a 0.8-0.9 correlation!

Why is this useful? We looked at a sample of 90,000 candidates who took Automata, our programming evaluation product, in the US. These were seniors graduating with a computer science or engineering degree who were interested in IT jobs. They were scored on four metrics: percent of test cases passed, correctness of logic as detected by our ML algorithm (on a scale of 1-5), run-time efficiency (on a scale of 1-3) and best coding practices used (on a scale of 1-4). Together, these answer everything an interviewer cares about:

  • Is the logic/thought process right?
  • Is this going to be efficient code?
  • Will it be maintainable and scalable?

Fig.1. Distribution of the different score metrics. Around 48% of the candidates who scored 4 on the code logic metric (as detected by our ML algorithm) had passed less than 50% of their test suite.

A clever man commits no minor blunders – Von Goethe

Typically, companies look for candidates who get the code nearly right; say, those who pass 80% of the test cases in a test suite. 36% of the seniors made it through such a criterion. In a typical recruiting scenario, the remaining 64% would have been rejected and not considered for further rounds. We turned to our machine learning algorithm to see how many of these “left out” candidates had actually written logically correct programs with only silly errors. What do we find? A good 16% of them were scored 4 or above by our system, which meant they had written code with ‘correct control structures and critical data dependencies but some silly mistakes’. These folks should have been considered for the remainder of the process! Smart algorithms can spot what test cases miss entirely but what human experts would have caught.

Sample code that fails most of its test suite but is scored high by our ML system

We find that many of these candidates’ programs pass fewer than 50% of their test cases (see Fig. 1). Why would this be happening? We sifted through some examples and found some very interesting ways in which students made errors. For a problem that required removing the vowels from a string, one candidate had fallen for something even the best in the industry fall prey to at times: an incorrect mix of ORs and ANDs with the negation operator. Another had implemented the logic well but messed up on the very last line; he did not know how to convert a character array back to a string in Java (both slips are sketched in the snippet below). The lack of such specific knowledge is typical of those who haven’t spent enough time with Java, but it is easy to learn; it shouldn’t cost them their performance in an assessment, nor stop them from standing apart from those who couldn’t think of the algorithm at all.
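To make the two slips concrete, here is a small illustrative reconstruction in Java. This is our own made-up example, not an actual candidate submission; the class and method names are ours. Either slip can wipe out the test-case score even though the underlying logic is sound.

```java
// Illustrative reconstruction of the two slips described above (not an
// actual candidate submission). Lowercase vowels only, for brevity.
public class RemoveVowels {

    // Slip 1: with '!=' joined by '||', the condition is true for EVERY
    // character (any character differs from at least one vowel), so no
    // vowel ever gets removed, even though the overall control flow and
    // data flow of the solution are right.
    static boolean keepBuggy(char c) {
        return c != 'a' || c != 'e' || c != 'i' || c != 'o' || c != 'u';
    }

    // Correct check: keep a character only if it differs from ALL vowels.
    static boolean keepCorrect(char c) {
        return c != 'a' && c != 'e' && c != 'i' && c != 'o' && c != 'u';
    }

    static String removeVowels(String s) {
        char[] kept = new char[s.length()];
        int n = 0;
        for (char c : s.toCharArray()) {
            if (keepCorrect(c)) {
                kept[n++] = c;
            }
        }
        // Slip 2: returning kept.toString() yields something like
        // "[C@1b6d3586" instead of the text. The right way to turn a
        // character array back into a string is new String(...) or
        // String.valueOf(...).
        return new String(kept, 0, n);
    }

    public static void main(String[] args) {
        System.out.println(removeVowels("programming")); // prints "prgrmmng"
    }
}
```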

 

Fig.2. Distribution of runtime efficiency scores and programming practices scores, viewed against the nature of the attempted problems

The computing scientist’s main challenge is not to get confused by the complexities of his own making – E. W. Dijkstra

We identified those who had written nearly correct code, those who could think of an algorithm and put it into a programming language. However, was this just shabbily written code that no one would want to touch? And had they written efficient code, or would it have resulted in the all too familiar “still loading” messages we see in our apps and software? We find that roughly 38% of these candidates thought of an efficient algorithm while writing their code and 30% wrote code fit to reside in professional software systems. Together, roughly 20% of the students wrote programs that are both readable and efficient. So AI tells us that 20% of this pool are programmers we should really talk to and hire! If we are ready to train them and run with them a little, there are various other score cut-offs one could use to make an informed decision.

In all, AI can not only drive cars, it can find the programmers who can build a driverless car. Wondering how to find data scientists by using data science? Coming soon.

Interested in using Automata to better understand the programming talent you evaluate? Do you have a different take on this? Tell us by writing to research@aspiringminds.com

Gursimran, Shashank and Varun

References

[1] Srikant, Shashank, and Varun Aggarwal. “A system to grade computer programming skills using machine learning.” Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.

Papers accepted at ICML and KDD!

Some more good news!

Soon after our spoken English grading work was accepted at ACL, our work on learning models for job selection and personalized feedback has been accepted at the Machine Learning for Education workshop at ICML! Some results from this paper were discussed in one of our previous posts. The tool was built five years ago and has since helped a couple of million students get personalized feedback and helped 200+ companies hire better. I shall also be giving an invited talk at the workshop.

Earlier this month, we also had a paper accepted at KDD, which builds on our previous work in spontaneous speech evaluation. We examine how well we can grade the spontaneous speech of natives of different countries and analyze the benefits the industry gains from such an evaluation system.

It looks like a busy year ahead: paper presentations in France, Beijing, Australia and finally New Jersey, where we’re organizing the second edition of ASSESS, our annual workshop on data mining for educational assessment and feedback. It is being organized at ICDM 2015 this winter, and July 20th is the submission deadline. Here is a list of the submissions we saw at last year’s edition of the workshop, at KDD. Spread the word!

– Varun

The tree of program difficulty

What makes a programming problem hard?

Why are some programming problems solved by far more students than others? The varying numbers we saw got us thinking about how the human brain responds to programming problems. This was also an important question to answer when designing an assessment or looking for guidance on pedagogy. Understanding what makes a programming problem hard would let us put questions of a known difficulty into an assessment, so that neither does everyone get a perfect score nor does everyone get a zero, and it would also help us create equivalent test forms.

We took a jab at answering it empirically. We marked 23 programming problems on four parameters, each on a 2 or 3 point scale: how hard the data structures involved are (both the ones used in the implementation and the one returned from the target function), how hard the algorithm is to conceive, how hard it is to implement, and how hard the edge cases are to handle. [See the attached PDF for more details on the metrics and the rubric followed.] There was some nuance involved in choosing these metrics. For instance, the algorithm for a problem can be hard to conceive, say because it requires thinking through a dynamic programming approach, yet be fairly easy to implement, involving just a couple of loops. On the other hand, an algorithm to sort and then merge a bunch of arrays is simple in itself, but implementing it can be a hassle.

For each of these problems, we had responses from some 8,000 CS undergraduates. Each problem was delivered to a test taker in a randomized test form. From this we pulled out how many people were able to write compilable code (from as low as 3.6% :( to as high as 74% across problems) and how many got all test cases right. We wanted to see how well we could predict the latter using our expert-driven difficulty metrics. (Our difficulty ratings are relative and can change with the sample; for an absolute analysis we could have predicted the IRT parameters of the question. Wanna try?)

So, what came out? Yes, we can predict it! Here is the basic correlation table; the correlations are negative because a harder problem has a lower pass rate.

Correlations                   Data Structure   Algorithm   Implementation   Edge-logic
Percent-pass all test cases        -0.25           -0.42         -0.43          -0.05
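For readers who want to compute this kind of number on their own question bank, here is a minimal sketch of how one cell of the table could be obtained: the Pearson correlation between an expert-rated difficulty metric and the per-problem pass rate. The data in the sketch is made up for illustration; it is not our 23-problem set.

```java
// A minimal sketch (not our actual analysis code) of computing the Pearson
// correlation between an expert-rated difficulty metric and the fraction of
// test takers passing all test cases, with one value per problem.
public class DifficultyCorrelation {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0;
        for (int i = 0; i < n; i++) { sumX += x[i]; sumY += y[i]; }
        double meanX = sumX / n, meanY = sumY / n;
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy;
            varX += dx * dx;
            varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // Hypothetical expert ratings of algorithmic difficulty (1-3) for a few problems...
        double[] algorithmicDifficulty = {1, 1, 2, 2, 3, 3};
        // ...and the corresponding fraction of candidates passing all test cases.
        double[] passRate = {0.60, 0.45, 0.30, 0.25, 0.10, 0.05};
        System.out.printf("Correlation: %.2f%n", pearson(algorithmicDifficulty, passRate));
    }
}
```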

We tried a first-cut analysis on our data by building a regression tree with some simple cross-validation. We got a really cool, intuitive tree and a prediction accuracy of 0.81! This is our ‘Tree of Program Difficulty’ ;-) . So what do we learn?

The primary metric for predicting whether a good percentage of people solve a problem correctly is the algorithmic difficulty. Problems whose algorithm is easy to deduce (<1.5) immediately see a high pass rate, whereas those for which it is hard (>2.5) see a very poor pass rate. For those that are moderately hard algorithmically (between 1.5 and 2.5), the next criterion deciding the pass percentage is the difficulty of implementing the algorithm. If it is easy to implement (<2), a high pass rate is predicted. For problems that are moderately hard in both algorithm and implementation, the difficulty of the data structures used then predicts the pass rate: if an advanced data structure is needed, the rate falls below 6%, and it stays around a moderate 11% otherwise. These rules are sketched as code below.
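As a quick illustration, here is the tree rendered as nested conditionals in Java. This is only a paraphrase of the rules described above, not the fitted model itself; the leaf values for the two “high pass rate” branches and the advanced-data-structure threshold are placeholders, since the post does not report those exact numbers.

```java
// The 'Tree of Program Difficulty' rendered as nested conditionals.
// Thresholds follow the description in the post; HIGH_PASS_RATE and
// ADVANCED_DS_THRESHOLD are illustrative placeholders.
public class ProgramDifficultyTree {

    static final double HIGH_PASS_RATE = 0.50;       // placeholder leaf value
    static final double VERY_LOW_PASS_RATE = 0.03;   // placeholder leaf value
    static final double ADVANCED_DS_THRESHOLD = 2.5; // placeholder split point

    /** Predicted fraction of test takers passing all test cases. */
    static double predictedPassRate(double algorithm, double implementation, double dataStructure) {
        if (algorithm < 1.5) {
            return HIGH_PASS_RATE;        // easy-to-deduce algorithm: high pass rate
        }
        if (algorithm > 2.5) {
            return VERY_LOW_PASS_RATE;    // hard algorithm: very poor pass rate
        }
        // Moderately hard algorithm (1.5 to 2.5): implementation difficulty decides next.
        if (implementation < 2.0) {
            return HIGH_PASS_RATE;        // easy to implement: high pass rate
        }
        // Moderately hard algorithm and implementation: data structures decide.
        return dataStructure > ADVANCED_DS_THRESHOLD ? 0.06 : 0.11;
    }

    public static void main(String[] args) {
        // Example: moderate algorithm, moderate implementation, advanced data structure.
        System.out.printf("Predicted pass rate: %.0f%%%n",
                100 * predictedPassRate(2.0, 2.5, 3.0));
    }
}
```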

So, which nodes do your problems fall on? Do they match our result? Tell us!

Thanks Ramakant for the nifty work with data!

-Shashank and Varun

March 2015