Scaling up machine learning to grade computer programs for 1000s of questions in multiple languages

Machine learning has helped solve many grading challenges – spoken English, essays, computer programs and math problems, to cite a few examples. However, there is a big impediment to using these methods in real-world settings: one needs to build an ML model for every question/prompt. For instance, in essay grading, a model designed to grade an essay on ‘Socialism’ will be very different from one which can grade essays on ‘Theatre’. These models require a large number of expert-rated samples and a fresh model-building exercise each time. A real-world practical assessment runs on 100s of questions, which then translates to requiring 100s of graders and 100s of models. The approach doesn’t scale, takes too much time and, more often than not, is impractical.

In our KDD paper accepted today, we go a long way towards solving this challenge for grading computer programs. At KDD 2014, we had presented the first machine learning approach to grade computer programs, but we had to build a model per problem. We have now invented a technique where we need no expert-graded samples for a new problem and we don’t need to build any new models! As soon as we have a few tens of ‘good’ codes for a problem (identified automatically using test case coverage and static analysis), our newly invented question-agnostic models take charge. How does this help us? With this technology, our machine learning based models can scale, in an automated way, to grade 1000s of questions in multiple languages in a really short span of time. Within a couple of weeks of a new question being introduced into our question pool, the machine learning evaluation kicks in.

A couple of innovations led to this work, a semi-supervised approach to model building:

  • We can identify a subset of the ‘good’ set automatically. In the case of programs, the ‘good set’ – codes which get a high grade – can be identified automatically using test cases. We exploit this to find other programs similar to these in a feature space that we define. To get a sense of this, think of a distance measure from the programs identified as part of the ‘good set’ (see the sketch after this list). Such a ‘nearness’ feature correlates with grades across questions, irrespective of whether it is a binary search problem or a tree traversal problem. Such features help us build generic models across questions.

  • We design a number of such features which are invariant to the question and correlate with the expert grade. These features are inspired by the grammar we proposed in our earlier work. For instance, one feature is how different an unseen program is from the set of keywords present in the ‘good set’; another is how the programs differ in the kind of computations they do. Using such features, we learn generic models over a set of problems using supervised learning. These generic models work remarkably well on any new problem as soon as we get our set of good codes!
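To make the idea concrete, here is a minimal sketch of the two ingredients above, under stated assumptions: the keyword-count featurization and the toy data are illustrative stand-ins (our actual features come from a grammar over data and control dependencies), and Ridge regression stands in for whichever supervised learner is used.

```python
import numpy as np
from sklearn.linear_model import Ridge

VOCAB = ["for", "while", "if", "return", "swap"]  # toy vocabulary

def featurize(program_text):
    """Toy featurization: keyword counts over a fixed vocabulary."""
    tokens = program_text.split()
    return np.array([tokens.count(w) for w in VOCAB], dtype=float)

def question_agnostic_features(candidate_vec, good_vecs):
    """Features defined RELATIVE to a question's 'good set', hence
    comparable across questions (binary search, tree traversal, ...)."""
    centroid = good_vecs.mean(axis=0)
    nearness = -np.linalg.norm(candidate_vec - centroid)  # the 'nearness' feature
    keyword_overlap = float(np.minimum(candidate_vec, centroid).sum())
    return np.array([nearness, keyword_overlap])

# One generic model is trained on expert-graded codes pooled from MANY
# questions (X: question-agnostic features, y: expert grades; toy values here)...
X = np.array([[-1.2, 3.0], [-4.5, 1.0], [-0.3, 4.0], [-6.0, 0.0]])
y = np.array([4.0, 2.0, 5.0, 1.0])
model = Ridge().fit(X, y)

# ...and applied unchanged to a brand-new question, as soon as a few tens of
# its codes pass all test cases and form the good set.
good_vecs = np.stack([featurize("for if swap return"), featurize("while if return")])
new_code = featurize("for for if swap return")
print(model.predict([question_agnostic_features(new_code, good_vecs)]))
```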

Check out this illustrative and easy-to-grasp video which demonstrates our latest innovation.

 

The table presents a snapshot of the results from the paper. As shown in the last two columns, the ‘question-independent’ machine learning model (ML Model) consistently outperforms the test suite based baseline (Baseline). The claim of ‘question-independence’ is corroborated by similarly encouraging results (last three rows) on totally unseen questions, which were not used to train the model. A sketch of how such metrics can be computed follows the table.

Metric | Question Set          | #Questions | ML Model | Baseline
------ | --------------------- | ---------- | -------- | --------
Correl | All questions         | 19         | 0.80     | 0.65
Bias   | All questions         | 19         | 0.24     | 0.35
MAE    | All questions         | 19         | 0.57     | 0.85
Correl | Unseen questions only | 11         | 0.81     | 0.65
Bias   | Unseen questions only | 11         | 0.27     | 0.31
MAE    | Unseen questions only | 11         | 0.59     | 0.84
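For concreteness, here is a minimal sketch of how these comparison metrics can be computed from paired machine and expert grades. The definitions are assumptions, since the post does not spell them out: ‘Correl’ as Pearson correlation, ‘Bias’ as the absolute mean signed error, and ‘MAE’ as mean absolute error; the paper has the precise definitions.

```python
import numpy as np

def grading_metrics(machine, expert):
    """Compare machine grades against expert grades for one question set."""
    machine = np.asarray(machine, dtype=float)
    expert = np.asarray(expert, dtype=float)
    correl = np.corrcoef(machine, expert)[0, 1]  # Pearson correlation
    bias = abs(np.mean(machine - expert))        # systematic over/under-grading
    mae = np.mean(np.abs(machine - expert))      # average grading error
    return correl, bias, mae

print(grading_metrics([4, 3, 5, 2], [4, 2, 5, 3]))
```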

What does this all mean?

  • We can really scale ML based grading of computer programs. We can continue to add new problems and the models will automatically start working within a couple of weeks.
  • This set of innovations applies to a number of other problems where we can automatically identify a good set. For instance, in circuit solving problems, the responses with the correct final answer could be considered a good set; the same applies to mathematics problems or automata design problems – problems where computer science techniques are mature enough to verify the functional correctness of a solution. Machine learning can then automatically help grade other unseen responses using this information.

Hoping to see more and more ML applied to grading!

Varun

Work done with Gursimran Singh and Shashank Srikant

What AM Research told you in 2015 – the data science way?

As the year came to an end, we looked back on what we shared with the world in 2015. As data nerds, we pushed all our blog articles into an NLP engine to cluster them and identify key themes. Given the small sample size and the challenge of finding semantic similarity in our specialized area, we waded through millions of unsupervised samples with deep learning in a Bayesian framework, ran it on a cluster of GPUs for a month…yada yada. Well, for some problems it is just that humans can do things more easily and efficiently; so that is what we actually did.

The key themes were:

Grading of programs – 4 posts

We need to grade programs better to be able to give automated feedback to learners and to help companies hire more efficiently and expand the pool considered for hiring. We at AM dream of an automated teaching assistant – we think it is possible and will be disruptive. Thus we dedicated 4 of our posts to telling you about automatically grading programs and its impact.

The tree of program difficulty – We found that we could determine the empirical difficulty of a programming problem based on the data structures it uses, the control structures used and its return type, among other parameters. We used these features in a decision tree to predict how many test takers would answer the question correctly, and we predicted with a correlation of 0.81 (a toy sketch of the idea follows)! This tells us about human cognition, helps improve pedagogy and also helps generate the right questions for a balanced test. And this is just the tip of the iceberg. Second, we approached the same question by looking at the difficulty of test cases and their inter-correlation. We understood what conceptual mistakes people make, got a recipe to make better test cases for programs and gained insights on how to score them. For instance, we found that a trailing comma in a test case can make it unnecessarily difficult!
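A minimal sketch of such a difficulty predictor, under stated assumptions: the features and numbers below are illustrative stand-ins, not our actual feature set or data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative per-question features:
# [uses_array, uses_tree, n_loops, n_conditionals, returns_collection]
X = np.array([
    [1, 0, 1, 1, 0],
    [0, 1, 2, 3, 1],
    [1, 0, 2, 2, 1],
    [0, 0, 1, 0, 0],
])
# Target: fraction of test takers answering the question correctly (toy values).
y = np.array([0.62, 0.18, 0.35, 0.80])

tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(tree.predict([[1, 0, 1, 2, 0]]))  # predicted empirical difficulty
```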

Finding super good programmers – Given these thoughts on how to construct a programming test and score it, we showed you how, putting all this intelligence together with our semantic machine learning algorithm, we can spot the 16% of good programmers missed by test case based measures. Additionally, we also automatically found the super good ones writing efficient and maintainable code. So please say a BIG NO to test case based programming assessment tools!

[Figure: Venn diagram]

Reproduced from “AI can help you spot the right programmers”. It shows that a test case metric misses 16% of good programmers. Furthermore, AI can help spot the 20% super good coders.

Pre-reqs to learn programming – Stepping back, we tried determining who could learn programming through a short-duration course. We found that it was a function of a person’s logical ability and English, but did not depend on her/his quantitative skills. Interestingly, we found that a basic exposure to a programming language could compensate for lower logical ability in predicting a successful student who could learn programming. A data-driven way to find course prerequisites!

Building a machine learning ecosystem – 3 posts

Catching them young! We designed a cognitively manageable, hands-on supervised learning exercise for 5th-9th graders. In three workshops spread across different cities, we helped kids build fairly accurate friend predictors! We think data science is going to become a horizontal skill across job roles and want to find ways to get it into schools, universities and informal education.

“Exams. I would take my exam results, from the report card of every year. And then I will make it on excel and then I will remember the grades and the one I get more grades I will take a gift” [sic.]

[Figure: flashcard]

Reproduced from datasciencekids.org. Whom will you befriend? Can machine learning models devised by high school kids predict this?

The ML India ecosystem – Our next victims were those in universities. We launched ml-india.org to catalyse the Indian machine learning ecosystem. Given India’s very low research output in machine learning, we have put together a resource center and a mailing list to promote machine learning. We have also declared ourselves self-styled evaluators of machine learning research in India and promise to share monthly updates.

Employment outcome data release – We recently launched AMEO, our employability outcome data set, at CODS. This unique data set has assessment details, education and demographic details of close to 6000 students, together with their employment outcomes – first job designation and salary. It can tell us a lot about the labor market, both to guide students and to identify gaps that should guide policy makers. We are keenly looking forward to the wonderful insights we get from the crowd! Come, contribute!

Pat our back! – 3 posts 


Reproduced from “Work on spoken English grading gets accepted at ACL, AM-R&D going to Beijing!”. We describe our system that mixes machine learning with crowdsourcing to do spontaneous speech evaluation.

We told you about our KDD and ACL papers on automatic spoken English evaluation – the first semi-automated grading of free speech. We loved mixing crowdsourcing with machine learning – a cross between peer and machine grading – to do super reliable automated evaluation.

And then our ICML workshop paper talked about how to build models of ‘employability’ – interpretable, theoretically plausible yet non-linear models which can predict outcomes based on grades. More than 200 organizations have benefited from using these models in recruiting talent, and they do way better than linear models!

Other posts

Beyond these three clusters, we told you about –

– Why we exist – why we need data science to promote labor market meritocracy

– The state of the art and goals for assessment research for the next decade (see ASSESS 2015)

– Our work on classifying with 80-80 accuracy for 1500+ classes

It has been an interesting year at AM, learning from all our peers and contributing our bit to research, while using it to build super products. We promise to treat you to a lot more interesting work in open-response grading and labor market standardization and understanding next year. Stay tuned to this space!

Varun

On automated assessments – State of the art and goals

In fall 2014, we organized ASSESS, the first workshop on data mining for educational assessment and feedback, at KDD 2014 [link]. The workshop brought together 80 participants, including education psychologists, computer scientists and practitioners, under one roof and led to a thoughtful discussion. We have put together a white paper which captures the key discussions from the workshop. The paper primarily discusses why assessments are important, what the state of the art is and what goals we should pursue as a community. It is a brief exposition and serves as a starting point for a discussion to set the agenda for the next decade.


Why are assessments important?

Automated and semi-automated assessments are a key to scaling learning, validating pedagogical innovations, and delivering socio-economic benefits of learning.

  • Practice and Feedback: Whether considering large-scale learning for vocational training or non-vocational education, automating the delivery of high-quality content is not enough. We need to be able to automate or semi-automate assessments for formative purposes. Substantial evidence indicates that learning is enhanced by doing assignments and obtaining feedback on one’s attempts. In addition, the so-called “testing effect” demonstrates that repeated testing with feedback enhances students’ long-term retention of information. By automating assessments, students can get real-time feedback on their learning in a way that scales with the number of students. Automated assessments may become, in some sense, “automated teaching assistants”.
  • Education Pedagogy: There is a great need to understand which teaching/learning/delivery models of pedagogy are better than others, especially with new emerging modes and platforms for education. To understand the impact of and compare different pedagogies, we need assessments that can summatively measure learning outcomes precisely and accurately. Without valid assessments, empirical research on learning and pedagogy becomes questionable.
  • Learning for socio-economic mobility: For learners who seek vocational benefits, there need to be scalable ways of measuring and certifying learning so that they may garner socio-economic benefits from what they have learnt. There need to be scalable ways of measuring learning so as to predict the KSOAs (knowledge, skills and other abilities) of learners to do specific tasks. This will help both learners and employers by driving meritocracy in labor markets through reduced information asymmetries and transaction costs. Matching of people to jobs can become more efficient.

We look forward to hearing your thoughts on the paper! Do feel free to write to research@aspiringminds.com

This is an excerpt from the white paper ‘On Assessments – State of the art and goals’, which had contributions from Varun Aggarwal, Steven Stemler, Lav Varshney and Divyanshu Vats, co-organizers, ASSESS 2014 at KDD. The full paper can be accessed here.

AI can help you spot the right programmer!

The industry today is on a constant lookout for good programmers. In this new age of digital services and products, programming skill commands a premium. Whenever a friend asks me to refer a good programmer to his company, I tell him – why would I refer her to you, I will hire her for my team! But what does having programming skills really mean? What do we look for when we hire programmers? An ability to write functionally correct programs – those that pass test cases? Nah..

A seasoned interviewer would tell you that there is much more to writing code than passing test cases! For starters, we care more about how well a candidate understands the problem and approaches a solution than about her being able to write functionally correct code. “Did the person get the logic?” is generally the question discussed among interviewers. We are also interested in seeing whether the candidate’s logic (algorithm) is efficient – with low time and space complexity. Besides this, we care about how readable and maintainable a candidate makes her code – a very frustrating problem for the industry today is dealing with badly written code that breaks under exceptions and is not amenable to fixes.

So then why do we all use the automated programming assessments in the market, which base themselves on just the number of test cases passed? Probably because it is believed that there is nothing better. If AI can drive cars these days, can it not grade programs like humans do? It can. In our KDD paper in 2014 [1], we showed that by using machine learning, we could grade programs on all these parameters as well as humans do. The machine’s agreement with human experts was as high as 0.8-0.9 correlation points!

Why is this useful? We looked at a sample of 90,000 candidates who took Automata, our programming evaluation product, in the US. These were seniors graduating with a computer science/engineering degree who were interested in IT jobs. They were scored on four metrics – percent of test cases passed, correctness of logic as detected by our ML algorithm (on a scale of 1-5), run-time efficiency (on a scale of 1-3) and best coding practices used (on a scale of 1-4). We find answers to everything an interviewer cares about –

  • Is the logic/thought process right?
  • Is this going to be an efficient code?
  • Will it be maintainable and scalable?

Fig.1. Distribution of the different score metrics. Around 48% of the candidates who scored 4 on the code logic metric (as detected by our ML algorithm) had passed less than 50% of their test suite.

A clever man commits no minor blunders – Von Goethe

Typically, companies look for candidates who get the code nearly right; say, those who pass 80% of the test cases in a test suite. 36% of the seniors made it through such a criterion. In a typical recruiting scenario, the remaining 64% would have been rejected and not considered further. We turned to our machine learning algorithm to see how many of these “left out” candidates had actually written logically correct programs with only silly errors. What do we find? A good 16% of them were scored 4 or above by our system, which meant they had written codes which had ‘correct control structures and critical data-dependencies but had some silly mistakes’. These folks should have been considered for the remainder of the process! Smart algorithms are able to spot what would be totally missed by test cases but could have been spotted by human experts. A sketch of this selection logic follows.
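A minimal sketch of that selection logic, with hypothetical column names and toy data (‘pass_frac’ for the fraction of test cases passed, ‘ml_logic’ for the ML-assigned 1-5 logic score):

```python
import pandas as pd

df = pd.DataFrame({
    "candidate": ["a", "b", "c", "d"],
    "pass_frac": [0.95, 0.40, 0.30, 0.85],
    "ml_logic":  [5, 4, 2, 4],
})

passed_cut = df["pass_frac"] >= 0.8              # the usual 80% test-case cut
rescued = (~passed_cut) & (df["ml_logic"] >= 4)  # correct logic, silly errors

print(df[rescued])  # candidates test cases would reject but ML would keep
```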


Sample codes which fail most of their test suites but are scored high by our ML system

We find a lot of these candidates’ codes pass less than 50% of their test cases (see Figure 1). Why would this be happening? We sifted through some examples and found some very interesting ways in which students made errors. For a problem requiring the removal of vowels from a string, a candidate had fallen for what even the best in the industry fall prey to at times – an incorrect usage of ORs and ANDs with the negation operator (illustrated below)! Another had implemented the logic well but messed up on the very last line: he lacked the knowledge of converting character arrays back to strings in Java. The lack of such specific knowledge is typical of those who haven’t spent enough time with Java; but this is easy to learn, and it shouldn’t cost them their performance in an assessment or stop them from differentiating themselves from those who couldn’t think of the algorithm.
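Here is a minimal illustration of that OR/AND mix-up, written in Python for brevity (the actual candidate code was in Java):

```python
def remove_vowels_buggy(s):
    # Bug: for any c, "c != 'a' or c != 'e' or ..." is ALWAYS true
    # (a De Morgan violation), so no vowel is ever removed.
    return "".join(c for c in s if c != 'a' or c != 'e' or c != 'i'
                   or c != 'o' or c != 'u')

def remove_vowels_correct(s):
    # Keep c only if it differs from EVERY vowel (note the ANDs).
    return "".join(c for c in s if c != 'a' and c != 'e' and c != 'i'
                   and c != 'o' and c != 'u')

print(remove_vowels_buggy("grading"))    # 'grading' – nothing removed
print(remove_vowels_correct("grading"))  # 'grdng'
```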

 

Fig.2. Distribution of runtime efficiency scores and program practices scores, in the context of the nature of the attempted problems

The computing scientist’s main challenge is not to get confused by the complexities of his own making – E. W. Dijkstra

We identified those who had written nearly correct code; those who could think of algorithms and put them into a programming language. However, was this just some shabbily written code which no one would want to touch? Further, had they written efficient code, or would it have resulted in the all-too-familiar “still loading” messages we see in our apps and software? We find that roughly 38% of them thought of an efficient algorithm while writing their codes, and 30% wrote code fit to reside in professional software systems. Together, roughly 20% of the students wrote programs which are both readable and efficient. Thus, AI tells us that there are 20% programmers here whom we should really talk to and hire! If we are ready to train them and run with them a little, there are various other score cuts one could use to make an informed decision.

In all, AI can not only drive cars, but also find the programmers who can build a driverless car. Wondering how to find data scientists by using data science? – coming soon.

Interested in using Automata to better understand the programming talent you evaluate? Do you have a different take on this? Tell us by writing to research@aspiringminds.com

Gursimran, Shashank and Varun

References

[1] Srikant, Shashank, and Varun Aggarwal. “A system to grade computer programming skills using machine learning.” Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.

Paper accepts at ICML and KDD!

Some more good news!

Soon after the acceptance of our spoken English grading work at ACL, our work on learning models for job selection and personalized feedback has been accepted at the Machine Learning for Education workshop at ICML! Some results from this paper were discussed in one of our previous posts. The tool was built five years ago and has since helped a couple of million students get personalized feedback and aided 200+ companies in hiring better. I shall also be giving an invited talk at the workshop.

Earlier this month, we also got a paper accepted at KDD, which builds on our previous work in spontaneous speech evaluation. We show how well we can grade the spontaneous speech of natives of different countries and also analyze the benefits the industry gets from such an evaluation system.

A busy year ahead, it seems – paper presentations in France, Beijing, Australia and finally New Jersey, where we are organizing the second edition of ASSESS, our annual workshop on data mining for educational assessment and feedback. It is being organized at ICDM 2015 this winter. July 20th is the submission deadline for the workshop. Here is a list of the submissions we saw in our workshop last year at KDD. Spread the word!

– Varun