What AM Research told you in 2015 – the data science way?

As the year came to an end, we looked back on what we shared with the world in 2015. As data nerds, we pushed all our blog articles into an NLP engine to cluster them and identify key themes. Given the small sample size and the challenge of finding semantic similarity in our specialized area, we waded through millions of unsupervised samples with deep learning in a Bayesian framework, ran it on a cluster of GPUs for a month…yada yada. Well, for some problems humans can simply do the job more easily and efficiently, so that is what we actually did.

The key themes were:

Grading of programs – 4 posts

We need to grade programs better to give automated feedback to learners and to help companies hire more efficiently while expanding the pool considered for hiring. We at AM dream of an automated teaching assistant – we think it is possible and will be disruptive. Thus we dedicated 4 of our posts to telling you about automatically grading programs and its impact.

The tree of program difficulty – We found that we could determine the empirical difficulty of a programming problem from the data structures it uses, the control structures it needs and its return type, among other parameters. We fed these features into a decision tree to predict how many test takers would answer the question correctly, and our predictions correlated 0.81 with the observed values! This tells us about human cognition, helps improve pedagogy and also helps generate the right questions to build a balanced test. And this is just the tip of the iceberg. Second, we approached the same problem by looking at the difficulty of test cases and their inter-correlation. We learned what conceptual mistakes people make, got a recipe for designing better test cases for programs, and gained insight into how to score them. For instance, we found that a trailing comma in a test case can make it unnecessarily difficult!
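A minimal sketch of the kind of model described above, assuming hypothetical problem features and made-up data (the actual features and dataset are not public):

```python
# Sketch: predict the fraction of test takers who solve a problem
# from hand-crafted problem features. Features and numbers are illustrative,
# not the actual AM Research data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Each row: [uses_array, uses_hashmap, num_loops, num_branches, returns_string]
X = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 2, 3, 0],
    [0, 0, 1, 0, 1],
    [1, 1, 3, 4, 1],
])
# Target: observed fraction of candidates answering the problem correctly
y = np.array([0.72, 0.38, 0.81, 0.22])

model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(X, y)

new_problem = np.array([[1, 0, 2, 2, 0]])
print("Predicted fraction correct:", model.predict(new_problem)[0])
```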

Finding super good programmers – Given these thoughts on how to construct and score a programming test, we showed you how all this intelligence, put together in our semantic machine learning algorithm, can spot the 16% of good programmers missed by test-case-based measures. We also automatically identified the super good ones who write efficient and maintainable code. So please say a BIG NO to test-case-based programming assessment tools!


Reproduced from “AI can help you spot the right programmers”. It shows that a test-case metric misses 16% of good programmers. Furthermore, AI can help spot the 20% who are super good coders.

Pre-reqs to learn programming – Stepping back, we tried to determine who could learn programming through a short-duration course. We found that success was a function of a person’s logical ability and English skills but did not depend on her/his quantitative skills. Interestingly, we found that basic exposure to a programming language could compensate for lower logical ability in predicting which students would successfully learn programming. A data-driven way to find course prerequisites!
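A minimal, hypothetical sketch of how such a prerequisite analysis could be set up (the feature names and numbers are illustrative, not the study’s actual variables):

```python
# Sketch: predict whether a student completes a short programming course
# from entry-test scores. All names and values are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: logical_ability, english, quantitative, prior_programming_exposure
X = np.array([
    [0.9, 0.8, 0.4, 0],
    [0.5, 0.7, 0.9, 1],
    [0.3, 0.4, 0.8, 0],
    [0.7, 0.9, 0.2, 1],
    [0.4, 0.5, 0.6, 0],
    [0.8, 0.6, 0.5, 1],
])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = successfully learned programming

model = LogisticRegression().fit(X, y)

# On real data, coefficient signs and magnitudes would indicate which entry
# skills matter; the post reports that logic and English mattered while
# quantitative skill did not, and prior exposure could compensate for logic.
for name, coef in zip(["logic", "english", "quant", "prior_exposure"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```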

Building a machine learning ecosystem – 3 posts

Catching them young! We designed a cognitively manageable, hands-on supervised learning exercise for 5th–9th graders. Across three workshops in different cities, we helped kids build fairly accurate friend predictors, with great success! We think data science is going to become a horizontal skill across job roles, and we want to find ways to get it into schools, universities and informal education.

“Exams. I would take my exam results, from the report card of every year. And then I will make it on excel and then I will remember the grades and the one I get more grades I will take a gift” [sic.]


Reproduced from datasciencekids.org. Whom will you befriend? Can machine learning models devised by high school kids predict this?
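A toy version of the friend-predictor exercise might look like the sketch below; the features and labels are made up for illustration and are not the workshop’s actual worksheet:

```python
# Toy friend predictor, in the spirit of the workshop exercise.
# Features: simple yes/no questions a kid could answer about a classmate.
from sklearn.tree import DecisionTreeClassifier

# Columns: likes_same_sport, same_class, lives_nearby, likes_same_music
X = [
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
]
y = [1, 1, 0, 1, 0, 1]  # 1 = became friends

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[1, 0, 0, 1]]))  # will this pair become friends?
```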

The ML India ecosystem – Our next victims were those in universities. We launched ml-india.org to catalyse the Indian machine learning ecosystem. Given India’s very low research output in machine learning, we have put together a resource center and a mailing list to promote the field. We have also declared ourselves self-styled evaluators of machine learning research in India and promise to share monthly updates.

Employment outcome data release – We recently launched AMEO, our employability outcomes dataset, at CODS. This unique dataset contains assessment details and education and demographic information for close to 6,000 students, together with their employment outcomes – first job designation and salary. It can tell us a great deal about the labor market, both to guide students and to identify gaps for policy makers. We are keenly looking forward to the wonderful insights we will get from the crowd. Come, contribute!

Pat our back! – 3 posts 


Reproduced from “Work on spoken English grading gets accepted at ACL, AM-R&D going to Beijing!”. We describe our system that mixes machine learning with crowdsourcing to evaluate spontaneous speech.

We told you about our KDD and ACL papers on automatic spoken English evaluation – the first semi-automated grading of free speech. We loved mixing crowdsourcing with machine learning – a cross between peer and machine grading – to achieve highly reliable automated evaluation.
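A rough sketch of the general idea of blending crowd ratings with machine-extracted features is shown below; this illustrates the concept only and is not the system described in the papers (all feature names and numbers are invented):

```python
# Sketch: blend crowd ratings with machine features to score spoken English.
# Everything here is illustrative, not the published system.
import numpy as np
from sklearn.linear_model import Ridge

# Machine features per response: fluency proxy, pronunciation proxy, vocabulary proxy
machine_features = np.array([
    [0.8, 0.7, 0.6],
    [0.4, 0.5, 0.3],
    [0.9, 0.8, 0.9],
    [0.3, 0.2, 0.4],
])
# Mean rating from a small, noisy crowd for each response (scale 1-5)
crowd_mean = np.array([[3.8], [2.5], [4.6], [2.1]])

# Stack crowd wisdom and machine signal into one feature matrix
X = np.hstack([machine_features, crowd_mean])
y = np.array([4.0, 2.0, 5.0, 2.0])  # expert scores used for training

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X))
```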

And then our ICML workshop paper talked about how to build models of ‘employability’ – interpretable, theoretically plausible yet non-linear models which predict outcomes based on grades. More than 200 organizations have benefited from using these models in recruiting talent, and they do way better than linear models!

Other posts

Beyond these three clusters, we told you about –

  • Why we exist – why we need data science to promote labor market meritocracy
  • The state of the art and goals for assessment research for the next decade (see ASSESS 2015)
  • Our work on classifying with 80-80 accuracy for 1500+ classes

It has been an interesting year at AM – learning from our peers and contributing our bit to research while using it to build super products. We promise to treat you to a lot more interesting work on open-response grading and on standardizing and understanding the labor market next year. Stay tuned to this space!


Aspiring Minds releases AMCAT employment data at CODS 2016!

Aspiring Minds Research is pleased to announce that it will be co-organizing this year’s data challenge at CODS 2016, the annual top-tier conference on machine learning and data science organized by the Indian chapter of KDD.


Undergraduates – performance and salaries
This year, we wanted data science enthusiasts to get a flavor of the kind of data we work on. We have released AMEO 2015 – a dataset on Aspiring Minds’ Employability Outcomes – which captures the academic and demographic information of engineering undergraduates taking AMCAT, Aspiring Minds’ battery of standardized assessments. What makes this dataset unique and rich is that it also includes employment outcomes (annual salaries of students’ first jobs) along with standardized test scores.

Interesting questions
The answers to a lot of interesting questions possibly lie in this dataset –

  • Can we predict the salaries a particular undergraduate would get on graduating?
  • Is the recruitment industry meritocratic – Do people with higher skills get paid higher? Or are there biases which don’t allow for these?
  • How important are English skills in getting a job?

and many more!
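As a starting point, a baseline for the salary-prediction question might look like the sketch below; the column names and values are hypothetical placeholders, not the dataset’s actual schema (in practice you would load the released CSV from the contest website):

```python
# Baseline sketch for an AMEO-style salary-prediction task.
# Columns and values are made up; consult the released dataset for the real schema.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "english_score":    [620, 450, 700, 530, 480, 660],
    "logical_score":    [640, 470, 710, 500, 520, 600],
    "quant_score":      [600, 500, 690, 480, 510, 580],
    "gpa":              [8.1, 6.5, 9.0, 7.0, 6.8, 7.9],
    "first_job_salary": [420000, 240000, 650000, 300000, 260000, 450000],
})

X = df.drop(columns=["first_job_salary"])
y = df["first_job_salary"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```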

Participate and spread the word – 1000 USD cash prizes!
Interested in finding out the answers to these questions?
Take a stab at the data right away by downloading it from the contest website (mentioned below).

Get started right away and help spread the word!
1000 USD cash prizes to those with the best submissions!

Contest website

As machines become intelligent, where does India stand?

Machine learning is the science of learning to perform tasks by observing examples. It is transforming the world by enabling machines to do all sorts of ‘intelligent’ tasks such as understanding images and human speech, and predicting preferences, diseases and much more. With tremendous amounts of data, interconnectedness, sophisticated algorithms and huge processing power in small devices, machines now do things which were beyond their reach until recently. On the other hand, machines are still unable to do many tasks which humans do effortlessly – say, understanding a story. This constitutes the next big challenge for machines, or rather for the humans that build these machines!

In some ways, it has never been so exciting! Where should India be as machines become more intelligent? It is simple – it should be making the most of the opportunity. We need to participate in and contribute to high-quality research and innovation, and also convert new results into effective business models. The opportunity is global – the location of a digital business doesn’t constrain its market – a company in Bangalore or Gurgaon can serve the US market, the European market or even the whole world. Machine learning is not just a scientific or academic pursuit. The economy and society can reap great returns from research and innovation in the area.

But are we there yet? Where are we placed on the global scene, in both academic and industrial research?

Read the full article here – http://ml-india.org/where-does-india-stand-machine-learning/

On automated assessments – State of the art and goals

In fall 2014, we organized ASSESS, the first workshop on data mining for educational assessment and feedback, at KDD 2014 [link]. The workshop brought together 80 participants, including education psychologists, computer scientists and practitioners, under one roof and led to a thoughtful discussion. We have put together a white paper capturing the key discussions from the workshop. The paper primarily discusses why assessments are important, what the state of the art is, and what goals we should pursue as a community. It is a brief exposition and serves as a starting point for a discussion to set the agenda for the next decade.


Why are assessments important?

Automated and semi-automated assessments are a key to scaling learning, validating pedagogical innovations, and delivering socio-economic benefits of learning.

  • Practice and Feedback: Whether considering large-scale learning for vocational training or non-vocational education, automating delivery of high-quality content is not enough. We need to be able to automate or semi-automate assessments for formative purposes. Substantial evidence indicates that learning is enhanced through doing assignments and obtaining feedback on one’s attempts. In addition, the so-called “testing effect” demonstrates that repeated testing with feedback enhances students’ long-term retention of information. By automating assessments, students can get real-time feedback on their learning in a way that scales with the number of students. Automated assessments may become, in some sense, “automated teaching assistants”.
  • Education Pedagogy: There is a great need to understand which teaching/learning/delivery models of pedagogy are better than others, especially with new emerging modes and platforms for education. To understand the impact of and compare different pedagogies, we need assessments that can summatively measure learning outcomes precisely and accurately. Without valid assessments, empirical research on learning and pedagogy becomes questionable.
  • Learning for socio-economic mobility: For learners who seek vocational benefits, there need to be scalable ways of measuring and certifying learning so that they may garner socio-economic benefits from what they’ve learnt. There need to be scalable ways of measuring learning so as to predict the KSOAs (knowledge, skills and other abilities) of learners to do specific tasks. This will help both learners and employers by driving meritocracy in labor markets through reduced information asymmetries and transaction costs. Matching of people to jobs can become more efficient.

We look forward to hearing your thoughts on the paper! Do feel free to write to research@aspiringminds.com

This is an excerpt from the white paper ‘On Assessments – State of the art and goals’, which had contributions from Varun Aggarwal, Steven Stemler, Lav Varshney and Divyanshu Vats, co-organizers, ASSESS 2014 at KDD. The full paper can be accessed here.

AI can help you spot the right programmer!

The industry today is on a constant look-out for good programmers. In this new age of digital services and products, programming skill commands a premium. Whenever a friend asks me to refer a good programmer to his company, I tell him – why would I refer her to you, I will hire her for my team! But what does having programming skills really mean? What do we look for when we hire programmers? An ability to write functionally correct programs – those that pass test cases? Nah..

A seasoned interviewer would tell you that there is much more to writing code than passing test cases! For starters, we care more about how well a candidate understands the problem and approaches a solution than about whether she can write functionally correct code. “Did the person get the logic?” is the question generally discussed among interviewers. We are also interested in whether the candidate’s logic (algorithm) is efficient – with low time and space complexity. Besides this, we care about how readable and maintainable a candidate’s code is – a very frustrating problem for the industry today is dealing with badly written code that breaks under exceptions and is not amenable to fixes.

So then why do we all use automated programming assessments which base themselves on just the number of test cases passed? Probably because it is believed that there’s nothing better. If AI can drive cars these days, can it not grade programs the way humans do? It can. In our KDD paper in 2014 [1], we showed that by using machine learning, we could grade programs on all these parameters as well as humans do. The machine’s agreement with human experts was as high as 0.8–0.9 correlation points!

Why is this useful? We looked at a sample of 90,000 candidates who took Automata, our programming evaluation product, in the US. These were seniors graduating with a computer science/engineering degree who were interested in IT jobs. They were scored on four metrics – percentage of test cases passed, correctness of logic as detected by our ML algorithm (on a scale of 1–5), run-time efficiency (on a scale of 1–3) and best coding practices used (on a scale of 1–4). We find answers to all that an interviewer cares about –

  • Is the logic/thought process right?
  • Is this going to be an efficient code?
  • Will it be maintainable and scalable?

Fig.1. Distribution of the different score metrics. Around 48% of the candidates who scored 4 on the code logic metric (as detected by our ML algorithm) had passed less than 50% of their test suite.

A clever man commits no minor blunders – Von Goethe

Typically, companies look for candidates who get the code nearly right; say, those who pass 80% of the test cases in a test suite. 36% of the seniors made it through such a criterion. In a typical recruiting scenario, the remaining 64% would have been rejected and not considered further. We turned to our machine learning algorithm to see how many of these “left out” candidates had actually written logically correct programs with only silly errors. What did we find? A good 16% of them were scored 4 or above by our system, which means they had written code with ‘correct control structures and critical data-dependencies but some silly mistakes’. These folks should have been considered for the remainder of the process! Smart algorithms are able to spot what would be totally missed by test cases but could have been spotted by human experts.
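A minimal sketch of this selection logic on hypothetical score records (the thresholds follow the text above; the candidate data is made up):

```python
# Sketch of the selection logic described above, on made-up candidate records.
candidates = [
    {"name": "A", "pct_test_cases": 0.95, "ml_logic_score": 5},
    {"name": "B", "pct_test_cases": 0.40, "ml_logic_score": 4},  # silly mistakes only
    {"name": "C", "pct_test_cases": 0.30, "ml_logic_score": 2},
    {"name": "D", "pct_test_cases": 0.85, "ml_logic_score": 4},
]

# Conventional shortlist: pass at least 80% of the test cases
shortlist = [c for c in candidates if c["pct_test_cases"] >= 0.80]

# Rescued by the ML grader: rejected on test cases, but logic scored 4 or above
rescued = [
    c for c in candidates
    if c["pct_test_cases"] < 0.80 and c["ml_logic_score"] >= 4
]

print("Shortlisted:", [c["name"] for c in shortlist])
print("Missed by test cases, spotted by ML:", [c["name"] for c in rescued])
```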


Sample code that fails most of its test suite but is scored high by our ML system

We find that a lot of these candidates’ code passes less than 50% of the test cases (see Figure 1). Why would this be happening? We sifted through some examples and found some very interesting ways in which students made errors. For a problem requiring the removal of vowels from a string, one candidate had missed what even the best in the industry fall prey to at times – an incorrect usage of ORs and ANDs with the negation operator! Another had implemented the logic well but messed up on the very last line: he lacked the knowledge of how to convert a character array back to a string in Java. The lack of such specific knowledge is typical of those who haven’t spent enough time with Java; but it is easy to learn, shouldn’t cost them their performance in an assessment, and shouldn’t stop them from differentiating themselves from those who couldn’t think of the algorithm.
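The actual candidate submissions were in Java; a Python rendering of the same logical slip (and its fix) looks like this:

```python
# Illustration of the negation bug described above. The original candidate code
# was in Java; the same logical error is shown here in Python.

def remove_vowels_buggy(s):
    out = []
    for c in s:
        # Bug: with negation, OR is always true (every character differs from
        # at least one vowel), so no character is ever dropped.
        if c != 'a' or c != 'e' or c != 'i' or c != 'o' or c != 'u':
            out.append(c)
    return "".join(out)

def remove_vowels_correct(s):
    out = []
    for c in s:
        # Correct: keep c only if it differs from every vowel (AND, not OR).
        if c != 'a' and c != 'e' and c != 'i' and c != 'o' and c != 'u':
            out.append(c)
    return "".join(out)

print(remove_vowels_buggy("education"))    # "education" - nothing removed
print(remove_vowels_correct("education"))  # "dctn"
```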


Fig.2. Distribution of runtime efficiency scores and program practices scores, in the context of the nature of the attempted problems

The computing scientist’s main challenge is not to get confused by the complexities of his own making – E. W. Dijkstra

We had identified those who wrote nearly correct code – those who could think of an algorithm and put it into a programming language. However, was this just shabbily written code which no one would want to touch? Further, had they written efficient code, or would it have resulted in the all-too-familiar “still loading” messages we see in our apps and software? We find that roughly 38% of these folks thought of an efficient algorithm while writing their code and 30% wrote code fit to reside in professional software systems. Together, roughly 20% of the students wrote programs which are both readable and efficient. Thus, AI tells us that there are 20% more programmers here whom we should really talk to and hire! If we are ready to train them and run with them a little, there are various other score cuts one could use to make an informed decision.

In all, AI can not only drive cars but also find the programmers who can build a driverless car. Thinking of how to find data scientists by using data science? – coming soon.

Interested in using Automata to better understand the programming talent you evaluate? Do you have a different take on this? Tell us by writing to research@aspiringminds.com

Gursimran, Shashank and Varun


[1] Srikant, Shashank, and Aggarwal, Varun. “A system to grade computer programming skills using machine learning.” Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014.