Papers accepted at ICML and KDD!

Some more good news!

Soon after the recent acceptance of our spoken English grading work at ACL, our work on learning models for job selection and personalized feedback has been accepted at the Machine Learning for Education workshop at ICML! Some results from this paper were discussed in one of our previous posts. The tool was built five years ago and has since helped a couple of million students get personalized feedback and aided 200+ companies in hiring better. I shall also be giving an invited talk at this workshop.

Earlier this month, we also got a paper accepted at KDD, which builds on our previous work in spontaneous speech evaluation. We study how well we can grade the spontaneous speech of native speakers from different countries, and also analyze the benefits industry gets from such an evaluation system.

Busy year ahead, it seems – paper presentations in France, Beijing, Australia and finally New Jersey, where we’re organizing the second edition of ASSESS, our annual workshop on data mining for educational assessment and feedback. It is being organized at ICDM 2015 this winter, and July 20th is the submission deadline. Here is a list of submissions we saw at last year’s edition of the workshop, held at KDD. Spread the word!

– Varun

What we learn from patterns in test case statistics of student-written computer programs

Test cases evaluate whether a computer program is doing what it’s supposed to do. There are various ways to generate them: automatically from specifications, say by ensuring code coverage [1], or manually, by subject matter experts (SMEs) who think through conditions based on the problem specification.

We asked ourselves whether there was something we could learn by looking at how student programs responded to test cases. Could this help us design better test cases or find flaws in them? Looking at such responses from a data-driven perspective, we wanted to know whether we could (a) design better test cases, (b) understand whether there existed any clusters in the way responses on test cases were obtained, and (c) discover the salient concepts needed to solve a particular programming problem, which would then inform the right pedagogical interventions.

A visualization which shows how our questions cluster by the average test case score received on them. More on this in another post :)

We built a cool tool which helped us look at statistics on over 2,500 test cases spread across more than fifty programming problems, attempted by nearly 18,000 students and job-seekers in a span of four weeks!

We were also able to visualize how these test cases clustered for each problem, how they correlated with one another across candidate responses, and what their item response curves looked like. Here are a couple of things we learnt in this process:
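
For the curious, here is a minimal sketch (not our actual tool) of how such a clustering can be produced with off-the-shelf Python libraries; the response matrix and test-case labels below are made up purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist
import matplotlib.pyplot as plt

# Hypothetical response matrix: rows are candidates, columns are test cases.
# Entry (i, j) is 1 if candidate i passed test case j, and 0 otherwise.
responses = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 0],
], dtype=bool)

# Cluster test cases (columns) by how similarly candidates respond to them;
# Jaccard distance is a natural choice for binary pass/fail vectors.
distances = pdist(responses.T, metric="jaccard")
tree = linkage(distances, method="average")

# Test cases that link tightly in the dendrogram likely probe the same skill.
dendrogram(tree, labels=[f"tc{i}" for i in range(responses.shape[1])])
plt.title("Test-case clustering (illustrative data)")
plt.show()
```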

One of our problems required students to print comma-separated prime numbers from 2 up to a given integer N. When designing test cases for this problem, our SMEs expected certain edge cases (when N was less than 2) and some stress cases (when N was very large), while expecting the remaining cases, which checked the output for random values of N, to behave no differently from one another. Or so they thought. :) On clustering the responses obtained on each of the test cases for this problem (0 for failing a case and 1 for passing it), we found two very distinct clusters being formed (see figure below), besides the lone test case which checked for the edge condition. A closer look at some of the submitted code helped us realize why: values of N which were not themselves prime had to be handled differently, since a trailing comma remained at the very end of the list, and lots of students were not getting this right!

A dendrogram depicting test case clustering for the prime-print problem

This was interesting! It showed that the problem’s hardness was linked not only to the algorithm for producing prime numbers up to a given number, but also to the nuance of printing them in a specific form. In spite of getting the former right, a majority of students did not get the latter right. There are several learnings from this. If the problem designer just wants to assess whether students know the algorithm to generate primes up to a number, s/he should drop the requirement to print them as a comma-separated list – it adds an uncalled-for impurity to the assessment objective. On the other hand, if both these skills are to be tested, our statistics are a way to confirm that two different skills exist – getting one right does not mean the other is doable (say, can this help us figure out the dominant cognitive skills needed in programming?). By separating out the test case that checks the trailing-comma condition and reporting a score on it, we could give an assessor granular information on what the code actually achieves. Contrast this with test cases simply bundled together, where it isn’t clear which aspect the candidate got right.
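
To make the trailing-comma nuance concrete, here is a hedged reconstruction in Python (candidates wrote in other languages, and the exact code varied) of the buggy pattern we kept seeing, next to a version that prints the list correctly; the function names are ours.

```python
def primes_upto(n):
    """Return the primes from 2 up to n (inclusive)."""
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def print_primes_buggy(n):
    # A common pattern: suppress the comma only when the prime equals n.
    # This looks fine when n itself is prime (n = 7 -> "2,3,5,7") but leaves
    # a trailing comma whenever n is composite (n = 10 -> "2,3,5,7,").
    out = ""
    for p in primes_upto(n):
        out += str(p) if p == n else str(p) + ","
    print(out)

def print_primes_correct(n):
    # Joining with the separator sidesteps the trailing comma entirely.
    print(",".join(str(p) for p in primes_upto(n)))

print_primes_buggy(10)    # 2,3,5,7,
print_primes_correct(10)  # 2,3,5,7
```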

More so, when we designed this problem, the assessment objective was primarily to check the algorithm for generating prime numbers. Unfortunately, submissions that did not handle the trailing comma lost test case marks in spite of having met our assessment criterion. The good news here was that our machine learning algorithm [2] niftily picked this up and, by virtue of their semantic features, was able to say that these programs were doing the right job!
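
As a rough illustration of the idea (and only that; the actual system in [2] uses a much richer feature set and grading pipeline), a model trained on features derived from the source code itself, rather than on test-case outcomes alone, can credit a program whose logic is right even when its output formatting is off. Everything below – the features, snippets and grades – is made up.

```python
import re
from sklearn.linear_model import Ridge

def code_features(src: str):
    # Toy code-derived features; a real system would use far richer ones
    # (data and control dependencies, expression patterns, and so on).
    return [
        len(re.findall(r"\bfor\b|\bwhile\b", src)),  # number of loops
        src.count("%"),                              # modulo operations
        1 if re.search(r"\bif\b", src) else 0,       # has a conditional
        len(src.splitlines()),                       # program length
    ]

# Hypothetical training data: code snippets with expert-assigned grades.
snippets = [
    "for i in range(2, n):\n    for d in range(2, i):\n        if i % d == 0:\n            break",
    "print(n)",
]
expert_grades = [4.0, 1.0]

model = Ridge(alpha=1.0).fit([code_features(s) for s in snippets], expert_grades)
print(model.predict([code_features("while i < n:\n    if n % i:\n        pass")]))
```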

We also fit 3-PL models from Item Response Theory (more info) on each test case for some of our problems, and have some interesting observations there on how we could relate item parameters to test case design – more on this in a separate post!
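
For reference, the 3-PL item characteristic curve fit for each test case is the standard one from IRT, with a per-item discrimination a, difficulty b and guessing parameter c; here is a minimal sketch, with illustrative parameter values.

```python
import numpy as np

def three_pl(theta, a, b, c):
    """3-PL item characteristic curve: probability of passing an item
    (here, a test case) given ability theta, discrimination a,
    difficulty b and guessing parameter c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative parameters for a discriminating, harder-than-average test case
# with essentially no guessing floor.
abilities = np.linspace(-3, 3, 7)
print(three_pl(abilities, a=1.8, b=0.7, c=0.05))
```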

Have ideas on how you could make use of such numbers and derive some interesting information? Write to us, or better, join our research group! :)

Kudos to Nishanth for putting together the neat tool that lets us visualize these clusters! Thanks to Ramakant and Bhavya for spotting this issue in their analysis.

– Shashank and Varun

References

[1] Cadar, Cristian, Daniel Dunbar, and Dawson R. Engler. KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. OSDI, Vol. 8, 2008.

[2] Srikant, Shashank, and Varun Aggarwal. A system to grade computer programming skills using machine learning. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.

Work on spoken English grading gets accepted at ACL, AM-R&D going to Beijing!

Good news! Our work on using crowdsourcing and machine learning to grade spontaneous English has been accepted at ACL 2015.

  • Ours is the first semi-automated approach to grade spontaneous speech.
  • We propose a new, general technique which sits between completely automated grading and peer grading: we use the crowd for the tough human-intelligence task, derive features from its output, and use ML to build high-quality models (a minimal sketch follows this list).
  • We think this is the first time anyone has used crowdsourcing to obtain accurate features that are then fed into ML to build great models. Correct us if we are wrong!
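
A minimal sketch of the general recipe, with made-up feature names and numbers: crowd workers do the hard human-intelligence task (say, rating fluency or transcribing the audio), their outputs become features, and a standard regression model maps those features to expert grades.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical crowd-derived features per speech sample:
# [mean crowd fluency rating, mean crowd pronunciation rating, transcript word count]
crowd_features = np.array([
    [3.2, 2.8, 110],
    [4.5, 4.1, 160],
    [2.1, 2.4,  70],
    [3.9, 3.6, 140],
])
expert_grades = np.array([2.5, 4.0, 1.5, 3.5])  # gold-standard scores from experts

# The ML step: learn to map cheap crowd-derived features to expert grades.
model = LinearRegression().fit(crowd_features, expert_grades)
print(model.predict([[3.5, 3.0, 120]]))
```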

Figure 1: Design of our Automated Spontaneous Speech grading system.

The technique helps scale spoken English testing, which means super scale spoken English training!

Great job Vinay and Nishant.

PS: Also check out our KDD paper on programming assessment if you haven’t already.

- Varun

A re-beginning : Welcome to AM Research!

We finally have a place to feature the work we began five years ago. Great effort by Tarun to get this up and running.

We thought this was important since education technology and assessments are going through a revolution. We wish to add our two teaspoons of wisdom (did I actually say that!) to the ongoing battle against conventional, non-scalable and unscientific ways of training, assessing and skill matching. We look forward to making this a means to collaborate with academia, industry and anyone who feels positively about education technology.

Sector/Roles Employability(%)
BUSINESS FUNCTIONS
Sales and Business Development 15.88
Operations/Customer Service 14.23
Clerical/Secretarial Roles 35.95
ANALYTICS AND COMMUNICATION
Analyst 3.03
Corporate Communication/Content Development 2.20
IT AND ITeS INDUSTRY
IT Services 12.97
ITes and BPO 21.37
IT Operations 15.66
ACCOUNTING ROLES
Accounting 2.55
TEACHING
Teaching 15.23

Table 1: Using standardized assessments of job suitability in a study of 60,000 Indian undergraduates, we find that a strikingly low proportion of them have the skills required by industry. All these students got detailed feedback from us on how to improve. The table shows the percentage of students that have the required skills for different job roles. (Refer: National Employability Report for Graduates, under Reports in Publications)

We think assessments will be key to democratizing learning and employment opportunity: they provide a benchmark for measuring the success of training interventions, give learners feedback that creates a ‘dialogue’ in the learning process and, most importantly, help link learning to tangible outcomes, in jobs and otherwise.

Let me state it simply: To scale learning and make employment markets meritocratic, we need to scale automated assessments. This is the space we dabble in!

If you are thirsty for data, refer to the table and figure in this post. They tell the story of the problem we are up against and trying to solve.

Figure 1: 2,500 undergraduates were surveyed to find their employment outcomes one year after completing their undergraduate education. We categorized their colleges into three tiers (tier 1-3) based on their overall performance on AMCAT, our employability test. We find that a candidate from a tier 3 college has 24% lower odds of getting a job and a 26% lower salary than a tier 1 student with the same merit (AMCAT scores). Similarly, a 1-point drop in college GPA (on a 10-point scale) decreases job odds by 16% and salary by 9%. Neither of these two parameters is a useful predictor of job success beyond AMCAT scores. This shows a clear bias in the employment ecosystem. (Refer ‘Who gets a job’ under Reports in Publications)
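
To unpack what “24% lower odds” means in probability terms, here is a small worked example; the baseline probability is assumed for illustration, and only the 24% figure comes from the study.

```python
# Odds are p / (1 - p); "24% lower odds" multiplies the odds by 0.76.
p_tier1 = 0.50                            # assumed job-offer probability for a tier 1 candidate
odds_tier1 = p_tier1 / (1 - p_tier1)      # = 1.00
odds_tier3 = odds_tier1 * (1 - 0.24)      # = 0.76, same AMCAT score, tier 3 college
p_tier3 = odds_tier3 / (1 + odds_tier3)   # ~ 0.43
print(round(p_tier3, 2))
```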

How do we solve it? Stay tuned to our subsequent blog posts…

Varun