An Automated Test of Motor Skills for Job Prediction and Feedback

We’re pleased to announce that our recent work on designing automated assessments to test motor skills (skills like finger dexterity and wrist dexterity) has been accepted for publication at the 9th International Conference on Educational Data Mining (EDM 2016).
Here are some highlights of our work –

  • The need: Motor skills are required in a large number of blue-collar jobs today. However, no automated means exist to test and provide feedback on these skills. We explore the use of touch-screen surfaces and tablet apps to measure them.
  • Gamified apps: We designed novel app-based gamified tests to measure one’s motor skills, specifically finger dexterity, manual dexterity and multi-limb coordination.
    [Screenshots of the gamified apps]
  • Validation on three jobs: We validated the scores from the apps on three different job roles – tailoring, plumbing and carpentry. The results make a strong case for using such automated, touch-screen-based tests in job selection and for providing automatic feedback that helps test-takers improve their skills!

If you’re interested in the work and would like to learn more, please feel free to write to research@aspiringminds.com.

The tree of program difficulty

What makes a programming problem hard?

Why are some programming problems solved by many more students than others? The varying numbers we saw got us thinking about how the human brain responds to programming problems. This was also an important question for us to answer when designing an assessment or seeking guidance on pedagogy. Understanding what makes a programming problem hard would let us put questions of a given difficulty into a programming assessment, so that neither everyone gets a perfect score nor everyone a zero, and would also help us create equivalent test forms for the test.

We took a stab at answering it empirically. We rated 23 programming problems on four parameters, each on a 2- or 3-point scale: how hard the data structures used in the implementation and returned from the target function are, how hard the algorithm is to conceive, how hard the algorithm is to implement, and how hard the edge cases are to handle. [See the attached PDF for more details on the metrics and the rubric followed.] There was some nuance in choosing these metrics. For instance, the algorithm for a problem can be hard to conceive if, say, it requires thinking through a dynamic programming approach, yet its implementation can be fairly easy, involving a couple of loops. On the other hand, sorting and then merging a bunch of arrays is simple to conceive, but implementing such a requirement can be a hassle.
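
To make the rubric concrete, here is one way such ratings could be tabulated. This is only a sketch: the problem names and scores below are invented for illustration, not our actual ratings.

```python
# Hypothetical encoding of the expert rubric: each problem is rated on four
# parameters on a 2- or 3-point scale. All values here are made up.
import pandas as pd

ratings = pd.DataFrame(
    [
        ("reverse_string",        1, 1, 1, 1),
        ("merge_k_arrays",        2, 1, 3, 2),
        ("longest_common_subseq", 2, 3, 2, 1),
    ],
    columns=["problem", "data_structure", "algorithm",
             "implementation", "edge_logic"],
).set_index("problem")
```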

For these problems, we had responses from roughly 8,000 CS undergraduates each. Each problem was delivered to a test-taker in a randomized test form. From this we pulled out how many people were able to write compilable code (this ranged from as low as 3.6% :( to as high as 74% across problems) and how many got all test cases right. We wanted to see how well we could predict these rates using our expert-driven difficulty metrics. (Our difficulty numbers are relative and can change with the sample; for an absolute analysis we could have predicted the IRT parameters of each question instead. Wanna try?)
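
Extracting those two rates from raw attempt data might look roughly like this; the file and column names are assumptions for the sketch, not our actual schema.

```python
import pandas as pd

# One row per (test-taker, problem) attempt; `compiled` and
# `all_tests_passed` are assumed to be 0/1 flags. Names are hypothetical.
responses = pd.read_csv("responses.csv")

per_problem = responses.groupby("problem").agg(
    pct_compilable=("compiled", "mean"),
    pct_pass_all=("all_tests_passed", "mean"),
)
print(per_problem.sort_values("pct_pass_all"))
```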

So, what came out? Yes, we can predict! Here is the matrix of baseline correlations; the values are negative because a harder problem has a lower pass rate.

Correlations                      Data Structure   Algorithm   Implementation   Edge-logic
Percent passing all test cases        -0.25          -0.42         -0.43          -0.05
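
These are plain Pearson correlations between each expert rating and the fraction of test-takers passing all test cases. With the hypothetical frames from the sketches above, pandas computes them in a couple of lines:

```python
# Join expert ratings with observed pass rates, then correlate each rating
# column with the pass rate. Harder problems should correlate negatively.
features = ["data_structure", "algorithm", "implementation", "edge_logic"]
data = ratings.join(per_problem, how="inner")
print(data[features].corrwith(data["pct_pass_all"]))
```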

[Figure: the regression tree of program difficulty]

We tried a first-cut analysis on our data by building a regression tree with some simple cross-validation. We got a really cool, intuitive tree and a prediction accuracy of 0.81! This is our ‘Tree of Program Difficulty’ ;-) . So what do we learn?
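
(For the curious, here is a minimal scikit-learn sketch of that step. The tree depth, leaf size and cross-validation scheme are illustrative choices, not necessarily the exact settings behind the 0.81 figure.)

```python
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeRegressor, export_text

features = ["data_structure", "algorithm", "implementation", "edge_logic"]
data = ratings.join(per_problem, how="inner")
X, y = data[features], data["pct_pass_all"]

# With only ~23 problems, keep the tree shallow so it stays interpretable.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=4, random_state=0)
scores = cross_val_score(tree, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print("leave-one-out MAE:", -scores.mean())

tree.fit(X, y)
print(export_text(tree, feature_names=features))
```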

The primary metric for predicting whether a good percentage of people solve a problem correctly is algorithmic difficulty. Problems for which the algorithm is easy to deduce (rated below 1.5) see a high pass rate, whereas those for which it is hard (above 2.5) see a very poor pass rate. For those that are moderately hard algorithmically (between 1.5 and 2.5), the next deciding criterion is the difficulty of implementing the algorithm. If it is easy to implement (rated below 2), a high pass rate is predicted. For problems moderately hard in both algorithm and implementation, the difficulty of the data structures used then predicts the pass rate: if an advanced data structure is used, the rate falls below 6%, and otherwise it sits around a moderate 11%.
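
Read off as code, the tree’s decision path boils down to a few nested thresholds. The leaf values below are the approximate rates described above, and the cut-off for an ‘advanced’ data structure rating is our assumption:

```python
def predicted_pass_rate(algorithm: float, implementation: float,
                        data_structure: float) -> str:
    """Mimic the 'Tree of Program Difficulty' described in the text."""
    if algorithm < 1.5:        # easy to conceive
        return "high pass rate"
    if algorithm > 2.5:        # hard to conceive
        return "very low pass rate"
    if implementation < 2:     # moderate algorithm, easy to implement
        return "high pass rate"
    if data_structure >= 2:    # advanced data structure (assumed cut-off)
        return "below 6% pass rate"
    return "around 11% pass rate"

print(predicted_pass_rate(2, 2, 1))  # -> "around 11% pass rate"
```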

So, what nodes do your problems fall on? Does it match our result? Tell us!

Thanks to Ramakant for the nifty work with the data!

-Shashank and Varun

March 2015