AM Research is a division of Aspiring Minds. Aspiring Minds aspires to build an assessment-driven job marketplace (an SAT/GRE for jobs) to drive accountability in higher education and meritocracy in labor markets. The products developed from our research have impacted more than two million lives, and the resulting data is a source of continuous new research.
A cocktail of assessment, HR, machine learning, data science, education and social impact, with two teaspoons of common sense stirred in.
A new year is on the horizon. For many people it is time to make resolutions about what to do in the coming year. This year, instead of focusing entirely on what you want to do, consider thinking more carefully about the things you want to avoid. Our recent research, also covered in the WSJ print edition, found that the secret to success is knowing what NOT to do and then not doing it! For instance, there were many things during this past year that experts advised against, such as NOT doing a Brexit, NOT electing ultra-nationalist voices and NOT demonetizing one's currency without a plan. Only time will tell whether these were actually bad decisions. We find that recognizing a bad decision and avoiding it is far more important for success than focusing on the best things to do.
Our evidence came from tracking job success. We found that the most successful salespersons, customer service agents and managers weren't those who chose the "best" course of action in a given situation, but rather those who knew what NOT to do in a situation and avoided those actions. For instance, in a situation where you are very late for a sales meeting, what you absolutely should not do is fail to apologize. There might be different ways to apologize or show regret, some better than others as deemed by experts. However, our work showed that choosing among these different ways of expressing regret was not predictive of one's success in a sales job. What mattered was the ability to identify what should not be done (i.e., expressing no regret). The wrong response may seem obvious in this situation, but it isn't obvious to everyone, and it is far from obvious in many other situations.
Our study was based on a methodology known as situational judgment testing, or SJT. We presented candidates with a series of specific situations and asked them to choose from among a number of possible ways to respond to each (see Figure 2 for an example). For each situation, they chose which of the options would be the best way to respond and which would be the worst. We then analyzed the data to see whether their choices predicted actual job performance (such as sales targets achieved) across a few different roles.
We expected that the people who were most successful in the workplace would be those able to identify what experts in the field said were the best ways to respond to each scenario. It turns out that was not the case. Instead, we found that the people who were most successful on the job were those who could correctly identify the worst answer for a larger number of situations: they knew what course of action to avoid in more scenarios. Specifically, the correlation between the ability to correctly identify the worst responses and job performance ranged from r = 0.28 to 0.33 and was statistically significant. By contrast, the correlation between the ability to correctly identify the best responses and performance ranged from r = 0.14 to 0.16 and was not statistically significant.
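As a rough illustration of this kind of analysis, here is a minimal sketch of how one might score SJT responses and correlate worst-option accuracy with job performance. All data, option numbers and values below are made up for illustration; the actual study used expert-keyed scenarios and real performance measures.

```python
import numpy as np

# Hypothetical data: 5 candidates x 4 scenarios.
# Each cell holds the option index a candidate picked as the "worst" response.
picked_worst = np.array([
    [2, 0, 1, 3],
    [2, 0, 1, 1],
    [2, 3, 1, 3],
    [0, 0, 2, 3],
    [1, 3, 2, 0],
])
expert_worst = np.array([2, 0, 1, 3])               # expert-keyed worst option per scenario
performance  = np.array([9.1, 8.4, 7.2, 5.5, 4.0])  # e.g. sales targets achieved

# Fraction of scenarios where each candidate identified the worst option correctly.
worst_accuracy = (picked_worst == expert_worst).mean(axis=1)

# Pearson correlation between worst-option accuracy and job performance.
r = np.corrcoef(worst_accuracy, performance)[0, 1]
print(round(r, 2))
```

The same computation with a `picked_best` matrix gives the best-option correlation; in the study, only the worst-option correlation reached significance.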
This work has important ramifications. The first and most immediate is that one can filter and hire better performers simply by checking whether candidates know how to avoid doing the wrong things, which are typically widely agreed upon, rather than trying to find people who pick the best answer. This should influence interview methodologies, case-based discussions and other forms of candidate evaluation.
Another significant contribution is to the field of situational judgment testing. Unlike IQ tests, situational judgment tests are traditionally hard to standardize. Different organizations, functions and cultures have different notions of the 'best' way to handle a situation. Thus, with the best-answer philosophy, one needs to build different tests and scoring mechanisms for each. Our work shows that the 'worst' answer, by contrast, is more universal and consistent across diverse environments. This suggests that the development of SJTs can be standardized across fields in a way that has not previously been possible.
Above and beyond all of this, the results have implications for our daily lives. Specifically, they suggest that maybe this year you ought to concentrate on what 'not to do' and train your mind to avoid those things! Our conjecture is that it will bring greater happiness to your life.
Make a start and list the things you will avoid doing in 2017. We have the top of our list: not writing boring blogs!
Results presented at Ubicomp 2016, Heidelberg, Germany
Knowledge and cognitive ability tests have been automated and administered on computers for more than three decades now. Pretty much all of you will have taken an SAT, GMAT or GRE. What about motor skills? They are needed for almost all vocational jobs; think of a plumber's manual dexterity in driving a screw. The best tests for these skills are still bulky boards, pegs and instruments.
To date, no one has really thought about exploiting the power of touch interfaces to develop such tests. Touch-screen devices are now ubiquitous in the form of mobile phones and tablets. We wanted to find out whether we could test people's skills, say in tailoring and machining, by having them do things on a tablet. We wrote creative apps that make them perform various actions on the screen: rotating their fingers, pinching, moving their elbows and shoulders to trace shapes, and so on.
We reported in our Ubicomp paper, presented last week, that the scores from these tests actually do predict the speed and accuracy of industrial tasks done by machinists, tailors and machine operators. In fact, they are better predictors than the bulky manual tests! Our test scores predict all the parameters of task performance we measured, with correlations ranging from 0.19 to 0.37, similar to what a logical ability test predicts for a knowledge worker. In comparison, the manual test scores correlate significantly with only 4 out of 7 task performance ratings, ranging from 0.19 to 0.33.
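To give a flavor of how a touch-screen app can turn finger movements into a motor-skill score, here is an illustrative sketch. The scoring formula, target shape and scale below are invented for illustration; the actual apps and features are described in the paper.

```python
import math

def tracing_score(touch_points, cx, cy, radius, duration_s):
    """Illustrative dexterity metric: mean radial deviation of a finger
    trace from a target circle centered at (cx, cy), combined with
    completion time. touch_points is a list of (x, y) screen samples."""
    deviations = [abs(math.hypot(x - cx, y - cy) - radius)
                  for x, y in touch_points]
    mean_dev = sum(deviations) / len(deviations)
    # Lower deviation and faster completion give a higher score (arbitrary scale).
    return 100.0 / (1.0 + mean_dev) / (1.0 + duration_s / 10.0)

# A near-perfect arc around (0, 0) with radius 5, versus a wobbly offset trace:
perfect = [(5 * math.cos(t / 10), 5 * math.sin(t / 10)) for t in range(16)]
wobbly  = [(x + 0.5, y - 0.3) for x, y in perfect]
print(tracing_score(perfect, 0, 0, 5, 4.0) > tracing_score(wobbly, 0, 0, 5, 4.0))  # True
```

A real test would also sample touch pressure and timing per point, and validate the derived scores against task performance, as done in the paper.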
This has great implications for the training and job matching of vocational workers. Using these apps, vocational job aspirants can test their motor skills from the comfort of their homes. They can get feedback and work on improving their skills. If they perform well, they can generate credentials such as "Motor skills certified for a tailor" and highlight them to employers. The same assessments can be used by industry to filter and recruit high-performing employees.
We are happy to present the world’s first validated motor skill test. There is so much more opportunity for further research – figuring out which scores correlate to performance in which task, creating a job to score map, creating more innovative apps and so on… Let us do it with the power of the touch interface.
Jobseekers wish to know what skills are required by the industry in their region and which skills pay the most. So do institutions of higher and vocational education. Unfortunately, little such information exists. It is considered hard to collate, and the old-school way of running surveys with corporations is time-consuming, expensive and mired in subjectivity.
We went after this problem the big data way: we scraped some 4 million job openings from the web for the US and automatically matched them to our taxonomy of 1,064 job roles and the 200+ skills those roles require. What did we get out of this? The US Skill Demand Map. For each state in the US, we know what percent of open jobs require a given skill and how much that skill pays. For instance, see the Heat Map below, which shows how much the software engineering skill pays in different US states. All of this is generated automatically and can be updated within minutes every month based on the current open jobs in the market!
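The aggregation step behind such a map is straightforward once postings are matched to the taxonomy. Below is a miniature sketch with made-up postings and numbers; the real pipeline runs over millions of scraped openings and our full role/skill taxonomy.

```python
from collections import Counter, defaultdict

# Hypothetical miniature input: each scraped posting has already been
# matched to a state, a set of taxonomy skills and an advertised salary.
postings = [
    {"state": "CA", "skills": {"software engineering", "sql"}, "salary": 120000},
    {"state": "CA", "skills": {"sql"},                         "salary": 90000},
    {"state": "NY", "skills": {"software engineering"},        "salary": 110000},
    {"state": "NY", "skills": {"sql"},                         "salary": 95000},
]

def skill_demand_map(postings):
    """For each (state, skill): percent of the state's open jobs that
    require the skill, and the mean advertised salary of those jobs."""
    totals = Counter(p["state"] for p in postings)
    salaries = defaultdict(list)          # (state, skill) -> list of salaries
    for p in postings:
        for skill in p["skills"]:
            salaries[(p["state"], skill)].append(p["salary"])
    return {
        key: {"pct_of_jobs": 100.0 * len(sal) / totals[key[0]],
              "mean_salary": sum(sal) / len(sal)}
        for key, sal in salaries.items()
    }

m = skill_demand_map(postings)
print(m[("CA", "software engineering")])  # {'pct_of_jobs': 50.0, 'mean_salary': 120000.0}
```

Because the computation is a single pass over postings, rerunning it monthly on freshly scraped openings is cheap, which is what makes the map easy to keep current.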
This map is interactive. A jobseeker can enter a key skill to find which states demand it the most and which pay the most for it. Additionally, s/he can scroll across the map to see the demand and compensation for that skill in each state. Alternatively, a candidate can enter a state and find its top-paying and highest-demand skills. Try it now!
Such analysis also helps us uncover policy trends (see our report). We found that agreeableness and finger dexterity are the most in-demand skills after Information Gathering and Synthesis, which has the highest demand. The map below shows which states have a higher percentage of jobs requiring agreeableness and which require finger dexterity more often.
On the other hand, we can find the states with the most demand and the highest pay for, say, analytical skills. New York pays the most for the skill, while Virginia has the highest percentage of jobs requiring it. (See Figure 3)
The U.S. Skill Demand Map fills a major information gap in the labor market. To our knowledge, this is the first effort to objectively present the demand for skills across US states to aid better decision-making by job seekers. It is based on objective data, it is quick, accurate and user-friendly.
Trying to understand what skill to gain or how best to utilize your skills? Use our interactive map now!
Machine learning has helped solve many grading challenges: spoken English, essay grading, program grading and math problem grading, to cite a few examples. However, there is a big impediment to using these methods in real-world settings: one needs to build an ML model for every question or prompt. In essay grading, for instance, a model designed to grade essays on 'Socialism' will be very different from one that grades essays on 'Theatre'. These models require a large number of expert-rated samples and a fresh model-building exercise each time. A practical real-world assessment uses hundreds of questions, which then translates into requiring hundreds of graders and hundreds of models. The approach doesn't scale, takes too much time and is often impractical.
In our KDD paper accepted today, we go a long way toward solving this challenge for grading computer programs. In KDD 2014, we presented the first machine learning approach to grading computer programs, but we had to build a model per problem. We have now invented a technique that needs no expert-graded samples for a new problem and no new model building! As soon as we have a few tens of 'good' codes for a problem (identified automatically using test-case coverage and static analysis), our newly invented question-agnostic models take charge. How does this help? With this technology, our machine-learning-based models can scale, in an automated way, to grade thousands of questions in multiple languages in a very short span of time. Within a couple of weeks of a new question being introduced into our question pool, the machine learning evaluation kicks in.
A couple of innovations led to this work, a semi-supervised approach to model building:
We can identify a subset of the 'good' set automatically. In the case of programs, the 'good set' (codes that receive a high grade) can be identified automatically using test cases. We exploit this to find other programs similar to these in a feature space we define. To get a sense of this, think of a distance measure from the programs identified as part of the 'good set'. Such a 'nearness' feature correlates with grades across questions, irrespective of whether it is a binary search problem or a tree traversal problem. Such features help us build generic models across questions.
We design a number of such features that are invariant to the question yet correlate with the expert grade. These features are inspired by the grammar we proposed in our earlier work. For instance, one feature measures how much an unseen program differs from the set of keywords present in the 'good set'; another captures differences in the kinds of computations the programs perform. Using such features, we learn generic models for a set of problems using supervised learning. These generic models work remarkably well on any new problem as soon as we have its set of good codes!
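To make the 'nearness' idea concrete, here is a toy sketch of one question-independent feature. It uses plain token-set overlap (Jaccard similarity) as a stand-in for the paper's richer grammar-derived features; the programs, token choices and thresholds are all illustrative.

```python
def keywords(program_text):
    """Crude token-set proxy for the grammar-based features in the paper."""
    return set(program_text.split())

def nearness_to_good_set(candidate, good_codes):
    """Question-agnostic feature: similarity of an unseen program to the
    automatically identified 'good set', taken as the maximum Jaccard
    overlap of token sets with any good program."""
    cand = keywords(candidate)
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    return max(jaccard(cand, keywords(g)) for g in good_codes)

# Toy 'good set' for a binary search problem, found via test cases:
good = ["while lo <= hi : mid = ( lo + hi ) // 2",
        "while low <= high : mid = ( low + high ) // 2"]
close = "while lo <= hi : mid = ( lo + hi ) // 2 return mid"
far   = "print ( 'hello' )"
print(nearness_to_good_set(close, good) > nearness_to_good_set(far, good))  # True
```

Because this feature is defined relative to whatever good set a question yields, the same supervised model can consume it for a binary search problem or a tree traversal problem without retraining.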
Check out this illustrative and easy-to-grasp video which demonstrates our latest innovation.
The table presents a snapshot of the results from the paper. As shown in the last two columns, the question-independent machine learning model (ML Model) consistently outperforms the test-suite-based baseline (Baseline). The claim of question-independence is corroborated by similarly encouraging results (depicted in the last three rows) on totally unseen questions, which were not used to train the model.
What does this all mean?
- We can really scale ML based grading of computer programs. We can continue to add new problems and the models will automatically start working within a couple of weeks.
This set of innovations applies to a number of other problems where a good set can be identified automatically. For instance, in circuit-solving problems, the solutions with the correct final answer could be considered a good set; the same applies to mathematics problems and automata design problems, domains where computer science techniques are mature enough to verify the functional correctness of a solution. Machine learning can then automatically grade other, unseen responses using this information.
Hoping to see more and more ML applied to grading!
Work done with Gursimran Singh and Shashank Srikant
We’re pleased to announce that our recent work on designing automated assessments to test motor skills (skills like finger dexterity and wrist dexterity) has been accepted for publication at the 9th International Conference on Educational Data Mining (EDM 2016).
Here are some highlights of our work –
- The need: Motor skills are required in a large number of blue collar jobs today. However, no automated means exist to test and provide feedback on these skills. We explore the use of touch-screen surfaces and tablet-apps to measure these skills.
- Gamified apps: We design novel app-based gamified tests to measure one's motor skills. We've designed apps to specifically measure finger dexterity, manual dexterity and multi-limb coordination.
- Validation on three jobs: We validated the scores from the apps on three different job roles – tailoring, plumbing and carpentry. The results we present make a strong case for using such automated, touch-screen based tests in job selection and to provide automatic feedback for test-takers to improve their skills!
If you’re interested in the work and would like to learn more, please feel free to write to firstname.lastname@example.org
- Plan what NOT to do in 2017!
- World’s first automated motor skill test – exploiting the power of touch tablets
- The first interactive US Skill Demand Map – A big data approach
- Scaling up machine learning to grade computer programs for 1000s of questions in multiple languages
- An Automated Test of Motor Skills for Job Prediction and Feedback
- assessment research
- Big Data
- Computer Program Assessments
- Data science
- decision trees
- hiring assessment
- hiring test
- item difficulty
- Kids learning
- Machine Learning
- motor skill test
- online hiring assessment
- online hiring test
- programming assessments
- programming test
- Test Cases
- testing research