Aspiring Minds and AI

Aspiring Minds has been doing machine learning, aka artificial intelligence, for 8 years now, long before it became vogue. We solved original problems with AI rather than copying the West, notching up several world firsts along the way. Here is a quick recap of Aspiring Minds’ tryst with AI, together with how AI evolved in India.

Phase I- “ML in a niche”- We hired two engineers to work on machine learning projects in 2010. After a year, they came to my room and asked about their future, since all their friends were doing software development. Hardly anyone knew about ML then.

- 2012: Launched SVAR: An AI-based spoken English evaluation product

Today, SVAR is used across the world, including in India, the Philippines, China and Latin America. It automatically generates scores on pronunciation and fluency from a person's speech samples.

Was this the first AI-based product from India that reached scale?

- 2012: Made one of our data sets public and organized a Machine Learning Competition

The competition had entries from India, Brazil, Belgium and Pakistan. See the leader-board and winners here. This was probably the first such competition by an Indian company and among the first few in the world.

- 2013: Launched AUTOMATA: World’s first machine learning based programming assessment

Automata is used by companies across the world – some examples include Wipro, Cognizant, Baidu, ZTE and one of the largest ecommerce giants in the USA. It is backed by three publications and several patents.

- 2014: Published our first ML paper on grading programming skills automatically at KDD

The paper has quickly garnered 28 citations. It was followed by several other papers on automatic grading of spoken English, motor skills and soft skills, published at KDD, ACL, Ubicomp, IJSE and other venues. We also organized the first workshop on AI + assessments at KDD with international collaborators.

Aspiring Minds remains one of the very few Indian companies that publish in ML conferences.

 

Phase II- Big Data Science Fascination- By now, everyone had started talking about Big Data and Data Science – a new name for machine learning! Most work in India was around data engineering rather than deriving intelligence from data. MOOCs on AI exploded, though most of those who took the courses did not really learn.

- 2015: Organized the world’s first Data Science Camp for Kids

We organized a very successful hands-on data science camp for kids of standards 5th to 8th. The kids worked through the full flow of supervised learning. Since then, this open-source project has been replicated in Illinois, Seattle, Pune and Bangalore. It also led to a paper on the pedagogy of teaching machine learning to kids.

- 2015: Launched ml-india.org, the first effort ever to audit India’s ML activity and a resource repository for all MLers

ML India brings all ML efforts in India under a single roof. Read more about how India fares in ML – the main motivation behind setting up this forum. The group has 1800+ members, has hosted 27 machine learning meetups, and lists 146 ML professionals, 55 companies, 28 data sets and 11 groups.

We also launched a new data set, AMEO, at iKDD. It attracted users from Harvard Kennedy School, Dublin Institute of Technology, New York University, TCS, Sapient and Flytxt.

- 2016: Launched the World’s first automated motor skills test

Aptitude tests have been automated for ages, but motor skills tests, a way to measure the skills of blue-collar workers, have not. We used the power of tablets and machine learning to build one, and showed that it is predictive of the job performance of blue-collar workers. Read here.

- 2016: US Skill Map and India Skill Map- Big Data Analysis

We automatically crawled the web to aggregate job postings from the USA and India and created the world's first interactive Skill Demand Map. Check it out here.

Phase III- National interest in AI, but with nascent understanding – Data Science had by now died a silent death, only to be replaced by Artificial Intelligence. From the PMO and the Finance Minister to Niti Aayog, everyone today is interested in AI. Yet we see few novel methods or applications of AI coming from India. We have little local expertise in AI – our research contribution is 1/15th that of the US and 1/8th that of China.

- 2017: Machines started understanding codes that do not compile!

Automata, our programming skill grading platform, started scoring uncompilable codes, a first in the world! Our algorithm could read the meaning of programs that a compiler couldn't, and generate feedback for many more students.

And the journey continues!!!

This has been possible by efforts of many in Aspiring Minds’ research team, most notably, Shashank Srikant, Rohit Takhar, Vishal Venugopal, Gursimran Singh, Bhanu Pratap Singh, Vinay Shashidhar, and Milan Sachdeva.

Phase IV: How can India lead in Artificial Intelligence? From doing research, we started thinking about research policy. My recent book, ‘Leading Science and Technology: India Next’ focuses primarily on the research ecosystem in India and highlights several areas where we should improve. It is supported by a white paper on how India should invigorate its Artificial Intelligence ecosystem. This is where we need to go next…

- Varun Aggarwal

The first interactive US Skill Demand Map- A big data approach

Jobseekers wish to know which skills the industry in their region requires and which skills pay the most. So do institutions of higher and vocational education. Unfortunately, this information is not readily available. It is considered hard to collate, and the old-school way of running surveys with corporations is time-consuming, expensive and mired in subjectivity.

We went after this problem the big data way – we scraped some 4 million job openings from the web for the US and automatically matched them to our taxonomy of 1064 job roles and the 200+ skills those roles require. What did we get out of this? The US Skill Demand Map – for each state in the US, we know what percent of open jobs require a given skill and how much that skill pays. For instance, see the heat map below; it shows how much the software engineering skill pays in different US states. All this is generated automatically and can be updated within minutes every month, based on the current open jobs in the market!
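For readers curious about the mechanics, here is a minimal sketch of the aggregation step, written with hypothetical field names and a two-entry toy taxonomy; the production pipeline that matches 4 million postings against 1064 roles is, of course, far more involved.

```python
from collections import defaultdict
from statistics import median

# Hypothetical toy taxonomy: job-title keywords -> skills.
# The real taxonomy maps 1064 job roles to 200+ skills.
ROLE_TO_SKILLS = {
    "software engineer": ["software engineering", "analytical skills"],
    "data analyst": ["information gathering and synthesis", "analytical skills"],
}

def skill_demand_map(postings):
    """postings: iterable of dicts such as
    {"title": "Senior Software Engineer", "state": "CA", "salary": 125000}."""
    jobs_per_state = defaultdict(int)      # state -> total open jobs
    skill_jobs = defaultdict(int)          # (state, skill) -> jobs requiring the skill
    skill_salaries = defaultdict(list)     # (state, skill) -> posted salaries

    for post in postings:
        state = post["state"]
        jobs_per_state[state] += 1
        title = post["title"].lower()
        for role, skills in ROLE_TO_SKILLS.items():
            if role in title:              # crude keyword match; the real matcher is learned
                for skill in skills:
                    skill_jobs[(state, skill)] += 1
                    if post.get("salary") is not None:
                        skill_salaries[(state, skill)].append(post["salary"])

    # Demand: percent of a state's open jobs that require the skill.
    demand = {k: 100.0 * n / jobs_per_state[k[0]] for k, n in skill_jobs.items()}
    # Compensation: median posted salary for jobs requiring the skill.
    pay = {k: median(v) for k, v in skill_salaries.items() if v}
    return demand, pay
```

With demand and pay keyed by (state, skill), rendering the heat map is just a matter of coloring each state by the value for the chosen skill.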

Figure 1: Compensation for software engineering skill

This map is interactive. A jobseeker can enter their key skill to find which states demand it the most and which states pay the most for it. They can also scroll across the map to see the demand and compensation for a given skill in each state. Alternatively, they can enter a state and find its top-paying and highest-demand skills. Try it now!

Such analysis also helps us uncover policy trends (see our report). We found that agreeableness and finger dexterity are the most in-demand skills after Information Gathering and Synthesis, which has the highest demand overall. The map below shows the states where a higher percentage of jobs require agreeableness and those where finger dexterity is required more often.

 

Figure 2: Skills in highest demand in each U.S. state (other than Information Gathering & Synthesis)

On the other hand, we can find the states with the highest demand and the highest pay for, say, analytical skills. New York pays the most for the skill, whereas Virginia has the highest percentage of jobs needing it. (See Figure 3)

Figure 3: Heat maps for demand and compensation for analytical skills

The U.S. Skill Demand Map fills a major information gap in the labor market. To our knowledge, this is the first effort to objectively present the demand for skills across US states to aid better decision-making by job seekers. It is based on objective data, and it is quick, accurate and user-friendly.

Trying to understand what skill to gain or how best to utilize your skills? Use our interactive map now!

-Varun

Scaling up machine learning to grade computer programs for 1000s of questions in multiple languages

Machine learning has helped solve many grading challenges – spoken English, essay grading, program grading and math problem grading, to cite a few examples. However, there is a big impediment to using these methods in real-world settings: one needs to build an ML model for every question/prompt. In essay grading, for instance, a model designed to grade essays on ‘Socialism’ will be very different from one that grades essays on ‘Theatre’. These models require a large number of expert-rated samples and a fresh model-building exercise each time. A real-world assessment uses hundreds of questions, which translates to requiring hundreds of graders and hundreds of models. The approach does not scale, takes too much time and is, most of the time, impractical.

In our KDD paper accepted today, we solve a good part of this challenge for grading computer programs. At KDD 2014, we had presented the first machine learning approach to grade computer programs, but we had to build a model per problem. We have now invented a technique where we need no expert-graded samples for a new problem and we don't need to build any new models! As soon as we have a few tens of ‘good’ codes for a problem (identified automatically using test-case coverage and static analysis), our newly invented question-agnostic models take charge. How does this help? With this technology, our machine learning based models can scale, in an automated way, to grade 1000s of questions in multiple languages in a really short span of time. Within a couple of weeks of a new question being introduced into our question pool, the machine learning evaluation kicks in.

A couple of innovations, together forming a semi-supervised approach to model building, led to this work:

  • We can identify a subset of the ‘good’ set automatically. In the case of programs, the ‘good set’, codes which get a high grade, can be identified automatically using test cases. We exploit this to find other programs similar to these in a feature space that we define. To get a sense of this, think of a distance measure from the programs identified as part of the ‘good set’. Such a ‘nearness’ feature correlates with grades across questions, irrespective of whether it is a binary search problem or a tree traversal problem. Such features help us build generic models across questions.

  • We design a number of such features which are invariant to the question and correlate with the expert grade. These features are inspired by the grammar we proposed in our earlier work. For instance, one feature is how different an unseen program is from the set of keywords present in the ‘good set’; another is how different the programs are in the kind of computations they perform. Using such features, we learn generic models over a set of problems with supervised learning. These generic models work remarkably well for any new problem as soon as we have its set of good codes! (A rough sketch of such features appears below.)
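To make the idea concrete, here is a rough, hypothetical sketch of question-agnostic features; the actual system uses features derived from the grammar in our earlier work rather than this simplistic keyword profile.

```python
import math
import re
from collections import Counter

# Hypothetical feature vocabulary; the real features come from our code-analysis grammar.
KEYWORDS = ["for", "while", "if", "else", "return", "int", "break", "def"]

def keyword_profile(code):
    """Crude stand-in for a program's feature vector: counts of a few keywords."""
    tokens = re.findall(r"[A-Za-z_]+", code)
    counts = Counter(t for t in tokens if t in KEYWORDS)
    return [counts[k] for k in KEYWORDS]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def question_agnostic_features(code, good_codes):
    """Features meant to correlate with grade for *any* question.
    good_codes: submissions already known to be correct (e.g. pass all test cases)."""
    good_profiles = [keyword_profile(c) for c in good_codes]
    good_centroid = centroid(good_profiles)
    profile = keyword_profile(code)
    good_kw = {k for p in good_profiles for k, n in zip(KEYWORDS, p) if n > 0}
    own_kw = {k for k, n in zip(KEYWORDS, profile) if n > 0}
    return [
        euclidean(profile, good_centroid),   # "nearness" to typical correct solutions
        len(good_kw - own_kw),               # keywords the good set uses but this code lacks
    ]
```

A single regression model trained on such features over many questions can then score submissions to a brand-new question as soon as its good set has been collected.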

Check out this illustrative and easy-to-grasp video which demonstrates our latest innovation.

 

The table presents a snapshot of the results from the paper. As shown in the last two columns, the question-independent machine learning model (ML Model) consistently outperforms the test-suite-based baseline (Baseline). The claim of question-independence is corroborated by the similar, encouraging results (the last three rows) obtained on totally unseen questions that were not used to train the model.

| Metric | Question Set          | #Questions | ML Model | Baseline |
|--------|-----------------------|------------|----------|----------|
| Correl | All questions         | 19         | 0.80     | 0.65     |
| Bias   | All questions         | 19         | 0.24     | 0.35     |
| MAE    | All questions         | 19         | 0.57     | 0.85     |
| Correl | Unseen questions only | 11         | 0.81     | 0.65     |
| Bias   | Unseen questions only | 11         | 0.27     | 0.31     |
| MAE    | Unseen questions only | 11         | 0.59     | 0.84     |
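For reference, here is our reading of the three metrics in the table, computed between machine and expert grades; the paper gives the precise definitions, and bias here is taken simply as the mean signed difference.

```python
import math

def evaluation_metrics(predicted, expert):
    """Pearson correlation, bias and mean absolute error between predicted and
    expert grades; our reading of the table's metrics, not the paper's exact code."""
    n = len(predicted)
    mean_p = sum(predicted) / n
    mean_e = sum(expert) / n
    cov = sum((p - mean_p) * (e - mean_e) for p, e in zip(predicted, expert)) / n
    var_p = sum((p - mean_p) ** 2 for p in predicted) / n
    var_e = sum((e - mean_e) ** 2 for e in expert) / n
    correl = cov / math.sqrt(var_p * var_e)
    bias = mean_p - mean_e                                     # mean signed difference
    mae = sum(abs(p - e) for p, e in zip(predicted, expert)) / n
    return correl, bias, mae
```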

What does this all mean?

  • We can truly scale ML-based grading of computer programs. We can keep adding new problems, and the models automatically start working within a couple of weeks.
  • This set of innovations applies to a number of other problems where a good set can be identified automatically. For instance, in circuit-solving problems, the responses with the correct final answer could form the good set; the same applies to mathematics problems or automata design problems – problems where computer science techniques are mature enough to verify the functional correctness of a solution. Machine learning can then automatically help grade other, unseen responses using this information.

Hoping to see more and more ML applied to grading!

Varun

Work done with Gursimran Singh and Shashank Srikant

An Automated Test of Motor Skills for Job Prediction and Feedback

We’re pleased to announce that our recent work on designing automated assessments to test motor skills (skills like finger dexterity and wrist dexterity) has been accepted for publication at the 9th International Conference on Educational Data Mining (EDM 2016).
Here are some highlights of our work –

  • The need: Motor skills are required in a large number of blue-collar jobs today. However, no automated means exist to test and provide feedback on these skills. We explore the use of touch-screen surfaces and tablet apps to measure them.
  • Gamified apps: We design novel app-based gamified tests to measure one's motor skills. We have designed apps to specifically test finger dexterity, manual dexterity and multi-limb coordination. (A toy scoring sketch follows this list.)
  • Validation on three jobs: We validated the scores from the apps on three different job roles – tailoring, plumbing and carpentry. The results we present make a strong case for using such automated, touch-screen based tests in job selection and to provide automatic feedback for test-takers to improve their skills!
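As a purely illustrative toy (the event format and scoring below are hypothetical, not the actual apps from the paper), a tablet can log touch events during a tap-the-target game and turn them into a dexterity score roughly like this:

```python
import math
import statistics

def finger_dexterity_score(taps, targets, radius=20.0):
    """Toy score for a tap-the-target game (hypothetical format, not the actual apps):
    taps and targets are equal-length lists of (t_seconds, x, y). A tap is a hit if it
    lands within `radius` pixels of its target; the score rewards accuracy and speed."""
    hits = sum(
        1 for (_, tx, ty), (_, gx, gy) in zip(taps, targets)
        if math.hypot(tx - gx, ty - gy) <= radius
    )
    accuracy = hits / len(targets)
    intervals = [b[0] - a[0] for a, b in zip(taps, taps[1:])]
    speed = 1.0 / statistics.median(intervals) if intervals else 0.0
    return accuracy * speed  # higher = faster and more accurate tapping
```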

If you’re interested in the work and would like to learn more, please feel free to write to research@aspiringminds.com.

Data Science For Kids Goes International

We successfully organized our first international data science workshop for kids at the University of Illinois as part of SAIL, a one-day event where attendees learn about life on campus by attending classes taught by current students.
The workshop aimed to introduce the idea of machine learning and data-driven techniques to middle-to-high-school kids. Participants went through a fun exercise covering the complete data science pipeline, from problem formulation to prediction and analysis.
Special mention and thanks to the mentors, Narender Gupta, Colin Graber and Raghav Batta, students at the university who helped us execute the academic and peripheral logistics of the workshop efficiently and made the experience engaging and interesting for the attendees.


To read about the mentors' experiences, click here.
Visit sail.cs.illinois.edu for more information on the event or the workshop.