The principal goal of machine learning is to automate common tasks with the help of computers. To do so, machine learning algorithms try to mimic human learning using a mathematical model. Tasks that are trivial for humans (e.g. recognising a face, lacing one’s shoes) are in fact sets of highly complex tasks for a machine.
The main idea behind machine learning is that things that happened or coincided frequently in the past should keep happening or coinciding in the future. For instance, if event B has always followed event A, you can expect event B to happen shortly after the next occurrence of event A.
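This intuition can be sketched in a few lines of code. The event names and the log below are made up for illustration: the program simply counts which event followed which in the past, then predicts the most frequent follower.

```python
from collections import Counter, defaultdict

# Hypothetical event log: the event observed at each successive time step.
history = ["A", "B", "A", "B", "A", "B", "A", "C"]

# Count how often each event was immediately followed by each other event.
followers = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    followers[current][nxt] += 1

def predict_next(event):
    """Predict the most frequent past follower of `event` (None if unseen)."""
    if not followers[event]:
        return None
    return followers[event].most_common(1)[0][0]

print(predict_next("A"))  # "B" followed "A" three times, "C" once → B
```

Real machine learning models are far more sophisticated, but the underlying bet is the same: past regularities are expected to hold in the future.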
Can machine learning beat the human mind?
Of course, humans have many advantages over machines, including asking for an expert’s opinion, transferring knowledge from past experiences, and interacting with the environment to gather more information. But our capacities have limits: we can be forgetful, or bring personal bias to a decision. To mimic human thinking, machine learning must be guided by human expertise and supervision; otherwise the model’s input could be totally random and the model would learn nothing.
In most cases, machine learning aims not to be better than a human, but to address an issue well enough, at scale. That is to say, machines can answer questions by processing lots of information faster than a human, possibly thousands or millions of times in a row. This volume is the main difference between traditional statistics (think good old linear regression via Excel) and modern machine learning. Both the “length” (number of observations) and the “width” (number of descriptors per observation) of the data have changed: modern algorithms exist only because we now have the computational power to handle such data.
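To make the “length and width” point concrete, here is a minimal sketch (with synthetic, made-up data): the same good old least-squares regression, but fitted on ten thousand observations with fifty descriptors each, a volume at which a hand-built spreadsheet workflow stops being practical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_features = 10_000, 50            # "length" and "width" of the data
X = rng.normal(size=(n_obs, n_features))  # synthetic descriptors
true_coefs = rng.normal(size=n_features)
y = X @ true_coefs + rng.normal(scale=0.1, size=n_obs)  # noisy target

# Ordinary least squares over the whole dataset in one call.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# With this much data, the estimated coefficients recover the true ones closely.
print(np.allclose(coefs, true_coefs, atol=0.05))
```

The statistical model is unchanged since the Excel era; what changed is the scale at which it can be estimated and applied.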
For example, a machine learning algorithm can classify a product based on its outward appearance in a tiny fraction of a second, whereas a human would need several seconds per product and could not cope with hundreds of potential labels. Deep learning expert (and Director of AI at Tesla) Andrej Karpathy raced against machine learning on such a task: image classification on the ImageNet dataset. The effort took him months, at roughly one minute per image at the beginning. He reached a 5.1% error rate, while GoogLeNet, the competing machine learning solution, reached a 6.67% error rate but answered each image almost instantly. The machine learning solution is thus more suitable for production and scale. Today’s solutions achieve a 3.46% error rate, making them both dramatically faster and more accurate than humans.
What are the differences among artificial intelligence, machine learning and deep learning?
Performing complex tasks automatically is the goal of artificial intelligence. This can be addressed using different methods, among which is machine learning. When dealing with abstract inputs like text, image, or audio, machine learning can be based on a specific family of algorithms called “deep learning” algorithms.
For instance, a smart vacuum cleaner that detects walls with sensors is AI without machine learning. A smart vacuum cleaner that learns when to clean and where to clean more frequently, based on its cleaning history, is AI that leverages machine learning (but without deep learning). Finally, if a smart vacuum cleaner is given a camera to detect your dog or your baby, the result is AI with deep learning.
Machine learning solutions can fall into 3 categories, which are addressed in a series of articles:
- Part 2: Supervised learning: predicting future behaviour based on past data;
- Part 3: Unsupervised learning: grouping similar observations together;
- Part 4: Reinforcement learning: interacting with the environment to achieve a precise goal.
Want to learn more? Stay tuned!