AI Terms You Need To Know
In 1956, Dartmouth professor John McCarthy organized a summer workshop, the Dartmouth Summer Research Project on Artificial Intelligence. It was about how computers could be made to think like the human brain.
McCarthy also did something else that was very important at the conference: He coined the phrase “artificial intelligence.”
Many of the participants were not particularly thrilled with the name. But then again, no one could come up with anything better!
Since then, the field of AI has spawned many other words and acronyms. Many of them were narrowly technical or simply faded away.
But of course, others have become major categories. Here’s a look:
Machine Learning (ML): This often gets confused with the phrase AI. But there are key differences.
Keep in mind that AI describes the broad category of intelligent machines and software. ML is one of its various subsets.
The roots of this category go back to the late 1950s. At the time, IBM developer Arthur Samuel created the first ML program, which allowed a person to play checkers against a computer. However, he did not use the typical if/then/else structure for this, believing it was too inflexible.
Instead, Samuel relied on processing data. This made it possible for the computer to learn how to play better checkers.
Samuel defined ML as a “field of study that gives computers the ability to learn without being explicitly programmed.”
And yes, the category has gone on to be one of the most important in AI.
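Samuel's definition can be illustrated with a minimal sketch. This is not his checkers program, just the same idea in miniature: instead of hand-coding a rule, the program derives one from example data (here, fitting a line by least squares).

```python
# A minimal sketch of "learning from data": rather than hand-coding a rule,
# we fit a line y = slope*x + intercept to observed points via least squares.
# This is a toy illustration, not Samuel's actual checkers program.

def fit_line(points):
    """Learn slope and intercept from (x, y) examples via least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# The data follows y = 2x + 1; the program recovers that rule from examples.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
slope, intercept = fit_line(data)
print(slope, intercept)  # prints 2.0 1.0
```

The point is that nothing in `fit_line` mentions the rule y = 2x + 1; the program was never explicitly given it.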
Deep Learning: This often gets confused with AI and ML. So what are the differences? Consider that deep learning is a subset of ML. It is also where much of the innovation in AI has happened during the past decade. The pioneering efforts of academics like Geoffrey Hinton, Yoshua Bengio, and Yann LeCun have been critical to this success.
They leveraged neural networks, which process data by passing it through layers of weighted connections. The “deep” part refers to the many hidden layers, which provide more sophisticated analysis. This is what has helped enable breakthrough applications like self-driving automobiles, advanced fraud detection, and virtual assistants like Siri and Alexa.
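To make “weights” and “hidden layers” concrete, here is a toy forward pass through a network with one hidden layer. The weights below are made up for illustration; in real deep learning, networks stack many such layers and learn the weights from data.

```python
import math

# A toy forward pass through a tiny neural network: 2 inputs -> 3 hidden
# neurons -> 1 output. Real networks have many hidden layers ("deep") and
# learn their weights; these weights are invented for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron outputs a weighted sum of its inputs, squashed by sigmoid."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]  # 3 hidden neurons
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.0, -1.0, 0.5]]                          # 1 output neuron
out_b = [0.2]

x = [0.8, 0.4]
hidden = layer(x, hidden_w, hidden_b)   # the "hidden layer" activations
output = layer(hidden, out_w, out_b)
print(output)  # a single value between 0 and 1
```

Adding more hidden layers between input and output is exactly what makes a network “deep.”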
Natural Language Processing (NLP): This leverages AI to understand and generate human language, such as by recognizing speech. This was one of the early applications of the technology, but progress was tough.
Yet during the past ten years, there have been major strides with NLP. A key to this has been the use of deep learning.
In the corporate world, NLP has been essential for the growth of chatbots. These systems have been a big help in improving customer support. According to Grand View Research, spending on chatbots is forecast to hit $1.25 billion by 2025, a compound annual growth rate (CAGR) of 24.3%.
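The simplest chatbots work by mapping a customer's message to an “intent” and replying from a canned list. Here is a minimal keyword-matching sketch of that idea; the intents and replies are made up, and production systems use deep learning models rather than keyword rules.

```python
# A minimal sketch of intent matching in a support chatbot. Real NLP systems
# use learned models; this keyword lookup just illustrates the pipeline:
# message -> intent -> reply. All intents and replies here are hypothetical.

INTENTS = {
    "refund": ["refund", "money back", "return"],
    "shipping": ["shipping", "delivery", "track"],
    "greeting": ["hello", "hi", "hey"],
}

REPLIES = {
    "refund": "I can help you start a refund.",
    "shipping": "Let me look up your shipping status.",
    "greeting": "Hello! How can I help you today?",
    None: "Sorry, I didn't understand. Could you rephrase?",
}

def classify(message):
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    return None

def reply(message):
    return REPLIES[classify(message)]

print(reply("Hi there"))               # greeting response
print(reply("Where is my delivery?"))  # shipping response
```

Deep-learning-based chatbots replace the `classify` step with a trained model, which handles phrasings no keyword list could anticipate.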
Explainability: Sophisticated AI systems like deep learning models are often called “black boxes.” This means it is far from clear why the models arrive at certain predictions and insights. Unfortunately, this can make it difficult for the technology to pass muster with regulators.
What can be done? Well, there is something known as explainability. This uses its own techniques to detect the root causes of an AA model's predictions. Keep in mind that this is an emerging category, but it is showing promise and is likely to be a strong growth area of the market.
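One common family of explainability techniques probes a black box from the outside: nudge each input feature and measure how much the output moves. Below is a minimal sketch of that idea, with a made-up scoring function standing in for the opaque model.

```python
# A minimal sketch of one explainability idea: perturb each input feature
# and see how much the model's output changes. The "black box" below is a
# hypothetical credit-scoring function; a real one would be an opaque model.

def black_box(features):
    """Stand-in for an opaque model: we only observe inputs and outputs."""
    income, debt, age = features
    return 0.6 * income - 0.9 * debt + 0.05 * age

def sensitivity(model, features, delta=1.0):
    """How much does the output move when each feature is nudged by delta?"""
    base = model(features)
    scores = []
    for i in range(len(features)):
        nudged = list(features)
        nudged[i] += delta
        scores.append(abs(model(nudged) - base))
    return scores

applicant = [50.0, 20.0, 35.0]  # income, debt, age (hypothetical units)
print(sensitivity(black_box, applicant))  # debt moves the score most
```

This only scratches the surface; production tools use more careful attribution methods, but the core move of asking “which inputs drive this prediction?” is the same.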
Generative Adversarial Network (GAN): This is one of the most recent innovations in AI. The developer of the GAN is Ian Goodfellow, who holds a PhD in machine learning.
The story of how he came up with the concept is certainly interesting. While in Montreal in 2014, Goodfellow talked with friends about how deep learning could create photos. When he went back home, he started coding it up. The idea was to have two AI models compete against each other, which could ultimately create realistic content.
The GAN got instant traction. Goodfellow went on to become one of the most recruited AI experts in the world, working at Google and Apple.
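The “two models competing” idea can be sketched with a toy numbers game. This is emphatically not a real GAN (those use two neural networks trained by gradient descent); it only illustrates the adversarial back-and-forth, with a generator that fakes numbers and a discriminator that draws a boundary between fake and real.

```python
import random

# A toy illustration of the adversarial idea behind GANs, not a real GAN.
# A "generator" produces numbers near g; a "discriminator" picks a boundary
# separating them from real data; each update responds to the other's move.

random.seed(0)
real_data = [random.gauss(5.0, 0.5) for _ in range(200)]
real_mean = sum(real_data) / len(real_data)

g = 0.0   # generator's parameter: where its fake samples cluster
lr = 0.3  # generator's step size

for step in range(100):
    # Discriminator: the best threshold between fakes (near g) and
    # real data (near real_mean) is the midpoint.
    boundary = (real_mean + g) / 2.0
    # Generator: shift its samples toward the "real" side of the boundary.
    g += lr * (boundary - g)

print(round(g, 2))  # ends up close to the real data's mean
```

As the generator improves, the discriminator's boundary keeps chasing it, until the fakes are statistically indistinguishable from the real samples, which is the equilibrium a true GAN also aims for.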
Supervised and Unsupervised Learning: Supervised learning is the traditional approach to AI. This involves using labeled data to come up with the models.
But this has some limitations. Note that most data is unlabeled, and the labeling process is often labor-intensive.
Yet there is another approach: unsupervised learning. With this, a model will not need any labels. Instead, it will find the inherent patterns in the data (say by identifying clusters). In some cases, the AI can be used to create the labels as well.
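Cluster-finding is easy to show concretely. Here is a minimal one-dimensional k-means sketch: given unlabeled numbers, it discovers two groups on its own, with no human-provided labels.

```python
# A minimal sketch of unsupervised learning: 1D k-means with k=2 finds two
# clusters in unlabeled numbers. No labels are ever provided by a human.

def kmeans_1d(data, iters=10):
    """Alternate between assigning points to the nearest center and
    recomputing each center as the mean of its assigned points."""
    centers = [min(data), max(data)]  # simple initialization
    for _ in range(iters):
        clusters = [[], []]
        for x in data:
            nearest = min((0, 1), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Unlabeled data with two obvious groups (around 2 and around 10).
data = [1.8, 2.1, 2.3, 1.9, 10.2, 9.8, 10.1, 9.9]
print(sorted(kmeans_1d(data)))  # two centers, near 2 and near 10
```

The discovered clusters could then serve as machine-generated labels for a downstream supervised model, which is the label-creation trick mentioned above.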
Reinforcement Learning: This is based on how people typically learn something – that is, by trial-and-error. Think of it as a reward-punishment system.
So far, reinforcement learning has mostly been focused on game playing. For example, DeepMind used it to beat a world champion at Go.
But in the coming years, reinforcement learning could become incredibly important for commercial applications, such as for robotics and NLP.
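The reward-punishment loop can be sketched with a tiny Q-learning example. This is a toy, not DeepMind's Go system: an agent on a five-cell line learns, purely by trial and error, that walking right reaches the reward at the far end.

```python
import random

# A minimal Q-learning sketch of reward-driven trial and error. The agent
# starts at cell 0 of a 5-cell line; the reward sits at cell 4. It learns a
# value Q[state][action] for stepping left or right, from experience alone.

random.seed(1)
N_STATES = 5            # cells 0..4; reward at cell 4
ACTIONS = [-1, 1]       # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes; otherwise exploit the best-known action.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: reward now plus discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # the agent learns to go right in every cell
```

No one tells the agent the layout of the world; the occasional reward (and the absence of it everywhere else) is the entire teaching signal, which is exactly the trial-and-error dynamic described above.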