#AI – What humans can teach Artificial Intelligence

Blends & Trends, 21 January 2020

Ah, Artificial Intelligence… Seen today as the zenith of technological evolution and algorithms, it’s something that can make you dream, or give you nightmares! AI has been back in the spotlight since 2016, and today it’s the hot topic on everyone’s lips – even if the expression itself is sometimes misused. In his 2018 report For A Meaningful Artificial Intelligence, French representative and mathematician Cédric Villani writes that for AI, the “ambitious objective is to understand and reproduce human cognition.”

The term Artificial Intelligence would lead one to believe that a computer could, itself, be intelligent. But is this really true? Can the complexity of a human brain be reproduced in a machine – even the most sophisticated one?

Intelligence: what does it mean?

To answer these questions, we must first define intelligence itself. We sometimes think of intelligence as just memory, but this is, of course, reductive. If it were true, a filing cabinet of archives could be intelligent! Merriam-Webster defines intelligence as “the ability to learn or understand or to deal with new or trying situations”. So intelligence is the quality by which an individual adapts and reacts appropriately to his or her environment.

However, these decisions themselves are not intelligent unless they follow demonstrable logical reasoning, rather than being made at random. Individuals call upon their practical or theoretical experience (i.e. external experience gained through an individual’s or group’s learning), in which memory is an important component. And computers have definitely got memory. Thanks to machine learning, computers are also adept at making decisions based on millions of past experiences. (If you’d like to learn more about machine learning, check out our series of articles on the subject!) Like their human counterparts, computers use machine learning to make contextualized decisions based on historical data and a mathematical model. In this way, we could say that AI is indeed intelligent, as it possesses a large memory and is able to adapt to its environment.
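To picture what “deciding from memory” can look like in practice, here is a tiny Python sketch in the spirit of a nearest-neighbour model, with invented data about whether to take an umbrella: the program simply looks up its most similar past experiences and reuses their outcome.

```python
import math
from collections import Counter

# Each past experience: (features, observed outcome).
# The features are invented: (chance of rain in %, sky is cloudy? 1/0).
past_experiences = [
    ((20.0, 1), "take umbrella"),
    ((80.0, 1), "take umbrella"),
    ((60.0, 1), "take umbrella"),
    ((10.0, 0), "leave umbrella"),
    ((5.0,  0), "leave umbrella"),
]

def decide(new_situation, k=3):
    """Reuse the majority outcome of the k most similar past experiences."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(past_experiences,
                     key=lambda exp: distance(exp[0], new_situation))[:k]
    outcomes = Counter(outcome for _, outcome in nearest)
    return outcomes.most_common(1)[0][0]

print(decide((70.0, 1)))   # -> take umbrella
```

Real machine learning replaces this naive lookup with a fitted mathematical model, but the principle stays the same: past experiences stored in memory drive the next decision.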

Artificial intelligence, at its best… 

First of all, remember that the machine learning algorithms at the heart of AI are engineered by humans to respond to a specific task. To do this, we give the computer a multitude of examples in which each possible action has been completed many times over, and we then specify all qualifying or limiting criteria. The computer’s role is then to find a link among the criteria, the actions, and the result in order to make a choice.
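Here is a minimal sketch of that “examples + criteria” setup in Python (using the scikit-learn library, purely as one possible illustration) with a handful of invented fruit measurements: the model’s only job is to find the link between the criteria and the outcome.

```python
# The criteria (features) and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Criteria for each past example: [weight in grams, skin is smooth? 1/0]
criteria = [
    [150, 1], [170, 1], [160, 1],   # examples labelled "apple"
    [120, 0], [110, 0], [130, 0],   # examples labelled "orange"
]
outcomes = ["apple", "apple", "apple", "orange", "orange", "orange"]

# The model searches for the link between criteria and outcome.
model = DecisionTreeClassifier()
model.fit(criteria, outcomes)

# Once trained, it can make a choice for a case it has never seen.
print(model.predict([[155, 1]]))   # -> ['apple']
```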

Like a child, AI learns first in order to act independently later. For instance, in this article we gave the example of a child who learns to tell different fruits apart. We can take it further, though: what if the child learned to distinguish between edible and non-edible fruits? This knowledge, passed down through generations, is reinforced by the personal experience of each new generation.

On Netflix, for example, each user profile is unique: the platform uses an algorithm to offer a personalized experience, suggesting both new and already-seen content to the user. Human intervention certainly plays an important role, but once the algorithm’s learning is complete, that intervention is no longer needed and the computer can act… alone! The idea is therefore to automate tasks that are time-consuming and complex for humans.
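Netflix keeps the details of its recommendation system to itself, so the following is only a toy illustration of the general idea, written in Python with invented viewing histories: suggest what the users most similar to you have enjoyed.

```python
# A toy "people who watched what you watched also liked..." recommender.
# This is NOT Netflix's actual algorithm; the profiles below are invented.
histories = {
    "alice":  {"Dark", "Stranger Things", "Black Mirror"},
    "bruno":  {"Dark", "Black Mirror", "The Crown"},
    "chiara": {"The Crown", "Bridgerton"},
}

def recommend(user, histories, n=2):
    """Suggest up to n titles liked by the users most similar to `user`."""
    seen = histories[user]

    def similarity(other):
        # Jaccard similarity: shared titles / all titles between the two users
        union = seen | histories[other]
        return len(seen & histories[other]) / len(union) if union else 0.0

    others = sorted((o for o in histories if o != user),
                    key=similarity, reverse=True)
    suggestions = []
    for other in others:
        for title in histories[other] - seen:
            if title not in suggestions:
                suggestions.append(title)
    return suggestions[:n]

print(recommend("alice", histories))   # -> ['The Crown', 'Bridgerton']
```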

Decisions made automatically by computers are actually all around us, across many domains: entertainment and leisure (AlphaGo, AI-composed music), self-driving cars, the legal and healthcare systems…

And, of course, it’s not (always) about getting rid of human intervention. On the contrary, AI can often serve human intelligence, “augmenting” it. In Estonia, AI could be used to clear the backlog of small claims in court. Doctors could also use AI to detect anomalies in patient x-rays. Small steps for AI, but a giant leap for mankind!

And at its worst?

Though Artificial Intelligence might have an impressive list of greatest hits, it also has its fair share of serious failures. Take, for example, the scandal that ensued when the Google Photos algorithm identified African Americans as gorillas, or Microsoft’s infamous accidentally-racist chatbot, deleted just 24 hours after launching. Are these mishaps due to a lack of intelligence in the computers? Or a cruel lack of ethics?

In reality, several factors contribute to an error-prone algorithmic model:

  • Poor-quality data (in the Google case, for example, the training samples were not representative of the full population), or biased data (it is tough to train an algorithm to select job candidates in a non-sexist manner if, historically, the candidate selection process has been sexist)
  • Essential criteria being forgotten, whether through omitting key subject-matter information or through a lack of business savvy
  • An error in developing the model itself

These examples show that such failures are due to the humans behind the algorithms – just as a parent is considered legally responsible for their minor child! Once gender bias is acknowledged, even though it is embedded in the profession itself (note that only 22% of AI professionals are female), it becomes possible to progressively repair the algorithms and make them more intelligent. To tackle ethical problems, researchers are trying to work relevant ethical considerations into their algorithms. To do this, an algorithm’s performance is measured both globally, on historical data, and within smaller data categories (e.g. male and female groups for facial recognition). Only an algorithm that guarantees equal precision across all categories will be considered relevant.
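To see what “measuring performance globally and per category” can look like, here is a minimal Python sketch with fabricated predictions (not real facial-recognition results): accuracy is computed for the whole dataset and then separately for each group, so that any gap between groups becomes visible.

```python
# Minimal sketch of per-group evaluation. The group labels, true labels
# and predictions below are fabricated for illustration only.
records = [
    # (group, true label, predicted label)
    ("female", "match", "match"),
    ("female", "match", "no match"),
    ("female", "no match", "no match"),
    ("male",   "match", "match"),
    ("male",   "no match", "no match"),
    ("male",   "match", "match"),
]

def accuracy(rows):
    """Share of rows where the prediction matches the true label."""
    return sum(1 for _, truth, pred in rows if truth == pred) / len(rows)

print("overall:", round(accuracy(records), 2))        # 0.83
for group in ("female", "male"):
    rows = [r for r in records if r[0] == group]
    print(group, ":", round(accuracy(rows), 2))       # female 0.67, male 1.0
```

A model whose overall score looks respectable can still hide a large gap between categories; this kind of per-category check is precisely what exposes it.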

It is an illusion to imagine that AI can be unbiased, as if it could emanate from a state of nature. Because it comes from humans, it is immersed from the very beginning in a state of culture: the culture of its creators.

Caroline Lair, Co-founder of Women in AI

Artificial Education rather than Artificial Intelligence

Without human intervention, a computer is limited in its capacity to generalize and to manage biases, whether those biases already exist in the initial data or are introduced by humans during the algorithm’s development. In much the same way, a human in isolation cannot benefit from others’ experiences or the pooling of ideas, and is unable to generalize his or her knowledge or rationalize his or her decisions.

Furthermore, Artificial Intelligence is also limited by legal restrictions – for the better! GDPR in Europe, for instance, restricts the type of information that algorithms can receive and process by limiting the lifespan of cookies and requiring explicit consent from users. These measures directly impact the memory of an algorithm, and thereby its intelligence.

Today, data collection (online or otherwise) is highly regulated. The days of storing unlimited data without explicit consent are long gone. And data that is never saved can never inform our AI – unlike an in-store salesperson, who expands his or her personal experience and client knowledge over time.

Ultimately, it is human interaction and knowledge sharing that make computers intelligent. Therefore, we can say that AI as such does not exist, because the interaction takes place in one direction only: from humans to the machine. Far from possessing any sort of moral conscience, AI is only in its infancy. It would be more accurate to talk about Artificial Education than Artificial Intelligence, as it is the human that will educate the machine, though the inverse is not (yet) true.

Would you like another cup of tea?