Teaching machines to learn

Your guide to what may be our future robot overlords

Graphic by Aldo Rios.

Most introductory psychology students are familiar with the central role of computers in the history of cognitive psychology, the study of human mental processing. Since the beginnings of computer science, engineers have worked to equip machines with human abilities. Computers even inherited their name from a human profession, foreshadowing how profoundly they would reduce the need for human labour.

The first computers were people who earned their living as human calculators. From 1935 to 1970, female computers made essential contributions to science by processing data for what would later become NASA; the highly anticipated upcoming movie Hidden Figures is based on a few of these women.

Times changed, and thanks to the automation of mathematical operations by (non-human) computers, computer scientists have become more ambitious in their efforts to artificially emulate mental functions. The idea seems like a fantasy straight out of a Philip K. Dick novel: how do we program computers to perform what were once seen as solely biological, and often uniquely human, functions?

Humans have long held the advantage in comprehending sensory inputs such as speech and visual information, but scientists are now developing computers that can match or exceed human perception at tasks like recognizing faces, identifying sounds, and processing written language.

The first step to designing biologically inspired computers is a basic understanding of the brain’s structure. Mental activity and behaviour result from electrical communication between neurons. As neuroscience research summarized by the U.S. National Institutes of Health explains, a network of neurons, or neural network, can strengthen, terminate, or form entirely new intercellular connections, enabling our brains to modify themselves as we acquire new memories.

This response to experience is, at a very basic level, what enables humans to develop highly refined pattern recognition skills. It is also where the ambitions of the biologically inspired computer hit a wall. No matter how many times they run, traditional computer programs cannot improve themselves without revisions from human programmers. Unlike fixed software, human students can learn without a teacher directly rewiring their neural connections. This is why machines need an artificial neural network of their own: a structure, built in software, meant to simulate the ability to learn from experience.

The CIRP Encyclopedia of Production Engineering explains how artificial neural networks enable machine learning, the ability of software to self-correct the error between actual and desired outputs, much as biological neural networks do. The encyclopedia adds, “artificial neural network algorithms attempt to abstract this complexity and focus on what may hypothetically matter most from an information processing point of view.” Capturing mathematically how neural networks change with experience makes possible what Microsoft defines as the three kinds of machine learning: supervised, unsupervised, and reinforcement.
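To make the error-correction idea concrete, here is a minimal sketch of a single artificial neuron adjusting its connection weights. The training data and numbers are invented for illustration, not drawn from the encyclopedia.

```python
# A minimal sketch of error-driven learning in one artificial neuron.
# All names and values here are illustrative, not from any cited source.

# Training data: inputs paired with the desired output (an AND gate).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # connection strengths, adjusted with experience
bias = 0.0
learning_rate = 0.1

for epoch in range(20):                     # repeated exposure to the data
    for (x1, x2), desired in examples:
        actual = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = desired - actual            # gap between desired and actual
        # Nudge each weight in the direction that shrinks the error.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)  # after training, the neuron computes AND correctly
```

Each pass shrinks the gap between actual and desired outputs, which is exactly the self-correction described above, scaled down to a single neuron.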

In supervised learning, a machine is given labelled examples and infers the relationship between inputs and their desired outputs. In facial recognition, this type of learning allows a machine to recognize the same person’s face across many different images. We easily recognize our friends in photographs despite changes to their clothing, hairstyle, or facial expression, but this ability is highly complex and difficult to simulate with computers.
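As a rough illustration of learning from labelled examples (the “face” measurements and names below are made up, and real face recognition is far more elaborate), a program can classify a new input by comparing it to what it has already seen:

```python
# A toy sketch of supervised learning: labelled examples teach the
# program a rule it can apply to inputs it has never seen before.

# Each "face" is reduced to two invented measurements, with a name as label.
labelled = [
    ((0.2, 0.9), "Alice"), ((0.3, 0.8), "Alice"),
    ((0.9, 0.1), "Bob"),   ((0.8, 0.2), "Bob"),
]

def classify(face):
    """Label a new face by its closest labelled example (1-nearest-neighbour)."""
    def distance(example):
        (x, y), _ = example
        return (x - face[0]) ** 2 + (y - face[1]) ** 2
    return min(labelled, key=distance)[1]

print(classify((0.25, 0.85)))  # -> "Alice", despite never seeing this exact input
```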

Unsupervised learning requires machines to find structure in unlabelled data. If you imagine throwing a deck of cards onto the floor and sorting them into sensible categories without being told what the categories are, that is the kind of task unsupervised machine learning enables computers to perform.
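A minimal sketch of that card-sorting idea, using a bare-bones version of the classic k-means clustering algorithm with invented values:

```python
# A sketch of unsupervised learning: the program is given unlabelled values
# (think of cards scattered on the floor) and must invent its own groupings.

values = [2, 3, 4, 25, 26, 27, 50, 51, 52]   # no labels, no hints
centres = [0.0, 30.0, 60.0]                   # three initial guesses

for _ in range(10):
    # Assign each value to its nearest centre...
    groups = [[] for _ in centres]
    for v in values:
        nearest = min(range(len(centres)), key=lambda i: abs(v - centres[i]))
        groups[nearest].append(v)
    # ...then move each centre to the middle of its group.
    centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]

print(groups)  # the values sort themselves into three sensible piles
```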

Reinforcement learning works much like our expectation of rewards for good behaviour and punishments for bad decisions: it encourages machines to repeat desired responses to inputs and discourages inappropriate ones. Each of these three types of machine learning can be paired with a wide variety of neural network designs, depending on the specific cognitive ability a programmer wants to simulate.
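The sketch below shows the reward-driven idea in miniature, with invented payout numbers: the program tries actions, receives rewards, and gradually favours whatever pays off.

```python
# A sketch of reinforcement learning: actions that earn rewards become
# more likely to be repeated. The "slot machine" payouts are invented.

import random

payouts = {"left": 0.2, "right": 0.8}   # hidden from the learner
value = {"left": 0.0, "right": 0.0}     # the learner's running estimates

for step in range(1000):
    # Mostly exploit the best-known action, but sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(payouts))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < payouts[action] else 0.0
    # Nudge the estimate toward the reward just received.
    value[action] += 0.05 * (reward - value[action])

print(value)  # "right" ends up valued higher, so it is chosen more often
```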

Such innovations have the potential to improve current applications of artificial intelligence. Anyone who has talked with Siri, looked at Google Street View, or checked lists of recommended products on Amazon already has personal experience with machine learning algorithms.

Although she can be frustrating at times, Siri does a decent job of recognizing human speech. Google Street View can now read house numbers, which is useful for pairing addresses with images of buildings. Amazon uses past shopping habits and product ratings to infer a user’s preferences and suggest related items. These are fairly routine mental abilities for humans, but computers are only beginning to realize their potential to learn from experience and apply that knowledge in helpful ways.
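As a toy version of the recommendation idea (the users, items, and ratings below are invented, and Amazon’s actual system is far more sophisticated), a program can compare one user’s ratings against others’ and borrow suggestions from the most similar user:

```python
# A sketch of "customers like you also bought": find the user whose
# ratings most resemble yours, then suggest items they rated but you
# have not. All ratings here are made up for illustration.

ratings = {
    "you":    {"book": 5, "lamp": 1},
    "user_a": {"book": 5, "lamp": 2, "kettle": 4},
    "user_b": {"book": 1, "lamp": 5, "socks": 3},
}

def similarity(a, b):
    """Agreement on items both users rated (higher means more alike)."""
    shared = set(a) & set(b)
    return -sum((a[i] - b[i]) ** 2 for i in shared) if shared else float("-inf")

others = {name: r for name, r in ratings.items() if name != "you"}
closest = max(others, key=lambda n: similarity(ratings["you"], others[n]))
suggestions = set(others[closest]) - set(ratings["you"])
print(closest, suggestions)  # user_a and {"kettle"}: a plausible suggestion
```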

The same basic principles behind these technologies are enabling some people to entrust machine learning with more critical jobs. For example, IBM’s Watson is an artificial intelligence system that earned credibility by defeating human champions on the trivia game show Jeopardy! Watson is now helping to identify treatments for cancer patients by processing the contents of thousands of medical studies, though IBM is quick to reassure everyone that the medical professionals who work with Watson will still be doing “most of the thinking.”

While Watson can use its massive data-processing capacity to suggest promising courses of treatment, that ability cannot replace the expertise of a human cancer specialist. Whereas systems like Watson are designed only to play a supportive role in medical treatment and other human decision-making, other applications of artificial intelligence do threaten to make human labour obsolete in some contexts, and the effects on workers are not always positive. If Google’s self-driving cars become practical for widespread use, professional truck and taxi drivers may be the next victims of automation.

How can human drivers compete with computing systems’ natural advantage in processing speed, or with an automated system that never needs rest? It is clear that technology will continue to bring radical changes to the way we all live.

Although new technologies are designed to help, big changes never occur without some negative results. Addressing developments in machine learning requires us to consider not only the benefits, but also the potential social costs of outsourcing complex tasks and higher mental functions to our digital helpers.