The future of artificial intelligence

Artificial intelligence. For most of us, myself included, this pair of words stirs up images of pale-faced androids on starships and disobedient computers named HAL. John Anderson, an artificial intelligence (AI) researcher in the University of Manitoba’s department of computer science, would like to change that perception.

“AI is about making computers more intelligent and more adaptable to everyday situations,” he said. “It’s about things like speech recognition on your cell phone, and context-dependent word recognition when you’re talking to your bank on the phone.” Anderson explained that AI is all around us, but our imaginations snap to images of Terminators and Cylons because when real AI is doing its job, you don’t even realize it’s there.

According to Anderson, modern AI “is condemned to only be noticed when something goes wrong; that’s why so many people only think of AI in terms of robots.”

If AI is a part of our daily lives right now, then surely it will play a more visible role down the road, right? Not necessarily. Anderson said that as AI advances it will stay largely out of sight: as it is integrated more and more into our daily lives, we may notice things like more accurate search engine results and better book recommendations from websites like Amazon.ca, but whether you equate those things with futuristic AI is another story.

For you “robotophiles,” who are no doubt lamenting the fact that the closest your future self may get to AI on a daily basis is dealing with a clever automated phone menu, there is hope. “You will see more AI in robots too,” Anderson said. “However, turning AI into a consumer product has all sorts of layers that a scientist might not think of, like the issue of liability.” Consider this: if your car’s AI parallel parks for you (a feature already available on some Mercedes and Lexus vehicles) and crashes into another car, who is to blame? As lame as it sounds, issues like these are keeping AI out of a lot of consumer products.

The American military, however, is actively developing AI-enabled, weapon-carrying robots. But while the concern over lawsuits is somewhat diminished with military robots, there are countless ethical questions associated with giving a computer the ability to make life-and-death decisions.

Ethical issues aside, where you and I may see only the downside of strapping guns onto an intelligent robot, Anderson sees deeper philosophical implications. When you give an AI the decision to pull a trigger and kill a human being, Anderson says, “what you are really doing is quantifying the rules of engagement.” He sees this as only a small departure from what we ask our own soldiers to do on the battlefield every day, and adds that computerizing those kinds of decisions may force us to better understand those issues and the potential conflicts that might exist within a given set of engagement rules.

Anderson, a devout AI optimist, also sees another, more practical upside to military robots: “If you take the gun away, you should see that if we can trust it (on a battlefield) then we should be able to trust it in a lot of other places.” Or, put another way, AI that’s good enough for the battlefield should be good enough for your Roomba.