TECH TALK: A.I. on the Road: Who’s Driving?

ERIC’S TECH TALK

by Eric Austin
Computer Technical Advisor

In an automobile accident, in the moments before your car strikes an obstacle, in the seconds before glass shatters and steel crumples, we usually don't have time to think, and we are often haunted by self-recriminations in the days and weeks afterward. Why didn't I turn? Why didn't I hit the brakes sooner? Why'd I bother even getting out of bed this morning?

Driverless cars aim to solve this problem by replacing the human brain with a silicon chip. Computers think faster than we do, and they are never flustered (unless that spinning beach ball is a digital sign of embarrassment). But putting control of an automobile in the hands of a computer brings with it a new set of moral dilemmas.

Unlike your personal computer, a driverless car is a thinking machine. It must be capable of making moment-to-moment decisions that could have real life-or-death consequences.

Consider a simple moral quandary. Here's the setup: It's summer and you are driving down Lakeview Drive, headed toward the south end of China Lake. You pass China Elementary School. School is out of session, so you don't slow down, but you've forgotten about Friends Camp, just beyond the curve, where groups of children often cross the road on their way to the lake on the other side. You round the curve and there they are, a whole gang of them, dressed in swimsuits and clutching beach towels. You hit the brakes and are shocked when they don't respond. You now have seven-tenths of a second to decide: do you drive straight ahead and strike the crossing kids, or swerve and dump your car in the ditch?

Not a difficult decision, you might think. Most of us would prefer a filthy fender to a bloody bumper. But what if instead of a ditch, it was a tree, and the collision killed everyone in the car? Do you still swerve to avoid the kids in the crosswalk and embrace an evergreen instead? What if your own children were in the car with you? Would you make the same decision?

If this little thought exercise made you queasy, that’s okay. Imagine how the programmers building the artificial intelligence (A.I.) that dictates the behavior of driverless cars must feel.
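What does that look like in practice? Below is a deliberately oversimplified sketch, written in Python, of how a car's planning software might weigh its options. Nothing here comes from any real car's code; the outcomes, names, and weights are all invented. But that is exactly the point: for the car to choose at all, somebody has to invent them.

# A purely hypothetical sketch; no real car uses these names or numbers.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    pedestrians_at_risk: int
    occupants_at_risk: int
    property_damage: float  # rough dollar estimate

def cost(outcome: Outcome) -> float:
    # The uncomfortable part: these weights ARE the moral choice.
    PEDESTRIAN_WEIGHT = 1_000_000
    OCCUPANT_WEIGHT = 1_000_000
    return (PEDESTRIAN_WEIGHT * outcome.pedestrians_at_risk
            + OCCUPANT_WEIGHT * outcome.occupants_at_risk
            + outcome.property_damage)

options = [
    Outcome("brake hard and continue straight",
            pedestrians_at_risk=5, occupants_at_risk=0,
            property_damage=2_000),
    Outcome("swerve into the ditch",
            pedestrians_at_risk=0, occupants_at_risk=1,
            property_damage=15_000),
]

# The planner simply picks the option with the lowest cost.
best = min(options, key=cost)
print(best.description)  # prints "swerve into the ditch"

Change one weight and the car makes the opposite choice. That is the programmers' burden in a nutshell.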

There may be a million-to-one chance of this happening to you, but multiply those odds by the 253 million cars on American roads and it will happen to someone; a one-in-a-million event, spread across that many cars, happens about 250 times. And in the near future, that someone might be a driverless car. Will the car's A.I. remember where kids often cross? How will it choose one life over another in a zero-sum game?

When we are thrust into these life-or-death situations, we rarely have time to think; we react mostly on instinct. A driverless car has no instinct, but it can process millions of decisions a second. And it faces contradictory expectations: it must be predictable, yet capable of reacting to the unexpected.

That is why driverless cars were not possible before recent advances in artificial intelligence and computing power. Instead of the traditional, linear conditional programming of the past (e.g., "if this, then that"), driverless cars rely on a branch of computer science called "machine learning," which mimics more human-like abilities, such as pattern recognition, and lets a program adjust its own decision-making based on past results in order to become more accurate in the future. Basically, the developers put the A.I. through a series of tests, and based on its successes and failures in those tests, the A.I. updates its algorithms to improve its success rate.
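To see the difference, consider the following toy example, again in Python. It is not how a real self-driving system is built (real systems use vastly more sophisticated models), and every number in it is made up, but it captures the basic idea: a hand-written rule never changes, while a learning program nudges its own numbers after every test until it starts getting the right answers.

# A toy illustration; the scenario, numbers, and feature names are invented.

# The old way: a fixed, hand-written rule that never changes.
def should_brake_rule(distance_m, speed_mps):
    return distance_m < speed_mps * 2.0  # "if this, then that"

# The machine-learning way: start with guessed weights and nudge them
# after every test, based on whether the prediction was right.
weights = [0.0, 0.0, 0.0]  # [bias, distance, speed]

def should_brake_learned(distance_m, speed_mps):
    score = weights[0] + weights[1] * distance_m + weights[2] * speed_mps
    return score > 0

def learn_from_test(distance_m, speed_mps, correct_answer, lr=0.01):
    predicted = should_brake_learned(distance_m, speed_mps)
    error = int(correct_answer) - int(predicted)
    # Perceptron-style update: shift the weights toward the right answer.
    weights[0] += lr * error
    weights[1] += lr * error * distance_m
    weights[2] += lr * error * speed_mps

# Each (distance, speed, should_have_braked) triple is one "test."
tests = [(5, 15, True), (80, 10, False), (10, 20, True),
         (120, 25, False), (8, 12, True), (60, 8, False)]

for _ in range(50):  # repeat the tests; accuracy improves each pass
    for distance, speed, answer in tests:
        learn_from_test(distance, speed, answer)

print(should_brake_learned(7, 18))    # True: close and fast, so brake
print(should_brake_learned(100, 10))  # False: far away and slow

After a few passes through the tests, the program has, in effect, written its own braking rule; no human typed in the final weights.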

That kind of learning is happening right now on real streets: in San Francisco and Boston, and soon in New York, while Las Vegas is testing a driverless bus system. These pilot programs give the driverless A.I. a chance to encounter real-life situations, and to learn from those encounters, before the technology is rolled out to the average consumer.

The only way we learn is from our mistakes. That is true of driverless cars, too, and they have made a few, from hardware and software failures to unforeseen errors. In February 2016, a Google driverless car caused its first crash, turning into the path of a passing bus. In May 2016, a man in a self-driving Tesla was killed when the car tried to drive at full speed under a white tractor-trailer crossing the road in front of it. The white trailer against the smoky backdrop of a cloudy sky fooled the car's sensors. The driver was reportedly watching a Harry Potter movie at the time and never saw it coming.

Mistakes are ubiquitous in our lives; "human error" has become a cliché. But will we be as forgiving of such mistakes when a machine makes them? Life is an endless series of unfortunate coincidences, and no one can perfectly predict every situation. But, lest I sound like Dustin Hoffman in the film Rain Man, quoting plane-crash statistics, let me say I am confident that studies will eventually show autonomous vehicles reduce overall accident rates.

There are also legal questions to consider. If a driverless car strikes a pedestrian, who is responsible? The owner of the car? The manufacturer? The developer of the artificial intelligence governing the car's behavior? The people responsible for testing it?

We are in the century of A.I., and its first big win will be the self-driving car. The coming decade will be an interesting one to watch.

Get ready to have a new relationship with your automobile.

Eric can be emailed at ericwaustin@gmail.com.
