
AI Lesson: 35 Years Ago, This Russian Saved the World

The date was Sept. 26, 1983, just after midnight in Moscow. In a small back room, alone, sat Soviet Lt. Col. Stanislav Petrov. His job: watch the radar and warn Soviet leadership in the event of a U.S. nuclear missile strike.

In that eventuality, the USSR was to immediately launch a total, all-out nuclear counterattack on the U.S., according to Petrov speaking to reporters years later.

And that night, 35 years ago, the computer alarms did go off, warning that a single American nuclear missile was on its way. 

Petrov figured it was a computer error, as it seemed unlikely that the U.S. would send just one missile.

But then the alarms went off again, growing louder and louder as the computer notified him that a second, and then a third, a fourth and a fifth missile were on their way, too.

The monitor in front of Petrov started flashing the Russian word for START in bright letters, an automated instruction apparently indicating that the USSR should launch its massive counterstrike.

Petrov had no way of knowing for sure, but intuition told him the computer was mistaken. The alarms grew deafening. Petrov had just minutes to decide whether to follow orders and call Soviet leadership, as protocol demanded, or to trust his gut.

If he was wrong, U.S. missiles would wreak destruction on the USSR and everything he held dear, without any counterattack at all.

But what if he were right?

Within a few minutes, Petrov knew he’d been right — and that he had just prevented global thermonuclear war.

Could Petrov have saved the world in the age of AI?

“Petrov made his decision by taking into account the complex context around him. He was trained in a specific context — his machine and the lights — but when things went down, he looked beyond that context and reasoned that it didn’t make sense. And he took appropriate action,” says Jim Hendler of Rensselaer Polytechnic Institute.

Hendler is a professor of computer, web and cognitive sciences and the coauthor, with Alice Mulvehill, of the book Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity (Springer, 2016).

But the real question, Hendler told me after his recent appearance on a headline artificial intelligence panel at the Heidelberg Laureate Forum in Germany, is not so much how Petrov saved the world, though that certainly is interesting.

It’s what happens the next time, when a Petrov isn’t around.

“My bigger worry,” said Hendler, “has to do with AI getting smarter (because) at some point we’re going to remove Petrov from the loop.” Removing humans from key warfare decisions is already a topic of discussion around drone and cyberwarfare, he added.

The “issue is having someone (i.e., a human) with intuition being somewhere in the loop before the missiles get launched.”

Can we guarantee technology is compliant with the rules of war?

The future of AI, robotics and other technologies in relation to critical human decisions in peacetime and wartime is a continuing topic of debate, of course.

But couldn’t so-called smart machines and robots be programmed with caution, and with the laws of war in mind?

“I see no way we can guarantee compliance with the laws of war,” the University of Sheffield’s AI professor, Noel Sharkey, told me. He is, not incidentally, a co-founder of the Foundation for Responsible Robotics.

This is “a real worry for international security — we have no idea what will happen.”

“We’ve got to take responsibility for the technology we create,” Sharkey added.

“Yet we seem to be sleepwalking into this the same way we did into the internet.”

Humans often have no clue what the future holds as far as technology is concerned, he said. Yet the makers of new technologies often over-promise.

For instance, the makers of self-driving cars often promise that their products will save lives, Sharkey points out. But will they? We don’t know, he said, and “I wish (manufacturers) would stop saying that.”

Perhaps humans shouldn’t be putting AI, robots or other smart machines in the decision loop at all, he added. A more prudent approach might be just to use them as sensors, and let humans and human values and intuition make key determinations.

That’s the sort of thinking that makes sense, says RPI’s Hendler.

“There are three questions,” Hendler adds. “One: Will AI (systems) work well enough during unplanned and unlikely events (that) they were not trained (for)? Two, will humans be able to (properly question) machines in that case, especially when the sophistication (of the machines) is high? And three: Will there even be a human there at all? That’s the one that scares me.”

This is a legitimate concern. In a world of smart robots, deep-learning software and artificial intelligence, would a Petrov figure have been there at all?

“What I hope the world will be smart enough to do is realize that if the humans and computers have different capabilities, then where serious decisions are being made — matters of life, death, major money and so on — that we’re smart enough to find ways to keep the human in the loop,” Hendler told me.

More than worries about whether AI-equipped computers will destroy mankind in various sci-fi-like scenarios, as physicist Stephen Hawking has warned, these are pressing, realistic questions, Hendler said.

After all, HAL 9000 attacked humans in a movie. But Petrov saved a real world — our world — and we must ensure conditions are right for future saves to happen.

by Gina Smith

Image — The Russian who saved the world: Wikimedia Commons.
