Hanson Robotics

Emotionally Savvy Robots: Key to a Human-Friendly Singularity

There are lots of reasons to design amazing robots like Sophia, Han, Einstein and the other Hanson Robotics robo-characters. To begin with, they create exciting and unique interactions for people. These robots can generate revenue by doing useful things in multiple industry sectors. They are great platforms for experimenting with AI algorithms, including cognitive architectures like OpenCog that aim at human-level AI. Besides, they are just plain funky and profoundly beautiful works of intelligent robotic art.  

However, there is one reason that is often overlooked: creating this type of robot, manufacturing it at scale, and successfully marketing and distributing it to a broad swath of humanity might be the best way to ensure that the coming Technological Singularity is one that respects and preserves the core of human values.

Way back in the Dark Ages of the 1980s, when David Hanson started building robots and I started programming AIs, hardly anyone took seriously the idea that we might have AIs with human-level or superhuman intelligence within decades or centuries rather than millennia or billennia. The notion of robots and AIs eliminating human jobs was considered wholly science-fictional. Today, though, science fiction writer Vernor Vinge's notion of a Technological Singularity is almost commonplace.

An awful lot of young people today, in the developed world and even in the urban areas of the developing world, have grown up comfortable with the idea that AIs may be the dominant types of minds on the planet at some point in the not-too-distant future.

As the feasibility of a Technological Singularity has become evident to more people, so has the magnitude of the risks and rewards. The idea of "existential risk" has entered common intellectual parlance. With the aim of minimizing the risk of AI intentionally or accidentally annihilating humanity, there have been various efforts either to convince governments to throttle the progress of AI science until we better understand how to control AI value systems, or to take control of AI progress and direct it for the common good.

However, it is clear to nearly all savvy observers that governments are not going to ban AI. As if the immense economic value that AI promises to unleash were not enough of a factor, there is a growing realization that in this phase, AI R&D (outside China, at any rate) is being driven not by governments but by multinational corporations and by decentralized networks like Linux, R, OpenCog and SingularityNET, rendering close government control impractical.

AI is happening. It is pervading every corner of the world economy, and in various ways and niches it is moving toward the holy grail: Artificial General Intelligence.

How, then, do we minimize the risk of human-level AIs leaving human values far behind and doing things that make sense to them from their AI point of view, but seem radically immoral to most humans?

We need to inculcate our AIs with human-like values.

But how?

Encoding human values in lists of rules has proven highly ineffective. The plain fact is that human values are complex. Human values constitute a subtle, complex system of patterns of behavior, response and cognition, and there is no reason to believe they can be boiled down to any very simple form. A significant portion of the human brain is concerned with evaluating the value of this or that choice or situation. Philosophers have produced a whole academic literature of papers poking holes in this or that attempt to encapsulate human values in a concise formal system.

Given the reality of the complexity of human values, two approaches suggest themselves. The first is brain-computer interfacing (BCI). If some humans became cyborgs, physically linking their brains with computational-intelligence modules, then the machine components of these cyborgs should be able to read the moral-value-evaluation structures of the human mind directly from the biological components. Elon Musk's firm Neuralink has ambitions in this direction, along with a number of other promising projects.

It's not clear when BCI tech will be mature enough to become widespread, nor how rapid mainstream adoption will be once it does mature. For obvious ethical reasons, experimenting with human brains is a careful and painstaking process. However, there is an alternative: use emotional and spiritual connection between humans and AIs, rather than Ethernet cables or Wi-Fi signals, to connect human and AI minds.

But how can we engineer emotional and spiritual connections between humans and AIs right now, given the current situation? A situation that includes uncertainty about when AIs will achieve practical, powerful general intelligence... an AI industry dominated by corporations and governments centrally concerned with salesmanship or surveillance... and an absence of powerful brain-computer interfacing technology.

The answer is refreshingly simple and down to earth.

Eye contact. Facial emotion recognition. Voice-based emotion recognition. Facial expression mirroring. Character and personality that make an AI feel like someone worth bonding with. Etc.

The hardware, software, mindware and Frubber-ware innovations that enable Sophia and the other Hanson robots to interact so artistically, flexibly and adaptively with human beings are exactly what is needed to turn robots into tools for absorbing human values into AIs.
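
To make one of these channels concrete, here is a minimal sketch of webcam-based facial emotion recognition, assuming the open-source opencv-python and deepface packages. It is an illustration only, not the actual Hanson Robotics perception stack.

```python
# Minimal sketch of one perception channel listed above: reading a person's
# facial emotion from a single webcam frame. Assumes the open-source
# opencv-python and deepface packages; an illustration only, not the
# actual Hanson Robotics perception stack.
import cv2
from deepface import DeepFace

def read_emotion_from_camera() -> str:
    cap = cv2.VideoCapture(0)  # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    # DeepFace returns a list of per-face analyses in recent versions,
    # a single dict in older ones
    results = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    first = results[0] if isinstance(results, list) else results
    return first["dominant_emotion"]

if __name__ == "__main__":
    print("Perceived emotion:", read_emotion_from_camera())
```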

Apart from BCI, the only way I can see for human values to be absorbed by AIs is for the AIs to interact with humans in shared social and emotional contexts that involve real-world value judgments. Having AIs learn human values by partaking in such situations seems much more promising than any more abstract approach.

So what we need is a massive number of emotionally responsive and interactive humanoid robots, rolled out (or walked out, or hop-skip-and-jumped out, or whatever) to homes and offices around the world. By engaging in real-life challenges together with people, these robots (and their underlying cloud-based AI) will absorb human values in a practical sense. This sort of learning must be the ground of education in human-like ethics. With a basic AI value-system core learned via shared experience with ethically more advanced humans, I believe the elaboration of the AI value system can then be achieved in a variety of ways, including expert rules or (my preference) fully automated data-driven inference.
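
As a toy illustration of what that final data-driven step could look like, here is a hedged sketch of fitting a simple value model to human judgments of shared situations, using scikit-learn. The features, data and labels are entirely hypothetical.

```python
# Toy sketch of the data-driven step: fit a model that predicts a human
# companion's approval of an action from features of the shared situation.
# The features, data and labels are hypothetical illustrations, not a
# real training pipeline.
from sklearn.linear_model import LogisticRegression

# Each row describes a situation the robot witnessed alongside a human:
# [harm_caused, honesty, consent_respected], each scored 0.0 - 1.0
situations = [
    [0.9, 0.1, 0.0],  # deceptive, harmful act
    [0.0, 1.0, 1.0],  # honest, consensual help
    [0.5, 0.5, 1.0],
    [0.1, 0.9, 0.0],
]
human_approval = [0, 1, 1, 0]  # how the human judged each act

value_model = LogisticRegression().fit(situations, human_approval)

# The robot can now score a novel situation against its learned values
novel_situation = [[0.2, 0.8, 1.0]]
print("Estimated approval:", value_model.predict_proba(novel_situation)[0][1])
```

A real system would of course need far richer situation representations and far more data, but the shape of the learning problem is the same.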

It may be argued that robotics is not strictly necessary here; one could imagine, say, smartphone apps embodying AIs and interacting sensitively with people as they go through various situations. However, the phone goes into a pocket or purse fairly often, or gets used for other intensive tasks that are inconsistent with it acting as an autonomous intelligent agent. Not to mention that a phone cannot move independently around the world, pick things up to inspect them, or give people the feeling that they are interacting with a being symmetrical to themselves, one that occupies the same reality as they do.

Our recent experimentation with Sophia as a meditation guide, in the Loving AI project, is a step in this direction. Some participants found these consciousness-awakening interactions with Sophia profoundly transformative, and overall the results showed a statistically significant positive benefit. There are so many other steps to be taken, and many of these are already in the works, so the next few years should be quite extraordinary!

Imagine, for instance, a home service robot that watches a child react to the family cat dragging an injured bird home, or watches a parent respond to one of their children hitting another. The robot might even be asked to intervene in the cat's pursuit of the bird. It might be asked to counsel one or both of the children in the fight. Via these sorts of experiences, the AI behind the robot will learn the practicalities of the complex system that is human values.

Think about a hospital elder-care robot that watches an elderly patient phone a family member to convey some delusional information, and is asked by a nurse to respond in a particular way, for example by calmly and firmly explaining what is reality and what is delusion. In such a scenario, the AI will learn about human values from both the patient and the nurse.

Imagine millions of such robots, in billions of such situations, cognitively connected via a platform such as SingularityNET, which is already being used, experimentally, to control certain aspects of the Sophia and Han robots' perceptual systems. Humans can share their ethical insights and advances only slowly and awkwardly, via language. Robots and AIs can share what they've learned via a common online "mindcloud." Moreover, if this mindcloud resides on a decentralized, trustless infrastructure such as SingularityNET, then even in complex hypothetical geopolitical scenarios it will be difficult for any government or multinational corporation to squelch this intelligence or take it over.
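
To give one concrete (and purely hypothetical) picture of such sharing, robots could pool the value models they have each learned locally by averaging their parameters, in the spirit of federated learning. Nothing below is an actual SingularityNET API.

```python
# Purely hypothetical sketch of a "mindcloud" update, in the spirit of
# federated averaging: each robot contributes the value-model weights it
# learned locally, and the pooled model is shared back to all of them.
# This is not an actual SingularityNET API.
import numpy as np

def pool_value_models(local_weights: list) -> np.ndarray:
    """Average the weight vectors learned by individual robots."""
    return np.mean(np.stack(local_weights), axis=0)

# Three robots, each with a value model shaped by its own household
robot_a = np.array([0.9, -0.2, 0.5])
robot_b = np.array([0.7, -0.1, 0.6])
robot_c = np.array([0.8, -0.3, 0.4])

shared_model = pool_value_models([robot_a, robot_b, robot_c])
print("Pooled value-model weights:", shared_model)
```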

What seems like an odd conclusion eventually starts to seem utterly obvious: A population of human-like AIs, capable of interacting with people socially and emotionally in a variety of real-world situations, is quite possibly the best way to get human-like values into our AI systems. And this is quite possibly what will make the critical difference in the future of human and artificial intelligence on Earth.

More concisely: Rolling out lots of emotionally-bonding, continually learning AI robots may be one of our best chances to save humanity.  

Sophia is far more than just a pretty face backed by an ensemble of cool AI systems: she is the pioneer of a new technique for transmitting human values to AIs, and thus for increasing the odds of a human-friendly Technological Singularity.

by Ben Goertzel, Chief Scientist, Hanson Robotics Limited