GET right up close to Dmitry Itskov and sniff all you like — you will not pick up even the faintest hint of crazy. He is soft-spoken and a bit shy, but expansive once he gets talking, and endearingly mild-mannered. He never seems ruffled, no matter what question you ask. Even if you ask the obvious one, which he has encountered more than a few times since 2011, when he started “this project,” as he sometimes calls it.
Read more about the Dmitry Itskov android portrait, Dmitroid, here
Next month, leaders in the world of robotics, neuroscience, and artificial intelligence will converge on New York City for the second annual Global Future 2045 Congress, an event devoted entirely to the quest toward “neohumanism” – the next evolution of humankind. GF2045 is the brainchild of Russian billionaire Dmitry Itskov, who’s made it his life’s goal to transpose human consciousness into a machine, thus giving us the power of immortality. (Really.)
Among those presenting during the two-day GF2045 conference is renowned roboticist Dr. David Hanson, who will unveil the world’s most lifelike humanoid android, designed in the likeness of Itskov. Founder of Hanson Robotics, Hanson is a true Renaissance Man, with a background ranging from poetry to sculpting for Disney to the creation of humanlike androids that are said to possess the inklings of human intelligence and even emotion. As we edge closer to GF2045, which takes place June 15 and 16, we chatted with Dr. Hanson over Google+ Hangouts to get his insight on mankind’s march toward the future.
Realistic Humanlike Robots for Treatment of ASD, Social Training, and Research; Shown to Appeal to Youths with ASD, Cause Physiological Arousal, and Increase Human-to-Human Social Engagement
by Dr. David Hanson, Dr. Daniele Mazzei, Dr. Carolyn Garver, Arti Ahluwalia, Danilo De Rossi, Matt Stevenson, Kellie Reynolds
Zeno: A Cognitive Character
by David Hanson, Stuart Baurmann, Thomas Riccio, Richard Margolin, Tim Dockins, Matthew Tavares, Kevin Carpenter
The IbnSina Center: An Augmented Reality Theater with Intelligent Robotic and Virtual Characters
by Nikolaos Mavridis and Dr. David Hanson
The Coming Robot Revolution: Expectations and Fears About Emerging Intelligent, Humanlike Machines
by Dr. Yoseph Bar-Cohen and Dr. David Hanson
Design of Android type Humanoid Robot Albert HUBO
by Jun-Ho Oh, David Hanson, Won-Sup Kim, Il Young Han, Jung-Yup Kim, Ill-Woo Park
Upending the Uncanny Valley
by David Hanson, Andrew Olney, Ismar A. Pereira, Marge Zielke
Expanding the Aesthetic Possibilities for Humanoid Robots
by Dr. David Hanson
STTR Phase I: An Actuated Skin for Robotic Facial Expressions
by Dr. David Hanson and Shashank Priya
Piezoelectric Actuation and Sensing for Facial Robotics
by Yonas Tadesse, Shashank Priya, Harry Stephanou, Dan Popa, and David Hanson
Enhancement of EAP actuated facial expressions by designed chamber geometry in elastomers
by D. Hanson, R. Bergs, Y. Tadesse, V. White, S. Priya
Progress toward EAP actuators for biomimetic social robots
by Dr. David Hanson
Robotics in the World of Entertainment
by Dr. David Hanson
A wide variety of technologies and methods make Hanson robots the world’s most lifelike. As Hanson integrates disciplines including mechanical engineering, materials science, computer animation, and artificial intelligence, a new super-discipline emerges, pointing toward a gestalt of sentient personalities that are at once art and artificial organisms.
The life in our robots springs from innovations including Character Engine cognitive AI software, our bio-inspired nanotech known as Frubber™, bioengineered facial mechanisms, walking biped robot bodies with grasping hands, and a great deal of artistry too.
The Character Engine AI software adds the spark of soul, enabling our robots to think, feel, and build relationships with people as they understand speech, see faces, hold natural conversations, and evolve. As we proceed, we expect our robots to grow ever smarter, building increasingly meaningful relationships with people.
The software controls our android hardware, including expressive faces on walking biped robot bodies. Hanson’s patented Frubber (“flesh rubber”), a spongy elastomer built with lipid-bilayer nanotechnology, self-assembles into humanlike cell walls inspired by human cellular mechanisms. As a result, Frubber mimics human flesh more accurately than any other known material, achieving hyper-expressive robot faces on one-twentieth the power of competing materials. These innovations enable a greater range of expressions with better aesthetics, and they also lighten the mechanism and cut power consumption enough to enable the world’s first fully walking androids.
Thanks to Frubber, we can pack more expressions into our robots. Our mechanisms mimic the action of over 64 muscles in the human face, eyes, and neck, with size and speed of action comparable to natural human facial movement. Under the control of the Character Engine, these faces become a new animation medium, one that befriends you.
And the artistry is essential to bringing robots to life. Frubber, the Character Engine, and walking robot bodies all serve as media for creative interpretation in the hands of our artists, explored for surprising effects. David Hanson’s background as a Disney sculptor and RISD filmmaker redefines robots as four-dimensional interactive sculpture. This approach transmutes technology into a living being: the mechanisms, materials, and responses of the intelligent software fuse into a character that can choose any of billions of possible expressions and deliver one at just the right moment. The robot simply comes to life, looks you warmly in the eye, and inspires the sense of a mind within the machine.
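The control scheme this passage describes, an AI layer selecting an expression and then driving many small face motors toward it, can be sketched roughly as follows. The motor names, value ranges, and blending rate are invented for illustration and are not the actual Character Engine interface.

```python
# Illustrative sketch of an expression-to-actuator pipeline. Motor names,
# target values, and the blending scheme are hypothetical.

# Target positions (0.0-1.0) for a few face motors, per named expression.
EXPRESSIONS = {
    "smile":    {"brow_left": 0.6, "brow_right": 0.6, "mouth_corners": 0.9, "jaw": 0.2},
    "surprise": {"brow_left": 1.0, "brow_right": 1.0, "mouth_corners": 0.5, "jaw": 0.8},
    "neutral":  {"brow_left": 0.5, "brow_right": 0.5, "mouth_corners": 0.5, "jaw": 0.0},
}

def step_toward(pose, expression, rate=0.25):
    """Move each motor a fraction of the way toward the target expression,
    so transitions look smooth rather than snapping between poses."""
    target = EXPRESSIONS[expression]
    return {m: pose[m] + rate * (target[m] - pose[m]) for m in pose}

pose = dict(EXPRESSIONS["neutral"])
for _ in range(10):  # ten control ticks
    pose = step_toward(pose, "smile")
```

Blending toward a target on each control tick, rather than jumping straight to it, is one simple way to get the smooth, lifelike transitions the passage describes.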
David Hanson is a robot designer and engineer who has spent the past decade trying to create robots with character: ones able to empathize, understand speech, and eventually begin to model how a person is actually feeling, so that relationships can be built between humans and robots.
A UCSD news release touts a new android child designed to mimic the expressions of a one-year-old human child as it learns to control its body and interact with humans. There have been a lot of creepy baby-headed robots lately, but you may notice this one is a little less creepy and a little more lifelike than others. Why? Because the android combines some of the best technology out there: Japanese humanoid robotics hardware and a Hanson Robotics head. David Hanson’s android heads have received widespread recognition as the most humanlike and expressive around.
A new, lifelike robot is being used by researchers to gain insight into how a baby learns. Named Diego-san by its creators, the robot stands four feet, three inches tall and has over 60 moving parts, including a pair of high-resolution cameras for eyes.
Meet “DIEGO-SAN”, developed by David Hanson (of Hanson Robotics) for the Machine Perception Lab at the University of California San Diego Institute for Neural Computation. With a face by David Hanson and Hanson Robotics, which mounts on a robotic body (not yet functional), this robotic baby boy was built with funding from the National Science Foundation and serves cognitive A.I. and human-robot interaction research.
With high-definition cameras in its eyes, Diego-san sees people, gestures, and expressions, and uses A.I. modeled on human babies to learn from people, much as a baby would. This is a major milestone in “emotionally relevant robotics”: taking the next step from A.I. that learns human movements to A.I. that learns human emotions.
Outside of CES, another child-like gadget caught our attention – and we mean ‘child-like’ in the most literal sense.
Researchers from the Machine Perception Lab in the University of California, San Diego (UCSD) Institute for Neural Computation have been working on developing a sophisticated humanoid robot modelled after a one-year-old baby for about two years now, in order to study how infants develop motor skills and learn to interact with the world using non-verbal communication.
Diego-san, from UCSD’s Machine Perception Laboratory, comes achingly close to reaching the other side of the Uncanny Valley—so close that you just can’t look away. The child-size robot has an enlarged version of a 1-year-old boy’s face, and the body stands about four feet tall.
The team at the University of California, San Diego’s Machine Perception Lab just posted a video (embedded below) of DIEGO-SAN, a rather disturbing-looking robot designed to replicate a 1-year-old baby.
The Machine Perception Lab conducts development of systems that simulate natural human facial expressions. The team worked with Hanson Robotics to create the face and Japanese robotics company Kokoro to build the body.
His name is Diego-san. He was born at UCSD. He’s a baby robot. We had seen photos of him before, and we knew researchers were working hard to get it moving and doing other things that babies do. Now, for the first time, Diego-san shows off its face on video.
The world is getting a long-awaited first glimpse at a new humanoid robot in action mimicking the expressions of a one-year-old child. The robot will be used in studies on sensory-motor and social development – how babies “learn” to control their bodies and to interact with other people.
Diego-san’s hardware was developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan’s Kokoro Co. The project is led by University of California, San Diego full research scientist Javier Movellan.
Computers are evolving. We have voice-controlled assistants on our phones, telepresence robots for when we can’t make it to a meeting in person, and self-driving cars that are headed to a road near you.
These machines aren’t just taking over human tasks. Computerized systems are also taking on more human characteristics. As technology gets more advanced, how will our relationships with it change?
A partially assembled Joey Chaos is one of roboticist David Hanson’s “talking heads”—head-and-shoulders robots designed to push the boundaries of realism with both their physical traits and cognitive abilities. Hanson developed a biologically inspired skin material, “frubber,” that makes his robot heads appear uncannily lifelike. Cutting-edge speech programs give his creations the ability to converse.
Children with autism struggle to relate directly to other people. But what happens when they interact with a child-sized robot that has an expressive human face but isn’t limited by emotional reactions to their struggles?
Researchers at the University of Texas Arlington (UTA) are working with the Dallas Autism Treatment Center to find out through Zeno, a robot that has been tested with patients to help them develop more appropriate responses to everyday social interactions.
David Hanson and I share a similar background in media, art and design. We both value new possibilities for human platforms for life extension. Where we are different is in our focus: I designed “Primo Posthuman” as a future body prototype for exploring theoretical ideas regarding regenerative media, nanorobots and AGI. Alternatively, David is actually building humanoid robots — including the Robokind commercial robot humanoid, and a variety of extremely realistic robot heads, incorporating unprecedentedly realistic facial expressions and voice. This interview covers some of David’s work in this area, including its exciting broader implications.
Hanson Robotics is well known for its family of robots with delicately engineered, highly expressive faces made out of something that isn’t called Flubber. At anywhere from $8,500 to $14,500, this level of sophistication doesn’t come cheap, but a new model of Zeno the robotic boy has dropped some hints about a new generation of smaller cousins which will be much less expensive.
She gets amazed. She gets disgusted. She gets sad. And she’s a robot–the so-called FACE robot designed by scientists at the University of Pisa in Italy.
Modeled after a research team member’s wife, the robot was created to show a wide range of human facial expressions, with the goal of going beyond the “uncanny valley” phenomenon.
Along a winding dirt road, just west of the Lincoln Gap in Bristol, Vt., sit two big yellow houses on a sprawling property featuring ten solar panels, a dock overlooking a sunlit, trout-filled pond, and porches adorned with rocking chairs. In the smaller of the two houses lives Bina-48, one of the most renowned and highly sought after humanoid robots in America.
IEEE Xplore – HEFES: An Hybrid Engine for Facial Expressions Synthesis to control human-like androids and avatars
Advances in robotics and computer science have made possible the development of sociable and attractive robots. A challenging objective in the field of humanoid robotics is to make robots able to interact with people in a believable way. Recent studies have demonstrated that human-like robots with high similarity to human beings do not necessarily generate the sense of unease typically associated with the uncanny valley. For this reason, the design of aesthetically appealing and socially attractive robots becomes necessary for realistic human-robot interactions.
”How to Build an Android” is the honest title of an earnest book, the first by David F. Dufty, a senior research officer at the Australian Bureau of Statistics. It explains how a team of researchers at the University of Memphis collaborated in 2005 with an artist and robotics expert, David Hanson, to create what was then the most sophisticated android anywhere, a replica of the science-fiction writer Philip K. Dick.
G.I.A. (Gestural Interactive Automation) wants to be your robot friend. Daniel Jay Bertner’s GIA robot sculpture is anchored to a wall; it tracks you as you move using motion-tracking software, interacts with you through facial-recognition software, and projects a video image of a human face onto a sphere that responds with facial expressions.
We are a lot closer to living robots than we may realize, says a scientist who has developed a robot that uses the same words, expressions and movements that humans do to communicate with each other.
The Philip K. Dick Android, built by Hanson Robotics, is a state-of-the-art robot with a large vocabulary, complex facial expressions, a sense of humour and something of an ego, ABC News reported.
He is, well, surprisingly human, not just in mechanics, but also in appearance.
Philip comes complete with hair, teeth and wrinkles. He can cock an eyebrow, smirk and respond to questions. It’s one of the most advanced cognitive artificial intelligence projects in the world.
David Hanson of Hanson Robotics, Philip’s creator, for lack of a better term, believes we are a lot closer to living robots than we may realize.
An interview with a robot starts kind of like a bad date. There’s strange small talk, some awkward pauses, then, if you’re lucky, you hit on a topic of conversation that gets the chemistry flowing.
For Philip, that topic is himself.
Built by Hanson Robotics, the Philip K. Dick Android is a state-of-the-art robot with a large vocabulary, complex facial expressions, a sense of humor and something of an ego.
“Being a robot at this time in history is really exciting because my technology is changing, advancing, so fast that it just seems like a world of possibilities,” Philip said. “A great adventure waiting to happen.”
View more articles about the Philip K. Dick android here
By imitating natural expressions and even human cognition, David Hanson’s robots–made using a unique rubber substance that mimics human facial tissue–are making it easier to interact with robots by giving them a human face.
David Hanson, founder and CEO of North Texas-based Hanson Robotics, is a leader in the race to build a humanlike robot, whose features and behavior would be indistinguishable from those of a human, similar to a “replicant” in the film Blade Runner. This month’s National Geographic cover story focuses on the issues connected to Hanson’s work, so Points asked him to address the controversies and questions directly.
View more articles about the Albert Hubo robot here
We’ve been hearing a lot about talking robots lately: They emote! They banter! They’re on the cusp of consciousness! So we asked Jon Ronson to chat up a few. Conclusion: The future will be mildly confusing, occasionally profound, and frequently hilarious
The firm behind the Robokind devices has been working on this technology for some time. Now its latest product, a suite of knee-high droids intended for educational and research uses, has sprung Athena-like from the brain of company founder David Hanson and from research into the soft, malleable “frubber” synthetic skin. The robot is powered by sophisticated servos that let it walk, gesture, and make extremely complex facial movements, including brow movements, eye and eyelid movements, and complicated mouth moves that can produce a smile.
In the field of robotics, we have no Newton. No one who, assisted by a falling fruit, cried out, “Eureka, I have it, and it is called a… I know… a robot.”
No, the concept of a robot first occurred to some unknown person in some far distant time, as he or she, engaged in a grinding, repetitive task, dreamed of a mechanical contrivance that could do some of the dirty work. We know that moment was more than five hundred years ago, because we have sketchbooks from the incomparable Leonardo da Vinci, dated 1495, that contain detailed plans for one.
Robots already build our cars and vacuum our floors. Will they one day be our companions, too? Engineers are designing robots with the social smarts to understand human feelings, learn from human teachers, carry on conversations, and even make jokes. But is a future full of robotic companions a delightful dream—or a lonely nightmare?
Last year, we reported that British researchers are using a Charles Babbage robot head to develop emotional machines. We wondered whether the Charles head was a Hanson Robotics creation. We now have the answer.
“Yes, Charles is a Hanson Robotics creation,” David Hanson, founder and CTO of the company, tells us.
Hanson says they built the robot more than a year ago and he was pleased to see that the Cambridge researchers have put it to work. “I think they’re up to some good stuff,” he says.
Above is an image of Charles at the Hanson robot factory.
Hanson also updated us on his company’s latest developments — they’ve been busy working on some new robots and updating old ones. These creations are incredible, and I can’t decide where I’d put them in the uncanny valley chart.
The saga of sci-fi philosopher Philip K Dick’s robot head continues. The original head, which was lost in 2006, has been rebuilt by Hanson Robotics, maker of some creepily realistic robot faces. The new Dick head is just the latest chapter in a story which is as weird and twisted as any of the author’s own novels.
Military spending on American robotics is robust, but our war-robot platforms are getting us no closer to a humanoid. Those platforms are purpose-built for bomb defusal and aerial surveillance, and so don’t provide broader opportunities. “Korea and Japan have national strategic initiatives in robotics,” the WTEC group wrote. “In the U.S., Darpa programs”—the chief source of military funding for high-level robots—“are highly applied and short-term oriented, while its support for basic research in robotics has been drastically reduced.”
Ten minutes into my interview with the robot known as Bina48, I longed to shut her down.
She was evasive, for one thing. When I asked what it was like being a robot, she said she wanted a playmate — but declined to elaborate.
“Are you lonely?” I pressed.
“What do you want to talk about?” she replied.
Other times, she wouldn’t let me get a word in edgewise. A simple question about her origins prompted a seemingly endless stream-of-consciousness reply. Something about robotic world domination and gardening; I couldn’t follow.
But as I was wondering how to end the conversation (Could I just walk away? Would that be rude?) the robot’s eyes met mine for the first time, and I felt a chill.
She was uncannily human-looking.
“Bina,” I ventured, “how do you know what to say?”
“I sometimes do not know what to say,” she admitted. “But every day I make progress.”
Nothing Eileen Oldaker tried could calm her mother when she called from the nursing home, disoriented and distressed in what was likely the early stages of dementia. So Ms. Oldaker hung up, dialed the nurses’ station and begged them to get Paro.
Patients at the Knollwood Military Retirement Residence are calmed by a robot styled after a baby seal.
Speaking robots have a range of uses. Autom, top, acts as a diet coach, prodding users to meet their goals. Zeno, above, is billed as a “supertoy” by Hanson Robotics, its maker.
Paro is a robot modeled after a baby harp seal. It trills and paddles when petted, blinks when the lights go up, opens its eyes at loud noises and yelps when handled roughly or held upside down. Two microprocessors under its artificial white fur adjust its behavior based on information from dozens of hidden sensors that monitor sound, light, temperature and touch. It perks up at the sound of its name, praise and, over time, the words it hears frequently.
“Oh, there’s my baby,” Ms. Oldaker’s mother, Millie Lesek, exclaimed that night last winter when a staff member delivered the seal to her. “Here, Paro, come to me.”
“Meeaakk,” it replied, blinking up at her through long lashes.
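The behavior described above, sensor readings mapped to responses, with frequently heard words earning a special reaction, can be sketched in miniature. Everything here (sensor names, thresholds, the familiarity count) is an invented illustration, not the actual Paro software.

```python
# Toy sketch of a Paro-like behavior loop: react to sensor readings and
# "perk up" at frequently heard words. Sensor names, thresholds, and
# responses are all hypothetical.
from collections import Counter

word_counts = Counter()

def react(sensors, heard_word=None):
    """Pick a behavior from the current sensor readings; track word
    frequency so often-heard words (like a name) trigger recognition."""
    if heard_word:
        word_counts[heard_word] += 1
        if word_counts[heard_word] >= 3:   # a familiar word by now
            return "perk up"
    if sensors.get("upside_down"):
        return "yelp"
    if sensors.get("touch", 0) > 0.7:      # petting detected
        return "trill and paddle"
    if sensors.get("sound", 0) > 0.8:      # loud noise
        return "open eyes"
    return "idle"

# After hearing its name a few times, the robot starts responding to it.
for _ in range(3):
    react({}, heard_word="Paro")
```

The point of the sketch is the structure the article describes: a fixed repertoire of reactions keyed to sensor input, plus a simple frequency count that lets the robot appear to learn its name over time.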
Why is nobody scared of robots anymore? It seems like only yesterday we could barely get the popcorn to our mouths, so atremble were our puny fingers at the sight, and the thought, of the Terminator. On the written page, Isaac Asimov was spending an outsize chunk of one of history’s more prolific careers wondering, and worrying, about the line between man and machine, both where and whether it could be drawn. The whole gosh-darn concept of “the uncanny,” supposedly underpinning much of human fear, was described—at the term’s coining in 1906 by psychologist Ernst Jentsch—as “doubts whether an apparently animate being is really alive; or conversely, whether a lifeless object might be, in fact, animate.” Robo-fear, he might just as accurately have titled it, if people had talked like that back then and if the relevant Japanese hair-metal band had traveled back through time to grant its permission.
David Hanson’s robots are by now somewhat familiar faces, including his Einstein robot currently being used as a research tool at Javier Movellan’s Machine Perception Lab at UCSD, and the punk rock conversationalist Joey Chaos. A less familiar face is that of Bina Rothblatt, the blonde at the end of the table in the above photograph. Bina is a robot commissioned by Sirius Satellite Radio inventor Martine Rothblatt to look like her beloved wife. Take that, uncanny valley!
View more articles about the Bina-48 robot here
At the 2009 TED conference, Hanson Robotics showed our collaboration with the UCSD Machine Perception Lab (Javier Movellan, Marian Bartlett, Nick Butko, Jake Whitehill, Paul Ruvolo). This robot tracks faces and sound, perceives facial expressions, and mimics them. Our belief is that understanding human expressions can help to model human empathy and enable machine empathy.
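The perceive-and-mimic loop described here can be sketched as a mapping from perceived facial action units onto robot face motors. In the real system a vision model estimates expressions from camera frames; the `perceived_aus` input and the AU-to-motor table below are hypothetical stand-ins for that output.

```python
# Minimal sketch of an expression-mimicry step. The vision side is stubbed
# out: `perceived_aus` stands in for a model's per-frame estimate of facial
# Action Unit intensities. The AU-to-motor table is a hypothetical mapping.

# Map facial Action Units (FACS numbering) to robot face motors.
AU_TO_MOTOR = {
    1: "inner_brow_raiser",
    12: "lip_corner_puller",   # smiling
    26: "jaw_drop",
}

def mimic(perceived_aus):
    """Mirror the person's expression: copy each recognized AU intensity
    (clamped to 0.0-1.0) onto the corresponding robot motor."""
    return {AU_TO_MOTOR[au]: min(max(intensity, 0.0), 1.0)
            for au, intensity in perceived_aus.items()
            if au in AU_TO_MOTOR}

# AU 99 is not in the table, so it is ignored.
commands = mimic({12: 0.8, 26: 0.3, 99: 0.5})
```

Copying perceived intensities motor-for-motor is the simplest possible mimicry policy; a real system would smooth the signal over time and filter out spurious detections.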
Say salam wa aleikum to an Arabic-speaking android developed at United Arab Emirates University and billed as the first of its kind in the world. It could enter mass production to help people at shopping malls.
The Ibn Sina robot, named after an 11th century philosopher, can recognize faces, converse with people by speaking in classical Arabic, connect to the Internet, and retrieve information. As seen in the video below, it can also exchange kisses with people.
Born in Belgrade, in what was then Yugoslavia, Maja Matarić originally wanted to study languages and art. After she and her mother moved to the United States, in 1981, her uncle, who had immigrated some years earlier, pressed her to concentrate on computers. As a graduate student at the Massachusetts Institute of Technology, Matarić wrote software that helped robots to independently navigate around obstacles placed randomly in a room. For her doctoral dissertation, she developed a robotic shepherd capable of corralling a herd of twenty robots.
Einstein the robot has enchanting eyes, the color of honey in sunlight. They are fringed with drugstore-variety false eyelashes and framed by matted gray brows made from real human hair. “What is that, makeup?” a visiting engineer asks, and, indeed, on closer examination I can see black eyeliner smeared beneath Einstein’s lower lids, à la David Bowie in 1971. The machine’s gaze is expressive—soulful, almost.
David Hanson, Einstein’s creator, is visiting from Texas to help scientists here at the University of California at San Diego (UCSD) prepare the robot for an upcoming conference. Hanson switches the robot on—really just a head and neck—and runs it through some of its dozens of expressions. Its lips purse. Its brow furrows. Its eyes widen as though in horror, then scrunch mirthfully as it flashes a grin. The 27 motors in the face make a wretched grinding sound, and when the mouth opens, I see a tangle of wires where the prodigious brain should be. Einstein’s white wig is missing and the skin of its neck hangs in flaps, because its shoulders, made of plastic, got shattered in shipping.
According to developmental psychologists, as infants, we learn to govern our bodies through a process of random experimentation and feedback. We contort our faces into weird shapes, watch our parents react, and then switch up our movements accordingly.
Now, computer scientists at the University of California, San Diego are applying this same strategy to robotics research. Through the use of machine learning, they’ve made it possible for their robot–an Einstein lookalike–to teach itself to make realistic facial expressions.
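The “experiment, watch, adjust” strategy the researchers describe can be illustrated with a tiny hill-climbing loop: perturb one motor at random and keep the change only when feedback improves. The motors, target expression, and scoring function below are invented; the actual UCSD work used far more sophisticated machine learning.

```python
# Sketch of learning an expression by random experimentation and feedback,
# in the spirit of the "body babbling" described above. In a real system
# the feedback would come from a vision model watching the robot's face;
# here it is a hypothetical distance to a target pose.
import random

MOTORS = ["brow", "eyelid", "mouth", "jaw"]
TARGET = {"brow": 0.7, "eyelid": 0.4, "mouth": 0.9, "jaw": 0.2}  # e.g. a smile

def score(pose):
    """Feedback signal: negative total distance to the target expression."""
    return -sum(abs(pose[m] - TARGET[m]) for m in MOTORS)

random.seed(0)
pose = {m: 0.5 for m in MOTORS}        # start from a neutral face
for _ in range(2000):
    # Try a small random perturbation of one motor; keep it only if the
    # feedback improves, discarding unhelpful "babbles".
    trial = dict(pose)
    m = random.choice(MOTORS)
    trial[m] = min(max(trial[m] + random.uniform(-0.1, 0.1), 0.0), 1.0)
    if score(trial) > score(pose):
        pose = trial
```

After enough trials the pose converges close to the target, which mirrors the infant strategy in the passage: random movements, external feedback, and retention of what works.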
More cancers will be preventable in 5 to 10 years, using a vaccine. People wearing artificial feet may scale walls a la Spider-Man. Robots will come with lifelike faces that convey human emotion.
That was just a sampling of the technology envisioned for the future at TED, the annual Technology, Entertainment and Design gathering of corporate, Hollywood and scientific glitterati touted as a caldron of ideas and innovation.