Sentient machines


Published on February 10, 2014

Author: PollenStrategy


THIS WEEK

Machines come to life

By building robots using the principles of biology, we can sit back and wait for intelligent behaviour to simply emerge

By Celeste Biever. NewScientist, 10 August 2013. Photo: Christopher Harting

THE pearly white humanoid watches placidly as the woman moves a toy brick sitting on the table. Inside, iCub’s imagination is running wild. The robot is being tested for its ability to track the mental states of others. Known as theory of mind, this gives humans many sophisticated traits, including empathy and deception. Robots have demonstrated theory of mind before, but iCub is different. Last week, at the Living Machines conference in London, researchers revealed that it is the first robot to acquire theory of mind without specific programming. “This all emerged,” says Peter Dominey, leader of the research team.

Dominey is one of a band of roboticists who are showing that building with basic biological machinery – instead of ever-more complex algorithms – can endow robots with lifelike characteristics. “We can directly take advantage of the evolutionary lessons of nature,” says biologist Joseph Ayers of Northeastern University in Nahant, Massachusetts. “We are not forced to rely on the conjecture of engineers.”

Many successes in artificial intelligence are largely due to having abandoned the attempt to model human thinking (see page 32). But several roboticists believe that for robots to acquire some complex traits found in the animal world – like social skills, or the ability of insects to react to changes in airflow during flight – machines still sorely need biology. These biomimetic machines, in turn, are helping biologists hone their understanding of animals (see “Beebot explains the bee”).

Dominey’s iCub is a case in point. It has an accurate model of human autobiographical memory. Like ours, this is split into an episodic memory for recording specific events, and a semantic memory, which finds patterns in the events and turns them into rules, or knowledge. Dominey’s team at the Robot Cognition Laboratory at INSERM, the French national medical research agency, in Lyon, France, found that this gave iCub a more natural form of learning. For instance, by recording the coordinates of objects placed on a table in front of it, iCub learned for itself that the word “left” didn’t refer to coordinates, but rather to the relative positions of objects.

Theory of mind was an accident that emerged from the next stage of experiments. Psychologists believe that humans use a simulated internal self to learn. Imagine you are about to grasp a glass. Internally, a part of your brain is predicting the amount of force needed to hold it. If, in the event, you break the glass, your brain makes a note of the difference between prediction and reality, and adjusts its knowledge of the world accordingly.

To mimic this, Dominey’s team gave iCub a simulated internal self. Every time the robot executes an action, like moving an object, it instructs its own arm to move and records the result. In parallel, through an identical programme, it gives the same instructions to simulated iCub. Instead of carrying out the action, the simulated iCub uses its knowledge from past events to predict the outcome. Any differences between the real outcome and the simulated one are a sign that iCub’s knowledge needs updating.

Crucially, because this gave iCub two versions of itself, the researchers realised that they had inadvertently created the set-up for it to understand the mental state of others. All they had to do was link simulated iCub to another individual, rather than itself, which they did by instructing simulated iCub to stop updating its memory if that person wasn’t present.

Free mind

Humans typically acquire theory of mind around the age of 3 or 4. Whether they’ve reached this mental milestone can be determined by the Sally-Anne test. In it, a child is shown two characters, Sally and Anne. Sally puts her ball in a basket and leaves, then Anne moves Sally’s ball into a box. The child is asked where Sally will look for her ball when she returns. Children who have theory of mind will correctly say that Sally will look in the basket, even though they know the ball is now in the box.

When facing the test, iCub passed by comparing its own record of events with the record belonging to simulated iCub. “We get theory of mind for free,” says Dominey. His student, Grégoire Pointeau, who presented the results, says they could be used to make robots that can anticipate the needs of others.

Other teams are also watching lifelike behaviour emerge from their machines. To give robots powers we don’t have, such as flying, Ayers and his colleagues are copying the electrical activity of insect nervous systems, with the aim of creating an artificial bee that could pollinate plants.

[Photo caption: “I don’t know why I want honey”]

Previous attempts to imitate animal movement have relied on building equivalents of specific abilities, like magnetic compasses and air velocity sensors, but such robots are still controlled by computer algorithms. “The problem with algorithms is that you have to anticipate every possible situation and have a determined escape strategy for each,” says Ayers. That’s a pain when you’re doing something as prone to variation as flying.

His team’s secret is circuit boards that produce chaotic electrical signals – much as real neurons are thought to – allowing a multitude of possible solutions to be explored on the fly. “Chaos allows you to explore your full parameter space until you find a solution that allows you to escape,” says Ayers.

The approach seems to be working. At the Living Machines conference, he showed how his team had controlled the flight of a small toy helicopter with a synthetic nervous system. Much like a bee’s waggle dance, which tells other bees where food is, the team transmitted a distance and a direction to the helicopter, which it successfully followed. “To my knowledge this is the first time someone has controlled an aerial vehicle purely from biological knowledge of networks,” says Ayers.

[Photo caption: Wild inside]

The next step is to use the same neural control system to pilot RoboBees (see picture, bottom left). The circuit boards use less power and are lighter than a GPS navigation system would be. The approach could be used to integrate other lifelike sensors, such as an artificial nose, into flying robots.

This new strain of biomimetics isn’t confined to brains and nervous systems. Later this year, a new journal, Soft Robotics, will be launched. This reflects a trend towards squishy robot bodies that replicate the structure of biological soft tissues and so have the same adaptive properties without being programmed to respond to every eventuality. Editor-in-chief Barry Trimmer of Tufts University in Boston loves the example of caterpillar skin. “It has unlimited degrees of freedom but there is no supercomputer,” he says.

Beebot explains the bee

The future of some robots may lie in biology (see main story), but it is not a one-way street. Machines are also giving something back to the field that spawned them – hyperrealistic test beds for theories. “Working with robots is a good way to test a theory because robots can fail,” says Paul Verschure of the University of Pompeu Fabra in Barcelona, Spain, one of the organisers of the Living Machines conference in London last week. “If the robot doesn’t work, it’s a good sign that a theory sucks.”

One example is a humanoid iCub robot that can model the mental states of others. How children develop this ability is a mystery. iCub shows it could arise from visualising future versions of yourself, and provides a way to test this. If humans are slower at modelling the minds of others, or are less empathetic when they are simultaneously trying to predict the consequences of their own actions, it could be evidence that they, like iCub, use the same cognitive processes for both tasks, says roboticist Peter Dominey of INSERM, the French national medical research agency, in Lyon.

Meanwhile, Joseph Ayers of Northeastern University in Nahant, Massachusetts, who is creating bee-like robots with insect nervous systems, hopes they will similarly teach us something about biology. He plans to compare how synthetic and real insects react in identical situations to further hone his model.
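The simulated-self loop the article describes – issue a command to the real arm, issue the same command to an internal model, and learn from the mismatch – is essentially what roboticists call a forward model trained on prediction error. A minimal sketch of that idea in Python; the class, the numbers, and the linear update rule are all illustrative assumptions, not the INSERM team’s actual code:

```python
class SimulatedSelf:
    """Internal model that predicts the outcome of a motor command."""

    def __init__(self):
        self.gain = 0.5           # current belief: outcome = gain * command
        self.learning_rate = 0.1

    def predict(self, command):
        return self.gain * command

    def update(self, command, error):
        # Adjust knowledge in proportion to the prediction error.
        self.gain += self.learning_rate * error * command


def real_arm(command):
    # Stand-in for the physical world: the true mapping is 0.8 * command.
    return 0.8 * command


model = SimulatedSelf()
for _ in range(200):
    command = 1.0
    predicted = model.predict(command)          # what the simulated self expects
    actual = real_arm(command)                  # what really happened
    model.update(command, actual - predicted)   # mismatch drives learning

print(round(model.gain, 3))  # -> 0.8: the model has converged on reality
```

Once prediction and reality agree, the error term vanishes and learning stops – which is why, as the article notes, a persistent mismatch is the sign that the robot’s knowledge needs updating.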
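The Sally-Anne set-up maps neatly onto the mechanism the article attributes to iCub: keep a copy of your own memory for the other person, and freeze it whenever that person is not present. A toy sketch of that single rule, with invented names and data structures (the real iCub system is far richer):

```python
class AgentModel:
    """A copy of the robot's own memory, repurposed to track one agent's beliefs."""

    def __init__(self, name):
        self.name = name
        self.present = True
        self.beliefs = {}  # object -> location the agent believes it is in

    def observe(self, obj, location):
        # The key rule from the article: stop updating the memory
        # whenever the agent isn't there to witness the event.
        if self.present:
            self.beliefs[obj] = location

    def where_will_look(self, obj):
        return self.beliefs.get(obj)


world = {}
sally = AgentModel("Sally")

world["ball"] = "basket"          # Sally puts her ball in the basket...
sally.observe("ball", "basket")

sally.present = False             # ...and leaves the room.
world["ball"] = "box"             # Anne moves the ball into the box.
sally.observe("ball", "box")      # ignored: Sally didn't see it

print(sally.where_will_look("ball"))  # -> basket (Sally's false belief)
print(world["ball"])                  # -> box (where it really is)
```

The gap between `world` and `sally.beliefs` is exactly the false belief the test probes, and here it falls out of a single `if` rather than dedicated theory-of-mind machinery – the sense in which Dominey’s team got it “for free”.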
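Ayers’s remark about chaos and parameter space can also be illustrated with a toy model: drive a control parameter with a chaotic signal (here the logistic map, standing in for his chaotic circuit boards) and wait for it to wander into a region that solves the task. The “escape” condition below is invented for illustration and has nothing to do with the team’s actual helicopter controller:

```python
def chaotic_search(works, x0=0.37, steps=10000):
    """Iterate a chaotic signal until it yields a parameter that works."""
    x = x0
    for step in range(steps):
        x = 3.99 * x * (1.0 - x)   # logistic map: chaotic at r = 3.99
        if works(x):               # treat the signal as a candidate parameter
            return step, x
    return None


# Toy escape condition: only a narrow band of parameter values succeeds.
found = chaotic_search(lambda p: 0.60 < p < 0.65)
print(found is not None)  # -> True: the orbit eventually visits the band
```

Because a chaotic orbit keeps visiting new regions of its range instead of settling into a fixed cycle, it explores the whole parameter space without anyone enumerating situations in advance – which is exactly Ayers’s complaint about conventional algorithms.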

