Great stories by Isaac Asimov!
BUT, here it's only the subtitle of an interesting article by New Scientist...
Copy-paste from:
http://www.newscientist.com/article/mg21829171.900-consciousness-why-we-need-to-build-sentient-machines.html?full=true

Consciousness: Why we need to build sentient machines
25 May 2013 by Celeste Biever
Magazine issue 2917
Video: Watch a conscious robot avoid pain without using software:
http://www.newscientist.com/videoredirect?bctid=2378814445001

Only by building an artificial consciousness will we truly be able to understand the mysteries of our own brains
Read more: "Consciousness: The what, why and how"
FROM C-3PO of Star Wars to Wall-E, the sentient garbage collector, the prevalence of conscious machines in the stories we tell seems to reflect humanity's deep desire to turn creator and design an artificial intelligence.
It might seem as if we stand little chance of making an artificial consciousness when the natural variety remains such an enigma. But in fact the quest for machine consciousness may be key to solving the mystery of human consciousness, as even scientists outside AI research are starting to acknowledge. "The best way of understanding something is to try and replicate it," says psychologist Kevin O'Regan of Descartes University in Paris, France. "So if you want to understand what consciousness is, well, make a machine that's conscious."
That may sound fanciful, but AI research has already sparked one of the leading theories of consciousness to date – the global neuronal workspace model (see "Consciousness: what's the point?" -
http://www.newscientist.com/article/mg21829171.800-consciousness-why-arent-we-all-zombies.html). It derives from attempts in the 1970s to develop computer speech recognition. One approach was to try to identify short sounds, roughly equivalent to individual letters, which had to be strung into syllables, then into words and sentences.
Because of the ambiguities at each step of this process – think of the many possible meanings of the sound "to" for instance – exploring all the possibilities in turn would have taken far too long. So multiple programs worked at different stages of the problem in parallel, sharing the results likely to be of interest to others through a central database known as the blackboard.
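For readers curious what a blackboard architecture looks like in practice, here is a minimal sketch in Python. It is purely illustrative, not Hearsay II's actual design; the knowledge sources, hypotheses and confidence scores are invented, but the shared-workspace mechanism is the same idea.

```python
# A toy blackboard: independent "knowledge sources" post hypotheses to a
# shared store and build on each other's results. Purely illustrative;
# names and scores are made up, not taken from Hearsay II.

blackboard = {"sounds": ["t", "uw"], "syllables": [], "words": []}

def syllable_source(bb):
    # Combine raw sounds into a candidate syllable.
    if bb["sounds"] and not bb["syllables"]:
        bb["syllables"].append("".join(bb["sounds"]))

def word_source(bb):
    # Propose words consistent with the syllable, each with a confidence score.
    if bb["syllables"] and not bb["words"]:
        bb["words"] = [("two", 0.5), ("to", 0.3), ("too", 0.2)]

def run(bb, sources, rounds=3):
    # In Hearsay II the sources effectively worked in parallel; a simple
    # round-robin loop is enough to show the shared-workspace idea.
    for _ in range(rounds):
        for source in sources:
            source(bb)
    return max(bb["words"], key=lambda w: w[1]) if bb["words"] else None

print(run(blackboard, [syllable_source, word_source]))  # ('two', 0.5)
```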
The resulting Hearsay II system was 90 per cent accurate, as long as it stuck to a vocabulary of 1000 words. It was eventually overtaken by other software, but not before it had come to the attention of philosopher Bernard Baars, who wondered whether our own brains might have a similar architecture (
http://dx.doi.org/10.1007/978-1-4615-9317-1_2).
Baars, now at George Mason University in Fairfax, Virginia, saw consciousness in the role of the blackboard, although he called it the brain's global workspace. Baars proposed that incoming sensory information and other low-level thought processes initially stay in the unconscious. Only when information is salient enough to enter the global workspace do we become aware of it, in the form of a conscious "broadcast" to the whole brain. Since Baars proposed this idea in 1983, numerous strands of supporting evidence have accumulated, including that derived from scanning the brains of people under anaesthesia.
While Baars stumbled on the AI work that informed his theory, some computer scientists are now deliberately trying to copy the human brain. Take a software bot called LIDA, which stands for learning intelligent distribution agent. LIDA has unconscious and conscious software routines working in parallel, designed as a test of global workspace principles (
http://www.newscientist.com/article/mg21028063.400-bot-shows-signs-of-consciousness.html). But in this case the term "conscious" does not mean that the program is sentient, just that it broadcasts the most important results across all subroutines.
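A toy version of that broadcast cycle, with the caveat that this is not LIDA's real code or architecture, might look like the Python below: several routines compute in parallel, and only the most salient result is "broadcast" to all of them. All names and the salience scoring are assumptions made for illustration.

```python
# A toy global-workspace loop in the spirit of (but not copied from) LIDA:
# several "unconscious" routines each propose a result with a salience
# value; only the most salient one is "broadcast" to every routine.

import random

class Routine:
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts this routine has seen

    def propose(self):
        # Each routine offers some content plus a salience score.
        return {"from": self.name,
                "content": f"{self.name}-result",
                "salience": random.random()}

    def receive(self, broadcast):
        self.received.append(broadcast)

routines = [Routine(n) for n in ("vision", "hearing", "touch")]

for cycle in range(3):
    proposals = [r.propose() for r in routines]           # unconscious work
    winner = max(proposals, key=lambda p: p["salience"])  # enters the workspace
    for r in routines:                                     # global broadcast
        r.receive(winner)
    print(f"cycle {cycle}: broadcast {winner['content']}")
```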
Nitty gritty
The much-hyped Human Brain Project, based in Geneva, Switzerland, aims to build a functioning software simulation of the entire human brain on a supercomputer – neuron by neuron (
http://www.newscientist.com/article/mg21729036.800-why-were-building-a-1-billion-model-of-a-human-brain.html). So far the team has managed to model a 10 cubic millimetre chunk of rat brain, but in February they won a €1 billion grant, which they reckon will take them to simulating a whole human brain.
But copying the brain's architecture misses the point, O'Regan thinks. "How the brain is organised isn't the interesting question," he says. "The interesting question is why do we feel?" In other words, how can electricity moving through neurons create the subjective feeling of pain, or the colour red? "It's not just that we know we are in pain, there is the real, nitty-gritty feel," he says.
Trying to create a machine that experiences pain or colours in the same way that we do might require a radical rethink. Pentti Haikonen, an electrical engineer and philosopher at the University of Illinois in Springfield, believes that we will never create a feeling machine using software. Software is a language, he says, and so requires extra information to be interpreted. If you don't speak English, the words "pain" or "red", for instance, are meaningless. But if you see the colour red, that has meaning no matter what your language.
Most computers and robots created so far run on software. Even if they connect to a physical device, like a microphone, the input has to be translated into strings of 1s and 0s before it can be processed. "Numbers do not feel like anything and do not appear as red," says Haikonen. "That is where everything is lost."
Not so for Haikonen's robot. His machine, called XCR for experimental cognitive robot, stores and manipulates incoming sensory information, not via software, but through physical objects – in this case wires, resistors and diodes. "Red is red, pain is pain without any interpretation," says Haikonen. "They are direct experiences to the brain."
XCR has been built so that, if hit with sufficient force, the resulting electrical signal makes it reverse direction – an avoidance response corresponding to pain, Haikonen says. The robot is also capable of a primitive kind of learning. If, when it is hit, it is holding a blue object, say, the signal from its blue-detecting photodiode permanently opens a switch. From now on, the robot associates the colour blue with pain and reverses away.
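As a software analogy only (Haikonen's whole point is that XCR does this with wires, resistors and diodes rather than code), the learning rule just described can be sketched as follows; the threshold value, class and method names are invented for illustration.

```python
# Software analogy of XCR's associative learning (XCR itself does this in
# hardware, not code). A strong enough "pain" signal arriving while a
# colour is being sensed permanently links that colour to avoidance.

PAIN_THRESHOLD = 0.5  # arbitrary illustrative value

class ToyXCR:
    def __init__(self):
        self.pain_associations = set()  # colours latched to the pain response

    def step(self, colour_seen, impact):
        if impact > PAIN_THRESHOLD:
            # Pain now, colour now: the "switch" opens permanently.
            self.pain_associations.add(colour_seen)
            return "reverse (hurt)"
        if colour_seen in self.pain_associations:
            return "reverse (learned avoidance)"
        return "advance"

robot = ToyXCR()
print(robot.step("blue", impact=0.9))   # reverse (hurt): blue is now "bad"
print(robot.step("blue", impact=0.0))   # reverse (learned avoidance)
print(robot.step("green", impact=0.0))  # advance
```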
Watch the robot in training (see video above) and it is hard not to feel sympathy as it is whacked with a stick. "Bad," it intones. "Me hurt, blue bad." The next time Haikonen tries to push the robot towards a blue object, it backs away. "Blue, bad."
Does Haikonen ever feel guilty about hitting his robot? "Now that you put it that way," he says, "I may feel a little bad."
As robot achievements go, learning to avoid a blue object is no big deal: conventional software-based robots can do it standing on their heads. But the fact that XCR bypasses software, storing sensory information directly in its hardware, takes it the first step down the road to awareness, claims Haikonen. "The contents of the consciousness is limited," he says, "but the phenomenon is there." (
http://dx.doi.org/10.1007/978-3-642-34274-5_4)
Brain in a vat
It's a claim that Haikonen makes very cautiously, and one that has not yet convinced many others. "I would hesitate to call something conscious that had such a limited repertoire of responses," says Murray Shanahan, who studies machine consciousness at Imperial College London. Still, it's a new approach, and the first time that such a claim has been made by any serious AI researcher.
If Haikonen is right, and we can't create a feeling machine based on software, then no matter how big the net gets, it will never be sentient. But a brain in a vat wired up to a supercomputer simulation – a classic thought experiment from philosophy – could be conscious. Haikonen does not say awareness needs a physical body, just a physical brain.
Whether machines of the future run on software or physical brains like those of Haikonen's devising, how would we know if they do achieve sentience? Self-awareness is, by definition, a highly subjective quality. The answer is simple, says O'Regan. Once they behave in the same way we do, we will simply have to assume they are as conscious as we are.
If that sounds preposterous, don't forget, it's the same assumption we make routinely about our fellow humans every day of our lives.
After all, if you somehow got talking to an alien, and you had a similar conversation to one you might have with a person, "you would probably agree that he was conscious – even if it turned out there was cottage cheese inside his brain", says O'Regan. "The underpinnings of his behaviours are irrelevant."
"When you say that I am conscious, that's what makes me conscious," says O'Regan.
This article appeared in print under the headline "I, robot"
From issue 2917 of New Scientist magazine, pages 40-41.
Karma
Devvie
~~~ notemail@facebook.com ~~~
Conare nullius momenti videri, fortasse missilibus careant ("Try to look unimportant; perhaps they are out of missiles")
——
All spelling mistakes are my own and may only be distributed under the GNU General Public License! – (© 95-1 by Coredump; 2-013 by DevNullius)