Thursday, February 15, 2007

Robotics, Artificial Intelligence, Sentient Rights, Speciesism, and Uploading the Mind

by Gregor Wolbring

February 15, 2007

Advances in NBICS (nanotechnology, biotechnology, information technology, cognitive science, and synthetic biology) will increasingly allow modification and enhancement of the human body beyond species-typical boundaries. The appearance of cyborgs, artificial intelligence, sentient non-human life forms, new species (through synthetic biology), and uploading of the mind are a few of the anticipated developments.

"Uploading is the (so far hypothetical) process of transferring the mental structure and consciousness of a person to an external carrier, like a computer. This would make it possible to completely avoid biological deterioration (aging, damage), allow the creation of backup copies of the mind, very profound modifications and postbiological existence."

Uploading is not in the cards for now. Intelligent machines, however, are becoming more pervasive. At the annual RoboBusiness conference last June, Microsoft previewed its robotics software. The first full version of the software was released in December. Microsoft Robotics Studio (MSRS) is both a product and the linchpin of a new educational push: the Institute for Personal Robots in Education (IPRE).

A cooking robot should be available in 2007. Others include Robo Waiter 1, Nanny, T-Rot the Thinking Robot Bartender, and a security robot. The New York Times reported that "by 2007, networked robots that, say, relay messages to parents, teach children English and sing (EveR2-Muse humanoid robot) and dance for them when they are bored, are scheduled to enter mass production. If all goes according to plan, robots will be in every South Korean household between 2015 and 2020."

Robots patrolling the neighbourhood are envisioned for 2010. According to Asia Times, intelligent robots represent an industry that could reach 30 trillion won (US$29.7 billion) by 2013 from the current 300 billion won.

The Japanese Ministry of Economy, Trade and Industry is working on a new set
of safety guidelines for next-generation robots. An article in LiveScience outlines the intent: "This set of regulations would constitute a first attempt at a formal version of the first of Asimov's science-fictional Laws of Robotics, or at least the portion that states that humans shall not be harmed by robots." "Japan's ministry guidelines will require manufacturers to install a sufficient number of sensors to keep robots from running into people. Lighter or softer materials will be preferred, to further prevent injury. Emergency shut-off buttons will also be required."
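
To make the intent of such requirements concrete, here is a minimal, purely illustrative sketch of the kind of control-loop behaviour the guidelines describe -- proximity sensing, slowing near people, and an emergency shut-off. The distance thresholds, sensor inputs, and function names are assumptions made for the sake of the example, not anything drawn from the ministry's actual specification.

# Illustrative only: a hypothetical control loop for the kind of behaviour the
# guidelines describe -- proximity sensing, slowing near people, and an
# emergency shut-off. Thresholds and names are invented for this sketch.

STOP_DISTANCE_M = 0.5   # halt if a person is closer than this (assumed value)
SLOW_DISTANCE_M = 1.5   # slow down inside this range (assumed value)

def control_step(person_distance_m, emergency_button_pressed):
    """Return the motion command for one control cycle."""
    if emergency_button_pressed:
        return "shutdown"      # the required emergency shut-off
    if person_distance_m < STOP_DISTANCE_M:
        return "stop"          # sensors keep the robot from running into people
    if person_distance_m < SLOW_DISTANCE_M:
        return "move_slow"
    return "move_normal"

# Example sensor readings and the resulting commands.
for distance, button in [(3.0, False), (1.0, False), (0.3, False), (2.0, True)]:
    print(distance, button, "->", control_step(distance, button))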

According to the article, "People in Japan are particularly concerned about this problem, due to the accelerating efforts to create robots that will address the coming labor shortage in Japan's elder care industry."

From what one can gather from articles mentioning the regulations (they are still in the making and are expected to be finished by the end of the year), they seem to assume non-sentient robots. The guidelines would not be able to deal with the robots depicted in the movie I, Robot, but rather with robots more like those depicted in the science fiction movies of the 1960s. Of course these 'robot machines' should be safe -- as should any machinery. I do not even understand why the Japanese guidelines outlined in the articles became a news item. We regulate the safety of machines all the time.

Many will say that developing guidelines for robot machines at this level is a missed opportunity to guide the world on the issues of advanced artificial intelligence, with sentient AI as the anticipated endpoint -- the potential merging of sentience with machines and non-human life-forms, and the generation of new life-forms. Indeed, I think broader debate and guidance than that related to mechanical safety is needed.

Research into advanced AI, with sentient AI as the anticipated endpoint, calls for a different debate, one with numerous questions. Should Homo sapiens retain a special, elevated status (see the debate around speciesism)? If yes, relative to what and whom? What would the relationship be between sentient non-humans and humans? Would we have treaties like those we now have between countries? Or would one group try to enslave the other?

As we perform research leading to these new sentient non-human entities, can we build safeguards into the design? What safeguards? If a sentient entity gains rights, how would that relate to the bioethical principles of autonomy, beneficence, justice, and non-maleficence?

Is Homo sapiens the ultimate step in the evolution of the hominid family, or is another step in evolution to be expected or desired? If there is another step, what would it look like? How do we define human beings? What are the criteria for personhood? How do sentient beings relate to today's concept of personhood? Do we have to redefine personhood to take account of new technological realities? If we do redefine personhood, how would that affect people perceived as persons today? Could some who are now perceived as persons become non-persons? The concept of 'personhood' has been used throughout history to strip people -- and often entire groups of people -- of their human rights.

Can we give other biological and non-biological forms sentience without consent? To what extent should we design life forms through NBICS? Would we have to move from human rights to sentient rights? Would we have to link personhood to cognitive capabilities? How would we set the limits -- for example, what level of cognitive ability would one have to show before achieving full legal protection? It is interesting that one of the demands at a recent World Congress of Disabled Peoples' International was: "We defend and demand a concept of 'person' that is not linked to a certain set of abilities."

Uploading the mind into a non-human biological or non-biological framework, and the sentience of AI, challenge what is human. But that assumes that being human is directly linked to the human body. Are consciousness and sentience linked to the Homo sapiens body? If not, might sentience, rather than the fact of being human, be the guiding factor for rights? Should human rights be replaced by sentient rights in the future? If yes, under what circumstances?

The Choice is Yours

While all of this may sound futuristic, and sentient non-humans may never come into existence, a lot of money is being spent on pursuing this research, and these questions are seldom debated. I think they deserve much broader attention.

Gregor Wolbring is a biochemist, bioethicist, science and technology ethicist, disability/vari-ability studies scholar, and health policy and science and technology studies researcher at the University of Calgary. He is a member of the Center for Nanotechnology and Society at Arizona State University; a member of CAC/ISO, the Canadian Advisory Committee for the International Organization for Standardization section TC229 Nanotechnologies; a member of the editorial team for the Nanotechnology for Development portal of the Development Gateway Foundation; Chair of the Bioethics Taskforce of Disabled Peoples' International; and a member of the Executive of the Canadian Commission for UNESCO. He publishes the Bioethics, Culture and Disability website, moderates a weblog for the International Network for Social Research on Disability, and authors a weblog on NBICS and its social implications.
