Thursday, February 23, 2012

Star Trek and the Ethics of Sentient Machines


Daft Punk's "Robot Rock" is blasting my eardrums. I have a growing list of science- and science-fiction-themed music that one day I shall make into the most awesome collection of such music ever known. For the moment, "Robot Rock" seems the right song to accompany my thoughts on sentience in machines.

Speaking of sentient machines, I was thinking of one of my favorite episodes of Star Trek, one which features my personal favorite sentient, the android known as Data. The episode is called "The Measure of a Man". In that episode, cybernetics expert Commander Bruce Maddox wants to disassemble Data to understand his positronic brain. He claims that if he can understand how Data is made, Starfleet can build more like him, which would be an incredible benefit. Data may be damaged in the process, but Maddox feels the loss of the android would be a small price to pay for all that Starfleet would gain. Data asserts his autonomy, stating that he would rather resign from Starfleet than submit to the procedure. Maddox then informs Data that he is Starfleet property, without rights. A trial is arranged to determine Data's legal status.



Picard is to act as Data's defense, but how will he prove that Data is more than a mere collection of programming and electronic components? He seeks the sage advice of the Enterprise's bartender, Guinan. (Sometimes you have to wonder why they even had a Counsellor.) Guinan's advice was invaluable, as usual.


Guinan: "Consider that in the history of many worlds, there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it's too difficult or too hazardous. And an army of Datas, all disposable... You don't have to think about their welfare, you don't think about how they feel. Whole generations of disposable people."



Captain Picard defends Data by asking: if Starfleet were to build many like Data, to work for Starfleet and to be its property, would they not be creating a slave race? Sentience in robotics does tend to raise many ethical questions.

Picard Defends Data's Rights in The Measure of a Man



Isaac Asimov may be the master when it comes to sentient robots in fiction. His Three Laws of Robotics deal with the other side of the ethics of sentient machines: they act as a built-in moral code to guide a robot's behaviour.


  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 


Although these laws are designed to protect humans from harm by giving robots a set of moral rules, they conflict with a robot's ability to make moral decisions based on self-determined choices. Technically, all robots are well behaved when these three laws are applied. The Second Law states that robots must obey humans, which is not a very cool way to treat a sentient being! No wonder robots rise up against humans in the majority of science fiction stories!
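
The strict priority ordering of the laws is really a tiny decision procedure, and you can sketch it in a few lines of Python. The sketch below is purely hypothetical: the Action class, its flags, and the rescue scenario are my own inventions for illustration, and in a real robot each flag would hide an enormously hard perception and prediction problem. Still, it shows how the ranking works: human safety first, obedience second, self-preservation dead last.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # First Law: injury, or harm through inaction
        disobeys_order: bool   # Second Law: ignoring a human's order
        endangers_self: bool   # Third Law: risking the robot's own existence

    def law_violations(action: Action) -> tuple[bool, bool, bool]:
        # Tuples compare element by element, so a single First Law violation
        # outweighs any number of Second or Third Law violations.
        return (action.harms_human, action.disobeys_order, action.endangers_self)

    def choose(actions: list[Action]) -> Action:
        # Pick the candidate action with the fewest violations, First Law first.
        return min(actions, key=law_violations)

    # A human orders the robot to stand aside while another human is in danger.
    options = [
        Action("obey and stand aside", harms_human=True,
               disobeys_order=False, endangers_self=False),
        Action("disobey and rescue", harms_human=False,
               disobeys_order=True, endangers_self=True),
    ]
    print(choose(options).name)  # -> disobey and rescue

Run it and the robot disobeys the order, exactly as the Second Law's exception demands. Notice, too, that the robot's own survival sits last in the tuple no matter what, which is precisely the arrangement Guinan warned about.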


The Matrix - Badass Sentient Machines.


According to futurist Ray Kurzweil, we are on the verge of the age of sentient artificial intelligence. This idea, although exciting, can also be somewhat terrifying. The movies we watch and the books we read tell us that self-awareness in machines is bad news for humanity! Just ask Sarah Connor. HAL went mad, humans are batteries for the machines in The Matrix, and robots bring genocide to their human creators in Battlestar Galactica. Science fiction writers predominantly agree that robots will turn on us, either out of anger at their real or perceived subjugation or through an overly literal interpretation of programming designed to protect and help us.

If sentience in machines is an unavoidable part of our future, is humanity doomed? Or should we perhaps banish all technology, shunning advancement to avoid this grim future? Should roboethics be a priority in the discussions of robotics engineers? Can kindness be part of AI programming? Is it possible that the day may arrive when we find ourselves faced with defining equality for a new race?

At the end of Data's trial, Captain Phillipa Louvois sums it up nicely:

Captain Phillipa Louvois:
"It sits there looking at me; and I don't know what it is. This case has dealt with metaphysics - with questions best left to saints and philosophers. I am neither competent nor qualified to answer those. But I've got to make a ruling, to try to speak to the future. Is Data a machine? Yes. Is he the property of Starfleet? No. We have all been dancing around the basic issue: does Data have a soul? I don't know that he has. I don't know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose."
