Pages: 9 pages / ≈2,475 words
Sources: 7
Style: APA
Subject: Psychology
Type: Essay
Language: English (U.S.)
Document: MS Word
Topic: At What Point Should We Give Robots Rights, and Which Rights?

Essay Instructions:

You will write a 2,200-2,600 word argumentative paper on one of the topics we cover in class. The articles used as sources are listed on the uploaded syllabus, starting from Topic 5 through the last topic.

Essay Sample Content Preview:

Robot Rights
Student’s Name
Institutional Affiliation
Robot Rights
Introduction
The question of whether to give robots rights has been the subject of considerable debate. Much of this debate turns on whether robots can have a conscience, the part of the mind that tells people whether their actions are right or wrong, and it is difficult for robots to display such a trait. Floreano and Keller (2010) explain that robots are not part of the evolutionary process and are created to ease the burden on humans. This notion is evident in several parts of the world; in India, for instance, robots have replaced sewer workers, who die each year from inhumane working conditions (Callahan, 2017). Unlike human beings, robots are computational systems that can be backed up and duplicated onto new hardware. Humans are irreplaceable individuals with a finite lifespan, whereas robots are not unique and are easily replaceable. Even if robots reach a level of cognitive ability that includes consciousness and self-awareness, questions will remain over whether they should have the same rights as humans. Although robots should be given rights, an important consideration is the context in which those rights are given. The primary purpose of their creation is to help human beings in any way possible. In the Indian example, sewer cleaning violates the rights of human beings, and robots have been chosen as the best replacement; using robots in this case would not be considered immoral. The context in which robots are given rights is therefore most important. Against this backdrop, this essay discusses which rights robots should be given and at what point they should be given them.
Why Robots Should Have Rights
Humans have attributed moral accountability to robots by giving them emotions. According to Arbib and Fellous (2004), robots have been built with emotions to help with human interaction; they have been given emotional expressions and bodily postures to better imitate human interaction. However, these feelings are only simulations. Building robots with emotional attributes might nonetheless lead humans to hold them accountable for their actions. A study by Ackerman (2010) confirms this by analyzing how humans react to a robot's judgment. In the study, the robot was designed to be socially interactive as a means of convincing the subjects that it was able to form social relationships, yet it also judged the subjects' choices to determine whether they would win the final prize. Thirty percent of participants believed that the robot had emotions, while fifty percent thought that it had a conscience (Ackerman, 2010). The fact that a robot can make judgments that affect humans, or take actions that might negatively influence human beings, is reason enough to give robots rights. A robot might also malfunction and cause great harm to individuals.
An example would be a malfunction while the robot is driving that causes it to accidentally run over a human. Once we acknowledge that robots can have emotional and moral attributes, it becomes essential to give them rights that keep them morally harmonized with humanity. To minimize the possibility of malfunction, robots should be given the right to be designed to a reasonable degree of trustworthiness, meaning that they should be socially compatible, cognitively compatible, and technologically fit for purpose. Robots should also be given the right to be protected by ethical and legal systems. This right protects a robot from being deployed without having been adequately tested and confirmed to meet professional safety and ethical standards. According to Murphy and Woods (2009), robots should meet the highest legal standards of ethics and safety; moreover, they should be endowed with sufficient autonomy to protect their own existence. That protection, however, must not conflict with their role of responding appropriately to humans, nor with the requirement that they not be deployed without meeting safety standards (Murphy & Woods, 2009).
As machines continue to acquire human-like abilities, they are also likely to be regarded as social equals rather than pieces of property. Humans are becoming more mutually connected to robots, which can now establish a sense of mutual relation that evokes the human desire to nurture them. As Turkle (n.d.) argues, even basic interactions with robots have changed how humans perceive them. Individuals who have had close encounters with robots for some time come to regard them as objects of compassion, and this changes the way humans judge the appropriateness of a relationship with a machine. The more individuals interact with machines, the more likely they are to develop a connection that is deeper than expected and that resembles a human-to-human relationship. Such a connection evokes the desire to take care of these robots and nurture them. Turkle offers some interesting insights into how close humans and robots have become. During the 1980s, humans developed deep connections with computational objects, and simple toys easily became engaging. By the 1990s, Turkle explains, the community was shifting toward a robotics culture, which changed the perception of robots from simple pieces of property into relational artifacts. This shift was accompanied by the notion of creating a companionship that feels appropriate to a person-robot relationship. With rapid technological advancement, more humans are likely to encounter robots and develop deep connections with these machines, connections bound to spark personal relationships and the desire to nurture the robot, as Turkle explains. The possibility that robots can evoke a sense of mutual relation with humans makes it necessary to give robots rights, rights that will protect not only the robots but humans as well.
Although robots have been shown to deserve rights, it is essential to determine the point at which they deserve them. Machines are becoming more life-like and intelligent, and they have also been shown to evoke mutual connections. Such traits can be attributed to human-like capabilities, and because of this, it is incumbent upon us to consider robots as more than pieces of property. The main issue, however, is deciding which traits or cognitive thresholds qualify an entity to be recognized as moral and to have social rights. The most critical ethical thresholds are the capacity to be a responsible moral actor, to be self-aware, and to experience pain. Robots should be given rights at the point when they display all three of these thresholds.
Robots can be smarter than humans, but the ability to be self-aware and conscious is much more complicated. To some degree, the way we judge robots as moral is not the same as the way we measure our own morality. Nelkin (2013) explains that morality is subject to luck and that humans are morally assessable only to the extent that factors under their control are used to assess their morality. Our everyday judgments as human beings are based on moral luck; if humans adhered strictly to the control principle, it would be impossible for individuals to assess each other morally.
In some cases, however, we tend to judge an individual’s morality based on their support for specific actions. Nelkin (2013) offers an example: when a woman is pushed and steps on a person’s toe, that person is less likely to blame her for her action. This shows that individuals’ actions can be affected by factors beyond their control, and even when such factors force individuals into doing things that are not moral, we do not treat them as less moral. Upon reflection, it would seem that humans assess people morally based on all kinds of ...