Module 8, Lesson 15: Future Concerns--Virtual Reality, Robot Ethics, and the Infosphere
There are a number of emerging technologies that we need to spend some time thinking about as we near the end of this course. Given that the technologies you use are often designed years before they actually reach the public, decisions are being made right now in research and design labs around the world that you are unaware of, yet the ethical choices made by the designers and builders of these emerging technologies will deeply affect your life in the near future.
Let's begin by reading section 2.4 and all of section 3 here: http://plato.stanford.edu/entries/it-moral-values/. We can see from this reading that whatever specific technology we are talking about, its rate of change is likely to be accelerating. We are used to innovating on technology, but information technology has allowed us to innovate on the processes of innovation themselves. The consequence is that we do not have as much time to adjust to new technologies as we might have in the past. Where it took about seventy-five years for landline telephones to reach ninety percent market saturation in the US, it took cell phones about fifteen years to do the same thing. Additionally, mobile smartphones are going to bring internet technologies to people who have been left on the wrong side of the digital divide.
We now have world-changing technologies coming at us faster, which gives us less time to think through their effects on our society before the next one comes along. We will now look at three broad categories of information technologies that seem poised to bring real change to our lives in the foreseeable future. Each one of these could have an entire course devoted to it, so we will have to make do with a quick overview for now, but we all need to watch the development of these technologies over the next few years and contribute where we can to their ethical development.
Virtual Reality
Virtual reality is all hype, until it is not. This technology has been a staple of science fiction books, movies, and games since the middle of the last century. Unfortunately, it has had a slow birth as a real technology. There was a brief period in the 1990s when the technology at first seemed ready for implementation but then died out as a fad. Today we are seeing another resurgence of interest, with big-money investment from companies like Facebook in the Oculus Rift headset. The idea is that this technology will have initial payoffs in gaming but will expand well beyond that as a new interface for social networking and business applications.
Included in this technology are things like blended reality and augmented reality, where one might use a device like Google Glass to see virtual images superimposed on one's visual field. You could use data and information superimposed over what you are looking at to help make decisions. Perhaps you are looking at a store shelf and comparative prices flash before your eyes next to the item you are looking at, helping you make a more informed buying choice. You could be looking at a historical ruin and also see the building as it would have looked when new. An acquaintance stops you to say hi but you have forgotten their name; no worries, your augmented reality system has already brought up their Facebook page, and you are able to use their correct name and strike up a conversation about the new pet you just saw on their newsfeed, as if they were an old friend.
There are some easy-to-see ethical issues involved with this technology, such as the nausea it induces in some people, which might limit their ability to partake in these new virtual wonders. These technologies play with our sense of reality, and while they open up the virtual world for us, at this point that seems to come at the cost of our awareness of the physical space around us. We do not want to increase accidents in the real world while pursuing fun in the virtual one. Augmented reality might also be used to unfairly alter our buying choices if we are not careful with this technology.
A harder-to-see concern is what some philosophers call "hyperreality" (please read this article). Have you ever been to a live event and wished that you were instead watching it at home because it would be so much easier to follow the action? Our ability to create televised images is so good now that it is in many ways better than the real thing. Hyperreality is the next step in this direction, and it will occur when we either can't tell, or don't care about, the difference between what is real and what is fiction, what is artificial and what is not. We might greatly prefer to live in our virtual worlds and spend less and less time in reality. Some studies have shown that children spend less time outdoors than earlier generations did, preferring to stay indoors consuming media. If that is the kind of effect old-style media can have, just imagine the results with virtual, blended, and augmented reality.
Robot Ethics
Robotics is another field that seems ripe for innovation and consumer adoption. Robots have, of course, been a staple of science fiction since at least the 1920s, if not before. The idea may be old, but the enabling technologies have been lacking. It now seems that the miniaturized electronics and computers in our smartphones, along with mobile networks and cloud computing, are combining to make a number of robot applications feasible. Let's take a quick look at the ethical impacts of these new robotics technologies. This subfield of computer ethics has picked up the name "roboethics," and at this time it focuses on these areas:
The following are excerpts from "Open Questions in Roboethics," Philosophy and Technology, vol. 24, no. 3 (September 2011), pp. 233-238; Luciano Floridi (ed.), John P. Sullins (guest ed.).
Military applications
This is by far the most important of the subfields of roboethics. It would have been preferable had we worked through all the problems of programming a robot to think and act ethically before we had them make life and death decisions, but it looks like that is not to be. While teleoperated weapons systems have been used experimentally since the Second World War, there are now thousands of robotic weapons systems deployed all over the world, in every advanced military organization and in an ad hoc way by rebel forces in the Middle East. Some of the primary ethical issues to be addressed here revolve around the application of just war theory. Can these weapons be used ethically by programming the rules of warfare, the law of war, and just war theory into the machine itself? Perhaps machines so programmed would make the battlefield a much more ethically constrained space? How should they be built and programmed to help war fighters make sound and ethical decisions on the battlefield? Do they lower the bar to entry into conflict too far? Will politicians see them as easy ways to wage covert wars on a nearly continuous basis? In an effort to keep the soldier away from harm, will we in fact bring the war to our own front door as soldiers telecommute to the battlefield? What happens as these systems become more autonomous? Is it reasonable to claim that humans will always be "in" or "on the loop" as a robot decides to use lethal force?
Privacy
Robots need data to operate. In the course of collecting data they will collect some that people may not want shared but which the machine nonetheless needs to operate. There will be many tricky conundrums to solve as more and more home robotics applications evolve. For instance, if we imagine a general-purpose household robot of the reasonably near future, how much data about the family's day-to-day life should it store? Who owns that data? Might that data be used in divorce or custody settlements? Will the robot be another entry point for directed marketing into the home?
Robotic ethical awareness
How does a machine determine whether it is in an ethically charged situation? And assuming it can deal with that problem, which ethical system should it use to help make its decision? We are sorely lacking the specifics needed to make any answer to these questions more than theoretical. Programmers and engineers are wonderfully opportunistic and do not tend to have emotional commitments to this or that school of thought in ethics. What we see occurring today, therefore, is that they tend to make a pastiche of the ethical theories on offer in philosophy, picking and choosing the aspects of each theory that seem to work and deliver real results. A rough sketch of what such a pastiche might look like in code follows below.
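To make the idea of a pastiche of theories a bit more concrete, here is a minimal, purely illustrative Python sketch that filters a robot's candidate actions first with a deontological rule (never harm a human) and then with a crude consequentialist cost-benefit check. The Action fields, the rule, and the numbers are all hypothetical assumptions made for this example; they are not drawn from any real robot's code.

```python
# Illustrative sketch only: a "pastiche" ethical filter for a robot's actions.
# The Action fields, the deontological rule, and the utility estimates are
# hypothetical assumptions for this example, not any real system's design.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool        # would the action physically harm a person?
    expected_benefit: float  # rough consequentialist estimate, 0.0 to 1.0
    expected_cost: float     # rough consequentialist estimate, 0.0 to 1.0

def permitted(action: Action) -> bool:
    # Deontological layer: some actions are ruled out regardless of outcomes.
    if action.harms_human:
        return False
    # Consequentialist layer: among remaining actions, require that the
    # expected benefit outweigh the expected cost.
    return action.expected_benefit > action.expected_cost

candidates = [
    Action("hand medication to patient", harms_human=False,
           expected_benefit=0.9, expected_cost=0.1),
    Action("push past bystander to save time", harms_human=True,
           expected_benefit=0.4, expected_cost=0.2),
]

for a in candidates:
    print(a.description, "->", "allowed" if permitted(a) else "blocked")
```

Even this toy example shows where the philosophical disputes resurface: someone has to decide which rules are absolute, how benefits and costs are estimated, and which layer wins when they conflict.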
Affective robotics
Personal robots need to be able to act in a friendly and inviting way. This field is often called social robotics, or sociable robotics, and was largely the brainchild of Cynthia Breazeal from the MIT robotics lab. The interesting ethical question here is: if your robot acts like your friend, is it really your friend? Perhaps that distinction doesn't even matter? With sociable robotics, the machine looks for subtle cues gathered from facial expression, body language, perhaps heat signatures or other biometrics, and uses this data to ascertain the user's emotional state. The machine then alters its behavior to suit the emotional situation and, hopefully, make the user feel more comfortable with the machine (see the sketch below). If we come to accept this simulacrum of friendship, will it degrade our ability to form friendships with other humans? We might begin to prefer the company of machines.
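The sense-infer-adapt loop just described can be sketched in a few lines of Python. This is only an illustration under invented assumptions; the sensor cues, emotion labels, and response choices are hypothetical placeholders, not the interface of any actual social robot.

```python
# Illustrative sketch only: the sense -> infer -> adapt loop of a sociable
# robot. All cues, labels, and responses are hypothetical placeholders.

def infer_emotion(facial_expression: str, posture: str) -> str:
    """Guess the user's emotional state from crude biometric cues."""
    if facial_expression == "frown" or posture == "slumped":
        return "distressed"
    if facial_expression == "smile":
        return "happy"
    return "neutral"

def choose_behavior(emotion: str) -> str:
    """Pick a response intended to make the user more comfortable."""
    responses = {
        "distressed": "speak softly and offer help",
        "happy": "mirror the user's enthusiasm",
        "neutral": "continue the current task",
    }
    return responses[emotion]

# One pass through the loop with made-up sensor values.
emotion = infer_emotion(facial_expression="frown", posture="slumped")
print(choose_behavior(emotion))  # -> speak softly and offer help
```

Notice that nothing in the loop requires the machine to feel anything; it only needs to classify and respond, which is exactly why the "is it really your friend?" question arises.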
Sex Robots
It seems strange, but it is true that there are already semi-responsive sex dolls that count as a minor type of robot. These machines are such a tantalizing dream for some roboticists that there is little doubt this industry will continue to grow. This category of robotics supercharges the worries raised by affective robotics and adds a few more. Sociable robots examine the user's biometrics so the robot can elicit friendly relations, but here the robot examines biometrics to elicit illicit relations. A sex robot manipulates very strong emotions, and if we thought video games were addictive, then imagine the behavior a game console one could have sex with might produce. These machines are likely to remain on the fringe of society for some time, but the roboticist David Levy has argued that since this technology can fulfill so many of our dreams and desires, it is inevitable that it will make deep market penetration and eventually become widespread in our society. This will result in many situations running the spectrum from tragic, to sad, to humorous. The key question here is whether the machines can really be filled with love and grace or whether we are just fooling ourselves with incredibly expensive and expressive love dolls. I can easily grant that engineers can build a machine many would like to have sex with, but can they build a machine that delivers the erotic in a philosophical sense? Can they build a machine that can make us a better person for having made love to it?
Carebots
Somewhat related to the above is the carebot. These machines are meant to provide primary or secondary care to children, the elderly, and medical patients. There are already a number of these machines, such as the Paro robot, in service around the world. On one end of the scale you have something like Paro, which is meant to provide a kind of pet therapy for its users. Toward the middle of the scale you would have machines built to assist medical caregivers in lifting and moving patients, helping to monitor their medications, or simply checking in with patients during their stay. At the far end of the scale you would have autonomous or semi-autonomous machines with nearly full responsibility for looking after children or the elderly in a home setting. Here again we have some of the same issues raised by social robotics and the concomitant privacy issues. But in addition you have the troubling question of why other humans are not taking care of their own children and elderly. What kind of society are we creating when we wish to outsource these important human relations to a machine?
Robot Surgery
These are robots that assist in surgery and other life and death medical practices such as administering medication. Often the surgeons using these machines are close by, but the technology could also allow a surgeon to work on a patient many thousands of miles away, perhaps a wounded soldier or a patient with a serious condition living in a remote or economically depressed part of the world. This technology puts a new wrinkle on many of the standard medical ethics issues, and we need more medical ethicists to study this phenomenon in depth.
Autonomous vehicles
Our roadways are soon to change in a very radical way. Autos and large transportation vehicles may soon have no human driver. Many of our vehicles can already be seen as robots of a sort: some luxury vehicles will take over in emergency braking situations or when you fall asleep at the wheel, and a number of autos will park themselves completely autonomously. The vast majority of the issues involved here will be legal, but there will also be social upheaval and resistance. Imagine the damage autonomous cars will do to the egos of the American male who largely bases his entire personality on his vehicle. More importantly, can one trust a vehicle to make the right decisions when those decisions mean the lives of you, your family, and all those around you? There have already been deaths caused by faulty automatic navigation services, because people robotically follow the robotic voice no matter what it says, even when it gives incorrect directions that lead them out into the middle of Death Valley. That latter problem arises partly because mapping services use proprietary data programmed in by people with no experience of the territory they are mapping, and they do not share changes in road conditions that might save the lives of a competitor's users.
Attribution of moral blame
This is one of the biggest conundrums in roboethics. Nearly all moral systems have some way of assessing which moral agent involved in a situation is to blame when things go wrong. Most humans respond to blame and punishment and will modify their behavior to avoid them when possible. But how does one blame a machine? Will people use robots as proxies for bad behavior in order to remove themselves from blame? When a military robot kills innocent civilians, who is to blame? If you are asleep in your robotic car and it runs down a pedestrian, did you commit manslaughter or are you just an innocent bystander?
Environmental Robotics
There are two ways to look at the environmental ethics impacts of robotics. One is to look at the impact of the manufacture, use, and disposal of robots. Currently there is no green robotics that I am aware of, and we should push for it to be developed. The second interesting idea is that robotics could provide an invaluable tool for gathering data about environmental change. The very same robots that are used to monitor enemy troops and scour the ocean floor for enemy activity can easily be re-tasked to monitor forests and ocean ecosystems, protect whales and dolphins, or take on any number of environmental tasks that unaided humans find difficult.
Infosphere
The infosphere is a word that was coined to refer to the new information environment we are creating, which layers over the natural environment. Think of it like the ecosphere, but for machines. Machines use information technology to work together, and as these networks continue to form and evolve, the idea is that they will reach a complexity that rivals the natural world. Watch this short video that explains how the philosopher Luciano Floridi uses the term infosphere. Another way to think about it: if we were to meld virtual reality and robotics together with future information technologies and networks, we would have created the infosphere. The ethical challenges of this would be very large, as what we are doing is creating a new kind of environment along with the organisms that would inhabit it. No humans have ever faced this kind of challenge before, and it would require equally innovative thought in ethics and morality to do it correctly. One new form of ethics that attempts to do this is called information ethics, and it has a number of subfields that focus on specific challenges of the growth of information technology and the infosphere.
Assignment 22: Writing Reflection (200 to 400 words), posted in the comments section below. We have covered a wide territory here. Go back and pick something that you found particularly interesting or challenging and describe how an understanding of the ethical theories we have looked at in this class can help us make better choices in the development of emerging technologies.
Assignment 22
One section that truly challenged my open-mindedness was the one about sex/love robots. When thinking critically about this new technology, a lot of personal beliefs come up: how do we define love? Life? Do we really know enough about either of these to form any concrete definitions? The author of the article seems to think love is made entirely of what one perceives: the one-sided experience in one's head. He thinks it doesn't matter what the person you are in love with is actually like; all that matters is what you experience them to be. In a sense that is true, because everyone's reality is different. However, I don't believe that means you can equate a relationship between two humans with a relationship between a human and an object which someone perceives to be semi-real. Love and human connection are not objective, factual, measurable, programmable interactions, because we don't truly know what makes us feel that way, aside from oxytocin. We don't know what we want, so how can we program a robot to be "how we want"? While I find some serious issues with this subject, I don't think it is wrong on an ethical level, because everyone is entitled to their opinion. They are not using anyone or harming anyone in this process, and people are only trying to find solutions to their own loneliness. It could be considered ethical egoism to allow sex robot research to continue: if this is good for those pursuing it, let it be. However, the second this development begins harming relationships and human connection, it may become more of an ethical issue. This could be related to value ethics, as it may infringe upon our values.
Assignment #22
Virtual reality is a big one for me; it is one of those technologies that is taking off at the moment. When you use it, you are mesmerized by the visual resolution that brings all sorts of entertainment very close to reality. For example, there is a game that consists of walking on a tightrope; as a player, I can tell you the first step I took was very difficult because it really seemed as if I was suspended hundreds of feet off the ground, despite the fact that I knew in my head that it was a virtual reality experience. More complex is the relationship between fantasy and reality. Even if the bad behavior rests solely in one individual's private thoughts, does that thinking pose a danger to other people? There is evidence that repeated exposure to pornography is associated with harmful conduct towards women and that it legitimizes violent attitudes and behaviors. Does that evidence mean we should worry about misogynistic or violent virtual reality experiences? Will these "games" make it more acceptable for people to engage in actual harmful behaviors? As the world moves towards a future based on virtual reality, artificial intelligence, and machine learning, we have to think about where to draw virtual lines, what kinds of situations are problematic, and how to recraft our laws, regulations, and policies for the digital world.
Excellent posts so far. These are some very challenging future technologies, and in many ways science fiction is not as strange as some of the actual technologies on the way.
ASSIGNMENT 22
One thing that I found particularly interesting is autonomous vehicles. Growing up, I never in a million years thought that one day I would be part of a generation that would get to experience self-driving cars. I do believe that these autonomous qualities can be life-saving, for example if someone falls asleep at the wheel. But I also believe that autonomous reflexes will never be comparable to human reflexes. In life or death situations involving cars, I don't know that I would be able to trust my car to save me. I would like to have control to make the decisions that I feel are right. Autonomous vehicles will also make upholding traffic laws very difficult and may eventually put police officers out of their careers. If a self-driving car is speeding, can the "driver" get a ticket? Also, how would car accidents work? Who would be to blame? The car? These are all things that people need to consider when getting an autonomous vehicle. An ethical theory that helps us better understand the production of autonomous vehicles is utilitarianism. The environmentally sound qualities of these self-driving vehicles mean their benefits provide the most good for the majority of people, the utilitarian way.
Assignment 22
I found the most interesting phenomenon here to be that of the sex robots. At first glance, the whole idea sounds quite concerning to me. Sex robots? That just sounds like a horrible science-fiction dream, not something that will actually be invented and used by the masses. While I have definite concerns, a part of me does think that maybe this idea isn't completely ridiculous. In terms of concerns, I think this may damage how men see women even more. We already deal with so much sexism, harassment, etc., as seen by the #MeToo movement that is still going on today. I can see a future where men, now used to a sex robot that does everything they want in bed, are unable to talk to any women, to treat them correctly, or to have a normal relationship. This could easily happen because men would now have a quick and easy way to get what they want. But there is a part of me that thinks that if we consider utilitarian thinking, we can at least try to understand why men would want a sex doll. It could be that some men just aren't good with women, and this is their outlet. And I think that if this makes them happy and keeps them from creeping on women in awful ways, everyone stands to benefit.
Assignment 22:
The section on virtual reality reminded me of the novel Ready Player One. In this novel, many people lived in their virtual reality, and it very much affected their real lives as well. It is interesting to me that this piece of fiction is actually not far off from the similar technological products we are considering releasing. Consequentialism is a theory that should largely be referred to when thinking about the release of new technologies. Questions of who benefits from the release and who will be negatively affected should be a large consideration. Precautions to help people maintain their real lives and not get lost in virtual reality, so as not to wither away in their offline lives, should be implemented. Although this would be difficult to enforce, given what we have seen with just internet addiction, it is largely important to educate people about the potential harms of neglecting the offline life and how to optimize both experiences: virtual and real. With the lack of time to adjust to new releases of technology, as was mentioned in the lecture, there is no time to become educated in the ethics of cyberspace. I think it is important that we allow that time to adjust, think, and become educated before releasing anything new. Issues such as children's decrease in outdoor play and increase in media engagement should be addressed and dealt with before compounding this problem amongst the various other problems that have occurred. Much of the problem is a lack of reasonable patience to hold off on releasing a new invention until these problems are more thoroughly addressed.
Assignment #22
A topic that I found to be very interesting is the idea of autonomous vehicles. Actually, it is not so much an idea anymore, because many major car companies say autonomous transportation will be available very soon. I think the ethics that need to be looked at when it comes to self-driving cars are the simple principles by which a person may break the law for ethical reasons. For example, speeding to rush someone to the hospital would be ethically justified to most people, even though it is illegal. Another example is that people doing the speed limit on a busy highway will often get in the way of rushed commuters, so it is ethically responsible to stay on the right of the road and let those fast drivers pass. An autonomous vehicle that is unaware of these unwritten rules of the road may cause frustration and even accidents, because it was programmed to follow the law rather than make common-sense, ethical decisions. An ethical theory that we can look to for helping us as a population make better choices with newly developed technology would be utilitarianism. Utilitarianism is an ethical theory that states that the best action is the one that benefits the majority. I believe this would help with emerging technologies because ultimately that is the whole point of new technology: to make our lives easier. Utilitarian views toward new technology like self-driving cars will greatly help our society as a whole, if the technology is perfected and used responsibly. Obviously this is not going to be an easy task to accomplish, but as we have seen with inventions such as computers, cell phones, and television, there will be a learning curve; as we evolve, the technology will evolve with us and become safer to use and easier to access.
Assignment #22
Two topics that stood out to me both pertain to robotics: robot surgery and environmental robots. Although doctors already use robots for medical aid during surgery, I am not entirely sure I believe it is ethical or trustworthy. I know robots today are highly intelligent and in some ways smarter than some humans, but I don't trust them with something as important as surgery. If something were to go wrong during surgery, who would be ethically responsible? The doctor or the programmer? It is hard to tell, and that is why I don't think they are necessary. On the other hand, I find environmental robots highly admirable. I think there is a huge need for this type of technology in the environment, and it could better the world for everyone's overall happiness.
Assignment #22
Out of all of the topics, the one that I found to be most interesting and challenging is autonomous vehicles. These driverless vehicles are probably going to become the next 'big' thing. Google recently tested one of its driverless vehicles on city streets back in November. Although some of today's vehicles can be seen as "robotic" with their technological advances, that is very different from having the vehicle actually drive itself. There are so many legal concerns and questions that come with these autonomous vehicles. Just a few months ago my family and I had a discussion about this very subject; we posed some questions to one another and were unsure how to answer them. If the vehicle gets in a crash, is it the driver's fault or the car company's fault? If an accident were to occur when the driver is intoxicated, will that be considered a DUI, and will they be at fault? Will there be an age limit that does not allow someone under a particular age to have access to an autonomous vehicle? Will there still be tickets given out for handling a technological device, like a cellphone, while being driven in an autonomous vehicle? Will someone whose license was revoked for old age be allowed to get it back if they own an autonomous vehicle? There are many different ethical theories that can help us better understand the production of autonomous vehicles. One in particular is Jeremy Bentham and John Stuart Mill's classical utilitarianism: the overall concept of increasing net pleasure and producing the "most good for the most people." Will the production of autonomous vehicles create the most good for the most people? Or will it only do so for a few people? Because one person's happiness does not outweigh another's, it might be best to cease the development of autonomous vehicles.