Sex With Robots: A Debate
The development of robotic technology and Artificial Intelligence has been a hot topic at ideacity and in popular culture. Most fictional depictions of AI have themes of morality and power at their core – whether it’s humans abusing their power over sentient robots, or robot sentience as a danger to society.
Recently, there has been an increase in conversation surrounding the morality of love and sex with robots, both in philosophy and in fiction. The film Her depicts a love story between a man and a sentient, bodiless operating system, personified with a female voice. Westworld offers a darker take on the subject, in which an artificial world of androids is created for humans to live out their most violent fantasies.
Needless to say, last year’s debate on Love & Sex With Robots was highly anticipated. Sex dolls already exist, and with advances in humanoid robotics, one can only assume that sex robots are not far behind. Companies like RealDoll have already discussed plans to create sex robots that can communicate with human partners. On the flip side, the makers of the Pepper robot have actually included a ban on sexual acts in its user agreement. We brought in two experts on the subject to debate the future of sex robots, from both a practical and a moral perspective.
On the pro side of the debate was Dr. David Levy, author of “Love and Sex with Robots,” whose PhD thesis examined intimate relationships with artificial partners. Dr. Levy begins by predicting that by the year 2050, people will not only be having sex with robots but falling in love with (or even marrying) them. He argues that the most common reasons people engage with prostitutes are a desire for variety in their sexual experiences, or a struggle to develop intimate relationships organically. Because of this, he believes the development of sex robots will ultimately lead to a decrease in the demand for prostitution – perhaps even a decrease in sexual assault.
On the con side of the debate is Dr. Kathleen Richardson, director of the Campaign Against Sex Robots. Her arguments are based on the importance of empathy and the ethics of anti-slavery. She asks the question: would sex robots contribute to developing empathy or interrupt it? She argues that the literal objectification of sex partners would interrupt the process of empathy, and reinforce the politics of entitlement that contribute to sex trafficking and assault.
Both speakers focused on the potential impacts on humans of developing sex robots. Dr. Richardson argues that sex robots would decrease empathy and ultimately increase harm by reinforcing the sexual objectification of humans – particularly women and children. Dr. Levy counters that sex robots would provide an outlet for people who cannot form consensual sexual relationships, thereby reducing the potential for sexual harm to humans.
Interestingly, neither speaker discussed the potential for harm to robots that have been designed specifically for human satisfaction. In fact, Dr. Richardson emphasizes at the very beginning of her talk that she is not talking about inanimate objects: “Inanimate objects can’t be harmed. People can.” At the end of his talk, Dr. Levy asks: if adults can do what they like with consenting adults, why not with consenting robots? What he does not ask is how consent between a human and a robot would be established. Shows like Westworld depict humanoid robots so sentient that they believe they are human. At the heart of this imagined world lies a very real moral question: can – and should – the concept of consent be applied to androids? If consent depends on the presence of sentience (or consciousness), as is debated in the film Ex Machina, how do we determine whether a robot is sentient?
These questions become increasingly important as technological change accelerates. In the pursuit of discovery, invention often comes before regulation. Is now the time not only to consider how robotic sentience might be measured, but also how it should be regulated once it arrives?