Thursday, November 23, 2017

Episode #32 - Carter and Palermos on Extended Cognition and Extended Assault


In this episode I talk to Adam Carter and Orestis Palermos. Adam is a Lecturer in Philosophy at the University of Glasgow. His primary research interests lie in the area of epistemology, but he has increasingly explored connections between epistemology and other disciplines, including bioethics (especially human enhancement), the philosophy of mind, and cognitive science. Orestis is a lecturer in philosophy at Cardiff University. His research focuses on how ‘philosophy can impact the engineering of emerging technologies and socio-technical systems.’ We talk about the theory of the extended mind and the idea of extended assault.

You can download the episode here or listen to it below. You can also subscribe on iTunes and Stitcher (RSS feed).


Show Notes

  • 0:00 - Introduction
  • 0:55 - The story of David Leon Riley and the phone search
  • 3:15 - What is extended cognition?
  • 7:35 - Extended cognition vs extended mind - exploring the difference
  • 13:35 - What counts as part of an extended cognitive system? The role of dynamical systems theory
  • 19:14 - Does cognitive extension come in degrees?
  • 24:18 - Are smartphones part of our extended cognitive systems?
  • 28:10 - Are we over-extended? Do we rely too much on technology?
  • 35:02 - Making the case for extended personal assault
  • 39:50 - Does functional disability make a difference to the case for extended assault?
  • 43:35 - Does pain matter to our understanding of assault?
  • 49:50 - Does the replaceability/fungibility of technology undermine the case for extended assault?
  • 55:00 - Online hacking as a form of personal assault
  • 59:30 - The ethics of extended expertise
  • 1:02:58 - Distributed cognition and distributed blame
 


Wednesday, November 1, 2017

Video Interview about Robot Sex: Social and Ethical Implications



Through the wonders of modern technology, Adam Ford and I sat down for an extended video chat about the new book Robot Sex: Social and Ethical Implications (MIT Press, 2017). You can watch the full thing above or on YouTube. Topics covered include:

  • Why did I start writing about this topic?
  • Sex work and technological unemployment
  • Can you have sex with a robot?
  • Is there a case to be made for the use of sex robots?
  • The Campaign Against Sex Robots
  • The possibility of valuable, loving relationships between humans and robots
  • Sexbots as a social experiment


Be sure to check out Adam's other videos and support his work.



Tuesday, October 31, 2017

Should Robots Have Rights? Four Perspectives

Ralph McQuarrie's original concept art for C3PO


I always had a soft spot for C3PO. I know most people hated him. He was overly obsequious, terribly nervous, and often annoying. R2D2 was more roguish, resilient and robust. Nevertheless, I think C3PO had his charms. You couldn’t help but sympathise with his plight, dragged along by his more courageous peer into all sorts of adventures, most of which lay well beyond the competence of a simple protocol droid like him.

It seems I wasn’t the only one who sympathised with C3PO’s plight. Anthony Daniels — the actor who has spent much of his onscreen career stuffed inside the suit — was drawn to the part after seeing Ralph McQuarrie’s original drawings of the robot. He said the drawings conveyed a tremendous sense of pathos. So much so that he felt he had to play the character.

All of this came flooding back to me as I read David Gunkel’s recent article ‘The Other Question: Can and Should Robots Have Rights?’. Gunkel is well-known for his philosophical musings on technology, cyborgs and robots. He authored the ground-breaking book The Machine Question back in 2012, and has recently been dipping his toe into the topic of robot rights. At first glance, the topic seems like an odd one. Robots are simply machines (aren’t they?). Surely, they could not be the bearers of moral rights?

Au contraire. It seems that some people take the plight of the robots very seriously indeed. In his paper, Gunkel reviews four leading positions on the topic of robot rights before turning his attention to a fifth position — one that he thinks we should favour.

In what follows, I’m going to set out the four positions that he reviews, along with his criticisms thereof. I’ll then close by outlining some of my own criticisms/concerns about his proposed fifth position.


1. The Four Positions on Robot Rights
Before I get into the four perspectives that Gunkel reviews, I’m going to start by asking a question that he does not raise (in this paper), namely: what would it mean to say that a robot has a ‘right’ to something? This is an inquiry into the nature of rights themselves. I think it is important to start with this question because it is worth having some sense of the practical meaning of robot rights before we consider whether robots are entitled to them.

I’m not going to say anything particularly ground-breaking. I’m going to follow the standard Hohfeldian account of rights — one that has been used for over 100 years. According to this account, rights claims — e.g. the claim that you have a right to privacy — can be broken down into a set of four possible ‘incidents’: (i) a privilege; (ii) a claim; (iii) a power; and (iv) an immunity. So, in the case of a right to privacy, you could be claiming one or more of the following four things:

Privilege: That you have a liberty or privilege to do as you please within a certain zone of privacy.
Claim: That others have a duty not to encroach upon you in that zone of privacy.
Power: That you have the power to waive your claim-right not to be interfered with in that zone of privacy.
Immunity: That you are legally protected against others trying to waive your claim-right on your behalf.

As you can see, these four incidents are logically related to one another. Saying that you have a privilege to do X typically entails that you have a claim-right against others to stop them from interfering with that privilege. That said, you don’t need all four incidents in every case.

We don’t need to get too bogged down in these details. The important point here is that when we ask the question ‘Can and should robots have rights?’ we are asking whether they should have privileges, claims, powers and immunities to certain things. For example, you might say that there is a robot right to bodily integrity, which could mean that a robot would be free to do with its body (physical form) as it pleases and that others have a duty not to interfere with or manipulate that bodily form, unless they receive the robot’s acquiescence. Or, if you think that’s silly because robots can’t consent (or can they?), you might limit it to a simple claim-right, i.e. a duty not to interfere without permission from someone given the authority to make those decisions. Legal systems grant rights to people that are incapable of communicating their wishes, or to entities that are non-human, all the time, so the notion that robots could be given rights in this way is not absurd.

But that, of course, brings us to the question that Gunkel asks, which is in fact two questions:

Q1 - Can robots have rights?
Q2 - Should robots have rights?

The first question is about the capacities of robots: do they, or could they, have the kinds of capacities that would ordinarily entitle an entity to rights? Gunkel views this as a factual/ontological question (an ‘is’ question). The second question is about whether robots should have the status of rights holders. Gunkel views this as an axiological question (an ‘ought’ question).

I’m not sure what to make of this framing. I’m a fairly staunch moralist when it comes to rights. I think we need to sort out our normative justification for the granting of rights before we can determine whether robots can have rights. Our normative justification of rights would have to identify the kinds of properties/capacities an entity needs to possess in order to have rights. It would then be a relatively simple question of determining whether robots can have those properties/capacities. The normative justification does most of the hard work and is really analytically prior to any inquiry into the rights of robots.

This means that I think there are more-or-less interesting ways of asking the two questions to which Gunkel alludes. The interesting form of the ‘can’ question is really: is it possible to create robots that would satisfy the normative conditions for an entitlement to rights (or have we even already created such robots)? The interesting form of the ‘should’ question is really: if it is possible to create such robots, should we do so?

But that’s just my take on it. I still accept that there is an important distinction to be drawn between the ‘can’ and ‘should’ questions, and that depending on your answers to them there are four logically possible perspectives on the issue of robot rights: (a) robots cannot and therefore should not have rights; (b) robots can and should have rights; (c) robots can but should not have rights; and (d) robots cannot but should have rights. These four perspectives are illustrated in the two-by-two matrix below.



Surprisingly enough, each of these four perspectives has its defenders, and one of the goals of Gunkel’s article is to subject each of them to critique. Let’s look at that next.


2. An Evaluation of the Four Perspectives
Let’s start with the claim that robots cannot and therefore should not have rights. Gunkel argues that this tends to be supported by those who view technology as a tool, i.e. as an instrument of the human will. This is a very common view, and is in many ways the ‘classic’ theoretical understanding of technology in human life. If technology is always a tool, and robots are just another form of technology, then they too are tools. And since tools cannot (and should not) be rights-bearers, it follows (doesn’t it?) that robots cannot and should not have rights.

Gunkel goes into the history of this position in quite some detail, but we don’t need to follow suit. What matters for us are his criticisms of it. He has two. The first is simply that the tool/instrumentalist view seems inadequate when it comes to explaining the functionality of some technologies. Even as far back as Hegel and Marx, distinctions were drawn between ‘machines’ and ‘tools’. The former could completely automate and replace a human worker, whereas the latter would just complement and assist one. Robots are clearly something more than mere tools: the latest ones are minimally autonomous and capable of learning from their mistakes. Calling them ‘tools’ would seem inappropriate. The other criticism is that the instrumentalist view seems particularly inadequate when it comes to describing advances in social robotics. People form close and deep attachments to social robots, even when the robots are not designed to look or act in ways that arouse such an emotional response. Consider, for example, the emotional attachments soldiers form with bomb disposal robots. There is nothing cute or animal-like about these robots. Nevertheless, they are not experienced as mere tools.

This brings us to the second perspective: that robots can and so should have rights. This is probably the view that is most similar to my own. Gunkel describes this as the ‘properties’ approach because proponents of it follow the path I outlined in the previous section: they first determine the properties that they think an entity must possess in order to count as a rights-bearer; and they then figure out whether robots exhibit those properties. Candidate properties include things like autonomy, self-awareness, sentience etc. Proponents of this view will say that if we can agree that robots exhibit those properties then, of course, they should have rights. But most say that robots don’t exhibit those properties just yet.

Gunkel sees three problems with this. First, the terms used to describe the properties are often highly contested. There is no single standard or agreement about the meaning of ‘consciousness’ or ‘autonomy’, and it is hard to see those disputes being settled any time soon. Second, there are epistemic limitations to our ability to determine whether an entity possesses these properties. Consciousness is the famous example: we can never know for sure whether another person is conscious. Third, even if you accept this approach to the question, there is still an important ethical issue concerning the creation of robots that exhibit the relevant properties: should we create such entities?

(For what it’s worth: I don’t see any of these as being significant criticisms of the ‘properties’ view. Why not? Because if they are problems for the ascription of rights to robots they are also problems for the ascription of rights to human beings. In other words: they raise no special problems when it comes to robots. After all, it is already the case that we don’t know for sure whether other humans exhibit the relevant properties, and there is a very active debate about the ethics of creating humans that exhibit these properties. If it is the properties that matter, then the specific entity that exhibits them does not.)

The third perspective says that robots can but should not have rights. This is effectively the view espoused by Joanna Bryson. Although she is somewhat sceptical of the possibility of robots exhibiting the properties needed to be rights-bearers, she is willing to concede the possibility. Nevertheless, she thinks it would be a very bad idea to create robots that exhibit these properties. In her most famous article on the topic, she argues that robots should always be ‘slaves’. She has since dropped this term in favour of ‘servants’. Bryson’s reasons for thinking that we should avoid creating robots that have rights are manifold. She sometimes makes much of the fact that we will necessarily be the ‘owners’ of robots (that they will be our property), but this seems like a weak grounding for the view that robots should not have rights, given that property rights are not (contra Locke et al) features of the natural order. Better is the claim that the creation of such robots will lead to problems when it comes to responsibility and liability for robot misdeeds, and that they could be used to deceive, manipulate or mislead human beings — though neither of these is entirely persuasive to me.

Gunkel has two main criticisms of Bryson’s view. The first — which I like — is that Bryson is committed to a form of robot asceticism. Bryson thinks that we should not create robots that exhibit the properties that make them legitimate objects of moral concern. This means no social robots with person-like (or perhaps even animal-like) qualities. It could be extremely difficult to realise this asceticism in practice. As noted earlier, humans seem to form close, empathetic relationships with robots that are not intended to pull upon their emotional heartstrings. Consider, once more, the example of soldiers forming close attachments to bomb disposal robots. The other criticism that Gunkel has — which I’m slightly less convinced of — is that Bryson’s position commits her to building a class of robot servants. He worries about the social effects of this institutionalised subjugation. I find this less persuasive because I think the psychological and social effects on humans will depend largely on the form that robots take. If we create a class of robot servants that look and act like C3PO, we might have something to worry about. But robots do not need to exist in an integrated, humanoid (or organism-like) form.

The fourth perspective says that robots cannot but should have rights. This is the view of Kate Darling. I haven’t read her work so I’m relying on Gunkel’s presentation of it. Darling’s claim seems to be that robots do not currently have the properties that we require of rights-bearers, but that they are experienced by human beings in a unique and special way. They are not mere objects to us. We tend to anthropomorphise them, and project certain cognitive capabilities and emotions onto them. This in turn foments certain emotions in our interactions with them. Darling claims that this phenomenological experience might necessitate our having certain ethical obligations to robots. I tend to agree with this, though perhaps because I am not sure how different it really is from the ‘properties’ view (outlined above): whether an entity has the properties of a rights-bearer depends, to a large extent (with some qualifications) on our experience of it. At least, that’s my approach to the topic.

Gunkel thinks that there are three problems with Darling’s view. The first is that if we follow Kantian approaches to ethics, feelings are a poor guide to ethical duties. What’s more, if perception is what matters this raises the question: whose perception counts? What if not everyone experiences robots in the same way? Are their experiences to be discounted? The second problem is that Darling’s approach might be thought to derive an ‘ought’ from an ‘is’: the facts of experience determine the content of our moral obligations. The third problem is that it makes robot rights depend on us — our experience of robots — and not on the properties of the robots themselves. I agree with Gunkel that these might be problematic, but again I tend to think that they are problems that plague our approach to humans as well.

I’ve tried to summarise Gunkel’s criticisms of the four different positions in the following diagram.




3. The Other Perspective
Gunkel argues that none of the arguments outlined above is fully persuasive. They each have their problems. We could continue to develop and refine the arguments, but he favours a different approach. He thinks we should try to find a fifth perspective on the problem of robot rights. He calls this perspective ‘thinking otherwise’ and bases it on the work of Emmanuel Levinas. I’ll have to be honest and admit that I don’t fully understand this perspective, but I’ll do my best to explain it and identify where I have problems with it.

In essence, the Levinasian perspective favours an ethics-first view of ontology. The four perspectives outlined above all situate themselves within the classic Humean is-ought distinction. They claim that the rights of robots are, in some way, contingent upon what robots are — i.e. that our ethical principles determine what is ontologically important and that, correspondingly, the robot’s ontological properties will determine its ethical status. The Levinasian perspective involves a shift away from that way of thinking — away from the assumption that obligations can only be derived from facts. The idea is that we first focus on our ethical responses to the world and then consider the ontological status of that world. It’s easier to quote directly from Gunkel on this point:

According to this way of thinking, we are first confronted with a mess of anonymous others who intrude on us and to whom we are obligated to respond even before we know anything at all about them. To use Hume’s terminology — which will be a kind of translation insofar as Hume’s philosophical vocabulary, and not just his language, is something that is foreign to Levinas’s own formulations — we are first obligated to respond and then, after having made a response, what or who we responded to is able to be determined and identified. 
(Gunkel 2017, 10)

I have some initial concerns about this. First, I’m not sure how distinctive or radical this is. It seems broadly similar to an approach that Dan Dennett has advocated for years in relation to the free will debate. His view is that it may be impossible to settle the ontological question of freedom vs determinism and hence we should allow our ethical practices to guide us. Setting that aside, I also have some concerns about the meaning of the phrase ‘obligated to respond’ in the quoted passage. It seems to me that it could be trading on an ambiguity between two different meanings of the phrase, one amoral and one moral. It could be that we are physically obligated to respond: our ongoing engagement with the world doesn’t give us time to settle moral or ontological questions first before coming up with a response. We are pressured to come up with a response and revise and resubmit our answers to the ontological/ethical questions at a later time. That type of obligated response is amoral. If that’s what is meant by the phrase ‘obligated to respond’ in the above passage then I would say it is a relatively banal and mundane idea. The moralised formulation of the phrase would be very different. It would suggest that our obligated response actually has some moral or ethical weight. That’s more interesting — and it might be true in some deep philosophical sense insofar as we can never truly escape or step back from our dynamic engagement with the world — but then I’m not sure that it necessitates a radical break from traditional approaches to moral philosophy.

This brings me to another problem. As described, the Levinasian perspective seems very similar to the one advocated by Kate Darling. After all, she was suggesting that our ethical stance toward social robots should be dictated by our phenomenological experience of them. The Levinasian perspective says pretty much the same thing:

[T]he question of social and moral status does not necessarily depend on what the other is in its essence but on how she/he/it…supervenes before us and how we decide, in the “face of the other” (to use Levinasian terminology), to respond. 
(Gunkel 2017, 10)

Gunkel anticipates this critique. He argues that there are two major differences between the Levinasian perspective and Darling’s. The first is that Darling’s perspective is anthropomorphic whereas the Levinasian one is resolutely not. For Darling, our ethical response to social robots is dictated by our emotional needs and by our tendency to project ourselves onto the ‘other’. Levinas thinks that anthropomorphism of this kind is a problem because it denies the ‘alterity’ of the other. This then leads to the second major difference which is that Darling’s perspective maintains the superiority and privilege of the self (the person experiencing the world) and maintains them in a position of power when it comes to granting rights to others. Again, the purpose of the Levinasian perspective is to challenge this position of superiority and privilege.

This sounds very high-minded and progressive, but it’s at this point that I begin to lose the thread a little. I am just not sure what any of this really means in practical and concrete terms. It seems to me that the self who experiences the world must always, necessarily, assume a position of superiority over the world they experience. They can never fully occupy another person’s perspective — all attempts to sympathise and empathise are ultimately filtered through their own experience.

Furthermore, I do not see how deciding on an entity’s rights and obligations could ever avoid assuming some perspective of power and privilege. Rights — while perhaps grounded in deeper ethical truths — are ultimately social constructions that depend on institutions with powers and privileges for their practical enforcement. You can have more-or-less hierarchical and absolute institutions of power, but you cannot completely avoid them when it comes to the protection and recognition of rights. So, I guess, I’m just not sure where the Levinasian perspective ultimately gets us in the robot rights debate.

That said, I know that David is publishing an entire book on this topic next year. I’m sure more light will be shed at that stage.




Saturday, October 28, 2017

Episode #31 - Hartzog on Robocops and Automated Law Enforcement


In this episode I am joined by Woodrow Hartzog. Woodrow is currently a Professor of Law and Computer Science at Northeastern University (he was the Starnes Professor at Samford University’s Cumberland School of Law when this episode was recorded). His research focuses on privacy, human-computer interaction, online communication, and electronic agreements. He holds a Ph.D. in mass communication from the University of North Carolina at Chapel Hill, an LL.M. in intellectual property from the George Washington University Law School, and a J.D. from Samford University. He previously worked as an attorney in private practice and as a trademark attorney for the United States Patent and Trademark Office. He also served as a clerk for the Electronic Privacy Information Center.

We talk about the rise of automated law enforcement and the virtue of an inefficient legal system. You can download the episode here or listen below. You can also subscribe to the podcast via iTunes or Stitcher (RSS feed is here).


Show Notes

  • 0:00 - Introduction
  • 2:00 - What is automated law enforcement? The 3 Steps
  • 6:30 - What about the robocops?
  • 10:00 - The importance of hidden forms of automated law enforcement
  • 12:55 - What areas of law enforcement are ripe for automation?
  • 17:53 - The ethics of automated prevention vs automated punishment
  • 23:10 - The three reasons for automated law enforcement
  • 26:00 - The privacy costs of automated law enforcement
  • 32:13 - The virtue of discretion and inefficiency in the application of law
  • 40:10 - An empirical study of automated law enforcement
  • 44:35 - The conservation of inefficiency principle
  • 48:40 - The practicality of conserving inefficiency
  • 51:20 - Should we keep a human in the loop?
  • 55:10 - The rules vs standards debate in automated law enforcement
  • 58:36 - Can we engineer inefficiency into automated systems?
  • 1:01:10 - When is automation desirable in law?
 


Wednesday, October 25, 2017

Should robots be granted the status of legal personhood?




The EU parliament attracted a good deal of notoriety in 2016 when its draft report on civil liability for robots suggested that at least some sophisticated robots should be granted the legal status of ‘electronic personhood’. British tabloids were quick to seize upon the idea — the report came out just before the Brexit vote — as part of their campaign to highlight the absurdity of the EU. But is the idea really that absurd? Could robots ever count as legal persons?

A recent article by Bryson, Diamantis and Grant (hereinafter ‘BDG’) takes up these questions. In ‘Of, for, and by the people: the legal lacuna of synthetic persons’, they argue that the idea of electronic legal personhood is not at all absurd. It is a real but dangerous possibility — one that we should actively resist. Robots can, but should not, be given the legal status of personhood.

BDG’s article is the best thing I have read on the topic of legal personhood for robots. I believe it presents exactly the right framework for thinking about and understanding the debate. But I also think it is misleading on a couple of critical points. In what follows, I will set out BDG’s framework, explain their central argument, and present my own criticisms thereof.


1. How to Think about Legal Personhood
BDG’s framework for thinking about the legal personhood of robots consists of three main theses. They do not give these names, but I will for sake of convenience:

The Fictionality Thesis: Legal personhood is a social fiction, i.e. an artifact of the legal system. It should not be confused with moral or metaphysical personhood.

The Divisibility Thesis: Legal personhood is not a binary property; it is, rather, a scalar property. Legal personhood consists of a bundle of rights and obligations, each of which can be separated from the other. To put it another way, legal personhood can come in degrees.

The Practicality Thesis: To be effective, the granting of legal personhood to a given entity must be practically enforceable or realisable. There is thus a distinction to be drawn between de jure legal personhood and de facto legal personhood.

Each of these three theses is, in my view, absolutely correct and will probably be familiar to lawyers and legal academics. Let’s expand on each.

First, let’s talk about fictionality. Philosophers often debate the concept of personhood. When they do so, they usually have moral or metaphysical personhood in mind. They are trying to ‘carve nature at its joints’ and figure out what separates true persons from everything else. In doing so, they typically fixate on certain properties like ‘rationality’, ‘understanding’, ‘consciousness’, ‘self-awareness’ and ‘continuing sense of identity’. They argue that these sorts of properties are what constitute true personhood. Their inquiry has moral significance because being a person (in this philosophical sense) is commonly held to be what makes an entity a legitimate object of moral concern, a bearer of moral duties, and a responsible moral agent.

Legal personhood is a very different beast. It is related to moral or metaphysical personhood — in the sense that moral persons are usually, though not always, legal persons. And it is perhaps true that in an ideal world the two concepts would be perfectly correlated. Nevertheless, they can and do pull apart. To be a legal person is simply to be an entity to whom the legal system ascribes legal rights and duties, e.g. the right to own property, the right to enter into contracts, the right to sue for damages, the duty to pay taxes, the duty to pay compensation and so on. Legal systems have, historically, conferred the status of personhood on entities — e.g. corporations and rivers — that no philosopher would ever claim to be a metaphysical or moral person. Likewise, legal systems have, historically, denied the status of personhood to entities we would clearly class as metaphysical or moral persons, e.g. women and slaves. The fictional nature of legal personhood has one important consequence for this debate: it means that it is, of course, possible to confer the status of personhood on robots. We could do it, if we wanted to. There is no impediment or bar to it. The real question is: should we?

The divisibility thesis really just follows from this characterisation of legal personhood. As defined, legal personhood consists in a bundle of rights and duties (such as the right to own property and the duty to pay compensation). The full bundle would be pretty hard to set down on paper (it would consist of a lot of rights and duties). You can, however, divide up this bundle however you like. You can grant an entity some of the rights and duties and not others. Indeed, this is effectively what was done to women and slaves historically. They often had at least some of the rights and duties associated with being a legal person, but were denied many others. This is important because it means the debate about the legal status of robots should not be framed in terms of a simple binary choice: should robots be legal persons or not? It should be framed in terms of the precise mix of rights and duties we propose to grant or deny.

This brings us, finally, to the practicality thesis. This also follows from the fictional nature of legal personhood, and, indeed, many other aspects of the law. Since the law is, fundamentally, a human construct (setting debates about natural vs. positive law to one side for now) it depends on human institutions and practices for its enforcement. It is possible for something to be legal ‘on the books’ (i.e. in statute or case law) and yet be practically unrealisable in the real world due to a lack of physical or institutional support. For example, equal status for African-Americans was ‘on the books’ for a long time before it was (if it even is) a practical reality. Similarly, in many countries homosexuality was illegal ‘on the books’ without its illegality being enforced in practice. Lawyers make this distinction between law on the books and law in reality by using the terms de jure and de facto.

The three theses should influence our attitude to the question: should robots be given the status of legal persons? We know that this is possible, since legal personhood is fictional, but we also need to bear in mind the precise bundle of rights and obligations being proposed for robots, and whether the enforcement of those rights and obligations is practicable.


2. The Basic Argument: Why we should not grant personhood to robots
Despite the nuance of their general framework, BDG go on to present a relatively straightforward argument against the idea of legal personhood for robots. They briefly allude to the practical difficulties of enforcing legal personhood for robots, and they admit that a full discussion of the issue should consider the precise bundle of rights and obligations, but their objection is nevertheless couched in general terms.

That objection has a very simple structure. It can be set out like this:


  • (1) We should only confer the legal status of personhood on an entity if doing so is consistent with the overarching purposes of the legal system.
  • (2) Conferring the status of legal personhood on robots would not be (or is unlikely to be) consistent with the overarching purposes of the legal system.
  • (3) Therefore, we ought not to confer the status of legal personhood on robots.


In relation to (1), the clue is in the title ‘Of, for and by the people’. BDG think that legal systems should serve the interests of the people. But, of course, who the people are (for the purposes of the law) is the very thing under dispute. Fortunately, they provide some more clarity. They say the following:

Every legal system must decide to which entities it will confer legal personhood. Legal systems should make this decision, like any other, with their ultimate objectives in mind…Those objectives may (and in many cases should) be served by giving legal recognition to the rights and obligations of entities that really are people. In many cases, though, the objectives will not track these metaphysical and ethical truths…[Sometimes] a legal system may grant legal personhood to entities that are not really people because conferring rights upon the entity will protect it or because subjecting the entity to obligations will protect those around it. 
(BDG 2017, 278)

This passage suggests that the basic objective of the legal system is to protect those who really are (metaphysical and moral) people by giving them the status of legal personhood, but that granting legal personhood to other entities could also be beneficial on the grounds that it will ‘protect those around’ the entity in question. Later in the article, they further clarify that the basic objectives of the legal system are threefold: (i) to further the interests of the legal persons recognised; (ii) to enforce sufficiently weighty moral rights and obligations; and (iii) whenever the moral rights and obligations of two entities conflict, to prioritise human moral rights and obligations (BDG 2017, 283).

All of which inclines me to believe that, for BDG, legal systems should ultimately serve the interests of human people. The conferring of legal status on any other entity should never come at the expense of human priority. This leads me to reformulate premise (1) in the following manner (note: the ‘or’ and ‘and’ are important here):


  • (1*) We should only confer the legal status of personhood on an entity if: (a) that entity is a moral/metaphysical person; or (b) doing so serves some sufficiently weighty moral purpose; and (c) human moral priority is respected.


This view might be anathema to some people. BDG admit that it is ‘speciesism’, but they think it is acceptable because it allows for the interests of non-humans to be factored in ‘via the mechanism of human investment in those entities’ (BDG 2017, 283).

Onwards to premise (2). We now have a clearer standard for evaluating the success or failure of that premise. We know that the case for robot legal personhood hinges on the moral status of the robots and the utility of legal personhood in serving the interests of humans. BDG present three main arguments for thinking that we should not confer the status of legal personhood on robots.

The first argument is simply that robots are unlikely to acquire a sufficiently weighty moral status in and of themselves. BDG admit that the conditions that an entity needs to satisfy in order to count as a moral patient (and thus worthy of having its rights protected) are contested and uncertain. They do not completely rule out the possibility, but they are sceptical about robots satisfying those conditions anytime soon. Furthermore, even if robots could satisfy those conditions, a larger issue remains: should we create robots that have a sufficiently weighty moral status? This is one of Bryson’s main contributions to the robot ethics debates. She thinks we have no strong reason to create robots with this status — that robots should always be tools/servants.

The second argument is that giving robots the status of legal personhood could allow them to serve as liability shields. That is to say, humans could use robots to perform actions on their behalf and then use the robot’s status as a legal person to shield themselves from having to pay out compensation or face responsibility for any misdeed of the robot. As noted earlier, corporations are legal persons and humans often use the (limited liability) corporate form as a liability shield. Many famous legal cases illustrate this point. Most law students will be familiar with the case of Salomon v Salomon in which the UK House of Lords confirmed the doctrine of separate legal personhood for corporations (or ‘companies’ to use the preferred British term). In essence, this doctrine holds that an individual owner or manager of a company does not have to pay the debts of that company (in the event that the company goes bankrupt) because the company is a separate legal person. The fear from BDG is that robot legal persons could be used to similar effect to avoid liability on a large scale.

The third argument follows on from this. It claims that robots are much worse than corporations, when it comes to avoiding legal responsibility, in one critical respect. At least with a corporation there is some group of humans in charge. It is thus possible — though legally difficult — to ‘pierce the corporate veil’ and ascribe responsibility to that group of humans. This may not be possible in the case of robots. They may be autonomous agents with no accountable humans in control. As BDG put it:

Advanced robots would not necessarily have further legal persons to instruct or control them. That is to say, there may be no human actor directing the robot after inception.
(BDG 2017, 288)

In sum, the fact that there are no strong moral reasons to confer the status of legal personhood on robots (or to create such robots), coupled with the fact that doing so could seriously undermine our ability to hold entities to account for their misdeeds, provides support for premise (2).

I have tried to illustrate this argument in the diagram below, adding in the extra premises covered in this description.



3. Some criticisms and concerns
Broadly speaking, I think there is much to be said in favour of this line of thinking, but I also have some concerns. Although BDG do a good job setting out a framework for thinking about robot legal personhood, I believe their specific critiques of the concept are not appropriately contextualised. I have two main concerns.

The first concern is slightly technical and rhetorical in nature. I don’t like the claim that legal personhood is ‘fictional’ and I don’t think the use of fictionalism is ideal in this context. I know this is a common turn of phrase, and so BDG are in good company in using it, but I still don't like it. Fictionalism, as BDG point out, describes a scenario in which ‘participants in a…discourse engage in a sort of pretense (whether wittingly or not) by assuming a stance according to which things said in the discourse, though literally false, refer to real entities and describe real properties of entities’ (BDG 2017, 278). So, in the case of legal personhood, the idea is that everyone in the legal system is pretending that corporations (or rivers or whatever) are persons when they are really not.

I don’t like this for two reasons. One reason is that I think it risks trivialising the debate. BDG try to avoid this by saying that calling something a fiction ‘does not mean that it lacks real effects’ (BDG 2017, 278), but I worry that saying that legal personhood is a pretense or game of make believe will denigrate its significance. After all, many legal institutions and statuses are fictional in this sense, e.g. property rights, money, and marriage. The other reason — and the more important one — is that I don’t think it is really correct to say that legal personhood is fictional. I think it is more correct to say that it is a social construction. Social constructions can be very real and important — again property rights, marriage and money are all constructed social facts about our world — and the kind of discourse we engage in when making claims about social constructs need not involve making claims that are ‘literally false’ (whatever the ‘literally’ modifier is intended to mean in this context). I think this view is more appropriate because legal personhood is constituted by a bundle of legal rights and obligations, and each of those rights and obligations is itself a social construct. Thus, legal personhood is a construct on a construct.

The second concern is that in making claims about robots and the avoidance of liability, it doesn’t seem to me that BDG engage in the appropriate comparative analysis. Lots of people who research the legal and social effects of sophisticated robots are worried about their potential use as liability shields, and about the prospect of ‘responsibility gaps’ opening up as a result of their use. This is probably the major objection to the creation of autonomous weapon systems and it crops up in debates about self-driving cars and other autonomous machines as well. People worry that existing legal doctrines about negligence or liability for harm could be used by companies to avoid liability. Clever and well-paid teams of lawyers could argue that injuries were not reasonably foreseeable or that the application of strict liability standards in these cases would be contrary to some fundamental legal right.* Some people think these concerns are overstated and that existing legal doctrines could be interpreted to cover these scenarios, but there is disagreement about this, and the general view is that some legal reform is desirable to address potential gaps.

Note that these objections are practically identical to the ones that BDG make and that they apply irrespective of whether we grant robots legal personhood. They form part of a general case against all autonomous robots, not a specific case against legal personhood for said robots. To make the specific case against legal personhood for robots, BDG would need to argue that granting this status will make things even worse. They do nod in the direction of this point when they observe that autonomous robots will inevitably infringe on the rights of humans and that legal personhood ‘would only make matters worse’ for those trying to impose accountability in those cases.

The problem is that they don’t make enough of this comparative point, and it’s not at all clear to me that they defend it adequately. Granting legal personhood to robots would, at least, require some active legislative effort by governments (i.e. it couldn’t be granted as a matter of course). In the course of preparing that legislation, issues associated with liability and accountability would have to be raised and addressed. Doing nothing — i.e. sticking with the existing legal status quo — could actually be much worse than this because it would enable lawyers to take advantage of uncertainty, vagueness and ambiguity in the existing legal doctrines. So, paradoxically, granting legal personhood might be a better way of addressing the very problems they raise.

To be absolutely clear, however, I am not claiming that conferring legal personhood on robots is the optimal solution to the responsibility gap problem. Far from it. I suspect that other legislative schemes would be more appropriate. I am just pointing out that doing nothing could be far worse than doing something, even if that something is conferring legal personhood on a robot. Furthermore, I quite agree that any case for robot legal personhood would have to turn on whether there are compelling reasons to create robots that have the status of moral patients. Bryson thinks that there are no such compelling reasons. I am less convinced of this, but that’s an argument that will have to be made at another time.


* Experience in Ireland suggests that this can happen. Famously, the offence of statutory rape, i.e. sex with a child under the age of 18 (which is a strict liability offence), was held to be unconstitutional in Ireland because it did not allow for a defence of reasonable belief as to the age of the victim. This was held to breach the right to a fair trial.



New Paper - The Law and Ethics of Virtual Sexual Assault

Image via Pixabay

I have a new paper coming out on the topic of virtual sexual assault. It is to appear in Woodrow Barfield and Marc Blitz's edited collection The Law of Virtual and Augmented Reality (working title), which is due out in 2018. The article is something of a departure for me insofar as it is a survey and introduction to the topic. You can access a pre-publication draft of it here.

Abstract: This chapter provides a general overview and introduction to the law and ethics of virtual sexual assault. It offers a definition of the phenomenon and argues that there are six interesting types. It then asks and answers three questions: (i) should we criminalise virtual sexual assault? (ii) can you be held responsible for virtual sexual assault? and (iii) are there issues with 'consent' to virtual sexual activity that might make it difficult to prosecute or punish virtual sexual assault?




Tuesday, October 24, 2017

Podcast Interview - Singularity Bros #114 on Robot Sex


Logo from the Singularity Bros Podcast


As part of the major publicity drive that I am putting together for the book Robot Sex: Social and Ethical Implications, I just appeared on the Singularity Bros Podcast. We have a very wide-ranging and philosophically rich discussion about the ethics of sexual relationships with robots. You should check it out here.

And remember: if you want to buy the book, it is just a click away.




Sunday, October 22, 2017

Freedom and the Unravelling Problem in Quantified Work


A Machinist at the Tabor Company where Frederick Taylor (founder of 'scientific management') consulted.


[This is a text version of a short talk I delivered at a conference on ’Quantified Work’. It was hosted by Dr Phoebe Moore at Middlesex University on the 13th October 2017 and was based around her book ‘The Quantified Self in Precarity’.]

Surveillance has always been a feature of the industrial workplace. With the rise of industrialism came the rise of scientific management. Managers of manufacturing plants came to view the production process as a machine, not just as something that involved the use of machines. The human workers were simply parts of that machine. Careful study of the organisation and distribution of the machine parts could enable a more efficient production process. To this end, early pioneers in scientific management (such as Frederick Taylor and Lillian and Frank Gilbreth) invented novel methods for surveilling how their workers spent their time.

Nowadays, the scale and specificity of our surveillance techniques have changed. Our digitised workplaces enable far more information to be collected about our movements and behaviour, particularly when wearable smart-tech is factored into the mix. The management philosophy underlying the workplace has also changed. Where Taylor and the Gilbreths saw the goal of scientific management as creating a more consistent and efficient machine, we now embrace a workplace philosophy in which the ability to rapidly adapt to a changing world is paramount (the so-called ‘agile’ workplace). Acceleration and disruption are now the aim of the game. Workers must be equipped with the tools to enable them to navigate an uncertain world. What’s more, work now never ends — it follows us home on our laptops and phones — and we are constantly pressured to be available to work, while maintaining overall health and well-being. Employers are attuned to this and have instituted various corporate wellness programmes aimed at enhancing employee health and well-being, while raising productivity. The temptation to use ‘quantified self’ technology to track and nudge employee behaviour is, thus, increasing.

These are the themes addressed in Phoebe’s book, and I think they prompt the following question, one that I will seek to answer in this talk:

Question: Does the rise of ‘quantified self’ surveillance threaten our freedom in some new or unique way?

In other words, do these new forms of workplace surveillance constitute something genuinely new or unprecedented in the world of work, or are they really just more of the same? I consider two answers to that question.


Answer 1: No, because work always, necessarily, undermines our freedom
The first answer is the sceptical one. The notion that work and freedom are mutually inconsistent is a long-standing one in left-wing circles. Slavery is the epitome of unfreedom. Work, it is sometimes claimed, is a form of ‘waged’ or ‘economic’ slavery. You are not technically owned by your employer (after all you could be self-employed, as many of us now are in the ‘gig’ economy) but you are effectively compelled to work out of economic necessity. Even in countries with a generous social welfare provision, access to this provision is usually tied to the ability and willingness to work. There is, consequently, no way to escape the world of work.

I’ve covered arguments of this sort previously on my blog. My favourite comes from the work of Julia Maskivker. The essence of her argument is this:

(1) A phenomenon undermines our freedom if: (a) it limits our ability to choose how to make use of our time; (b) it limits our ability to be the authors of our own lives; and/or (c) it involves exploitative/coercive offers.
(2) Work, in modern society, (a) limits our ability to choose how to make use of our time; (b) limits our ability to be the authors of our own lives; and c) involves an exploitative/coercive offer.
(3) Therefore, work undermines our freedom.

Now, I’m not going to defend this argument here. I did that on a previous occasion. Suffice to say, I find the premises in it plausible, with something reasonable to be said in defence of each. I’m not defending it because my present goal is not to consider whether work does in fact, always, undermine our freedom, but, rather, to consider what the consequences of accepting this view are for the debate about quantified work practices.

You could argue that if you accept it, then there is nothing really interesting to be said about the freedom-affecting potential of quantified work. If work always undermines our freedom, then quantified work practices are just more in a long line of freedom-undermining practices. They do not threaten something new or unique.

I am sympathetic to this claim but I want to resist it. I want to argue that even if you think freedom is necessarily undermined by work, there is the possibility of something new and different being threatened by quantified work practices. This is for three reasons. First, even if the traditional employer-employee relationship undermines freedom, there is usually some possibility of escape from that freedom-undermining characteristic in the shape of down time or leisure time. Quantified work might pose a unique threat if it encourages and facilitates more surveillance in that down time. Second, quantified work might threaten something new if its utility is largely self-directed, rather than other-directed. In other words, if it is imposed from the bottom-up, by workers themselves, and not from the top-down, by employers. Finally, quantified work might threaten something new simply due to the scale and ubiquity of the available surveillance technology.

As it happens, I think there are some reasons to think that each of these three things might be true.


Answer 2: Yes, due to the unravelling problem
The second answer maintains that there is something new and different in the modern world of quantified work. Specifically, it claims that quantified work practices pose a unique threat to our freedom because they hasten the transition to a signalling economy, which in turn leads to the unravelling problem. I take this argument from the work of Scott Peppet.

A ‘signalling’ economy is to be differentiated from a ‘sorting’ economy. The difference has to do with how information is acquired by different economic actors. Information is important when making decisions about what to buy and who to employ. If you are buying a used car, you want to know whether or not it is a ‘lemon’. If you are buying health insurance, the insurer will want to know if you have any pre-existing conditions. If you are looking for a job, your prospective employer will want to know whether you have the capacity to do it well. Accurate, high-quality information enables more rational planning, although it sometimes comes at the expense of those whose informational disclosures rule them out of the market for certain goods and services. In a ‘sorting’ economy, the burden is on the employer to screen potential employees for the information they deem relevant to the job. In a ‘signalling’ economy, the burden is on the employee to signal accurate information to the employer.

With the decline in long-term employment, and the corresponding rise in short-term, contract-based work, there has been a remarkable shift away from a sorting economy to a signalling economy. We are now encouraged to voluntarily disclose information to our employers in order to demonstrate our employability. Doing so is attractive because it might yield better working conditions or pay. The problem is that what initially appears to be a voluntary set of disclosures ends up being a forced/compelled disclosure. This is due to the unravelling problem.

The problem is best explained by way of an example. Imagine you have a bunch of people selling crates of oranges on the export market. The crates carry a maximum of 100 oranges, but they are carefully sealed so that a purchaser cannot see how many oranges are inside. What’s more, the purchaser doesn’t want to open the box prior to transport because doing so would cause the oranges to go bad. But, of course, the purchaser can easily verify the total number of oranges in the box after transport by simply opening it and counting them. Now suppose you are one of the people selling the crates of oranges. Will you disclose to the purchaser the total number of oranges in the crate? You might think that you shouldn’t because, if you are selling fewer than the others, this would put you at a disadvantage on the market. But a little bit of game theory tells us that we should expect the sellers to disclose the number of oranges in the crates. Why so? Well, if you had 100 oranges in your crate, you would be incentivised to disclose this to any potential purchaser. Doing so makes you an attractive seller. Correspondingly, if you had 99 oranges in your crate, and all the sellers with 100 oranges have disclosed this to the purchasers, you should disclose this information. If you don’t, there is a danger that a potential purchaser will lump you in with anyone selling 0-98 oranges. In other words, because those with the maximum number of oranges in their crates are sharing this information, purchasers will tend to assume the worst about anyone not sharing the number of oranges in their crate. But once you have disclosed the fact that you have 99 oranges in your crate, the same logic will apply to the person with 98 oranges and so on all the way down to the seller with 1 orange in their crate.
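For readers who like to see the game-theoretic logic worked through, here is a minimal simulation of the orange-crate dynamic. It is only a sketch, and it builds in two simplifying assumptions of my own: purchasers treat any non-disclosing seller as having the average count of the remaining non-disclosers, and a seller discloses whenever their true count strictly beats that pooled assumption. The function name `unravel` is just illustrative.

```python
def unravel(counts):
    """Return the set of crate counts whose holders end up disclosing."""
    disclosed = set()
    changed = True
    while changed:
        changed = False
        hidden = [c for c in counts if c not in disclosed]
        if not hidden:
            break
        # Purchasers' assumption about any seller who stays silent:
        # the average of the remaining non-disclosers.
        pooled = sum(hidden) / len(hidden)
        for c in hidden:
            # Disclosing is better whenever your true count beats the
            # pooled assumption purchasers would otherwise apply to you.
            if c > pooled:
                disclosed.add(c)
                changed = True
    return disclosed

sellers = list(range(1, 101))  # one seller for each count from 1 to 100
result = unravel(sellers)
print(len(result))  # prints 99
```

Run on sellers holding 1 to 100 oranges, the cascade leaves only the seller with a single orange silent, and even they are identified by elimination: exactly the unravelling described above.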

This is informational unravelling in practice. The seller with only 1 orange in their crate would much rather not disclose this fact to the purchasers, but they are ultimately compelled to do so by the incentives in operation on the market. The claim I am making here — and that Peppet makes in his paper — is that unravelling is also likely to happen on the employment market. The more valuable information we have about ourselves, the more we are incentivised to disclose this to our employers in order to maintain our employability. Those with the best information will do so voluntarily and willingly, but ultimately everybody will be forced to do so in an effort to differentiate themselves from other, potentially ‘inferior’, employees.

This could have a pretty dramatic effect on our freedom. If quantified self technologies enable more and more valuable information to be tracked and disclosed, there will be more and more unravelling, which will in turn lead to more and more forced disclosures. This could result in something quite different from the old world of workplace surveillance, partly because it is being driven from the bottom up, i.e. workers do it themselves in order to secure some perceived advantage. There are laws in place that prevent employers from seeking certain information about their employees (e.g. information about health conditions) but those laws usually only cover cases where the employer demands the information. Where the information is being supplied, seemingly willingly, by masses of gig workers looking to increase their employability, the situation is rather different. This could be compounded by the fact that the types of information that are desirable in the new, agile, workplace will go beyond simple productivity metrics into information about general health and well-being. New and more robust legal protections may be required to redress this problem of seemingly voluntary disclosure.

I’ll close on a more positive note. Even though I think the unravelling problem is worth taking seriously, the argument I have presented is premised on the assumption that the information derived from quantified self technologies is in fact valuable. This may not be the case. It may turn out that accurately signalling something like the number of hours you slept last night, the number of calories you consumed yesterday, or the number of steps you have taken, is not particularly useful to employers. In that case, the scale of the unravelling problem might be mitigated. But we should still be cautious. There is a distinction to be drawn between information that is genuinely valuable (i.e. has some positive link to economic productivity) and information that is simply perceived to be valuable (i.e. thought to be of value by potential employers). Unfortunately, the latter is what really counts, not the former. I see this all the time in my own job. Universities are interested in lots of different metrics for gauging the success of their employees (papers published, number of citations, research funding received, number of social media engagements, number of paper downloads etc. etc.). Many of these metrics are of dubious value. But that doesn’t matter. They are perceived as having some value and so academic staff are encouraged to disclose more and more of them.





Saturday, October 14, 2017

Some things you wanted to know about robot sex* (but were afraid to ask)




BOOK LAUNCH - BUY NOW!

I am pleased to announce that Robot Sex: Social and Ethical Implications (MIT Press, 2017), edited by myself and Neil McArthur, is now available for purchase. You can buy the hardcopy/ebook via Amazon in the US. You can buy the ebook in the UK as well, but the hardcopy might take another few weeks to arrive. I've never sold anything before via this blog. That all changes today. Now that I actually have something to sell, I'm going to turn into the most annoying, desperate, cringeworthy and slightly pathetic salesman you could possibly imagine...

...Hopefully not. But I would really appreciate it if people could either (a) purchase a copy of the book and/or (b) recommend it to others and/or (c) review it and generally spread the word. Academic books are often outrageously expensive, but this one lies at the more reasonable end of the spectrum ($40 in the US and £32 in the UK). I appreciate it is still expensive though. To whet your appetite, here's a short article I put together with Neil McArthur that covers some of the themes in the book.

----------------------------------------------------------------

Sex robots are coming. Basic models exist today and as robotics technologies advance in general, we can expect to see similar advances in sex robotics in particular.

None of this should be surprising. Technology and sex have always gone hand-in-hand. But this latest development in the technology of sex seems to arouse considerable public interest and concern. Many people have questions that they want answered, and as the editors of a new academic book on the topic, we are willing to oblige. We present here, for your delectation, *some* of the things you might have wanted to know about robot sex, but were afraid to ask.


1. What is a sex robot?
A ‘robot’ is an embodied artificial agent. A sex robot is a robot that is designed or used for the purpose of sexual stimulation. One of us (Danaher) has argued that sex robots will have three additional properties: (a) human-like appearance, (b) human-like movement and behaviour and (c) some artificial intelligence. Each of these properties comes in degrees. The current crop of sex robots, such as the Harmony model developed by Abyss Creations, possess them to a limited extent. Future sex robots will be more sophisticated. You could dispute this proposed definition, particularly its fixation on human-likeness, but we suggest that it captures the kind of technology that people are interested in when they talk about ‘sex robots’.


2. Can you really have sex with a robot?
In a recent skit, the comedian Richard Herring suggested that the use of sex robots would be nothing more than an elaborate form of masturbation. This is not an uncommon view and it raises the perennial question: what does it mean to ‘have sex’? Historically, humans have adopted anatomically precise definitions of sexual practice: two persons cannot be said to have ‘had sex’ with one another until one of them has inserted his penis into the other’s vagina. Nowadays we have moved away from this heteronormative, anatomically-obsessive definition, not least because it doesn’t capture what same-sex couples mean when they use the expression ‘have sex’. In their contribution to our book, Mark Migotti and Nicole Wyatt favour a definition that centres on ‘shared sexual agency’: two beings can be said to ‘have sex’ with one another when they intentionally coordinate their actions to a sexual end. This means that we can only have sex with robots when they are capable of intentionally coordinating their actions with us. Until then it might really just be an elaborate form of masturbation -- emphasis on the 'elaborate'.


3. Can you love a robot?
Sex and love don’t have to go together, but they often do. Some people might be unsatisfied with a purely sexual relationship with a robot and want to develop a deeper attachment. Indeed, some people have already formed very close attachments to robots. Consider, for example, the elaborate funerals that US soldiers have performed for their fallen robot comrades. Or the marriages that some people claim to have with their sex dolls. But can these close attachments ever amount to ‘love’? Again, the answer to this is not straightforward. There are many different accounts of what it takes to enter into a loving relationship with another being. Romantic love is often assumed to require some degree of reciprocity and mutuality, i.e. it’s not enough for you to love the other person, they have to love you back. Furthermore, romantic love is often held to require free will or autonomy: it’s not enough for the other person to love you back, they have to freely choose you as their romantic partner. The big concern with robots is that they wouldn’t meet these mutuality and autonomy conditions, effectively being pre-programmed, unconscious, sex slaves. It may be possible to overcome these barriers, but it would require significant advances in technology.


4. Should we use child sex robots to treat paedophilia?
Robot sex undoubtedly has its darker side. The darkest of all is the prospect of child sex robots that cater to those with paedophiliac tendencies. In July 2014, in a statement that he probably now regrets, the roboticist Ronald Arkin suggested that we could use child sexbots to treat paedophilia in the same way that methadone is used to treat heroin addiction. After all, if the sexbot is just an artificial entity (with no self-consciousness or awareness) then it cannot be harmed by anything that is done to it, and if used in the right clinical setting, this might provide a safe outlet for the expression of paedophiliac tendencies, and thereby reduce the harm done to real children. ‘Might’ does not imply ‘will’, however, and unless we have strong evidence for the therapeutic benefits of this approach, the philosopher Litska Strikwerda suggests that there is more to be said against the idea than in its favour. Allowing for such robots could seriously corrupt our sexual beliefs and practices, with no obvious benefits for children.


5. Will sex robots lead to the collapse of civilisation?
The TV series Futurama has a firm answer to this. In the season 3 episode, ‘I Dated a Robot’, we are told that entering into sexual relationships with robots will lead to the collapse of civilisation because everything we value in society — art, literature, music, science, sports and so on — is made possible by the desire for sex. If robots can give us ‘sex on demand’, this motivation will fade away. The Futurama fear is definitely overstated. Unlike Freud, we doubt that the motivations for all that is good in the world ultimately reduce to the desire for sex. Nevertheless, there are legitimate concerns one can have about the development of sex robots, in particular the ‘mental model’ of sexual relationships that they represent and reinforce. Others have voiced these concerns, highlighting the inequality inherent in a sexual relationship with a robot and how that may spill over into our interactions with one another. At the same time, there are potential upsides to sex robots that are often overlooked. One of us (McArthur) argues in the book that sex robots could distribute sexual experiences more widely and lead to more harmonious relationships by correcting for imbalances in sex drive between human partners. Similarly, our colleague Marina Adshade argues that sex robots could improve the institution of marriage by making it less about sex and more about love.

This is all speculative, of course. The technology is still in its infancy but the benefits and harms need to be thought through right now. We recommend viewing its future development as a social experiment, one that should be monitored and reviewed on an ongoing basis. If you want to learn more about the topic, you should of course buy the book.


~ Full Table of Contents ~



I. Introducing Robot Sex
1. 'Should we be thinking about robot sex?' by John Danaher 
2. 'On the very idea of sex with robots?' by Mark Migotti and Nicole Wyatt

II. Defending Robot Sex
3. 'The case for sex robots' by Neil McArthur 
4. 'Should we campaign against sex robots?' by John Danaher, Brian Earp and Anders Sandberg 
5. 'Sexual rights, disability and sex robots' by Ezio di Nucci

III. Challenging Robot Sex
6. 'Religious perspectives on sex with robots' by Noreen Herzfeld 
7. 'The Symbolic-Consequences argument in the sex robot debate' by John Danaher 
8. 'Legal and moral implications of child sex robots' by Litska Strikwerda

IV. The Robot's Perspective
9. 'Is it good for them? Ethical concern for the sexbots' by Steve Petersen 
10. 'Was it good for you too? New natural law theory and the paradox of sex robots' by Joshua Goldstein

V. The Possibility of Robot Love
11. 'Automatic sweethearts for transhumanists' by Michael Hauskeller
12. 'From sex robots to love robots: Is mutual love with a robot possible?' by Sven Nyholm and Lily Eva Frank

VI. The Future of Robot Sex
13. 'Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications' by Matthias Scheutz and Thomas Arnold
14. 'Deus sex machina: Loving robot sex workers and the allure of an insincere kiss' by Julie Carpenter
15. 'Sex robot induced social change: An economic perspective' by Marina Adshade


Sunday, October 1, 2017

Episode #30 - Bartholomew on Adcreep and the Case Against Modern Marketing


In this episode I am joined by Mark Bartholomew. Mark is a Professor at the University of Buffalo School of Law. He writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His book Adcreep: The Case Against Modern Marketing was recently published by Stanford University Press. We talk about the main ideas and arguments from this book.

You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (RSS is here).


Show Notes

  • 0:00 - Introduction
  • 0:55 - The crisis of attention
  • 2:05 - Two types of Adcreep
  • 3:33 - The history of advertising and its regulation
  • 9:26 - Does the history tell a clear story?
  • 12:16 - Differences between Europe and the US
  • 13:48 - How public and private spaces have been colonised by marketing
  • 16:58 - The internet as an advertising medium
  • 19:30 - Why have we tolerated Adcreep?
  • 25:32 - The corrupting effect of Adcreep on politics
  • 32:10 - Does advertising shape our identity?
  • 36:39 - Is advertising's effect on identity worse than that of other external forces?
  • 40:31 - The modern technology of advertising
  • 45:44 - A digital panopticon that hides in plain sight
  • 48:22 - Neuromarketing: hype or reality?
  • 55:26 - Are we now selling ourselves all the time?
  • 1:04:52 - What can we do to redress adcreep?

Relevant Links