Thursday, July 28, 2016

Episode #8 - Karen Levy on the Rise of Intimate Surveillance


This is the eighth episode in the Algocracy and Transhumanism podcast. In this episode, I talk to Karen Levy about the topic of intimate surveillance. Karen is an assistant professor in the Department of Information Science at Cornell University, and associate member of the faculty of Cornell Law School. Tracking and surveillance are now ubiquitous. We track the number of steps we take per day, the number of calories we consume, the number of likes we get on our Facebook posts and much more. Governments and corporations also track information about what we like, what we buy and what we do. What happens when we use the same technology to track and surveil aspects of our intimate relationships? That's what we discuss in this podcast.

You can listen below or download at the following link. You can also subscribe on Stitcher and iTunes (via RSS feed - just click 'add to iTunes').




Show Notes

  • 0:00 - 1:40 - Introduction
  • 1:40 - 4:58 - What is intimate surveillance?
  • 4:58 - 6:50 - Intimate surveillance in the lifecycle of a relationship
  • 6:50 - 8:15 - What's new about intimate surveillance? Haven't we always been doing it?
  • 8:15 - 24:44 - What kinds of apps are out there that facilitate intimate surveillance? (Apps for finding, connecting and committing)
  • 24:44 - 26:30 - What's good about intimate surveillance?
  • 26:30 - 29:30 - Do intimate surveillance apps get us to focus on the wrong thing?
  • 29:30 - 34:10 - Gender bias and gender stereotypes
  • 34:10 - 38:50 - Consent apps and the problem of technological solutionism
  • 38:50 - 46:15 - Do these apps encourage an exchange-based approach to intimate relationships? Is this a bad thing?
  • 46:15 - 51:15 - Potential privacy harms in intimate surveillance
  • 51:15 - End - Big data and the ethics of technological experimentation
 

Relevant Links

   

Tuesday, July 26, 2016

Is Death the Sculptor of Life or an Evil to be Vanquished?




My friend Michael Hauskeller recently recommended a paper on academia.edu. It was by Davide Sisto and it was entitled “Moral Evil or Sculptor of the Living? Death and the Identity of the Subject”. I was intrigued. Longtime readers will know that I have, for some time now, been half in love with the philosophy of death. I am always keen to read a new perspective or take on the topic.

Unfortunately I was slightly underwhelmed by Sisto’s paper. While it does contain an interesting metaphor — namely: that we should view death as a valuable ‘sculptor’ of our identities — it presents this metaphor in a way that bothers me. It presents it as part of a critique of the contemporary (transhumanist) view of death as a biological problem that can be solved with the right technological fix. Indeed, it tries to suggest that those who favour radical life extension are beholden to an absurd metaphysics of death.

Now, to the extent that certain transhumanists believe we can achieve a genuine immortality — i.e. an existence free from all prospect of death — I might be inclined to agree that there is something absurd in their views. But I’m not convinced that this fairly represents the views of anti-ageing gurus like Aubrey de Grey. I think they have a much more modest, and I would suggest sensible, view: that human life can be prolonged far beyond the current limits without thereby causing us to lose something of tremendous value to our sense of self.

The problem, as I see it, is that Sisto’s claim that death is a sculptor is entirely consistent with this more modest view. On top of that, the metaphor suffers from two further problems: the argumentative assumptions underlying it are not particularly original, and it may not be internally coherent.
I want to explain these three problems in the remainder of this blogpost. I do so by first reviewing and presenting Sisto’s view in what I take to be a charitable form. Indeed, I hope to provide a way of visualising Sisto’s sculptor metaphor that clarifies its true meaning.


1. Death as Moral Evil
Sisto’s paper is written in a strange way. Or, to be fairer to him, in a way that doesn’t appeal to me. It’s part intellectual history and part philosophical critique. It isn’t always overt or direct in its critique, preferring to mask its argumentative moves behind high-falutin’ and sometimes obscure linguistic barbs. It also isn’t clear and simple in what it says, often taking several paragraphs to express what is a relatively simple idea. That’s not to say the material isn’t interesting, but I felt I had to do a lot of work to pare away the rhetorical flourishes and reveal the argumentative core. I’ll try my best to just present that core here.

Ostensibly, Sisto’s paper attempts to contrast two views of death. The first view of death is the one that has now started to dominate in the secular, medicalised world. It is the view of death as something that is part of the current natural order. When Christianity dominated the western world, death was viewed as a consequence of original sin. It was part of our punishment when we were banished from the Garden of Eden. We could overcome death, but only through the right kind of spiritual practice, and only after we shuffled off the mortal coils of our biological bodies.

As the Christian view slowly receded into the background, it was replaced by a biological and medical view of death. Death was a consequence of the current natural order — an unfortunate result of biological decay. Our cells slowly degrade and denature themselves. The degradation eventually reaches a critical point at which our metabolically maintained homeostasis breaks down. This results in our deaths (though the precise markers of biological death are somewhat disputed — ‘brain death’ is the currently preferred view).

This naturalised view of death is very different from the old Christian ideal. This is because contemporary naturalism is closely joined to something that the bioethicist Daniel Callahan calls ‘technological monism’. This is an interesting ontological view of the world that Callahan thinks is pervasive among technically and scientifically inclined people. It is worth pausing to offer a definition of this view (this definition is mine, not Callahan’s or Sisto’s):

Technological monism: The belief that everything in the world is, in principle if not in fact, within the reach of our technology. In other words, the view that there is no hard and fast line between the artificial and manipulable and the natural and fixed. All is capable of being technologised.

Under the old Christian ideal there were some fixed and immutable features of our existence. Death was one of them. We couldn’t stop death from happening. Only God had the power to do that. Technological monism suggests that death is not a fixed and immutable feature of our existence. It is something we can — with the right kind of intervention — prevent. We can slow down and reverse our biological ageing. We can preserve our identities for longer than we previously hoped.

This ‘technologised’ view of the world lends support to the belief that death is a moral evil:

Death as Moral Evil: Death is, in principle, within our control. It is a breakdown of our biological machinery that we can, with the right technological intervention, prevent and/or avert. Consequently, death is our responsibility. It is only through lack of effort that it continues to exist.

It is a moral evil because it is something within our power to fix. Hence we are, morally speaking, on the hook for allowing it to continue.

Having outlined this view of death, Sisto then briefly alludes to various suggested techniques for solving the problem of death. These include integrating human biology with machinery (the cybernetic/cyborg solution), digitally uploading our minds to computers, and preventing cellular degradation through biotechnological interventions.


2. Death as the Sculptor of Identity
There is much more in Sisto’s discussion of the ‘death as moral evil’ view, but I think the preceding summary captures the gist. The main argumentative thrust of Sisto’s paper comes from the contrast he draws between this view and his own preferred view of ‘death as a sculptor’.

The essence of this view is that death is not separable from life, contrary to what the technological monists want to believe. They want to have a life without death. But this is not possible. Death is a necessary part of life as a whole. It is what gives shape, direction and, above all else, a sense of identity to life. As Sisto puts it at the start of his paper:

[Death can be seen as] a “sculptor” that draws from the formless, namely the totality of possibilities, the identity profile of the individual (a symbolic reading of nature in the light of the scientific theories of apoptosis), so that subjective identity is determined by a natural limit. 
(Sisto 2014, 31)

As you can see from the quote, Sisto explains this symbolic idea by reference to the biological process of apoptosis, or programmed cell death. This is a highly regulated biological process whereby cells within an organism's body will kill themselves off when they are no longer necessary for some particular tissue. The process has been studied extensively in the past 40 years and is literally what sculpts our bodily organs and tissues into their final form. For example, the separation between your fingers and toes is the result of apoptosis — cells that grew in the gaps killed themselves off in order to create the distinctive identity of your fingers.

Sisto makes this example do a lot of work. He argues that the apoptotic process is essential to biological life; that it is what gives the organism its unique identity. He also notes how failures of apoptosis are linked to cancers and excesses of apoptosis are linked to various degenerative diseases like Alzheimer's and Parkinson’s. He believes that this supports his contention that life and death are inseparable. Death is built into the biological process of being alive. It is, to reiterate, the necessary sculptor of life:

The theory of apoptosis lets clearly emerge the limits of the theoretical positions that believe in the possibility to redefine human nature and subjectivity in the light of a life deprived of its own death. Life is in itself mortal, its essence is mortality, to the point that a life without death is inconceivable…Dying is already present in the moment of life and unfolds by constantly weaving itself together with living, according to the rules of the inexhaustibility of this relationship and of a ceaseless repetition: death is not a deadline, it is a nuance of life. 
(Sisto 2014, 45)

I’m not sure I know what it means to say that ‘death is a nuance of life’; and I’m not sure the apoptosis example can bear that much weight; but I do think that Sisto’s view of death is an interesting one. I find the image of death as something that draws from the ‘formless totality of possibilities’ the identity of the individual quite evocative, and I think it can be effectively communicated by way of some diagrams. Take a look at this one first.






This is a simple decision tree. Imagine your life begins at the first node of this decision tree and that you have two choices to make at this point (i.e. two ways in which you can choose to progress with your life). These two choices are represented by the branches of the tree. They lead to subsequent decision nodes, which in turn branch out into further possible choices. This process of branching out from the original starting node continues on into the future of your life. If we could map out the complete space of possible choices in your life, we could effectively illustrate the ‘totality of possibilities’ that Sisto alludes to in the above quote. We could map out all the possible lives you could live. Of course, we can’t do that. No piece of paper would be large enough to contain all those possible choices. The diagram above is just a very small sliver of the space of possible lives. But it suffices to illustrate Sisto’s point, which is made by the next diagram.




This diagram shows what happens when you start making choices within this space of possible lives. You pick one branch over another and you consequently proceed down one path to the exclusion of others. This path represents your lifeline, if you will. The unique course that your life takes. Death then plays an important role in sealing off your lifeline and giving it a unique and singular identity. Once you die, your life becomes characterised by the path you took through the space of possible choices. This path contains all your accomplishments and failures, all your loves and losses, all your aspirations and fears. It effectively constitutes your identity. That’s why one of the lines is highlighted and singled out from all the others.

Without death, this lifeline would lose its unique identity. If you had infinite time to play around in, you could travel back down some other paths; take routes through life that you hadn’t taken before. Death — the end of choice-making — is what sculpts you from the void of possibilities.
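Sisto's picture lends itself to a small computational sketch. The snippet below is my illustration, not anything from Sisto's paper; the binary choices and the fixed depth are simplifying assumptions. It enumerates a miniature 'space of possible lives' as paths through a decision tree, and singles out one path as the lived lifeline:

```python
from itertools import product

def possible_lives(depth):
    """Enumerate every path through a binary decision tree of the given
    depth: a miniature version of the 'totality of possibilities'.
    Each possible life is a tuple of 0/1 choices at successive nodes."""
    return list(product((0, 1), repeat=depth))

# Even a shallow tree branches quickly: 2**5 = 32 possible lives.
all_lives = possible_lives(5)

# A lived life is just one path through the tree -- the sequence of
# choices actually made before death ends the choosing.
my_lifeline = (1, 0, 0, 1, 1)

assert my_lifeline in all_lives   # one path, singled out from the rest
assert len(all_lives) == 2 ** 5
```

Death, on this picture, is what fixes the tuple: without it, the sequence of choices would simply keep extending.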


3. Three problems with the 'Death as Sculptor' View
As I say, I find this metaphor to be very evocative. It really does give you an interesting perspective on the nature of death. But I don’t think it is as interesting and useful as Sisto supposes. Here I return to the three criticisms I mentioned at the start of this post. These criticisms assume a certain interpretation of Sisto’s article. They assume that he presents this ‘death as sculptor’ view in order to critique those technological monists (among whose ranks we can count transhumanist thinkers like Ray Kurzweil and Aubrey de Grey) who wish to ‘conquer death’. This may or may not be a fair interpretation. Sisto, unfortunately, isn’t as clear as he might be in stating his argumentative aims. I think it a fair interpretation based on his overall tone and the way in which he ends the article:

Without death, life would become a slow-motion film and each human being would be lost in a common and conformed “perfection,” devoid of all the haphazard colours casually matched which are nothing but the outcome of the very limit that gives a sense to existence and structures human nature in the most essential way. 
(Sisto 2014, 45)

This suggests a strong commitment to the view that death is essential to life; and that those who seek to conquer it are wrong-headed in their views. They have an imperfect axiology and metaphysics of death.

Assuming this interpretation is correct, the three criticisms can be voiced. The first is not really a criticism so much as it is an observation, but it may have some critical bite. It is that there is nothing particularly novel in Sisto’s claim that death is valuable to life because it sculpts identity out of the space of possibilities. This is a view that is pervasive in the literature on the ‘tedium of immortality’. It is a view that one finds expressed in Bernard Williams’s classic article on the Makropulos case, and can also be traced through the work of Martha Nussbaum and Aaron Smuts (all of which has been covered on the blog before). It can also be found in Borges’s short story ‘The Immortal’, which describes the problem with immortality in effectively the same terms as Sisto:

Taught by centuries of living, the republic of immortal men had achieved a perfection of tolerance, almost of disdain. They knew that over an infinitely long span of time, all things happen to all men. As reward for his past and future virtues, every man merited every kindness—yet also every betrayal as reward for his past and future iniquities… 
I know of men who have done evil in order that good may come of it in future centuries, or may already have come of it in centuries past… 
Viewed in that way, all our acts are just, though also unimportant. There are no spiritual or intellectual merits. Homer composed the Odyssey; given infinite time, with infinite circumstances and changes, it is impossible that the Odyssey should not be composed at least once. No one is someone; a single immortal man is all men. Like Cornelius Agrippa, I am god, hero, philosophy, demon and world — which is a long-winded way of saying that I am not. 
(Borges, The Immortal - Hurley Translation)

There is nothing wrong with mimicking arguments from others — and Sisto does give the argument a nice new metaphorical gloss — but there is a potential criticism embedded in this observation insofar as many of these other figures have presented the argument in more persuasive and rigorous terms (Smuts in particular). And there is also a rich literature critiquing their views which should presumably be engaged with.

This brings me to the second criticism. Evocative as the metaphor is, I’m not sure that it can really form the basis of an effective critique of immortality. The suggestion is that mortality is needed for uniqueness of identity. I’m actually sympathetic to that view (I’ve discussed it before) but I think the notion that death helps to do this by selecting a unique lifeline out of the vast space of possibilities is flawed. Go back to the diagrams above for a moment. Note the role of time in them. Time is a unidirectional dimension in which the possible lives are lived. You cannot go back in time to a previous decision node and make a choice again. This is true even if you live forever. More time to live means that you can choose to do things you would have to forgo or pass over in the course of a normal lifespan, but time will still be a constraint on how you live your life (unless we also invent time travel). Say you choose to become a doctor rather than a lawyer. With an infinite amount of time you could give up being a doctor and go back and train as a lawyer. But that ‘going back’ only occurs in the future. You don’t literally get to revisit your choice at the original moment in time. This has one important repercussion for the claim about death and identity. It means that even if you live forever there is still going to be one unique lifeline through the space of possible choices that defines your life. You may get to do everything, but you get to do it in a unique order and sequence.
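The point about ordering can be made concrete with a toy example (mine, not Sisto's): two unending lives that eventually contain the same choices still trace distinct lifelines, because a lifeline is an ordered sequence, not a set:

```python
# Two immortal agents eventually make the same set of career choices,
# but in different orders. Their lifelines -- ordered sequences of
# choices -- remain distinct, so uniqueness of identity survives
# even without death truncating the sequence.
life_a = ("doctor", "lawyer", "novelist")
life_b = ("lawyer", "doctor", "novelist")

assert sorted(life_a) == sorted(life_b)  # same choices overall...
assert life_a != life_b                  # ...but distinct lifelines
```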

I’m not suggesting that this is a terribly persuasive criticism. There may still be ways in which we lose our sense of self and identity over the course of eternity. But I would submit that this loss of identity does not arise because our lifeline is not unique. It arises for other reasons, e.g. because we grow tired with ourselves or we lose all sense of accomplishment and achievement and everything starts to blur into an endless sequence of unimportant events. These are ideas developed at length by the likes of Smuts and Williams.

Finally, the third and in many ways most important critique of the ‘death as sculptor’ view is that one can adopt that view and still think that technologically-assisted life extension is a good thing. In other words, the ‘death as sculptor’ view doesn’t really highlight the absurdity of the life extensionist position. The ‘death as sculptor’ view only really highlights the need for limits in life (death being the obvious and ultimate limit). But I don’t know of anyone who favours life extension and believes in genuine immortality or a life without limits. Most just want to prolong life by as much as possible. And even those who think we could ultimately slow down and stop the biological precursors of death don’t think we will thereby remove death and other sources of limitation from our world. We will still die due to accidents or violence. We will still have the option of dying. And we will still be constrained by other physical and natural forces. Control of our biology does not mean total control of our universe. Maybe we will ultimately get to that — but that’s a long way from the dream of extending lifespan beyond its current limits.

Sunday, July 24, 2016

Piketty on Free Higher Education and the Value of Meritocracy

Blair Hall - Princeton University

I have worked hard to get where I am. I come from a modest middle class background. Neither of my parents attended university. They grew up in Ireland in the 1950s and 1960s, at a time when the economy was only slowly emerging from its agricultural roots. I and my siblings were born and raised in the 1970s and 1980s, in an era of high unemployment and emigration. Things started to get better in the 1990s as the Irish economy underwent its infamous ‘Celtic Tiger’ boom. I did well in school and received a (relatively) free higher education, eventually pursuing a masters and PhD in the mid-to-late 2000s. I worked hard during this period of time, combining my educational opportunities with my talents and abilities. But by the time I finished my PhD the country was once again in recession. I was forced to emigrate to get a job and I only returned to Ireland in 2014 to a low-rank position in a regional Irish university. I am not poor, and I am unlikely to want for anything in my life. Although it is not much, I certainly feel like I deserve to be where I am.

Or do I? My life story seems to epitomise the valuable role of education in determining social position (or, at least, the highly selective version of my life story that I just told does). Without my education, I would not be where I am. But isn’t that as it should be? Shouldn’t education (the husbandry of natural talent and ability), rather than one’s cultural background and inheritance, decide social position? And isn’t this a good argument for free higher education? If we make higher education free, we facilitate more people being able to take advantage of their natural talents and abilities. This should in turn help to neutralise the negative effects of social inequality. If we make it fee-paying, then personal and familial wealth will determine access to the benefits of higher education, which will in turn compound the negative effects of social inequality. The rich will just keep getting richer.

This is a simplistic argument, but it is one I want to explore over the remainder of this blogpost. It is an important one too. Thanks to the release of a recent report, it looks like Ireland, like many other countries, is set to undergo a debate about whether or not to introduce a system of student loans to pay for higher education. I find myself dispositionally opposed to such a regime — having witnessed some of its shortcomings while working in the UK. But I am relatively ignorant about the topic. I need to educate myself once again.

As a first step in that direction, I want to consider a recent article from Steiner Boyum about Thomas Piketty’s views on education and inheritance. Piketty is famous for his exhaustive empirical work on wealth and income inequality in the world today, much of it collated in his best-selling book Capital in the 21st Century. Throughout the book, Piketty sprinkles his empirical insights with some normative views about the undesirability of excessive inequality, and the preference for education, rather than inheritance, in determining one’s social position. Boyum, though acknowledging that Piketty is not a moral philosopher, takes issue with these normative views. I want to share Boyum’s analysis because I think doing so helps to highlight some important philosophical questions about the value of equality, and the role of free university education in ensuring equality.


1. Rastignac’s Dilemma: Piketty’s Meritocratic Luck Egalitarianism
Piketty loves Balzac and uses an incident from the novel Pere Goriot to illustrate the education/inheritance problem. One of the main characters in that novel is Eugene de Rastignac, a naive and poor young law student trying to make his way in Paris. Rastignac is attracted to the upper crust of Parisian society. He wants to enter their world. He lives in the same boarding house as Vautrin, a mysterious and criminally-inclined figure. Vautrin tries to convince Rastignac to give up studying the law and seduce a rich heiress instead. The latter, he suggests, is a more plausible route to the upper echelons of society than years of hard graft as a lawyer. Indeed, even if Rastignac turned out to be a successful lawyer, he wouldn’t be able to earn as much money as he would acquire from the heiress.

Piketty introduces this example because he thinks there is something morally suspicious about it. Piketty isn’t completely opposed to inequality in his work. He thinks some differences in wealth and social position are permissible. But he thinks the distribution mechanism for these differences is important. A world in which being a wealthy heiress (or being married to one) is a surer route to wealth and success than being a hard-working professional is a morally inferior world. In other words, inheritance is a morally inferior distribution mechanism to education. The fact that Rastignac lives in a society where Vautrin’s suggestion makes sense speaks volumes about the justice of that society.

But what justifies this view? Boyum argues that the answer lies in meritocratic luck egalitarianism, which appears to sum up Piketty’s approach to social justice. This approach can be encapsulated in the following principle:

Meritocratic Luck Egalitarianism: Differences in social position are just if they are the result of merit and not if they are the result of factors beyond your control.

This principle can be used as the basis for an argument in favour of education and against inheritance. As follows:


  • (1) Differences in social position are just if they are the result of merit and not if they are the result of factors beyond your control.
  • (2) Educational attainment is based on merit, not on factors beyond your control.
  • (3) Inheritance is based on factors beyond your control, not on merit.
  • (4) Therefore, a world in which differences in social position are determined by education is just; a world in which they are determined by inheritance is not.



Thus meritocratic luck egalitarianism gives us the moral grounding we need to support the negative reaction to Rastignac’s dilemma.


2. Problems with the Meritocratic Argument
But we cannot leave things at that. We need to know whether the argument is any good. While it has some superficial appeal — and while many philosophers embrace something akin to the meritocratic/luck view on social justice — there are certain problems. Boyum identifies three in particular.

The first two relate to the meaning of the word ‘merit’. People use this to mean different things. In particular, Boyum notes that in the justice debate people sometimes use merit to refer to the combination of talent and effort and sometimes as a term that is interchangeable with ‘education’. If it is used in the first sense, then we run into the following problem:

Problem 1: The relationship between education and the combination of effort+talent is contingent and narrow.

That is to say: people can exercise effort and talent outside of the traditional education system; and the traditional education system may not be particularly good at cultivating and rewarding talent and effort. Higher education, in particular, might be a stifling and ineffective way for some people to cultivate their talents. This is because higher education institutions are often focused on a very narrow band of talents and abilities. They are primarily interested in people with the analytical and communicative skills that are needed to write academic essays, dissertations and theses, i.e. the people who can perform in a way that pleases academics. There are other abilities and talents and other ways to express and develop them.

If merit is understood in the second sense — i.e. as being equivalent to education — then we run into another problem:

Problem 2: The education system often compounds, rather than neutralises, the inequalities resulting from inheritance.

That is to say: on this understanding of merit, education may be no different from inheritance as a mechanism for distributing social positions. Educational systems the world over are bastions of the privileged elite. Better schooling and better ability to exploit schooling are often directly linked to one’s ability to pay. Inequality is baked into the system. The ideal educational system might avoid these problems, but none of us lives in an ideal system. To be fair, Piketty is aware of this problem and it is one of the reasons why he argues in favour of free higher education. But even then he is sceptical, noting that free higher education often functions as a subsidy to the already wealthy. Why? Because elite institutions often depend on subtle and prejudicial mechanisms for inclusion and exclusion that favour the wealthy over the working class.

These two problems are significant and they highlight the flaws with the second premise of the earlier argument. There is a deeper problem, however. This one goes to the heart of the meritocratic luck egalitarian view:

Problem 3: Why should we favour cognitive inheritance over wealth/income inheritance? Both are attributable to factors beyond our control.

That is to say: education is effectively a tool for privileging and rewarding cognitive ability. But one’s cognitive ability is ultimately determined by factors beyond one’s control. It is determined by your genetic inheritance and the contingencies of your educational environment. Even if we do have something called free will — and so even if we can control the development of our cognitive abilities — we usually only acquire this control when we reach maturity. During our early years — when education and environment make all the difference — we don’t yet have it. This means our educational attainment is often ultimately caused by factors beyond our control. There is, consequently, nothing much to distinguish between education and inheritance when it comes to the principles of meritocratic luck egalitarianism. They are both problematic. Recognition of this fact leads some people to embrace a radical view of social justice in which the goal should be to neutralise the effects of both wealth and cognitive inheritance.

Now, you might respond to this and say ‘Yes, I agree that both cognitive ability and inherited wealth are ultimately outside one’s control, but surely one’s cognitive ability is slightly more within one’s control than inherited wealth?’ You still have to exercise your agency to reap the benefits of cognitive inheritance; you don’t have to do the same for inherited wealth. The problem with this response is that it is not strictly speaking true. There are many people who squander the benefits of their inherited wealth, filling their lives with addictive pleasures like drugs and gambling, or risking it all on ill-advised investments. You have to exercise agency in order to avoid these outcomes.

So, in the end, the proponent of meritocratic luck egalitarianism is in something of a bind. The relationship between education and the conditions for a just distribution of social opportunities is much looser than they might originally suppose. Education is itself susceptible to the same criticisms as inheritance; and, under some interpretations, may not be distinguishable from inheritance at all.


3. A Consequentialist Argument for Free Education
Where does this leave us? Boyum argues that it leaves us either embracing the radical view of social justice, or looking for a new argument in favour of education. He favours the latter option and thinks that a better argument would adopt a consequentialist approach to the value of education. This consequentialist view is grounded in the social value of equality. In other words, it views greater equality as one of the metrics against which you can fairly assess the justness of a society. This can be a controversial view, so let’s first ask why equality is valuable.

Equality might be valuable for a number of reasons. It might be valuable for intrinsic reasons, i.e. because large differences in wealth, income or opportunity are just wrong in and of themselves; and/or it might be valuable for instrumental reasons, i.e. because small differences in wealth, income or opportunity are better drivers of economic progress, ensure optimum democratic governance, and allow us to take maximum advantage of the diverse talents within society. All of which, in turn, helps to improve our psychological well-being and happiness.

There are several philosophers who doubt the intrinsic value of equality. And I share some of their scepticism, at least when it comes to equality of outcomes. I think differences in wealth and income are tolerable if everybody is getting enough for a flourishing life (a position sometimes referred to as sufficientarianism). I also think some differences might be good for driving competitive innovation, which has benefits for society as a whole. That said, I think there is a limit to this value and that instrumental arguments in favour of greater equality are reasonably persuasive. A society in which the gap between poorest and richest is ever-expanding, and in which the richest consequently control access to political and economic opportunities, is less than ideal. The problem is that I’m not sure how big a gap is too much.

But set these problems to the side for now and assume that greater equality is valuable (for intrinsic or instrumental reasons). How does this support education over innovation? Here, Boyum turns to some of Piketty’s empirical work which finds that education is a driver of social equality. Indeed, Piketty singles out Scandinavian countries (particularly during the 1970s) as being among the most egalitarian societies partly because of their free system of higher education. This suggests the following argument in favour of free higher education:


  • (5) Greater social equality is a good thing, i.e. it results in a better, more just society (for intrinsic or instrumental reasons).
  • (6) Free higher education is a driver of social equality.
  • (7) Therefore, free higher education is a good thing, i.e. it results in a better, more just society.


This is now an empirical argument, with the weight resting on the second premise (6 according to my numbering). Is the argument persuasive? I’m not sure. I would certainly like to think that it is true — I would like to believe that free education will open up important social opportunities to more people, and that this will in turn lead to a more progressive, better governed and more equal society — but I am under no illusions. I suspect free higher education might be a necessary condition for greater social equality, but not a sufficient one. Other policies would need to be in place to secure greater and wider participation in higher education; and there will always be people who thrive more outside the system.

The argument is also clearly partial. It doesn’t address other potential benefits of a free system (i.e. beyond driving social equality), and it doesn’t clarify exactly what is meant by ‘free’. Higher education costs money. Somebody has to pay. So what does it really mean to say that it should be ‘free’? The only stipulation I have in relation to a definition of ‘free-ness’ in this context is that it should mean that the student doesn’t pay at the point of entry (via debt or otherwise). But who does, in fact, pay, and at what point in time, needs to be determined. Still, this argument — assuming the second premise pans out — could be part of a cumulative case in favour of a free system.

Tuesday, July 19, 2016

Moral Enhancement and Moral Freedom: A Critical Analysis




The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?

Proponents of moral neuroenhancement think we should — though they typically focus on much higher stakes scenarios. A popular criticism of their project has emerged. This criticism holds that trying to ensure moral conformity comes at the price of moral freedom. If our brains are prodded, poked and tweaked so that we never do the wrong thing, then we lose the ‘freedom to fall’ — i.e. the freedom to do evil. That would be a great shame. The freedom to do the wrong thing is, in itself, an important human value. We would lose it in the pursuit of greater moral conformity.

I find this line of argument intriguing — not least because it shares so much with the arguments made by theists in response to the infamous problem of evil. In this post, I want to look at Michael Hauskeller’s analysis and defence of this ‘freedom to fall’ objection. I base my discussion on two of his papers. The first was published a few years ago in The Philosophers’ Magazine under the title ‘The Little Alex Problem’; the second is due to be published in the Cambridge Quarterly of Healthcare Ethics under the title 'Is it desirable to be able to do the undesirable?'. The second paper is largely an expanded and more up-to-date version of the first. It presents very similar arguments. Although I read it before writing this post, I’ll still base most of my comments on the first paper (which I read more carefully).

I’ll break the remainder of the discussion down into four sections. First, I’ll introduce Hauskeller’s formulation of the freedom to fall objection. Second, I’ll talk about the value of freedom, drawing in particular on lessons from the theism-atheism debate. Third, I’ll ask the question: would moral neuroenhancement really undermine our freedom to fall? And fourth, I’ll examine Hauskeller’s retreat to a quasi-political account of freedom in his defence of the objection. I’ll explain why I’m less persuaded by this retreat than he appears to be.


1. The Freedom to Fall and the Little Alex Problem
Hauskeller uses a story to illustrate the freedom to fall objection. The story is fictional. It comes from Anthony Burgess’s (in)famous novel A Clockwork Orange. The novel tells us the story of “Little” Alex, a young man prone to exuberant acts of ultraviolence. Captured by the authorities, Alex undergoes a form of aversion therapy. He is given medication that makes him feel nauseous and then repeatedly exposed to violent imagery. His eyes are held open in order to force him to view the imagery (a still from the film version provides the opening image to this post). The therapy works. Once he leaves captivity, he still feels violent urges but these are quickly accompanied by feelings of nausea. As a result, he no longer acts out in violent ways. He has achieved moral conformity through a form of moral enhancement.


The novel takes an ambivalent attitude towards this conformity. One of the characters (a prison chaplain) suggests that in order to be truly good, Alex would have to choose to do the good. But due to the aversion therapy, this choice is taken away from him. The induced nausea effectively compels him to do the good. Indeed, the chaplain goes further and suggests that Alex’s induced goodness is not really good at all. It is better if a person can choose to do the bad than be forced to do the good. This is what Hauskeller calls the ‘Little Alex’ problem. He describes it like this:

This is what I call the “Little Alex” problem… it invites us to share a certain moral intuition (namely that it is in some unspecified way bad or wrong or inhuman to force people into goodness) and thus to accept the ensuing paradox that under certain conditions the bad is better than the good — because it is not only suggested that it is wrong to force people to be good (which is fairly uncontroversial) but also that the resulting goodness is somehow tainted and devaluated by the way it has been produced 
(Hauskeller 2013, 75)



To put the argument in more formal terms, we could say the following:


  • (1) It is morally better, all things considered, to have the freedom to do the bad (and actually act on that freedom) than to be forced to do the good.
  • (2) Moral neuroenhancement takes away the freedom to do the bad.
  • (3) Therefore, moral neuroenhancement is, in some sense, a morally inferior way of ensuring moral conformity.



This formulation is far from being logically watertight, but I think it captures the gist of the freedom to fall objection. Let’s now consider the first two premises in some detail.



2. Is it Good to be Free to do Evil?
The first premise of the argument makes a contentious value claim. It states that the freedom to do bad is such an important good that a world without it is worse than a world with it. In his 2013 article, Hauskeller suggests that the proponent of premise one must accept something like the following value hierarchy:

Best World: A world in which we are free to do bad but choose to do good (i.e. there is both moral conformity and moral freedom)
2nd Best World: A world in which we are free to do bad and (sometimes) choose to do bad (i.e. there is moral freedom but not, necessarily, moral conformity)
3rd Best World: A world in which we always do good but are not free to do bad (i.e. there is moral conformity but no moral freedom)
Worst World: A world in which we are not free and do bad (i.e. there is neither moral conformity nor moral freedom).




In his more recent paper, Hauskeller proposes a similar but more complex hierarchy featuring 6 different levels (the two extra levels capture differences between ‘sometimes’ and ‘always’ doing good/bad). In that paper he notes that although the proponent of the ‘freedom to fall’ argument must place a world in which there is moral freedom and some bad above a world in which there is no moral freedom, there is no watertight argument in favour of this hierarchy of value. It is really a matter of moral intuitions and weighing competing values.

This seems right to me and is one place where proponents of the ‘freedom to fall’ argument can learn from the debate about the problem of evil. As is well-known, the problem of evil is the most famous atheological argument. It claims that the existence of evil is incompatible (in varying degrees) with the existence of a perfectly good god. Theists have responded to this argument in a variety of ways. One of the most popular is to promote the so-called ‘free will’ theodicy. This is an argument claiming that moral freedom is a great good and that it is not possible for God to create a world in which there is both moral freedom and no evil. In other words, it promotes a similar value hierarchy to that suggested (but not defended) by Hauskeller.

There has been much back-and-forth between theists and atheists as to whether moral freedom is such a great good and whether it requires the existence of evil. Many of the points that have been made in that debate would seem to apply equally well here. I will mention just two.

First, I have always found myself attracted to a line of argument mooted by Derk Pereboom and Steve Maitzen. This may be because I am something of a free will sceptic. Pereboom and Maitzen argue that in many cases of moral evaluation, the freedom to do bad is a morally weightless consideration, not just a morally outweighed one. In other words, when we evaluate a violent criminal who has just savagely murdered ten people, we don’t think that the fact that he murdered them freely speaks in his favour. His act is very bad, pure and simple; it is not slightly good and very bad. Admittedly, this isn’t much of an argument. It is an appeal to the intuitive judgments we exercise when assessing another’s moral conduct. Proponents of moral freedom can respond with their own intuitive judgments. One way they might do this is by pointing to cases of positive moral responsibility and noting how in those cases we tend to think it does speak in someone’s favour if they acted freely. Indeed, the Little Alex case is possibly one such case. The only thing I would say about that is that it highlights a curious asymmetry in the moral value of freedom: it’s good when you do good, but weightless when you do bad. Either way, these considerations are much less persuasive if you don’t think there is any meaningful reconciliation of freedom with moral responsibility.

Second, and far more importantly, non-theists have pointed out that in many contexts the freedom to do bad is massively outweighed by the value of moral conformity. Take the case of a remorseless serial killer who tortures and rapes young innocent children. Are we to suppose that allowing the serial killer the freedom to do bad outweighs the child’s right to live a torture and rape-free life? Is the world in which the serial killer freely does bad really a better world than the one in which he is forced to conform? It seems pretty unlikely. This example highlights the fact that moral freedom might be valuable in a limited range of cases (and if it is exercised in a good way) but that in certain ‘high stakes’ cases its value is outweighed by the need for moral conformity. It is open to the defender of moral enhancement to argue that its application should be limited to those ‘high stakes’ cases. Then it will all depend on how high the stakes are and whether moral enhancement can be applied selectively to address those high stakes cases.* According to some proponents of moral enhancement — e.g. Savulescu and Persson — the stakes are very high indeed. They are unlikely to be persuaded by premise one.

(For more on the problems with viewing moral freedom as a great good, I highly recommend Wes Morriston's paper 'What's so good about moral freedom?')


3. Is Moral Enhancement Really Incompatible with Moral Freedom?
Even if we granted premise (1), we might not grant premise (2). This premise claims that moral freedom is incompatible with moral enhancement, i.e. that if we ensure someone’s conformity through a technological intervention, then they are not really free. But how persuasive is this? It all seems to depend on what you understand by moral freedom and how you think moral enhancement works.

Suppose we take moral freedom to be equivalent to the concept of ‘free will’ (I’ll consider an alternative possibility in the next section). There are many different accounts of free will. Libertarian accounts of free will hold that freedom is only possible in an indeterministic world. The ‘will’ is something that sits outside the causal order of the universe and only jumps into that causal order when the agent makes a decision to act. It’s difficult for me to see how a proponent of libertarian free will could accept premise (2). All forms of moral enhancement will, presumably, operate on the causal networks inside the human brain. If the will is something that sits outside those causal networks, then it’s not clear how it is compromised by interventions into them. That said, I accept that there are some sophisticated emergentist and event-causal theories of libertarianism that might be disturbed by neural interventions of this sort, but I think their reasons for disturbance can be addressed by considering other theories of free will.

Other theories of free will are compatibilist in nature. They claim that free will is something situated within the causal order. An agent acts freely when their actions are produced by the right kind of mental-neural mechanism. There are many different accounts of compatibilist free will. I have discussed most of them on this blog before. The leading ones argue that an agent can act freely if they are reasons-responsive and/or their actions are consistent with their character and higher order preferences.

Moral enhancement could undermine compatibilist free will so understood. But it all depends on the modality of the enhancement. In the Little Alex case, the aversion therapy causes him to feel nauseous whenever he entertains violent thoughts. This is inconsistent with some versions of compatibilism. From the description, it seems like Alex’s character is still a violent one and that he has higher-order preferences for doing bad things; it’s just that he is unable to express those aspects of his character thanks to his nausea. He is blocked from acting freely. But aversion therapy is hardly the only game in town. Other modalities of moral enhancement might work by altering the agent’s desires and preferences such that they no longer wish to act violently. Still others might work by changing their ability to appreciate and process different reasons for action, thus improving their reasons-responsivity. Although not written with moral enhancement in mind, Maslen, Pugh and Savulescu’s paper on using deep brain stimulation (DBS) to treat Anorexia Nervosa highlights some of these possibilities. Furthermore, there is no reason to think that moral enhancement would work perfectly or would remove an agent’s ability to think about doing bad things. It might fail to ensure moral conformity in some instances and people might continue to entertain horrendous thoughts.

Finally, what if an agent freely chooses to undergo moral enhancement? In that case we might argue that he has also freely chosen all his resulting good behaviour. He has pre-committed to being good. To use the classic example, he is like Odysseus tying himself to the mast of his ship: he is limiting his agency at future moments in time through an act of freedom at an earlier moment in time. The modality of enhancement doesn’t matter then: all that matters is that he isn’t forced into undergoing the enhancement. Hauskeller acknowledges this possibility in his papers, but goes on to suggest that it may involve a dubious form of self-enslavement. This is where the politics of freedom come into play.


4. Freedom, Domination and Self-Enslavement
Another way to defend premise (2) is to analyse it in terms of political, not metaphysical, freedom. Metaphysical freedom is about our moral agency and responsibility; political freedom is about how others relate to and express their wills over us. It is about protecting us from others so as to meet the conditions for a just and mutually prosperous political community — one that respects the fundamental moral equality of its citizens. Consequently, accounts of political freedom are not so much about free will as they are about ensuring that people can develop and exercise their agency without being manipulated and dominated by others. So, for example, I might argue that I am politically unfree in exercising my vote, if the law requires me to vote for a particular party. In that case, others have chosen for me. Their will dominates my own. I am subordinate to them.

This political version of freedom provides a promising basis for a defence of premise (2). One problem with moral enhancement technology might be that others decide whether it should be used on us. Our parents could genetically manipulate us to be kinder. Our governments may insist on us taking a course of moral enhancement drugs to become safer citizens. It may become a conditional requirement for accessing key legal rights and entitlements, and so on. The morally enhanced person would be in a politically different position from the naturally good person:

The most conspicuous difference between the naturally good and the morally enhanced is that the latter have been engineered to feel, think, and behave in a certain way. Someone else has decided for them what is evil and what is not, and has programmed them accordingly, which undermines, as Jurgen Habermas has argued, their ability to see themselves as moral agents, equal to those who decided how they were going to be. The point is not so much that they have lost control over how they feel and think (perhaps we never had such control in the first place), but rather that others have gained control over them. They have changed…from something that has grown and come to be by nature, unpredictably, uncontrolled, and behind, as it were, a veil of ignorance, into something that has been deliberately made, even manufactured, that is, a product. 
(Hauskeller 2013, 78-79)

There is a lot going on in this quote. But the gist of it is clear. The problem with moral enhancement is that it creates an asymmetry of power. We are supposed to live together as moral equals: no one individual is supposed to be morally superior to another. But moral enhancement allows one individual or group to shape the moral will of another.

But what if there is no other individual or group making these decisions for you? What if you voluntarily undergo moral enhancement? Hauskeller argues that the same inequality of power argument applies to this case:

…we can easily extend [this] argument to cases where we voluntarily choose to submit to a moral enhancement procedure whose ultimate purpose is to deprive us of the very possibility to do wrong. The asymmetry would then persist between our present (and future) self and our previous self, which to our present self is another. The event would be similar to the case where someone voluntarily signed a contract that made them a slave for the rest of their lives. 
(Hauskeller 2013, 79)

What should we make of this argument? It privileges the belief that freedom from the yoke of others is what matters to moral agency — that we should be left to grow and develop into moral agents through natural processes — not manipulated and manufactured into moral saints (even by ourselves). But I’m not sure we should be swayed by these claims. Three points seem apposite to me.

First, a general problem I have with this line of argument is the assumption that it is better to be free from the manipulation of others than it is to be free from other sorts of manipulation. The reality is that our moral behaviour is the product of many things. Our genetic endowment, our social context, our education, our environment, various contingent accidents of personal history, all play an important part. It’s not obvious to me why we should single out causal influences that originate in other agents for particular ire. In other words, the presumption that it is better that we naturally grow and develop into moral agents seems problematic to me. Our natural development and growth — assuming there is a coherent concept of the ‘natural’ at play here — is not intrinsically good. It’s not something that is necessarily worth saving. At the very least, the benefits of moral conformity would weigh (perhaps heavily) against the desirability of natural growth and development.

Second, I’m not sure I buy the claim that induced moral enhancement involves problematic asymmetries of power. If anything, I think it could be used to correct for asymmetries of power. To some extent this will depend on the modality of enhancement and the benefits it reaps, but the point can be made more generally. Think about it this way: the entire educational system rests upon asymmetries of power, particularly the education of young children. This education often involves a moral component. Do we rail against it because of the asymmetries of power? Not really. Indeed, we often deem education necessary because it ultimately helps to correct for asymmetries of power. It allows children to develop the capacities they need to become the true moral equals of others. If moral enhancement works by enhancing our capacities to appreciate and respond to moral reasons, or by altering our desires to do good, then it might help to build the capacities that correct for asymmetries of power. It might actually enable effective self-control and autonomy. In other words, I’m not convinced that being morally enhanced means that you are problematically enslaved or beholden to the will of others.

Third, I’m not convinced that self-enslavement is a bad thing. Every decision we make enslaves our future selves in at least some minimal sense. Choosing to go to school in one place, rather than another, constrains the choices your future self can make about what courses to take and career paths to pursue. Is that a bad thing? If the choices ultimately shape our desires — if they result in us really wanting to pursue a particular future course of action — then I’m not sure that I see the problem. Steve Petersen has made this point in relation to robot slaves. If a robot is designed in such a way that it really really wants to do the ironing, then maybe getting it to do the ironing is not so bad from the perspective of the robot (this last bit is important — it might be bad from a societal perspective because of how it affects or expresses our attitudes towards others, but that’s not relevant here since we are talking about self-enslavement). Likewise, if by choosing to undergo moral enhancement at one point in time, I turn myself into someone who really really wants to do morally good things at a later moment in time, I’m not convinced that I’m living some inferior life as a result.

That’s all I have to say on the topic for now.

* Though note: if the stakes are sufficiently high, non-selective application might also be plausibly justified.

Saturday, July 16, 2016

Episode #7 - Brett Frischmann on Reverse Turing Tests and Machine-like Humans

frischmann

This is the 7th episode of the Algocracy and Transhumanism Podcast. In this episode I talk to Brett Frischmann about his work on human-focused Turing Tests. Brett is a Professor of Law at Cardozo Law School in New York City. He writes a lot about technology and law, and is currently in the midst of co-authoring a book with Evan Selinger (my guest in Episode 4) entitled Being Human in the 21st Century (Cambridge University Press 2017). We have a long and wide-ranging conversation about what it means to be a machine; what it means to be a human; and how the current techno-social environment is changing who we are.

You can listen to the episode below or download it at this link. You can also subscribe on Stitcher and iTunes (via RSS Feed).



Show Notes


  • 0:00 - 2:24 - Introduction to Brett and his work
  • 2:24 - 15:20 - Classic Turing Tests and their value
  • 15:20 - 23:27 - Approaching the Turing Line from the other side (or the concept of a 'Reverse Turing Test')
  • 23:27 - 32:40 - How environments can make machines more human-like and humans more machine-like
  • 32:40 - 37:20 - Criteria for a Reverse Turing Test
  • 37:20 - 44:15 - A simple example of a Reverse Turing Test based on mathematical ability
  • 44:15 - 54:20 - Common sense as the basis for a Reverse Turing Test
  • 54:20 - 1:08:10 - Is technology eroding our common sense?
  • 1:08:10 - 1:13:00 - Rationality as the basis for a Reverse Turing Test
  • 1:13:00 - 1:26:03 - The philosophy of nudging and the creation of machine-like humans
  • 1:26:03 - End - Surveillance creep and the surveillance machine

Relevant Links

Wednesday, July 13, 2016

Reverse Turing Tests: Are Humans Becoming More Machine-Like?




Everyone knows about the Turing Test. It was first proposed by Alan Turing in his famous 1950 paper ‘Computing Machinery and Intelligence’. The paper started with the question ‘Can a machine think?’. Turing noted that philosophers would be inclined to answer that question by hunting for a definition. They would identify the necessary and sufficient conditions for thinking and then they would try to see whether machines met those conditions. They would probably do this by closely investigating the ordinary language uses of the term ‘thinking’ and engaging in a series of rational reflections on those uses. At least, Oxbridge philosophers in the 1950s would have been inclined to do it this way.

Turing thought this approach was unsatisfactory. He proposed an alternative test for machine intelligence. The test was based on a parlour game — the imitation game. A tester would be placed in one room with a keyboard and a screen. They would carry out a conversation with an unknown other via this keyboard and screen. The ‘unknown others’ would be in one of two rooms. One would contain a machine and the other would contain a human. Both would be capable of communicating with the tester via typed messages on the screen. Turing’s claim was that if the tester could carry out a long, detailed conversation with a machine without knowing whether they were talking to a machine or a human, we could say that the machine was capable of thinking.

For better or worse, Turing’s test has taken hold of the popular imagination. The annual Loebner prize involves a version of the Turing Test. Computer scientists compete to create chat-bots that they hope will fool the human testers. The belief seems to be that if a ‘machine’ succeeds in this test we will have learned something important about what it means to be human.

Popular though the Turing Test is, few people have written about it in interesting and novel ways. Most people think about the Turing Test from the ‘human-first’ side. That is to say, they start with what it means to be human and work from there to various speculations about creating human-like machines. What if we thought about it from the other side? What if we started with what it means to be a machine and then worked towards various speculations about creating machine-like humans?

That is what Brett Frischmann encourages us to do in his fascinating paper ‘Human-Focused Turing Tests’. He invites us to shift our perspective on the Turing Test and consider what it means to be a machine. From there, he wonders whether current methods of socio-technical engineering are making us more machine-like. In this post, I want to share some of Frischmann’s main ideas. I won’t be able to do justice to the rich smorgasbord of ideas he serves up in his paper, but I might be able to give you a small sampling.

(Note: This post serves as a companion to a podcast interview I recently did with Brett. That interview covers the ideas in far more detail and will be posted here soon)


1. Machine-Like Properties and the Turing Line
Let’s start by talking about the Turing Line. This is something that is implicit in the original Turing Test. It is the line that separates humans from machines. The assumption underlying the Turing Test is that there is some set of properties or attributes that define this line. The main property or attribute being the ability to think. When we see an entity with the ability to think, then we know we are on one side of the line, not the other.




But what kind of a line is the Turing Line? How should we think about it? Turing himself obviously must have thought that the line was porous. If you created a clever enough machine, it could cross the line. Frischmann suggests that the line could be ‘bright’ or ‘fuzzy’, i.e. there could be a very clear distinction between humans and machines or the distinction could be fuzzy. Furthermore, he suggests that the line could be one that shifts over time. Back in the distant past of AI we might have thought that the ability to play chess to Grandmaster level marked the dividing line. Nowadays we tend to focus on other attributes (e.g. emotional intelligence).

The Turing Line is useful because it helps us to think about the different perspectives we can take on the Turing Test. For better or worse, most people have approached the test from the machine-side of the line. The assumption seems to have been that we start with machines lying quite some distance from the line, but over time, as they grow more technologically sophisticated, they get closer. Frischmann argues that we should also think about it from the other side of the line. In doing so, we might learn that changes in the technological and social environment are pushing humans in the direction of machines.

When we think about it from that side of the line, we get into the mindset needed for a Reverse Turing Test. Instead of thinking about the properties or attributes that are distinctively human, we start thinking about the properties and attributes that are distinctly machine-like. These properties and attributes are likely to reflect cultural biases and collective beliefs about machines. So, for instance, we might say that a distinctive property of machines is their unemotional or relentlessly logical approach to problem-solving. We would then check to see whether there are any humans that share those properties. If there are, then we might be inclined to say that those humans are machine-like. This doesn’t mean they are actually indistinguishable from machines, but it might imply that they are closer to the Turing Line than other human beings.

Here’s a simple pop-cultural example that might help you to think about it. In his most iconic movie role, Arnold Schwarzenegger played a machine - the Terminator. If I recall correctly, Arnie wasn’t originally supposed to play the Terminator. He auditioned for the role of one of the humans. But the director, James Cameron, was impressed by Arnie’s insights into what it would take to act as the machine. Arnie emphasised the rigid, mechanical movements and laboured style of speech that would be indicative of machine-likeness. Indeed, he argued with Cameron over one of the character’s most famous lines: ‘I’ll be back!’. Arnie thought that a machine wouldn’t use the elision ‘I’ll’; it would say ‘I will be back’. Cameron managed to persuade him otherwise.

What is interesting about the example is the way in which both Arnie and Cameron made assumptions about machine-like properties or attributes. Their artistic goal was to use these stereotypical properties to provide cues to the audience that they were not watching a human. Maybe Arnie and Cameron were completely wrong about these attributes, but the fact that they were able and willing to make guesses as to what is distinctively machine-like shows us how we might go about constructing a Reverse Turing Test.


2. The Importance of the Test Environment
We’ll talk about the construction of an actual Reverse Turing Test in a moment. Before then, I just want to draw attention to another critical, and sometimes overlooked, factor in the original Turing Test: the importance of the test environment in establishing whether or not something crosses the Turing Line.

In day to day life, there is a whole bundle of properties we associate with ‘humans’ and a whole other bundle that we associate with ‘machines’. For example, humans are made of flesh and bone; they have digestive and excretory systems; they sweat and smell; they talk in different pitches and tones; they have eyes and ears and hair; they wear clothes; have jobs; laugh with their friends and so on. Machines, on the other hand, are usually made of metal and silicon; they have no digestive and excretory systems; they rely on electricity (or some other power source) for their survival; they don’t have human sensory systems; and so on. I could rely on some or all of these properties to help me distinguish between machines and humans.

Turing thought it was very important that we not be allowed to do the same in the Turing Test. The test environment should be constructed in such a way that extraneous or confounding attributes be excluded. The relevant attribute was the capacity to think — to carry out a detailed and meaningful conversation — not the capacity to sweat and smell. As Frischmann puts it:

Turing imposed significant constraints on the means of observation and communication. He did so because he was interested in a particular capability — to think like a human — and wanted to be sure that the evidence gathered through the application of the test was relevant and capable of supporting inferences about that particular capacity.
(Frischmann 2014, 16)

Frischmann then goes on:

But I want to make sure we see how much work is done by the constructed environment — the rules and rooms — that Turing built. 
(Frischmann 2014, 16)

Quite. The constructed environment makes it a little bit easier, though not too easy, for a machine to pass the Turing Test. It means that the tester isn’t distracted by extraneous variables.

This seems reasonable given the original aims of the Test. That said, some disagree with the original construction and claim that some of the variables Turing excluded need to be factored back in. We can argue the details if we like, but that would miss the important point from the present perspective. The important point is that not only can the construction of the test environment make it less difficult for machines to appear human-like, it can also make it less difficult for humans to appear machine-like. Indeed, this is true of the original Turing Test. In some reported instances humans have been judged to be machines through their conversational patterns.

This insight is crucial when it comes to thinking about the broader questions arising from technology and social engineering. Could it be that the current techno-social environment is encouraging humans to act in a machine-like manner? Are we being nudged, incentivised, manipulated and biased into expressing more machine-like properties? Could we be in the midst of rampant dehumanisation? This is something that a Reverse Turing Test can help to reveal.


3. So how could we construct a Reverse Turing Test?
That’s enough set-up. Let’s turn to the critical question: how might we go about creating an actual Reverse Turing Test? Remember the method: find some property we associate with machines but not humans and see whether humans exhibit that property. Adopting that method, Frischmann proposes four possible criteria that could provide the basis for a Reverse Turing Test. I’ll go through each of them briefly.




The first criterion is mathematical computation. If there is one thing that machines are good at, it is performing mathematical computations. Modern computers are much faster and more reliable than humans at doing this. So we could use this ability to draw the line between machines and humans. We would have to level the playing field to some extent. We could assign a certain number of mathematical computations to be performed in a given timespan and we could slow the computer down to some extent. We could then follow the basic set-up of the original Turing Test and get both the machine and the human to output their solutions to those mathematical computations on a screen.

In performing this test, we would probably expect the human to get more wrong answers and display some noticeable downward trend in performance over time. If the human did not, then we might be inclined to say that they are machine-like, at least in this respect. Would this tell us anything of deeper metaphysical or moral significance? Probably not. There are some humans who are exceptionally good at computations. We don’t necessarily think they are inhuman or dehumanised by that fact.

The second criterion is random number generation. Here, the test would be to see how good machines and humans are at producing random sequences of numbers. From what we know, neither are particularly good at producing truly random sequences of numbers, but they tend to fail in different directions. Humans have a tendency to space out numbers rather than clump together sequences of the same number. Thus, using the set-up of the traditional Turing Test once again, we might expect to see more clumping in the machine-produced sequence. But, again, if a human failed this test it probably wouldn’t tell us anything of deeper metaphysical or moral significance. It might just tell us that they know a lot about randomness.
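As a toy illustration (my own sketch, not from Frischmann’s paper), the clumping idea can be made concrete in a few lines of Python. The snippet compares a truly random digit stream with a crude model of a human-like generator that always avoids repeating the previous digit (the ‘spacing out’ tendency described above), and counts how often a digit immediately repeats:

```python
import random

def adjacent_repeats(seq):
    """Count positions where a digit immediately repeats its predecessor."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a == b)

random.seed(0)
machine = [random.randint(0, 9) for _ in range(1000)]

# A crude model of a human-like generator that never repeats
# the previous digit (an exaggeration of the human tendency).
human = [random.randint(0, 9)]
for _ in range(999):
    nxt = random.randint(0, 9)
    while nxt == human[-1]:
        nxt = random.randint(0, 9)
    human.append(nxt)

# In a truly random digit stream, roughly 10% of adjacent pairs repeat.
print(adjacent_repeats(machine))  # expect something in the region of 100
print(adjacent_repeats(human))    # 0, by construction
```

A real test would use proper statistical measures of randomness rather than this single crude statistic, but the asymmetry is the point: the ‘machine’ clumps, the modelled ‘human’ does not, so repeat-counting gives the tester something to discriminate on.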

These first two criteria are really just offered as a proof of concept. The third and fourth are more interesting.

The third criterion is common sense. This is a little bit obscure and Frischmann takes a very long time explaining it in the paper, but the basic gist of the idea will be familiar to AI researchers. It is well-known that in day-to-day problem solving humans rely on a shared set of beliefs and assumptions about how the world works in order to get by. These shared beliefs and assumptions form what we might call ‘folk’ wisdom. The beliefs and assumptions are often not explicitly stated. We are often only made aware of them when they are absent. But it is a real struggle for machines to gain access to this ‘folk’ wisdom. Designers of machines have to make explicit the tacit world of folk wisdom and they often fail to do so. The result is that machines frequently lack what we call common sense.

Nevertheless, there are many humans who supposedly lack common sense, so it is quite possible to use this as a criterion in a Reverse Turing Test. The details of the test set-up would be complicated. But one interesting suggestion from Frischmann’s paper is that the body of folk wisdom that we refer to as common sense may itself be subject to technologically induced change. Indeed, a loss of common sense may result from overreliance on technological aids.

Frischmann uses the example of Google maps to illustrate this idea in the paper. Instead of relying on traditional methods for getting one’s bearings and finding out where one has to go, more and more humans are outsourcing this task to apps like Google maps. The result is that they often sacrifice their own common sense. They trust the app too much and ignore obvious external cues that suggest they are going the wrong way. This may be one of the ways in which our techno-social environment is contributing to a form of dehumanisation.

The fourth criterion is rationality. Rationality is here understood in its typical economic/decision-theoretical sense. A decision-maker is rational if they have transitive preferences that they seek to maximise. It is relatively easy to program a machine to follow the rules of rationality. Indeed, the ability to follow such rules in a rigid, algorithmic fashion is often taken to be a distinctive property of machine-likeness. Humans are much less rational in their behaviour. A rich and diverse experimental literature has revealed various biases and heuristics that humans rely upon when making decisions. These biases and heuristics cause us to deviate from the expectations of rational choice theory. Much of this experimental literature relies on humans being presented with simple choice problems or vignettes. These choice problems could provide the basis for a Reverse Turing Test. The test subjects could be presented with a series of choice problems. We would expect the machine to output more ‘rational’ choices when compared to the human. If the human is indistinguishable from the machine in their choices, we might be inclined to call them ‘machine-like’.
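To see how this criterion could be operationalised, here is a minimal sketch (again my own, not from the paper) built around the transitivity requirement. A subject whose pairwise answers flow from a fixed utility function will never produce a preference cycle; a subject whose answers contain a cycle (a > b, b > c, c > a) violates the rational-choice norm:

```python
from itertools import permutations

def is_transitive(prefers, items):
    """Return False if the pairwise preference relation contains
    any cycle of the form a > b, b > c, c > a."""
    for a, b, c in permutations(items, 3):
        if prefers[(a, b)] and prefers[(b, c)] and prefers[(c, a)]:
            return False
    return True

items = ["apple", "banana", "cherry"]

# A 'machine-like' subject: every answer derives from one utility function.
utility = {"apple": 3, "banana": 2, "cherry": 1}
machine = {(a, b): utility[a] > utility[b]
           for a in items for b in items if a != b}

# A 'human-like' subject whose pairwise answers close a cycle.
human = dict(machine)
human[("cherry", "apple")] = True   # cherry > apple ...
human[("apple", "cherry")] = False  # ... contradicting apple > cherry

print(is_transitive(machine, items))  # True
print(is_transitive(human, items))    # False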

This Reverse Turing Test has some ethical and political significance. The biases and heuristics that define human reasoning are often essential to what we deem morally and socially acceptable conduct. Resolute utility maximisers have few friends in the world of ethical theory. Thus, to say that a human is too machine-like in their rationality might be to pass ethical judgment on their character and behaviour. Furthermore, this is another respect in which the techno-social environment might be encouraging us to become more machine-like. Frischmann spends a lot of time talking about the ‘nudge’ ethos in current public policy. The goal of the nudge philosophy is to get humans to better approximate the rules of rationality by constructing ‘choice architectures’ that take advantage of their natural biases. So, for example, many employer retirement programs in the US are constructed so that you automatically pay a certain percentage of your income into your retirement fund and this percentage escalates over time. You have to opt out of this program rather than opt in. This gets you to do the rational thing (plan for your retirement) by taking advantage of the human tendency towards laziness. In a techno-social environment dominated by the nudge philosophy, humans may appear very machine-like indeed.


4. Conclusion
That’s it for this post. To recap, the idea behind the Reverse Turing Test is that instead of thinking about the ways in which machines can be human-like we should also think about the ways in which humans can be machine-like. This requires us to reverse our perspective on the Turing Line. We need to start thinking about the properties and attributes we associate with machines and see how human beings might express those properties and attributes in particular environments. When we do all this, we acquire a new perspective on our current techno-social reality. We begin to see the ways in which technologies and social philosophies might encourage humans to be more machine-like. This might be a good thing in some respects, but it might also contribute to a culture of dehumanisation.

If you find this idea interesting, I recommend reading Brett’s original paper.

Monday, July 11, 2016

Interview on Robot Overlordz Podcast about Algocracy




I recently had the pleasure of being a guest on the Robot Overlordz Podcast. It was my fourth time appearing on the show. On this occasion I was discussing my own podcast (available here for those who don't know) and my Algocracy and Transhumanism project.

You can listen at this link.





Thursday, July 7, 2016

Moral Uncertainty and Moral Enhancement





[This is the rough draft of a paper I presented at the RIP Moral Enhancement Conference at Exeter on the 7th July 2016]

Some people are frightened of the future. They think humanity is teetering on the brink. Something radical must be done to avoid falling over the edge. This is the message underlying Ingmar Persson and Julian Savulescu’s book Unfit for the Future. In it they argue that humanity faces several significant existential risks (e.g. anthropocentric climate change, weapons of mass destruction, loss of biodiversity etc.). They argue that in order to overcome these risks it is not enough to improve our technology and our political institutions. We will also need to improve ourselves. Specifically, they argue that we may need to morally enhance ourselves in order to deal with the impending crises. We need to be less myopic and self-centred in our policy-making. We need to be more impartial, wise and just.

It is a fascinating idea, albeit one that has been widely critiqued. But what I find most interesting about it is the structure of the argument Persson and Savulescu make. It rests on two important claims. The first is that the future is a scary place, full of uncertainty and risk. The second is that in order to avert the risk we must enhance ourselves. Thus, the argument draws an important link between concerns about uncertainty and risk, and the development of enhancement technologies. In this paper, I want to further explore that link.

I do so by making three main arguments. First, I argue that uncertainty about the future comes in two major forms: (i) factual uncertainty and (ii) moral uncertainty. This is not a novel claim. Several philosophers have argued that moral uncertainty is distinct from factual uncertainty and that we should take it more seriously in our practical reasoning. What is particularly interesting about those encouraging us to take moral uncertainty more seriously is their tendency to endorse asymmetry arguments. These arguments claim that in certain contexts (e.g. the decision to abort a foetus or to kill and eat an animal) the moral uncertainty is stacked decisively in favour of one course of action. The consequence is that even if we cannot precisely quantify the degree of uncertainty, we should favour one course of action if we wish to minimise the risk of doing bad things.

Second, I argue that some arguments against human enhancement can be reconceived as moral risk/uncertainty asymmetry arguments. I take the work of Nicholas Agar to be a particularly strong example of this style of argumentation. Agar objects to human enhancement on the grounds that it could cause us to lose the internal goods that are associated with the use of normal human capacities. I suggest that Agar’s concerns about losing internal goods can be reframed as an argument about the imbalance of moral uncertainty involved in the decision to enhance. In other words, Agar objects to enhancement because he thinks the benefits of doing so are likely to be limited and the (moral) risks great. Thus the uncertainty inherent in the choice is decisively stacked against enhancement.

Third, I close by arguing that this style of asymmetry argument is not particularly persuasive. This is because the argument only works if it ignores other competing moral risks that are inherent in the decision not to enhance. This is why Persson and Savulescu’s argument is so interesting (to me at any rate): they emphasise some of these other risks that weigh in favour of enhancement. When you add their argument to that of Agar you end up with a much more balanced equation: there is no decisive, uncertainty-based argument, either for or against human enhancement.


1. Moral Uncertainty and the Asymmetry Argument

Let’s start by explaining the distinction between moral and factual uncertainty. Suppose you are a farmer and one of your animals is sick. You go to the vet and she gives you some medication. She tells you that this medication is successful in 90% of cases, but that in 10% of cases it proves fatal to the animal. Should you give the animal the medication? The answer will probably depend on a number of factors, but in the description given you face an uncertain (technically risky)* future. The animal could die or it could live. The uncertainty involved here is factual in nature. There is no doubt in your mind about what the right thing to do is (you want the animal to live, not die). The doubt is simply to do with the efficacy of the medication.




Contrast that with another case. Suppose you are a farmer who has recently been reading up on the literature on the morality of killing and eating animals. You are not entirely convinced that your life’s work has been morally wrong, but you do accept that it is possible for animals to have a moral status that makes killing and eating them wrong. In other words, you are now morally uncertain as to the right course of action. You might be 90% convinced that it is okay to kill and eat your livestock; but accept a 10% probability that this is a grave moral wrong (the numbers don’t matter; the imbalance does). This is very different from uncertainty as to the efficacy of the medication. The uncertainty is no longer about the means to the morally desired end; it is about the end itself.



Admittedly, the distinction between moral uncertainty and factual uncertainty is not perfect. Moral realists might want to argue that there are moral facts and hence there is no distinction to be drawn. But I suspect that even moral realists believe that moral facts (i.e. facts about what is right/wrong or good/bad) are distinct from other kinds of facts (e.g. facts about the weather or the state of one’s digestion). The ability to draw that distinction is all that is relevant here. Moral uncertainty involves uncertainty about facts relating to what is right/wrong and good/bad; factual uncertainty is uncertainty about anything else.

I’ll say no more about the distinction here. The key question is whether moral uncertainty is something that we should take seriously in our decision-making. Most people think that we should take factual uncertainty seriously. The classic example is in prescriptions about gambling or playing the lottery. It is uncertain whether you will win the lottery or not. But decision theorists will tell you that the odds are stacked against you and that this should guide your behaviour. They will tell you that even if the money you might win would be a good thing,** your probability of winning is so low as to make the decision to play irrational. Should we make similar prescriptions in cases where the moral rightness/wrongness of your action (or the goodness/badness of the outcome) is uncertain?

The major impediment to doing so is our inability to precisely quantify the probabilities attached to our moral uncertainties. We could make subjective probability assessments (pick some range of prior probabilities and update) but this is likely to be unsatisfactory. Nevertheless, some philosophers insist that there are cases in which the moral uncertainties (whatever they may be) stack decisively in favour of one course of action over another. These are cases of what Weatherson (who is critical of the idea) calls risk asymmetry.

Here’s an example. You are out one night for dinner. There are two options on the menu. The first is a juicy ribeye steak; the second is a vegetarian chickpea curry. You are pretty sure that eating meat is morally acceptable, but think there is some chance that it is a grave moral wrong. You like chickpea curry, but think that steak is much tastier. But you also know that eating meat is not nutritionally necessary.

To put it more formally, you know that you have two options (a) eat the steak or (b) eat the chickpea curry. And when deciding between them you know that you could be in one of two moral worlds:

W1: Eating meat is morally permissible; meat is tasty but not nutritionally essential.
W2: Eating meat is a grave moral wrong; meat is tasty but not nutritionally essential.

You think it is more likely that you are in W1, but accept a non-negligible risk that you are in W2. Which option should you pick? (I have mapped out this decision problem on the decision tree below).



Proponents of the asymmetry argument would claim that in this scenario you should eat the chickpea curry, not the steak. Why? Because if it turns out that you are in W2, and you eat the meat, you will have done something that is gravely morally wrong (perhaps on a par with killing and eating a human being). If, on the other hand, it turns out that you are in W1, and you eat the meat, then you have not done something that is particularly morally good. It’s permissible but no more. In other words, there is a sense in which eating the chickpea curry weakly dominates eating the steak across all the morally relevant possible worlds. You minimise your chances of doing something that is gravely morally wrong by going vegetarian. (Yes: this is effectively a moral version of Pascal’s wager)
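The asymmetry can be made vivid with a minimal sketch using assumed numbers (the payoffs below are invented purely to encode the rankings in the text: eating meat is at best mildly good in W1 and gravely wrong in W2; nothing hangs on the particular values). It computes the expected moral value of each option given your credences in the two worlds:

```python
# Hypothetical moral payoffs on an ordinal scale (assumed numbers,
# chosen only to encode the asymmetry described above).
payoffs = {
    "steak": {"W1": 1, "W2": -100},   # permissible vs grave wrong
    "curry": {"W1": 0, "W2": 0},      # less tasty, but never wrong
}

def expected_moral_value(option, credences):
    """Probability-weighted moral payoff across the candidate worlds."""
    return sum(credences[w] * payoffs[option][w] for w in credences)

credences = {"W1": 0.9, "W2": 0.1}  # you think W1 is far more likely

for option in payoffs:
    print(option, expected_moral_value(option, credences))
# Even with only a 10% credence in W2, the steak's expected moral
# value (0.9*1 + 0.1*-100, roughly -9.1) falls far below the curry's (0).
```

The point the numbers dramatise is that the grave wrong in W2 swamps the modest benefit in W1, so the vegetarian option wins under any non-negligible credence in W2; the conclusion is robust to wherever exactly you set the probabilities.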


That’s the gist of the asymmetry argument. It can be applied in other moral contexts. Some people use asymmetry arguments to claim that you should avoid aborting a foetus; some people use them to argue that you should give significant proportions of your income to charities in the developing world. Across all these contexts, asymmetry arguments seem to share four features:


  • (i) You have (at least) two options A or B.
  • (ii) You are not sure which moral world you are in (W1 or W2).
  • (iii) A (or B) is either neutral or not particularly good in both W1 and W2.
  • (iv) A (or B) is a serious moral wrong (or bad) if you are in W2.


As long as the risk of being in W2 is non-negligible, you should avoid A (or B).

What I want to argue now is that these four features are also present in certain objections to human enhancement. Hence those objections can be reframed as moral asymmetry arguments.


2. Asymmetry and Human Enhancement: The Case of Nicholas Agar
The best (but not the only) example of this comes from the work of Nicholas Agar. His 2013 book Truly Human Enhancement is the most up-to-date expression of his views. The gist of the argument in the book is that we should refrain from radical forms of human enhancement because if we don’t we run the risk of losing touch with important moral values, and not gaining anything particularly wonderful in return. To explain how this fits within the asymmetry argument mould, I’ll have to spend some time outlining the concepts and ideas Agar uses to motivate his case.

Agar’s concern is with the prudential axiological value of human enhancement. He wants to know whether the use of enhancement technologies will make life better for the people who are enhanced. In this manner, he is concerned about an intersection between enhancement and moral value, but not with moral enhancement as that term has come to be used in the enhancement debate (the term ‘moral enhancement’ is usually used to refer to the effect of enhancement on right/wrong conduct, not with its axiological effect). I think it is interesting that the term ‘moral enhancement’ is limited in this way, but I won’t dwell on it here. I’ll come back to the intersections between Agar’s argument and the more typical moral enhancement debate later in this article.

Agar follows the traditional view that enhancement technologies are targeted at improving human capacities beyond functional norms. So when he asks the question ‘will enhancement make life better for the people who are enhanced?’, he is really asking the question ‘will the improvement of human capacities beyond the functional norm make life better for those whose capacities are improved?’. In this respect he is drawing a distinction between radical and non-radical forms of enhancement. Take any human capacity — e.g. the capacity for memory, numerical reasoning, problem-solving, empathy. For the normal human population, there will be some variation in the strength of those capacities. Some people will have better memories than others. Some will display more empathy. Typically, the distribution of these abilities follows a bell curve. This bell-curve defines the range of normal human capacities. For Agar, non-radical enhancement involves the improvement of capacities within this normal range. Radical enhancement involves the enhancement of capacities beyond this normal range. Agar’s argument is about the prudential axiology of radical enhancement, i.e. that which moves us beyond the normal range.

When it comes to assessing the value of human capacities, Agar thinks that we must distinguish between two types of goods that are associated with the utilisation of those capacities. The first are the external goods. These are the goods that result from the successful deployment of the capacity. For example, the human capacity for intelligence or problem solving can produce certain goods: new technologies that make our lives safer, more enjoyable, and healthier. Enhancing human capacity beyond the normal range might be prudentially valuable because it helps us to get more of these external goods. These external goods are to be contrasted with internal goods. These are goods that are intrinsic to, and constituted or exemplified by, the deployment of the capacity. For instance, the capacity for numerical reasoning might produce the intrinsic good of understanding a complex mathematical problem; or the capacity for empathy might produce the intrinsic good of sharing someone else’s perspective and understanding the world through their eyes.

The distinction between internal and external goods can be tricky. It derives from the work of Alasdair MacIntyre. He explains it by reference to chess. In playing the game of chess, there are certain external goods that I might be able to achieve. If I am good at it, I might be able to win tournaments, prize money, fame and adulation. These are all products of my chess-playing abilities. At the same time, there are certain internal goods associated with the practice of playing chess well. There is the strategic thought, the flash of insight when you see a brilliant move, the rational reflection on endgame and openings. These goods are not mere consequences of playing chess. They are intrinsic to the process.

Agar argues that what is true for chess is true for the deployment of human capacities more generally. Using a particular capacity can produce external goods and it can exemplify internal goods. I have noted on a previous occasion that Agar isn’t as clear as he could be about the relationship between human capacities and internal and external goods. The relationship between capacities and external goods is pretty clear: capacities are used to produce outcomes that can be good for us. The relationship between capacities and internal goods is less clear. I think the best way to understand the idea is that our capacities allow us to engage in certain activities or modes of being that instantiate the kinds of internal goods that MacIntyre appeals to.

The internal/external goods distinction is critical to Agar’s case against enhancement. He notes that although internal and external goods often go together; they can also pull apart. Getting more and more of an external good might require us to forgo or lose sight of an internal good. So, for example, using a calculator might make you better able to arrive at the correct mathematical result, but it also forces you to forgo the intrinsic good of understanding and solving a mathematical problem for yourself. Similarly, attaching rollerblades to the ends of your legs might make you go faster from A to B, but it prevents you from realising the intrinsic goods of running. Note how both of these examples involve forms of technological enhancement: the calculator in one instance and the rollerblades in the other. This is telling. Agar’s main argument is that if we engage in radical forms of human enhancement, we will forgo more and more of the internal goods associated with different kinds of activities and modes of being. He thinks this applies across the board: the kinds of relationships we currently find valuable will be sacrificed for something different; new sporting activities will have to be invented as old ones lose their value; new forms of music and art will be required along with new jobs and intellectual activities. Indeed, Agar also argues that our sense of self and personal identity (our story to ourselves about the things that are valuable to us now) will be disrupted by this process. In short, radical enhancement will force us to give up many (if not most) of the internal goods that currently make our lives valuable.

And for what? Why would we be even tempted to forgo all these internal goods? Two arguments are proffered by proponents of radical enhancement. The first is that enhancing human capacities beyond the normal range will allow us to secure the more important external goods. These external goods include things like more advanced scientific discoveries; and increased wisdom and capacity for making morally and existentially significant policy choices. In other words, the external goods include solving the problems of climate change, and proliferation of weapons of mass destruction — the very things that Persson and Savulescu highlight in their argument for moral enhancement. The second argument is that even if we do lose the old internal goods, we will find new (possibly better) ones to replace them. Thus if you read the work of, say, Nick Bostrom you’ll find him waxing lyrical about the radically new forms of art and music that will be possible in the posthuman future. In other words, the internal goods post-radical enhancement might be even better than those pre-radical enhancement.

For Agar, these arguments hold little water. The second argument rests on largely speculative post-enhancement internal goods. Even if those speculations turn out to be correct you would still have to accept the loss of the old pre-enhancement internal goods. Furthermore, these new goods wouldn’t be our goods, i.e. the ones that shape our current evaluative frameworks. They would be different goods — ones that only really make sense to posthumans, not to us right now. And the first argument rests on a false dichotomy. It’s not like failing to enhance ourselves means that we must forgo the appealed-to external goods. On the contrary, there are perfectly good methods of achieving those goods without radically enhancing our capacities. I discussed this aspect of Agar’s case against enhancement at length in an earlier post. I’ll offer a brief summary here.

Agar’s point is that radical enhancement of human capacities requires the integration of technology into human biology (be it through brain implants or neuroprosthetics or psychopharmacology or nanotech). That’s how you enhance capacities beyond the normal human range. But the integration of technology into human biology is risky. Why do it when we can just create external devices that either automate an activity or can be used as tools by human beings to achieve the desired outcomes? These external technologies can help us realise all the essential external goods, without requiring us to radically enhance ourselves and thereby forgo existing internal goods. Agar uses a thought experiment to illustrate his point:

The Pyramid Builders: Suppose you are a Pharaoh building a pyramid. This takes a huge amount of back-breaking labour from ordinary human workers (or slaves). Clearly some investment in worker enhancement would be desirable. But there are two ways of going about it. You could either invest in human enhancement technologies, looking into drugs or other supplements to increase the strength, stamina and endurance of workers, maybe even creating robotic limbs that graft onto their current limbs. Or you could invest in other enhancing technologies such as machines to sculpt and haul the stone blocks needed for construction.
Which investment strategy do you choose?

The question is rhetorical. Agar thinks it is obvious that we would choose the latter, and that we have continually done so throughout human history. His argument, then, is that when it comes to securing external goods, we are like the pharaohs. We could risk going down the internal enhancement route, but why risk it when there is a perfectly good alternative? We can have the best of both worlds. We can avoid the calamities mentioned by Persson and Savulescu through better external technologies; and we can keep all the internal goods we currently value.

My claim is that in making this argument, Agar is deploying a kind of moral risk asymmetry argument. But it is difficult to see this because the argument is far more complex than the earlier example involving vegetarianism. It blends factual and moral uncertainty together to make an interesting case against enhancement. But at its core — I submit — it is not about factual uncertainty. It is about uncertainty as to what goods are important to the well-lived life, and how that uncertainty is decisively stacked in favour of one course of action. Agar is conceding that there could be new internal goods post-enhancement (there is a non-negligible probability of this). But there is also a non-negligible risk that they involve sacrificing our existing evaluative frameworks.

This does map onto the structure of the asymmetry argument that I outlined earlier:

(i) We can choose between two options: (a) pursue radical human enhancement (i.e. enhance human capacities beyond the normal range) or (b) do not pursue radical enhancement.
(ii) When choosing, we could be in one of two moral worlds:
W1: There are important irreplaceable internal goods associated with current levels of human capacity; there are important external goods that we could realise through enhancement; there are no compensating internal goods associated with enhanced levels of human capacity.
W2: There are important irreplaceable internal goods associated with current levels of human capacity; there are important external goods that we could realise through enhancement; there could be new compensating (better?) internal goods associated with enhanced levels of human capacity.
(iii) Option (a) is seriously axiologically flawed if we are in W1 and not particularly good if we are in W2. This is because in W1 radical enhancement entails losing all the important irreplaceable internal goods, for little obvious gain (we could have used external technologies to secure the external goods); and in W2 it is still unnecessary, causes us to lose existing goods, and merely substitutes a possibly better set of internal goods for the ones we lose.
(iv) Option (b) is axiologically superior if we are in W1 and not obviously axiologically inferior if we are in W2. This is because in W1 it allows us to retain the existing internal goods without forgoing the external goods (thanks to external technologies). While in W2 it also leaves us effectively as we were. We keep the goods we have instead of punting for an unknown set of alternatives.
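The dominance structure encoded in premises (iii) and (iv) can be made vivid with a small decision matrix. The numbers below are purely illustrative ordinal rankings of my own (Agar supplies no such quantities); all the sketch shows is that, on this ranking, not enhancing weakly dominates enhancing across both moral worlds:

```python
# Illustrative ordinal payoffs for the asymmetry argument.
# Higher = axiologically better. These rankings are assumptions made
# for illustration only -- they simply encode premises (iii) and (iv).
payoffs = {
    "enhance":      {"W1": 0,   # lose irreplaceable internal goods for little gain
                     "W2": 1},  # unnecessary; only speculative new goods
    "dont_enhance": {"W1": 2,   # keep internal goods; external tech secures external goods
                     "W2": 2},  # effectively as we were
}

def weakly_dominates(a, b, worlds=("W1", "W2")):
    """True if option a is at least as good as b in every world,
    and strictly better in at least one."""
    at_least_as_good = all(payoffs[a][w] >= payoffs[b][w] for w in worlds)
    strictly_better = any(payoffs[a][w] > payoffs[b][w] for w in worlds)
    return at_least_as_good and strictly_better

print(weakly_dominates("dont_enhance", "enhance"))  # True
print(weakly_dominates("enhance", "dont_enhance"))  # False
```

The point of a dominance argument is that it sidesteps the need for probabilities: so long as option (b) is no worse in every world and better in at least one, we need not know how likely W1 and W2 are.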



The upshot is that radical enhancement doesn’t look like a good bet in any moral world. There is no doubt that factual uncertainty plays a significant role in this argument — there are uncertainties as to the likely effects of technology on human life — but there is also little doubt that moral uncertainty is also playing a role — it is because we don’t know whether existing internal goods provide the optimum complement for a good life that we should be wary about losing them. Our future, radically enhanced selves, might have very different evaluative frameworks (ways of assessing what is worthwhile and what is not) from our own. What we now deem important and valuable might be completely unimportant to them. So why sacrifice a current, reasonably well-known evaluative framework, for a completely speculative one, when there is little gain?


3. Is this style of argument persuasive?
I hope I have convinced you that moral uncertainty — specifically in the form of axiological uncertainty — plays a role in Agar’s argument against the desirability of human enhancement. Let me close, then, by asking the obvious question: is the argument any good? I think not, and in explaining why I think this I hope to return to the original discussion of Persson and Savulescu. My contention is that Agar’s argument only succeeds because it involves an incomplete specification of the decision problem we face. In particular, it takes an overly benign view of external technologies. This is exactly what Persson and Savulescu warn against in Unfit for the Future, but with a slight twist.

Let me build up to this argument slowly. As a general rule, one should always be somewhat sceptical of arguments that make use of decision-theoretical models. These models are always simplifications. They highlight some of the salient features of a decision problem while at the same time obscuring or downplaying others. This doesn’t make them worse than other modes of arguing for or against a particular course of action (all human reasoning involves abstraction and simplification), but there is a danger that people are fooled by the apparent rigour and formality of the model. At the same time, there is a significant advantage to dressing up an argument in the formal decision-theoretical garb: doing so makes it easier to identify what is being left out of the analysis. That’s one reason why I think my reconstruction of Agar’s argument has value. Agar doesn’t strictly couch his argument in decision-theoretical terms. But when you do so you see more clearly where he might have made a misstep.

I think his major misstep is in suggesting that the failure to radically enhance is relatively risk free: that externalising technologies can help us to achieve the desired external goods without forgoing the internal goods. I think there is a much more intimate and complex relationship between external technologies and existing internal goods. You can see this most clearly in the debate about automation. I think it is fair to say — and I think Agar would agree — that one’s job is often a source of both internal and external goods. You work to get an income; to gain social status; to provide goods and services that are valuable to society. These are all external goods. At the same time, your work can also be a source of internal goods: the mastery of some skill and the sense of satisfaction and accomplishment associated with the performance of that skill (the analogy to MacIntyre’s chess player is almost exact). Now suppose your job is becoming more competitive: to keep achieving the same level of external goods you will have to radically improve your productivity and performance. Agar’s claim is that you could do this by using external technologies (e.g. robot assistance or advanced tools). This would allow you to achieve the external goods without forgoing the internal goods. That is the logic underlying the Pharaoh thought experiment (though, to be clear, I doubt Agar thinks that the experience of the pyramid builders is replete with internal goods).

But this seems very wrong. The use of external technologies to achieve the desired external goods of work does not necessarily leave intact the current internal goods of work. In many instances, the technologies replace or take over from the human workers. The human workers then lose all the internal goods that were intrinsic to their work. They might find new jobs, and there might be new internal goods associated with those jobs, but that’s beside the point. They lose touch with the internal goods of their previous jobs. That’s exactly the kind of thing Agar would like us to avoid by favouring external technologies over enhancement technologies. But he cannot always have his cake and eat it. Indeed, the modern trend is in favour of more and more externalising technologies — ones that sever the link between human activity and desired outcomes. Smart machines, machine learning, artificial intelligence, autonomous robots, predictive analytics, and on and on. All these technological developments tend to take over from humans in discrete domains of activity. They are often favoured on the grounds that they are more effective and more efficient than humans at achieving the external goods associated with those activities. So soon there will be no need for skilled drivers, lawyers, surgeons, accountants, and teachers. Robots will do a better job than humans ever could.

This will involve a radical shift in existing evaluative frameworks. Currently, much of human self-worth and meaning is tied up in performing activities that make a genuine difference to the world. Sometimes those activities are directly linked to paid employment; sometimes they are not. Either way, they are under threat from the rise of smart machines. If the machines take over, we will have to change our priorities. We will have to take a different approach to finding meaning in our lives. This sounds like it might involve the kind of radical shift that Agar is concerned about.

What could help us to avoid this shift? Well, ironically, one thing that could help is radical human enhancement. By enhancing our capacities we could ‘run faster to stay in one place’. We could contend with the increasing demands of our jobs (or whatever), and that way retain the internal goods we currently cherish. Indeed, radical enhancement might be essential if we are to do so. In the end then, Agar’s argument fails because it ignores the negative impact of external technologies on internal goods. Once that negative impact is factored in, the asymmetry that does the heavy-lifting in Agar’s argument dissolves.

Let me close with some final thoughts on the relevance of all this to Persson and Savulescu’s case for moral enhancement. As I noted at the outset, their argument is also very much premised on claims about risk and uncertainty. They think that current technological developments pose significant existential threats to humans and that the only way to resolve these problems is to pursue the moral enhancement of those humans. But this argument is not as robust a defence of human enhancement as it might appear to be. The argument assumes that humans will maintain their relevance in political and social decision-making processes. But interestingly, we might be able to address the problems they identify by removing humans from those processes. Better smart technologies might make better moral decisions. Why bother keeping humans in the loop?

So, somewhat ironically, it may be that it is only when you take Agar’s concerns about losing internal goods seriously, that you make a robust case for maintaining human participation in social decision-making. And it may be that it is only then that the case for moral enhancement of those humans can flourish.


*Throughout this paper I ignore the technical distinctions between risk and uncertainty. Most of the examples given in the paper involve uncertainty as opposed to risk because you cannot precisely quantify the degree of uncertainty involved in the decision. But many of the papers on moral uncertainty ignore this technicality and so I follow suit. 

** Yes, I know some people will disagree with this. If so, then they are adding moral uncertainty into the decision-making problem. They are suggesting that the moral value of having lots of money (particularly if you get it suddenly and unexpectedly) is unclear.