Wednesday, May 24, 2017

Advice on Publishing Peer-Reviewed Articles




I was recently asked to give a short, ten-minute presentation on writing and publishing peer-reviewed articles. The presentation was aimed at PhD students. In preparing for the talk, I realised how difficult it is to distill my thoughts on the process into just ten minutes. I have a love-hate relationship with publishing for peer review. It is essential to my life as an academic, but I sometimes feel trapped by the publication ‘game’, and I often feel that the benefits are minimal and ephemeral. I could probably talk for several hours about these feelings without getting to any practical advice.

Anyway, since I didn’t have several hours, I decided I would focus my talk on eight key ‘tips’, divided broadly into three main categories (perspective, process, and promotion). None of these tips deals with how to actually write an article (I have dealt with that topic on a previous occasion). Instead, they focus on your attitude toward the process and how to respond to reviewers’ comments. I thought it might be worth sharing them here.


A. Perspective
It is important to approach the peer review process with the right attitude. I have three tips for cultivating it:

(1) Don’t lose sight of ‘why’: This is the most important thing. As a budding academic, it is very easy to get trapped in the ‘game’ of publication. As you begin to succeed in publishing, you become acutely aware of your total number of publications. Very few academics can keep track of the substance of what their colleagues write, but they can all keep track of the number of pieces they publish. And so your number becomes the currency of self-worth. Try to avoid thinking in this way. If you become obsessed with your number, you will never be happy. I speak from experience. I once set myself the target of publishing 20 peer-reviewed articles, thinking that if I reached that target I would have ‘arrived’ as an academic. But once I reached the 20-article target, I realised that the 30-article target wasn’t too far away. I needed to knuckle down and reach that too. I soon realised how silly I was being. I had lost sight of why I was publishing in the first place. Publishing is not an end in itself. There are reasons for doing it. The most important of those reasons — and the ones that sustain you in the long run — are the intrinsic joys and pleasures you experience in researching, thinking and writing about a topic that interests you. The other reasons are more instrumental in nature. They matter too, but in a more practical way. After all, publication is a gateway to achieving academic impact, social impact, public engagement and career advancement.

(2) Prepare for failure: The average article is rejected. You are unlikely to be above average. It’s possible that you are, but don’t bet on it. The important thing is that you learn to expect failure and frame it in a positive way. Following Paul Silvia, I would suggest that you have the goal of becoming ‘the most rejected author in your department/peer group’. If you are being rejected, at least you haven’t given up. Giving up is worse than being rejected. (I gave this advice previously. On that occasion I suggested that it was the most important thing to bear in mind when publishing. I no longer think that is true. I think remembering why you are publishing is the most important thing. This might reflect a degree of maturity on my part and an increasing sense of detachment from the need to publish.)

(3) Don’t fetishise failure: Don’t assume that you can learn much from your failures. Sometimes you can, but most of the time you can’t. Academic failure is overdetermined. What I mean by this is that there are probably many factors that prevented your article from being accepted for publication, no one of which was necessarily fatal, or would be fatal if you were to resubmit the article elsewhere. Editors and reviewers are looking for reasons to reject your paper. Their default is ‘reject’. They have to set this default to maintain the prestige of their journal [thanks to Ashley Piggins for emphasising this point to me]. The reasons for rejection provided by reviewers often do not overlap. If you addressed every objection they raised before sending your article on to another journal, you would probably end up with an incoherent article. If you are rejected by a journal, look over the reviewer reports (if any), see if there are any consistent criticisms or comments that strike you as being particularly astute, revise the article in light of those comments, and then send it off to another journal. If there are no such comments, just send it off to another journal without substantive revisions. Persistence is the name of the game. I am now willing to resubmit the same piece to several journals (sometimes as many as 4 or 5) before giving up on it.

B. Process
You must deal with the process of submitting to journals and responding to reviewers’ comments in the right way. The most important thing here, of course, is to submit a high-quality piece, i.e. something that is well-written, full of persuasive arguments, and makes an original contribution to the literature. I don’t think there is a perfect formula for doing that. But there are a few other things to keep in mind:

(4) Have at least 3-4 target journals: This really follows from my previous bit of advice (“Don’t fetishise failure”). I always start writing articles by having at least 3-4 target journals in mind. I don’t think you should be too wedded to one target journal. You should aim for something of reasonably high quality, but don’t predicate your well-being on having your article accepted by the top journal in your field. That’s something that will come with time and persistence. I also don’t think it is worth revising your article for your target journal’s ‘house style’. I have never had an article desk-rejected because I failed to format it in house style. As long as the article is a good fit for your target journal and you have written and referenced it well, it stands a chance. You can worry about house style after you have been accepted.

(5) Be meticulous in responding to reviewers’ comments: If you are lucky enough to be asked for revisions, be sure to take the process seriously. You should always prepare a separate ‘response to reviewers’ document as well as a revised manuscript. In this document, you should respond to everything the reviewers have highlighted and pinpoint exactly where in the revised draft you have addressed what they have said. Speaking as someone who has reviewed many manuscripts, I feel pretty confident in saying that reviewers are lazy. They don’t want to have to read your article again. They only want to read the parts that are relevant to the comments they made and check to see whether you have taken them seriously. This is all I ever do when I read a revised manuscript.

(6) Be courteous in responding to reviewers’ comments: Remember that reviewers have egos; they want to be flattered. They will have taken time out of their busy schedules to read your article. They will have raised what they take to be important criticisms or concerns about your article. You should always thank them for their ‘thoughtful’, ‘insightful’, and ‘penetrating’ comments. This is one area of life where you cannot be too obsequious.

(7) Pick your battles: Sometimes reviewers will say things with which you fundamentally disagree. You don’t have to bow down and accept everything they say. You should stand your ground when you think it is appropriate to do so. But when doing this be sure to acknowledge that the reviewer is raising a reasonable point (and always consider the possibility that the fault lies in how you originally phrased what you wrote) and be sure to make concessions to them in other ways. To give a somewhat trivial example, I feel pretty strongly that academic articles shouldn’t be dry and devoid of ‘colour’. One of the ways in which I try to provide colour is by using well-known cultural or fictional stories to illustrate the key points I am making. This is one of the principles on which I stand firm. I once had a reviewer who wanted me to take a cultural reference out of an article because it was unnecessary to the point I was making. I stood my ground in my response, explaining at some length why I felt the example was stylistically valuable, even if logically unnecessary, and further discussing the importance of lively academic style. At the same time, I accepted pretty much everything else the reviewer had to say. Fortunately, they were gracious in their response, saying that they enjoyed my ‘spirited’ defence of the example, and accepting the article for publication. (It was this article, in case you were wondering).


C. Promotion
If you get an article accepted for publication, you should celebrate the success (particularly if it is your first acceptance), but you should also:

(8) Remember that it doesn’t end with publication: If you care about your research and writing, you won’t want it to languish unread in a pay-walled academic journal. You will want to promote it and share it with others. There are a variety of ways to do this, and discussing them all would probably warrant an entire thesis in and of itself. I personally use a combination of strategies: sharing open access penultimate versions of the text on various academic repositories; blogging; social media; podcasting; and interviews with journalists. I have never issued a ‘press release’ for anything I have written. I find I get enough attention from journalists anyway, but I think there probably is value in doing so and I may experiment with this in the future.


Bonus: Can you fast-track publications?
It takes a long time to write and publish for peer review. It is easy to get disheartened if you experience a lot of rejection. I am not sure that there is any way to truly ‘fast-track’ the process, but if you are hungry for an acceptance, I would suggest two strategies:

Write a response piece: i.e. write an article for a particular journal that responds, in detail, to another article that recently appeared in the same journal. This was how I got my first couple of acceptances and I think it can be very effective. In reality, of course, every academic article is a ‘response’ piece (they all respond to some aspect of the literature), it’s just that most are not explicitly labeled as such. What I am calling a ‘response piece’ is an article that is noteworthy for its academic narrowness (it only responds to one particular article) and journal specificity (it is really only appropriate for one journal). Both of those features limit its overall value. It is likely to have a more limited audience and is unlikely to achieve long-term impact. But it can provide invaluable experience of the peer review process.

Collaborate: In some disciplines collaboration is common; in others it is rare. I come from one of the latter disciplines. Nearly everything I have published has been solo-authored, but I have recently started to collaborate with others and I am beginning to appreciate its virtues. I think collaboration can work to accelerate the writing and publishing process, provided you collaborate with the right people. Some people are really frustrating to collaborate with (I’m pretty sure I am one of those people); some people are a delight. Obviously, you should pick a collaborator who shares some relevant research interest with you. On top of that, I recommend finding someone who is more productive and more ambitious than you are: they are likely to write fast and will push you outside your comfort zone. Furthermore, collaborating with them is far more likely to elicit engagement than simply asking them to provide feedback on something you have written. That said, I don’t think you should aim too high with your potential collaborators, at least when you are starting out. Pick people you know and who are broadly within your peer group. Don’t aim for the most renowned professor in your field, unless they happen to be your supervisor or a close friend. Again, you can build up to that.

Okay, so those are all my tips. To reiterate what I said at the outset, these tips only address part of the process. They don’t engage with the substance of your article and that really is the most important thing. Still, I hope some of you find them useful. The handout below summarises everything discussed above.







Monday, May 22, 2017

Episode #23 - Liu on Responsibility and Discrimination in Autonomous Weapons and Self-Driving Cars


In this episode I talk to Hin-Yan Liu. Hin-Yan is an Associate Professor of Law at the University of Copenhagen. His research interests lie at the frontiers of emerging technology governance, and in the law and policy of existential risks. His core agenda focuses upon the myriad challenges posed by artificial intelligence (AI) and robotics regulation. We talk about responsibility gaps in the deployment of autonomous weapons and crash optimisation algorithms for self-driving cars.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:03 - What is an autonomous weapon?
  • 4:14 - The responsibility gap in the autonomous weapons debate
  • 7:20 - The circumstantial responsibility gap
  • 13:44 - The conceptual responsibility gap
  • 21:00 - A tracing solution to the conceptual problem?
  • 27:47 - Should we use strict liability standards to plug the gap(s)?
  • 29:48 - What can we learn from the child soldiers debate?
  • 33:02 - Crash optimisation algorithms for self-driving cars
  • 36:15 - Could self-driving cars give rise to structural discrimination?
  • 46:10 - Why it may not be easy to solve the structural discrimination problem
  • 49:35 - The Immunity Device Thought Experiment
  • 54:12 - Distinctions between the immunity device and other forms of insurance
  • 59:30 - What's missing from the self-driving car debate?
 

Links




Friday, May 19, 2017

The Right to Attention in an Age of Distraction




We are living through a crisis of attention that is now widely remarked upon, usually in the context of some complaint or other about technology.

That’s how Matthew Crawford starts his 2015 book The World Beyond Your Head, his inquiry into the self in an age of distraction. He was prompted to write the book by a profound sense of unease over how the ‘attentional commons’ was being hijacked by advertising and digital media. One day, he was paying for groceries using a credit card. He swiped the card on the machine and waited for a prompt to enter his details to appear on the screen. He was surprised to find that he was shown advertisements while he waited for the prompt. Somebody had decided that this moment — the moment between swiping your card and inputting your details — was a moment when they had a captive audience and that they could capitalise on it. Crawford noticed that these intrusions into our attentional commons were everywhere. We live, after all, in an attentional economy, where grabbing and holding someone’s attention is highly prized.

There is something disturbing about this trend. What we pay attention to, in large part, determines the quality of our lives. If our attention is monopolised by things that make us unhappy, anxious, sad, self-conscious, petty, jealous (and so on), our lives may end up worse than they might otherwise be. I am sure we have all shared the sense that the social media platforms, video and news websites, and advertisements that currently vie for our attention tend to do these very things. I find I have become obsessed with the number of retweets I receive. I constantly check my Facebook feed to see if I have any new notifications. I’m always tempted to watch one last funny cat video. My attention is thus swallowed whole by shallow and frivolous things. I am distracted away from experiences and activities that are ultimately more satisfying.

Given this state of affairs, perhaps it is time that we recognised a right to attentional protection? In other words, a right to do with our attention as we please, and a corresponding duty to protect our attentional ecosphere from intrusions that are captivating, but ultimately shallow and unfulfilling. I want to consider the argument in favour of recognising that right in this post. I do so by looking at the arguments that can be made in favour of three propositions:

Proposition 1: Attention is valuable and hence something worthy of protection.

Proposition 2: Attention is increasingly under threat, i.e. there is greater need/cause for protecting attention nowadays.

Proposition 3: We should (consequently) recognise a right to attentional protection (doing so might be politically and practically useful).

My analysis of these propositions is my own, but is heavily influenced by the work of others. Jasper L. Tran’s article ‘The Right to Attention’ is probably the main source and provides perhaps the best introduction to the topic of attentional rights. He casts a wide net, discussing the importance of attention across a number of domains. But there is something of an emerging zeitgeist when it comes to the protection of attention. Tristan Harris, Tim Wu, Matthew Crawford and Adam Alter are just some of the people who have recently written about or advocated for the importance of attention in the modern era.


1. Attention is Valuable
It would probably help if we started with a definition of attention. Here’s a possible one:

Attention = focused conscious awareness.

We all live in a stream of consciousness (occasionally interrupted by sleep, concussion, and coma). This stream of consciousness has different qualitative elements. Some things we are never consciously aware of — they are unseen and unknown; some things we are only dimly aware of — they hover in the background, ready to be brought into the light; some things we are acutely aware of — they are in the spotlight. The spotlight is our attention. As I sit writing this, I am dimly aware of some birds singing in the background. If I force myself, I can pay attention to their songs, but right now I am not. The screen of my laptop is where my attention lies. That’s where my thoughts are being translated into words. It’s where the spotlight shines.

This definition of attention is relatively uncontroversial. Tran, in his article on the right to attention, argues that there is, in fact, little disagreement about the definition of attention across different disciplines. He notes, for example, that psychologists define it as ‘the concentration of awareness’, and economists define it as ‘focused mental engagement’. There is little to choose between these definitions.

So granting that the definition is on the right track, does it help us to identify the value of attention? Perhaps. Think for a moment about the things that make life worth living — the experiences, capacities, resources (etc.) that make for a flourishing existence. Philosophers have thought long and hard about these things. They have identified many candidate elements of the good life. But lurking behind them all — and taking pride of place in many accounts of moral status — is the capacity for conscious awareness. It is our ability to experience the world, to experience pleasure and pain, hope and despair, joy and suffering, that makes what we do morally salient. A rock is not conscious. If you split the rock with a pickaxe you are not making its existence any worse. If you do the same thing to a human being, it’s rather different. You are making the human’s life go worse. This is because the human being is conscious. Cracking open the human skull with a pickaxe will almost certainly cause the human great suffering and, possibly, end their stream of consciousness (the very thing that makes other valuable things possible).

That consciousness is central to what makes life worth living is fairly widely accepted. The only disputes tend to relate to how far the net of consciousness extends (are animals conscious? do we make their lives worse by killing and eating them?). Given that attention is simply a specific form of consciousness (focused conscious awareness), it would seem to follow that attention is valuable. A simple argument can be made:


  • (1) Consciousness is valuable (hence worth protecting).
  • (2) Attention is a form of consciousness (focused conscious awareness).
  • (3) Therefore, attention is valuable (hence worth protecting).


But this argument throws up a problem. If attention is merely a form of conscious awareness, then what is the point in talking specifically about a right to attentional protection? Shouldn’t we just focus on consciousness more generally?

I think there is some value to focusing specifically on attention. Part of the reason for this is practical and political (I talk about this later); part of the reason is more fundamental and axiological. As I suggested in my definition, there are different levels or grades of conscious awareness. Attention is the highest grade. It has a particular importance in our lives. What we pay attention to, in a very real sense, determines the quality of our lives. Paying attention to the right things makes for higher levels of satisfaction and contentment, and it is only in certain states of acutely focused awareness that we achieve the most rewarding states of consciousness.

I have a couple of examples to support this point. Both are originally taken from Cal Newport’s book Deep Work, which I enjoyed reading over the past year. The central thesis of Newport’s book is that certain kinds of work are more valuable and satisfying than others. In particular, he argues that engaging in ‘deep work’ (which he defines as ‘activity performed in a state of distraction-free concentration that pushes your cognitive capacities to their limit and produces new value, insight, etc.’) is better than ‘shallow work’ (which is the opposite). In chapter 3 of his book, he sets out to defend this claim by discussing how deep work makes life more meaningful. The value of attention features heavily in his argument. He discusses the work of two authors who have highlighted this.

The first is the science reporter Winifred Gallagher. In her 2009 book Rapt, she made the case for the role of attention in the well-lived life. She wrote the book after being diagnosed with an advanced form of cancer. As she coped with her treatment and diagnosis, she had a revelation. She realised that the disease was trying to monopolise her attention and that her outlook and emotional well-being were suffering as a result. By systematically training herself to focus on other things (simple day-to-day pleasures and the like) she could prevent this from happening. The circumstances of her life (her disease, its prognosis) were bad; but her attentional focus could be good, and that ultimately counted for more. She then set out to research the science behind her revelation, discovering in the process that there was considerable empirical support for it. Her conclusion is neatly summarised in the following quote:

Like fingers pointing to the moon, other diverse disciplines from anthropology to education, behavioral economics to family counseling, similarly suggest that the skillful management of attention is the sine qua non of the good life and the key to improving virtually every aspect of your experience.
(Gallagher 2009, 2)

This chimes with my own experience. I find that my outlook and sense of well-being is far more affected by what I pay attention to on a daily basis than by what I achieve or by improvements in my overall life circumstances. Those things are important, don’t get me wrong, but they count for less than we might think. The benefits of achievements are often short-lived. You bask in the glory for a few moments but quickly move on to the next goal. Improvements in life circumstances quickly become the new baseline of expectation. The benefits of what you pay attention to are more sustainable. A life in which you focus on important and pleasurable things is a good life. This gains additional support when we consider how certain forms of torture work (they often work by forcing you to pay attention to unpleasant things) and how people tout the benefits of meditation (by focusing your attention on the here and now you can improve your psychological well-being).

The other author Newport uses to support his thesis is the psychologist Mihaly Csikszentmihalyi, who is best-known for his work on the concept of ‘flow’. Csikszentmihalyi set out to understand what it is that makes people really happy, i.e. what daily activities make them feel good. Were people happiest at work or at play? Interestingly, Csikszentmihalyi found that people were often happiest at work. Why was this? His answer was that work enabled people to enter states of concentrated, focused awareness that were intensely pleasurable and rewarding. He called these ‘flow states’.

He subsequently developed a theory of flow. The theory holds that you enter into a flow state when you are engaging in some activity that pushes your cognitive capacities to their limits. In other words, when you are doing something that tests your abilities but is not completely beyond them. Engaging in such activities fills your attentional sphere. They are so demanding that you cannot focus on anything else. This means you don’t have time to pay attention to things that might make you unhappy or put you ill at ease. A flow state is perhaps the highest state of attentional focus and, if Csikszentmihalyi is to be believed, the one that is central to the fulfilling life.

So we have here two arguments for thinking that attention, rather than consciousness more generally, is worthy of special consideration. What we pay attention to is central to our emotional well-being and outlook on life (Gallagher’s argument), and certain attentional states are intensely pleasurable and rewarding (Csikszentmihalyi’s argument). This does not mean that attention is all that matters in the good life (our health, income, friends etc. are also important) but it does suggest that attention is of particular importance. Furthermore, it suggests that protecting our attention has two important components:

Content protection: ensuring that we pay attention to things that make our lives go better (things that are meaningful and contribute to well-being) and that we are not constantly distracted by things that are trivial and unimportant.

Capacity protection: ensuring that we acquire and retain the capacity for extreme concentrated awareness (i.e. the capacity to enter flow states).


2. Attention is Under Threat and Needs Protection
You may not be entirely satisfied with the preceding argument, but set aside your objections for now. If we assume that attention is valuable and worth protecting, we must confront the next question: why is it worth protecting now? After all, if attention is valuable surely it has been valuable for the entire history of humanity? What is special about the present moment that demands a right to attentional protection?

There’s a simple answer and a more complex one. The simple answer is that no one who is currently concerned with attention would deny that it has always been valuable and worthy of protection. We probably didn’t recognise it before now because we lacked both the conceptual vocabulary to articulate the right effectively and a political and social climate that would be receptive to a rights-claim of this sort. The more complex answer returns us to the opening quote: the one I took from Matthew Crawford. There is something about the present moment that seems to involve a ‘crisis of attention’. Our attentional ecosphere has become dominated by smart devices, addictive apps, social media services, and ubiquitous advertising. This is making it increasingly difficult to pay attention to things that matter and to retain the ability to focus.

We live in an attentional economy, where thousands upon thousands of actors compete, second by second, for our attention. This competition has driven some extraordinary innovation in the tools and techniques of attentional manipulation. We are getting really good at distracting people and disrupting their attention. Adam Alter’s recent book Irresistible documents the various tools and techniques that are used to grab and hold our attention. He identifies six key ‘ingredients’ that are needed to create an experience that holds our attention. He then explains how modern technologies make it easier to create such experiences. I’ll explain by going through each of the six ingredients:

Goals: Doing something with a target or end state in mind makes it more likely that it will grab your attention. It gives your efforts a purpose. Modern information technology has made it easier to identify and track our achievements of certain goals. It has also made seemingly arbitrary or meaningless goals more salient and attention-grabbing. Goals such as getting 2,000 Instagram followers, or beating your Strava segment times, are not only novel, they are also more easily brought into our attentional spotlights.

Feedback: Getting feedback on what you do tells you what is worth doing (what you are good at) and hence what is worthy of your attention. Modern technology makes it easier to get this feedback. Tracking and surveillance software can give you precise, quantifiable data about your actions, and social media platforms allow others to comment on and criticise what we do, and to cajole us into trying the same thing over and over. What’s more, designers of games and apps often provide attention-grabbing feedback that is relatively unimportant (known as ‘juicing’ in game design) and that can mask losses as wins (e.g. with loud noises, badges, flashing lights). This further engrains an activity in our attentional spotlight.

Progress: Having the sense that you are getting better at something often makes it more attention-grabbing. The ideal is to create an experience with extremely low barriers to entry (anyone can get started and enjoy it) but which then rewards the time and effort put in by making the experience more challenging. Alter gives the example of Super Mario Bros as a game that had this ideal mix. He then notes that contemporary game designers use similar design principles to hook people into particular games on smartphones and social media platforms (Farmville, Candy Crush, Kim Kardashian’s Hollywood). They often then exploit this attentional hook by adding in-game purchases that are necessary if you wish to make progress in the game environment.

Escalation: Having the sense that you are triumphing over adversity and that the stakes are being constantly raised often makes something more attention-grabbing. To be honest, I’m not entirely sure what the distinction is between this and the previous one, but as best I can tell it has to do with encouraging someone to believe they are acquiring mastery over a particular set of skills (as opposed to just giving them a sense of progress). Again, Alter highlights how modern game designers are experts at doing this. Adopting Csikszentmihalyi’s idea of flow, he notes how they create game environments that get people to operate just outside their comfort zones. This makes for a more immersive and rewarding experience. Alter also argues that humans have certain ‘stopping rules’ (cues that encourage them to end a particular behaviour) and that technology erodes or undermines these stopping rules.

Cliffhangers: Having the sense that a task or experience has not yet been completed can make it more attention-grabbing. This idea goes back to the work of the Russian psychologist Bluma Zeigarnik, whose experiments revealed that when you open a ‘task loop’ in your mind, it continues to occupy considerable mental real estate until it is closed off (i.e. until you complete the task). This has become known as the ‘Zeigarnik Effect’. Alter notes how modern media (particularly serial television shows and podcasts) exploit this effect to encourage ‘binge’ watching/listening. The ‘autoplay’ features on Netflix and YouTube also take advantage of this: they automatically open loops and present you with the next episode/video to sate your desire for more.

Social Interaction: Sharing an experience with others and getting feedback from them can make it more attention-grabbing. Suffice to say, social media platforms such as Twitter, Facebook and Instagram are excellent at facilitating social interaction of the most addictive kind. They allow for both positive and negative feedback, and they provide that feedback on an inconsistent schedule.



To reiterate, and to be absolutely clear, there is nothing necessarily technological about these six ingredients. You could engineer attention-grabbing experiences and products using these six ingredients in an offline, non-technological world. Indeed, Tim Wu, in his recent book The Attention Merchants, highlights the many ways in which this has been done throughout human history, suggesting in particular that religions were the original innovators in attentional engineering. Alter’s point is simply that technology makes it easier to bring these six features together to make for particularly absorbing experiences.

But is this a bad thing? Not necessarily. Here we run into a major problem with the argument in favour of a right to attention. As noted earlier, attention is central to the well-lived life. Paying attention to the right things, and being completely immersed in their pursuit, is a good thing. Consequently, using the six features outlined by Alter to engineer immersive and attention-grabbing experiences is not necessarily a bad thing. If the experiences you have engineered are good (make individual lives better), then you might be making the world a better place. To suggest that we are living through a ‘crisis of attention’, and that this crisis warrants special protection of attention, requires some additional, and potentially controversial, argumentative footwork.

First, you have to argue that the kinds of attention-grabbing experiences that are being fed to us through our digital devices are, in some sense, worse than, or inferior to, the experiences we might be having without them. One way to do this would be to channel the spirit of John Stuart Mill and suggest that there are ‘higher’ and ‘lower’ experiences and that, in the main, technology is a fetid swamp of lower-quality experiences. I think there is some plausibility to this, but it is complicated. You could argue that being totally immersed in video games - to the exclusion of much else - is ‘lower’ because you are not achieving anything of intrinsic worth. The time spent playing the game is time that could be spent (say) finding a cure for cancer and making the world a better place. You could also argue that the jockeying for position on social media platforms cultivates vicious (as opposed to virtuous) character traits (e.g. competitiveness, jealousy, narcissism). But you probably couldn’t argue that all technologically-mediated experiences are ‘lower’. Some may involve the pursuit of higher pleasures and goods. A blanket dismissal of digital media would be wrong.

Second, you would have to argue that the vast array of potentially absorbing experiences on offer is deeply distracting and hence corrosive of the ability to concentrate and achieve flow states. This seems like an easier argument to make. One thing that definitely appears to be true about the modern age is that it is distraction-rich. There are so many games, movies, podcasts, and social media services that are competing for our attention that it becomes hard to focus on any one of them. We get trapped in what Fred Armisen and Carrie Brownstein called the ‘technology loop’ in their amusing sketch from the TV series Portlandia.

 

In this sense, it doesn’t really matter whether the experiences that are being mediated through these devices are intrinsically worthwhile (whether they consist of the ‘higher’ pleasures); the distraction-rich environment provided by the devices prevents you from ever truly experiencing them.

If these three arguments are correct — if it is easier to engineer attention-grabbing experiences; if the majority of the experiences involve ‘lower’ pleasures/pursuits; and if the environment is too distraction-rich — then we may well be living through an acute crisis of attention in the present era.


3. Why a ‘Right’ to Attentional Protection?
You could accept the first two propositions and still disagree with the third. It could be the case, after all, that attention is valuable and is under threat but that it is neither desirable nor useful to recognise a specific ‘right’ to attentional protection. Further argumentation is needed. Fortunately, this argumentation is not too difficult to find.

One reason for favouring a ‘right’ to attentional protection is simply that doing so is the normatively/morally appropriate thing to do. Look at how other rights-claims are normatively justified. They are usually justified on the basis that recognising the right in question is fundamental to our status as human beings (to our ‘dignity’, to use the common phrase) or because doing so leads to better consequences for humankind. The right to property, for example, can be justified on Lockean ‘natural’ right grounds (that it is fundamental to our nature as human beings to acquire ownership over our material resources) or on practical economic grounds (the economy runs better when we recognise the right because it incentivises people to do things that increase social welfare).

Presumably similar justifications are available for the right to attentional protection. If the likes of Winifred Gallagher and Mihaly Csikszentmihalyi are correct, for example, then the skilful management of attention is integral to living a truly satisfying human life (it is the ‘sine qua non’ of the good life, to use Gallagher’s phrase). Protecting this ability to manage attention would, thus, seem in keeping with the requirements of human dignity and overall social well-being.

But normative justifications of this sort are probably not enough. It’s possible that we could ensure our dignity as attentional beings, and improve the societal attention-quotient, without recognising a specific ‘right’ to attentional protection. To justify the ‘right’ would seem to require a more practical set of arguments. Fortunately, this is possible too. You can favour the notion of a ‘right’ to attention on the grounds that doing so will be politically and practically useful. Contemporary political and legal discourse is enamoured with the language of rights. To recognise something as a right carries a lot of force in public debate. If we seriously think that attention is valuable and under threat, it may, consequently, be much to our advantage to recognise a right to attentional protection. We are more likely to be heard and taken seriously if we do.

On top of that, if a right to attentional protection does get recognised in law, it carries further practical significance. To understand this, it is worth stepping back for a moment and considering what a legally protected right really is. The classic analysis of legal rights was conducted by William Hohfeld. Hohfeld noted that claims to the effect that such-and-such a right exists usually break down into a number of more specific sub-claims.

Hohfeld’s complete analysis is a little bit complicated but the gist of it is readily graspable. As he saw it, there were four specific ‘incidents’ or components to rights (not all of which were present in every claim about the existence of a right). It’s best if we understand these with an example. Take the right to bodily integrity. According to Hohfeld’s analysis, this could be made up of four distinct incidents:

Privilege: The freedom/liberty to do with your body as you please.

Claim: A duty imposed on others not to interfere with or alter your body in any way (this is what we usually associate with the use of the term 'right').

Power: The legally recognised ability to waive your claim-right (e.g. through informed consent) and allow others to interfere with your body.

Immunity: The legally recognised protection against others waiving or altering your claim-right (i.e. not to be forced to give up your claim right).

Privileges and claims are first-order incidents: they regulate and determine your conduct and the conduct of others. Powers and immunities are second-order incidents: they regulate and determine the content of the first-order incidents. Using this four-part model, you can map the relationship between the different elements of the right to bodily integrity, as in the following diagram.

Diagram taken from Stanford Encyclopedia of Philosophy article on 'Rights'


All rights can be understood as combinations of these four incidents, but not all contain all four. For example, you could have a claim right (against interference by another) without necessarily having a power or immunity. Similarly, the basic elements of a right can be qualified in many important ways. For instance, the privilege to do with your body as you please is limited in many countries to exclude the right to sell sexual services or body parts. Likewise, the immunity against others interfering with your claim right might be a qualified immunity: if a law is passed through a legally legitimate mechanism that eliminates the claim, you may no longer be entitled to it. Some rights can be quite limited and qualified; some rights can be given the strongest possible protections.

This Hohfeldian analysis is helpful in the present context. It allows us to sharpen and deepen our thinking about the right to attentional protection. What kind of right is it? Which incidents does it invoke? Here’s my first pass at both of these questions:

Privilege: The liberty to focus and manage your attention as you see fit.

Claim: A duty on others not to interfere with or hijack your attention, and your capacity to pay attention.

Power: The legally recognised power to waive your claim-right to attentional protection, i.e. to allow others (people or experiences) to enter your attentional spotlight.

Immunity: The legally recognised protection against others waiving your claim right to attention (e.g. by selling off a claim to your attention to others).



I think these four incidents would need to be qualified in certain ways. The arguments I outlined earlier in relation to the precarious nature of attention in the modern era would seem to imply some degree of paternalism when it comes to the protection of attention. The fear, after all, is that modern technology is particularly good at hijacking our attention and that we are not the best protectors of our own attention. This would seem to qualify the privilege over attention. Furthermore, as Jasper Tran notes in his article, there may be a duty to pay attention to certain things in certain contexts (e.g. a jury member has a civic and legal duty to pay attention to the evidence being presented at trial). Thus, there cannot be an unqualified privilege to pay attention to whatever you like (and, correlatively, to ignore whatever you like).
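For readers who like to see the structure laid out explicitly, here is a minimal sketch of how the four incidents, the first-order/second-order distinction, and the qualifications just mentioned fit together, with the attention right as the worked example. It is purely illustrative: the class and field names are my own inventions for exposition, not anything drawn from Hohfeld or the legal literature.

```python
# Illustrative sketch only: a toy encoding of Hohfeld's four incidents, applied to the
# proposed right to attentional protection. Names are invented for exposition.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Incident:
    kind: str            # 'privilege', 'claim', 'power' or 'immunity'
    order: int           # 1 = first-order (regulates conduct), 2 = second-order (regulates incidents)
    content: str
    qualifications: List[str] = field(default_factory=list)

@dataclass
class Right:
    name: str
    incidents: List[Incident]

attention_right = Right(
    name="right to attentional protection",
    incidents=[
        Incident("privilege", 1,
                 "liberty to focus and manage your attention as you see fit",
                 ["duties to attend in some contexts, e.g. a juror attending to the evidence"]),
        Incident("claim", 1,
                 "duty on others not to interfere with or hijack your attention"),
        Incident("power", 2,
                 "ability to waive the claim and admit others into your attentional spotlight"),
        Incident("immunity", 2,
                 "protection against others waiving or selling off your claim on your behalf"),
    ],
)

if __name__ == "__main__":
    for inc in attention_right.incidents:
        tag = "first-order" if inc.order == 1 else "second-order"
        quals = f" (qualified: {'; '.join(inc.qualifications)})" if inc.qualifications else ""
        print(f"{inc.kind:<10} [{tag}] {inc.content}{quals}")
```

Nothing hangs on this particular encoding; the point is simply that a ‘right’ in the Hohfeldian sense is a bundle of distinct incidents, each of which can be qualified independently.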

All that said, it would seem that the right to attentional protection does warrant reasonably robust recognition and enforcement. After all, attention is, if the arguments earlier on were correct, integral to human well-being.


4. Objections and Outstanding Issues
To this point, I have been looking at the case in favour of the right to attentional protection. I’m going to conclude by switching tack and considering various problems with the idea. I’ll look at four issues in particular. I have some thoughts about each of them, but I’m not going to kid you and pretend that I know what to do with each of them. They pose some serious challenges that would need to be worked out before a defence of the right to attentional protection became fully persuasive.

The first issue is that the right to attentional protection might conflict with other important and already recognised rights. Tran discusses the obvious one in his article: the freedom of speech. If I have a right to speak my mind, surely that necessarily entails a right to invade your attentional ecosphere? Or, if not a right to invade, at least a right to try and grab your attention. There seems to be some tension here. Tran responds to this by arguing that the right to attention and the right to freedom of expression are analytically distinct: you have a right to speak your mind but not a right to have others pay attention to you. That’s certainly true. But the analytical distinction ignores the practical reality. If you have people out there speaking their minds, it would be difficult to ensure that they don’t, at least occasionally, trespass on someone else’s attention. That said, how much weight is ascribed to the freedom of expression varies a bit from jurisdiction to jurisdiction, and commercial products have always been subject to more stringent regulations than, say, journalism, literature or other works of art. Furthermore, clashes of rights are common and the mere fact that one right will clash with another doesn’t, in itself, provide reason to reject the existence of that right.

The second issue concerns the practicality of protecting attention. You might argue that it is impossible to really protect someone’s attention from interference or hijacking. To be alive and conscious, after all, is to be bombarded by demands on your attention. How could we possibly hope to protect you from all those demands? The simple answer to this is that we couldn’t. To argue that there is a right to attentional protection does not mean that there is a right to be protected from all interferences with your attention. That would be absurd. Analogous areas of the law have dealt with this problem. Take the right to bodily integrity again. Most legal systems impose a duty on others not to apply force to your body. This is usually protected by way of laws on assault and battery. But, of course, simply being alive and going out in society entails that sometimes people will bump into you and apply force to your body. Legal systems typically don’t treat such everyday bumps and collisions as breaches of the duty not to apply force. They save their energies for more serious interferences or infringements. A similar approach could be adopted in the case of a right to attentional protection.

The third issue concerns the redundancy of the right to attentional protection (i.e. its overlap with other pre-existing rights). There are a lot of rights already recognised and protected by the law. Some people may argue that there is an over-proliferation of rights-claims and this dilutes and attenuates their usefulness. In the case of the right to attentional protection, you could argue that attention is already adequately protected by things like the right to privacy and bodily integrity, the freedom of conscience, and the restrictions on fraud, manipulation and coercion that already populate the legal system. This is probably the objection with which I am most sympathetic. I do worry that much of what is distinctive and interesting about the right to attention is already covered by existing rights and legal doctrines. That said, the mere fact that there are already mechanisms in place to protect the right does not mean that the right should not be recognised. Recognising the right may provide a useful way to organise and group those existing mechanisms toward a particular purpose. Furthermore, I do think there is something distinctive about attention and its value in human life that is not quite captured by pre-existing rights. It may be worth using the label so as to organise and motivate people to care about it.

Finally, even if we do recognise a right to attentional protection, there are a variety of questions to be asked about the mechanisms through which the right is protected. One big question concerns who should be tasked with recognising and protecting against violations of the right. Should it be up to the individual whose right is interfered with? Or should there be a particular government agency (or third sector charity) tasked with doing so? Or some combination of both? Giving the job to someone other than the individual might be problematic insofar as it is paternalistic and censorious: it would involve third parties arguing that a particular attentional interference was harmful to the individual in question. There are also then questions about the legal remedies that should be available to ensure that attention is protected. Should the individual have a right to sue an app-maker or social-media provider for hijacking their attention? Or should some system of licensing and regulatory sanction apply? One possibility, which I quite like, is that there should be dedicated public spaces that are free from the most egregious forms of attentional manipulation. That might be one way for the state to discharge its duty to protect attention.

Suffice to say, there is a lot to be worked out if we ever did agree to recognise a right to attentional protection.


Wednesday, May 17, 2017

Robots and Retribution (Podcast Interview)




I had the great pleasure of being interviewed by David Edmonds (co-host of Philosophy Bites and author of several excellent books on the history of philosophy with John Eidinow) for his new podcast Philosophy 24/7. We spoke about robots and retribution.

The conversation was based around the arguments in my paper 'Robots, Law and the Retribution Gap' (which you can read for free here).

I highly recommend the Philosophy 24/7 podcast. If you have any interest in applied/practical philosophy, you should subscribe to it now. Some of my other favourite episodes include:






Monday, May 15, 2017

Cognitive Scarcity and Artificial Intelligence: How Assistive AI Could Alleviate Inequality




By Miles Brundage (FHI, Oxford University) and John Danaher (NUI Galway)

(Be sure to check out Miles's other work on his website and over at the Future of Humanity Institute, where he is currently a research fellow. You can also follow him on twitter @Miles_Brundage)


The rise of the robots and the end of work. The superintelligence control problem and the end of humanity. The headlines seem to write themselves. The growth of artificial intelligence undoubtedly brings with it many perils. But it also brings many promises. In this article, we focus on the promise of widely distributed assistive artificial intelligences (i.e. AI assistants). We argue that the wide availability of such AI assistants could help to address one aspect of our growing inequality problem. We make this argument by adopting Mullainathan and Shafir’s framework for thinking about the psychological effects of scarcity. We offer our argument as a counterbalance to the many claims made about the inequality-boosting powers of automation and AI.


1. The Double Effect of Income Scarcity
Achieving some degree of distributive justice is a central goal of contemporary societies. In very abstract terms, this requires a just distribution of the benefits and burdens of social life. If some people earn a lot of money, we might argue that they should be taxed at a higher rate to ensure that the less well off receive some compensating measure of well-being. Tax revenues could then be used to provide social benefits to those who lack them through no fault of their own. Admittedly, some societies pay little more than lip service to the ideals of distributive justice; but in many cases it is a genuine, if elusive, goal. When pressed, many would say that they are committed to the idea that there should be equal opportunities and a fair distribution of benefits and burdens for all. They simply differ in their understanding of equality and fairness.

Various forms of inequality impact on our ability to achieve distributive justice. Income inequality is one of them, and it is a major concern right now. The gap between the rich and the poor seems to be growing (Atkinson 2015; Piketty 2014). And this is, in part, exacerbated by advances in automation. Whether automation is causing long-term structural unemployment is a matter of some controversy. Several authors have argued that it is, or soon will be (Brynjolfsson and McAfee 2014; Ford 2015; Chace 2016). Others are more sceptical. But they sometimes agree that it is having a polarising effect on the job market and the income associated with jobs that are still available to humans. For example, David Autor argues that advances in automation are having a disproportionate impact on routine work (typically middle-income, middle-skill work): the routine nature of such work makes it amenable to computer programs (using traditional ‘top-down’ methods of programming or bottom-up machine learning methods) performing the associated tasks. This forces workers into two other categories of work: non-routine abstract work and non-routine manual work. Abstract work is creative, problem-solving work which requires high levels of education and is usually well-rewarded. Manual work is skilled, dexterous physical work. It usually does not require high levels of education and is typically poorly paid and highly precarious (i.e. short-term, contract-based work). The problem is that there are fewer jobs available at the abstract (and high-paid) end of the jobs spectrum. The result is that workers displaced by advances in automation tend to be pushed into the manual (and lower-paid) end.

If these polarising trends continue, more and more people will suffer from income-related scarcity. They will find it harder to get work that pays well; and the work they do get will tend to be precarious and insecure. This should be troubling to anyone who cares about distributive justice. The critical question becomes: how can we address the problems caused by income-related scarcity in such a way that there is a just distribution of the benefits and burdens of social life?

What is often neglected in debates about this question is the double effect of income-related scarcity. Research suggests that the poor don’t just suffer from all the problems we might expect to ensue from a lack of income (inability to pay bills, shortage of material resources, reduced ability to plan for the future); they also suffer a dramatic cognitive impact. The work of Sendhil Mullainathan and Eldar Shafir is clear on this point (2014a; 2014b; 2012). To put it bluntly, they argue that having an insufficient income doesn’t just make you poor, it makes you stupid, too.

That’s a loaded way of putting it, of course. Their more nuanced view is that income-scarcity puts a tax on your cognitive bandwidth. ‘Bandwidth’ is a general term they use to describe your ability to focus on tasks, solve problems, exercise control, pay attention, remember, plan and so forth. It comes in two main flavours:

Bandwidth1 - Fluid intelligence, i.e. the ability to use working memory to engage in problem-solving behaviour. This is the kind of cognitive ability that is measured by standard psychological tests like Raven’s Progressive Matrices.

Bandwidth2 - Executive control, i.e. the ability to pay attention, manage cognitive resources, initiate and inhibit action. This is the kind of ability that is often tested by getting people to delay gratification (e.g. the infamous Marshmallow test).

Mullainathan and Shafir’s main contention, backed up by a series of experimental and field studies, is that being poor detrimentally narrows both kinds of cognitive bandwidth. If you have less money, you tend to be acutely sensitive to stimuli relating to price. This produces a cognitive tunnelling effect: you become very good at paying attention to anything in your environment relating to money, but your sensitivity to everything else is reduced. This results in less fluid intelligence and less executive control. The effects can be quite dramatic. In one study, performed in a busy shopping mall in New Jersey, low-income and high-income subjects were primed with a vignette that made them think about raising different sums of money ($150 and $1,500) and were then tested on fluid intelligence and executive control. While higher-income subjects performed equally well in both instances, those with lower incomes did not. They performed significantly worse when primed to think about raising $1,500. Indeed, the impact on fluid intelligence was as high as 13-14 IQ points.

Mullainathan and Shafir have supported and expanded on these findings in a range of other studies. They argue that the tax on bandwidth doesn’t just hold for income-related scarcity. It holds for other kinds of scarcity, too. People who are hungry are more likely to pay attention to food-related stimuli, with consequent effects on their intelligence and executive control. The same goes for those who are busy and hence time-poor. There is, it seems, a general psychological impact of scarcity. The question we ask here is: Can AI help mitigate that impact?



2. Could AI address the tax on cognitive bandwidth?
To answer this we need to ask another question: What does AI do? There are competing definitions of artificial intelligence. Much of the early debate focused on whether machines could think and act like humans. Nowadays the definition seems to have shifted (at least amongst AI researchers) to whether machines can perform particular tasks or solve particular problems, e.g. facial recognition, voice recognition, language translation, pattern matching and classification, playing and winning complex games like chess or Go, planning and plotting routes for cars, driving cars and so on. Many of the tasks performed by modern AIs are cognitive in character: they involve processing and making use of information to achieve some goal state, such as a high chance of winning a game of Go or a correctly labelled set of pictures.

The cognitive character of AI throws up an interesting possibility: Could AI be used to address the tax on cognitive bandwidth that is associated with scarcity? And could this, in turn, help us to edge closer to the ideals of distributive justice?

Mullainathan and Shafir’s research suggests that the tax on bandwidth is a major hurdle to resolving problems of inequality. People who have a scarcity mindset are often lured into accepting short-term solutions to their scarcity-related problems. This is because they suffer from immediate forms of scarcity: not enough money to get through the week, not enough food to get through the day. They will often adopt the quickest and most convenient solutions to those problems. One classic example of this is the tendency for the poor to take out short-term high-interest loans: they borrow heavily from their future to pay for their present. This can create a negative feedback loop, making it even more difficult to help them out of their position.
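To see how quickly that feedback loop can bite, consider a back-of-the-envelope calculation. The figures below (a $300 loan, a 15% fee per two-week term, ten rollovers) are purely illustrative and are not drawn from Mullainathan and Shafir’s data, but they show how a ‘quick fix’ loan compounds into a long-term burden:

```python
# Illustrative sketch with hypothetical numbers: how a short-term, high-fee
# loan snowballs when it is repeatedly rolled over without being paid down.

def rollover_balance(principal: float, fee_rate: float, periods: int) -> float:
    """Balance owed if a loan charging `fee_rate` per period is rolled over
    `periods` times with nothing paid down in between."""
    return principal * (1 + fee_rate) ** periods

principal = 300.0   # hypothetical loan of $300
fee_rate = 0.15     # hypothetical 15% fee per two-week term
periods = 10        # rolled over for roughly five months

owed = rollover_balance(principal, fee_rate, periods)
print(f"Owed after {periods} rollovers: ${owed:,.2f}")       # ~ $1,213.67

# The same per-period fee expressed as an annualised rate (26 two-week terms):
annual_rate = (1 + fee_rate) ** 26 - 1
print(f"Effective annual rate: {annual_rate:.0%}")           # ~ 3686%
```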

If this is (at least partially) a function of the tax on cognitive bandwidth, then perhaps the wide distribution of assistive AI could create some cognitive slack, and perhaps this could address some of the problems of inequality. An argument along the following lines suggests itself:


  • (1) Poverty (or scarcity more generally) imposes a tax on cognitive bandwidth which has deleterious consequences: the poor are less able to plan for the future effectively; they are more susceptible to short-term fixes to their problems; and their problem-solving and fluid intelligence are negatively impacted. (Mullainathan & Shafir’s thesis)

  • (2) These consequences exacerbate problems with income inequality (negative feedback problem).

  • (3) Personal AI assistants could address/replace/make up for the tax on cognitive bandwidth.

  • (4) Therefore, personal AI assistants could redress the deleterious consequences of cognitive scarcity.

  • (5) Therefore, personal AI assistants could reduce some of the problems associated with income inequality.


This argument is not intended to be formally valid. It is informal and defeasible in nature. In schematic terms, it says that poverty (or scarcity) results in X (the tax on bandwidth), which in turn causes Y (the exacerbation of income inequality); personal AI assistants could prevent X; therefore this should in turn prevent Y. Any first-year philosophy student could tell you that this is formally invalid: just because you prevent X from happening doesn’t mean that Y won’t happen. We are aware of that. Our argument is a more modest one: if you can block off one of the causal contributors to increased income inequality, perhaps you can help to alleviate the problem. Other things could, of course, further compound the problem. But personal AI assistants could help.

How might they do this? This is the central claim of the argument (premise 3). We use the term ‘personal AI assistants’ to refer to any AI system that provides assistance to an individual in performing routine or non-routine cognitive tasks. The assistance could range from basic information search, to decision-support, to fully automated/outsourced cognitive labour. The tasks on which the AI provides assistance could vary enormously (as they already do). They could include elements of day-to-day personal finance, such as budgeting, planning expenditure, compiling shopping lists, advice on personal finance and so forth. Decision-support AI of this sort could help the poor avoid the financial missteps that reduced cognitive bandwidth makes more likely. The assistive functions need not be limited to personal finance, of course; we simply use this example because it is particularly pertinent to the present discussion. Support from AI could also help in non-finance-related aspects of an individual’s life. If, as Mullainathan and Shafir argue, the scarcity-induced tax on cognitive bandwidth has negative effects on an individual’s problem-solving capabilities, then it presumably also impacts negatively on their work or their ability to find work. Assistive AI could plausibly help to redress these deficits, too.
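As a concrete (and deliberately simplistic) illustration of what decision-support of this kind might involve, consider the following sketch. It is not a description of any existing assistant; the income figures, loan terms, and the rule itself are hypothetical, and a real assistant would need a far richer model of a user’s finances:

```python
# A minimal, hypothetical sketch of rule-based financial decision support.
# All names, figures, and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class LoanOffer:
    amount: float          # cash received now
    repayment: float       # total due by the end of the term
    term_weeks: int

def review_loan(weekly_income: float, weekly_essentials: float,
                offer: LoanOffer) -> str:
    """Flag whether repaying the offer fits within the user's weekly surplus."""
    surplus = weekly_income - weekly_essentials
    weekly_repayment = offer.repayment / offer.term_weeks
    if weekly_repayment > surplus:
        return (f"Warning: repaying ${weekly_repayment:.2f}/week exceeds your "
                f"${surplus:.2f}/week surplus; this loan is likely to roll over.")
    return f"Repayment of ${weekly_repayment:.2f}/week fits within your surplus."

# Hypothetical usage:
print(review_loan(weekly_income=400.0, weekly_essentials=370.0,
                  offer=LoanOffer(amount=300.0, repayment=345.0, term_weeks=2)))
```

Even a crude rule of this kind externalises a calculation that a bandwidth-taxed user might otherwise skip; the point of premise 3 is that more sophisticated versions of this support could be delivered automatically and at scale.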

There is nothing outlandish about the possibility of personal AI assistants of this sort. First generation variants of them are widely available. Apple’s Siri, Google’s Assistant, Microsoft’s Cortana, and Amazon’s Alexa are only the most obvious and ‘personalised’ versions. Most people own devices that are capable of channeling these technologies into their daily lives. Admittedly, this first generation of AI has its weaknesses. But we can expect the technology to improve. And, if we take the argument being made here seriously, it might be appropriate to act now to encourage makers of this technology to invest in forms of AI that will provide assistance with the crucial cognitive functions that are most impacted by poverty.

A few further remarks can be made about the capabilities that would enable personal AI assistants to contribute more meaningfully to cognitive slack than they currently do, and what those capabilities might look like. Understanding these capabilities, and noting research trends in their direction, gives us additional reason to believe that premise 3 may one day be true. One current problem is that AI assistants lack common-sense reasoning abilities. As a result they may misunderstand queries, or fail to flag potential scheduling conflicts (or opportunities) that fall outside their existing core competencies (e.g. noting that a meeting taking place in another location will require travel to that location from wherever the user is now). On top of this, affective computing, which aims to build computational systems that can sensibly respond to and in turn shape human emotions, is developing rapidly, but it is not yet at the stage where AI assistants can reliably identify cues of emotional distress or comfort; it is plausible, however, that such a capacity will eventually be developed. Finally, natural language understanding is not yet capable of accurately summarizing and prioritizing emails, text messages, and so forth, but this too seems plausible in time.

To ground these claims of plausibility, consider a simple argument: humans are, like purported future AIs, computational systems themselves (though perhaps ones that make greater and more efficient use of processing power than computers available today). Brains compute appropriate responses to one’s environment, and have “wetware” analogues of the sensors and effectors that AI and robotic systems currently use. Humans are often capable of being very effective personal assistants, providing cognitive slack for their employers. If scientific progress continues and we eventually have a well-developed account of the physical processes through which such human cognitive assistance occurs, we would expect to be able to replicate that functionality in machines.

This suggests a sort of lower bound on the potential that AI could have to alleviate the tax on cognitive bandwidth: the level of support provided by the best human personal assistants today. Currently, many rich and powerful people have access to one or more assistants (or, more generally, staff) and are able to process more information, perform more tasks, and otherwise be more effective in their lives thanks to offloading much of their sensing, cognition, and action to others. Given the functional equivalence between the human mind and an advanced AI, we could easily imagine personal assistant AIs of the future that are at least as powerful as the single best personal assistant or staff member, or a team thereof.

One point glossed over in this discussion is the matter of physicality. Human assistants often perform tasks in the real world, and a personal assistant AI on a smartphone wouldn’t necessarily be able to do all the same things, even with the same cognitive abilities. This requires revising the lower bound to the level of assistance that a human assistant could provide solely through digital means. But this may not be a radical revision. Indeed, speaking from prior experience working as someone’s assistant, one of the authors (Miles) notes that a great deal can be done to alleviate cognitive scarcity for another person simply through reading, analyzing, and writing emails; receiving and making calls; sending and receiving tasks; updating their calendar; and so forth. Such tasks do not require physical effectors.

With always-on access to its user, a personal assistant AI could periodically chime in with important updates, ask focused questions about the relative priorities of tasks, suggest possible drafts (or revisions) of emails, flag likely-important missed messages, and so on. Today’s assistant AIs have only a small fraction of these capabilities, and often perform them poorly. Note, too, that the lower bound of human-level performance is not perfection: an assistant (be they human or machine) cannot necessarily predict every evaluation their principal would make of events, tasks, people, etc. in advance, and there are inherent trade-offs between reducing the time spent asking for the principal’s feedback (and possibly annoying them), on the one hand, and getting things done effectively behind the scenes, on the other.
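To make the idea of flagging likely-important missed messages slightly more tangible, here is a toy sketch of the kind of triage heuristic an assistant might apply. The keywords, weights, and threshold are invented for illustration; a real assistant would presumably rely on learned models rather than hand-written rules:

```python
# A toy, hypothetical sketch of message triage: score incoming messages and
# only interrupt the user for the most urgent ones. All values are invented.

from dataclasses import dataclass

URGENT_TERMS = {"deadline": 3, "overdue": 3, "eviction": 5, "appointment": 2,
                "payment due": 4, "final notice": 5}

@dataclass
class Message:
    sender: str
    text: str
    sender_is_known: bool

def priority(msg: Message) -> int:
    """Crude urgency score: keyword weights plus a bonus for known senders."""
    text = msg.text.lower()
    score = sum(w for term, w in URGENT_TERMS.items() if term in text)
    return score + (2 if msg.sender_is_known else 0)

def triage(inbox: list[Message], interrupt_threshold: int = 4) -> list[Message]:
    """Return the messages worth interrupting the user about, most urgent first."""
    flagged = [m for m in inbox if priority(m) >= interrupt_threshold]
    return sorted(flagged, key=priority, reverse=True)

# Hypothetical usage:
inbox = [Message("landlord", "Final notice: payment due Friday", True),
         Message("newsletter", "This week's deals", False)]
for m in triage(inbox):
    print(m.sender, "->", priority(m))
```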

Remember, this is just a rough lower bound for the slack that could be created by a personal AI assistant. Personal AI assistants could also exceed the abilities of humans in various ways. Already, Google and other search engines are vastly better and faster than humans at processing billions or trillions of pieces of information and presenting relevant links (and, often, relevant facts or direct answers to a query), and they do not need to sleep. In other areas, too, AI already vastly exceeds human capabilities. So it is easy to imagine that, for any given person, the scarcity-alleviating effect of AI could be far greater than that of a human assistant or team thereof.


3. Conclusion: Weaknesses of the argument?
That’s the bones of the argument we wish to make. Is it any good? There are several issues that would need to be addressed to make it fully persuasive.

For starters, we would need to confirm that AI assistants do actually alleviate the tax on bandwidth. That will require empirical analysis. It is impossible to empirically assess future technologies, but there is probably much to be learned from studying the alleviating effects of existing AI assistants and search engines, and by evaluating the impact of human assistants on cognitive bandwidth. We would also need to compare any positive results that can be found in these studies with the putative negative results identified by other authors. Nicholas Carr, for example, has argued that the use of automating technologies leads to cognitive degeneration (i.e. a disenhancement of cognitive ability). Our argument may provide an important counterpoint to Carr’s thesis, suggesting that alleviating some cognitive burdens can alleviate other negative aspects of inequality, but perhaps there are tradeoffs here in terms of cognitive degeneration that would need to be assessed.

In addition to this, there are a range of concerns people might have about our argument even if AI can be shown to provide cognitive slack. AI assistants at the moment are constructed by large-scale corporate enterprises and are intended, at least in part, to serve their corporate interests. Those interests may not align with the agenda we have outlined in this post. So we need to work hard to ensure that the right kinds of assistive AI are developed and made widely available. There is a danger that if the technology is not widely distributed it will just make the (cognitively) rich even richer. One suggestion is that we should perhaps view AI assistance as a public good or a form of social welfare, and support its responsible development and free diffusion as such. Furthermore, there may be unintended consequences associated with the wide availability of AI assistance that we don’t fully appreciate right now. An obvious one would be a ‘treadmill’ effect: if cognitive slack is created by this technology, people may simply be burdened (taxed) with other cognitive tasks that stretch them to their limits once more.

Despite these concerns, we think the argument we have sketched is provocative and may provide an interesting corrective to the current doomsaying around AI and social inequality. We welcome critical feedback on its key premises and encourage people to articulate further objections in the comments section.


Saturday, May 13, 2017

The Art of Academia (Index)



[Updated August 2019]

Over the years, I have written a number of posts that can be loosely grouped under the label 'how to' or 'self help'. I am not a big fan of this genre of writing. The internet is replete with self-help gurus and lifestyle engineers who will promise to maximise your potential, leverage your capacities, and help you put several dents in the universe.

Nevertheless, I do find it occasionally useful to step back from the substance of what I am doing (the intellectual questions that pique my curiosity and that I try to answer) to reflect on the processes underlying what I do, and to see if they can be improved. On top of that, in the past couple of years, I have been teaching an undergraduate module on how to research, write, and present. This means that I have been forced to articulate the methods underlying what I do for a wider audience.

Anyway, I thought I would group together all of the writing I have done on what I am calling 'The Art of Academia' (i.e. the art of research, writing, teaching and building an academic career). Some people might find these posts useful. If you would like a PDF of the various 'posters' included in these articles, you can download it here.


Friday, May 12, 2017

Forthcoming in September 2017




Robot Sex: Social and Ethical Implications, with MIT Press

Neil McArthur (UManitoba) and I have been working on this book for the past couple of years. I'm pleased to announce that it now has a release date and that you can pre-order it on Amazon (US and UK). Here's the description and table of contents.

Sexbots are coming. Given the pace of technological advances, it is inevitable that realistic robots specifically designed for people's sexual gratification will be developed in the not-too-distant future. Despite popular culture's fascination with the topic, and the emergence of the much-publicized Campaign Against Sex Robots, there has been little academic research on the social, philosophical, moral, and legal implications of robot sex. This book fills the gap, offering perspectives from philosophy, psychology, religious studies, economics, and law on the possible future of robot-human sexual relationships.


 ~ Table of Contents ~ 


I. Introducing Robot Sex 

  • 1. 'Should we be thinking about robot sex?' by John Danaher 
  • 2. 'On the very idea of sex with robots?' by Mark Migotti and Nicole Wyatt 


II. Defending Robot Sex

  • 3. 'The case for sex robots' by Neil McArthur 
  • 5. 'Sexual rights, disability and sex robots' by Ezio di Nucci 


III. Challenging Robot Sex 

  • 6. 'Religious perspectives on sex with robots' by Noreen Herzfeld 
  • 7. 'The Symbolic-Consequences argument in the sex robot debate' by John Danaher
  • 8. 'Legal and moral implications of child sex robots' by Litska Strikwerda 


IV. The Robot's Perspective 

  • 9. 'Is it good for them? Ethical concern for the sexbots' by Steve Petersen
  • 10. 'Was it good for you too? New natural law theory and the paradox of sex robots' by Joshua Goldstein 


V. The Possibility of Robot Love 

  • 11. 'Automatic sweethearts for transhumanists' by Michael Hauskeller
  • 12. 'From sex robots to love robots: Is mutual love with a robot possible?' by Sven Nyholm and Lily Eva Frank 


VI. The Future of Robot Sex 

  • 13. 'Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications' by Matthias Scheutz and Thomas Arnold
  • 14. 'Deus sex machina: Loving robot sex workers and the allure of an insincere kiss' by Julie Carpenter
  • 15. 'Sex robot induced social change: An economic perspective' by Marina Adshade