
Ethics and Technology – An Interview with Professor Tom Sorell


Reported by Kate McNeil, PaCCS Communications Officer

In late September, I sat down with Professor Tom Sorell, Professor of Politics and Philosophy at the University of Warwick, and Head of the Interdisciplinary Ethics Research Group in PAIS. Professor Sorell was a Global Uncertainties Leadership Fellow from 2013-2016.

This interview has been edited and condensed for clarity and conciseness.

Kate McNeil: Thank you for taking the time to speak with me about your work. Would you mind getting started by telling our readers a little bit about your research interests, and your work with the Global Uncertainties project?

Professor Tom Sorell: I work across the whole field of ethics and technology, and the leadership aspect of my Global Uncertainties Fellowship involved looking at technology-development projects that had previously been funded by PaCCS. I spoke to people involved in those projects about ethical issues they could have considered in that work.

On the research side, at the time of my fellowship I was working on ethical issues arising from counter-terrorism and the fight against organized crime. I was primarily looking at a body of literature on preventive justice, and I was especially interested in legal writers who worry about preparatory offences – offences to do with steps on the way to, say, a terrorist act. For example, visiting certain extremist websites, or buying certain potentially dangerous things for making bombs.

What is the crux of the debate on that issue?

A range of preparatory offences are recognised, and a lot of legal theorists think there are too many. They are concerned that criminalising preparatory acts is a step on the way to punishing people for things they haven’t yet done. Preparatory offences are sometimes viewed as steps on the way to penalising the thoughts people have, rather than the actions they are committing. Criminal justice is typically backward-looking, penalising acts that have already been committed. Preventive justice changes the perspective in time and, according to some, erodes the norms of criminal justice. My own view is that preventive justice is defensible, because preventing harm is a goal of justice, and criminalising preparatory acts can sometimes prevent serious harm.

You have written about online grooming as well – was there overlap between the philosophical problems you encountered in these lines of inquiry?

Online grooming is not usually organized crime, though it can be. But yes, grooming is a preparatory offence. Some academics argue that grooming a child, as opposed to having sex with a child, isn’t necessarily harmful, so having a preparatory offence might be unjust. Meanwhile, in the UK, there has been a proliferation of preparatory offences. For example, since 2017 it has been an offence under the Sexual Offences Act to have a sexualized conversation with a child. This sets the threshold for prosecution much lower than the threshold for grooming.

I think that preventive justice can be justified for a large range of offences, including grooming, and that’s what I’ve argued in a number of papers. I’m not so sure about criminalising sexualized conversations.

More broadly, how varied are the types of ethical challenges that your research deals with? Are many of the ethical issues underpinning the challenges you research similar across subject areas?

Ethical issues can vary a lot. To give you an example, a completely different type of project I’m working on is the Pericles Project, a European Horizon 2020 project. Most of that work is on research ethics, which is a completely different kettle of fish. That project involves interviews with ex-extremists and some current extremists, and many issues can arise there. Research ethics tells you to promise them confidentiality, but the law says that you must report things with security implications. So that’s one issue. Moreover, to those in an extremist community, cooperation with researchers could look like betrayal or cooperation with the authorities, putting the lives of research subjects at risk. The whole area is fraught with moral problems, because it can put people in danger.

This piece will be featured as part of our October cyber series. You have done a lot of work that relates to cyberspace, and you are a featured researcher in the cybersecurity collaboration space on the RISC Academic Marketplace. Is there anything about that aspect of your work that you would like to speak to?

Cyber ethics is a semi-independent line of research that I’ve been following. I’ve done a lot of cyber ethics work at different times, and there is a whole range of things I have been involved in.

This line of inquiry has gone beyond examining the actions of trolls and of groups like Anonymous and WikiLeaks, to include things like cyberstalking.

I’ve also done work on cyber fraud. Recently, I was involved in an EPSRC-funded project focused on training machines to recognize whether posts on romance websites are fraudulent or not. The idea was to identify features of the language of those posts, and features of the images, that would be characteristic of fraudulent romance scam posts. The project, which finished last year, was very successful at producing algorithms that could recognize some of these things much better than human beings could.

What does an ethicist bring to a project like that?

When people go on romance sites, they can be made to fall in love with fraudsters, often West African fraudsters. These fraudsters then try to get them to send money on various pretexts. Refusing can look like betraying a loved one. So, people pay and pay again, sometimes losing hundreds of thousands of pounds.

It’s one thing for somebody to be a victim of fraud once, but some people go into these scams again and again and ignore many warnings from police and others that they are probably being defrauded. In conjunction with Professor Monica Whitty, a cyberpsychologist, I have looked at the ethical issue of whether victims share responsibility with fraudsters when they are defrauded many times. What do we say about these people? Are they co-responsible for the problems they are in, or not? It’s a tricky area to write about, because it risks victim blaming.

What were some of the findings of that inquiry?

There are different cases. We learned that there are cases where people have good excuses for being defrauded more than once. There are psychological factors involved in romance scamming, and some personalities are more vulnerable than others to hyper-personal computer contact, which is one of the contexts in which defrauding takes place.

The norms of being in love and the norms of being a rational evaluator of evidence conflict at times. If someone you are in love with is asking you for money, but the police come along and suggest that you are being defrauded, you may say ‘no, no, this isn’t fraud… I know this person, and I am in love with them.’ Here, you have a clash of two different norms. One is the norm of proportioning your belief to the evidence; the other is the norm of being loyal to someone you love and not lightly thinking the worst of them.

It’s this clash of norms that makes repeat victimhood in these scams look less bad than you might think. People are doing the right thing morally by their loved ones, even though they are violating what we call in philosophy an epistemic norm about belief formation. So that’s one thing we wrote about that I thought was interesting. It’s quite a new idea in this field.

What are some of the big challenges you’re beginning to work on in the intersections between ethics and technology? Are there challenges you want others to be thinking about?

One theme that comes up when you deal with this sort of stuff is: should there be a human in the loop? When machines are producing judgements or warnings – or whatever else they might need to produce – should this be submitted to a human being? If so, when, and what is the human supposed to do?

There is a lot of technology being developed, for example, that distinguishes between normal and abnormal movements in protected areas in a city or infrastructure site. How do you train a machine to recognize something that is normal or abnormal? What should happen as a result of the judgement of the machine or algorithm when something is abnormal? Sometimes, the answer to that question is “refer it to a human being.” But how much discretion do they have?

These issues are becoming more pressing as we carry on with AI. It used to be that we had machines identifying something as abnormal, and a human being would then decide what action, if any, to take. Now, you can have much more sophisticated machine responses. A verdict on a situation could lead to something being prioritized in a police control room, for example. These human-in-the-loop issues become more pressing the quicker machines get, and the more urgent the responses to what machines are saying need to be.

As those types of technologies are used in a more diverse array of fields, have the ethical issues encountered in the course of their use become more complex or varied?

Sometimes more varied, but the technology used is often a refinement of things that already exist in some form. For example, controversial technology for tracking people through CCTV camera output has been improving. That has obvious value for criminal investigations. It speeds up a task that previously would have involved lots of police officers looking through lots and lots of camera output. Now a machine can do it without wasting a human’s time. So that’s an example of something that was done in the past, but which can now be done much more quickly by machines, with real benefit to investigations.

So, it’s not necessarily about new ethical challenges, but rather the consideration of new technology being introduced?

It’s possible that the new capabilities of machines will reduce the ethical challenges. It depends on how accurate the technology is. There are some things machines can spot more reliably than humans can. This means their use can sometimes deal with problems of human bias or limited human attention, which sometimes create injustices.

•••

This article was written as part of the PaCCS in focus cyber series, which is running throughout the month of October in conjunction with European Cyber Security Month. The next post in this series, an interview with Dr Damien Van Puyvelde about his new book ‘Cybersecurity: Politics, Governance and Conflict in Cyberspace’, will be released on Tuesday, October 22nd. Other posts in this series can be found here.