
Artificial Intelligence and Data Harvesting: An Interview with Carissa Véliz

An exploration of the risks and benefits of AI, particularly regarding privacy.

Carissa Véliz is interviewed here by THINK editor Stephen Law.

STEPHEN LAW: Your recent book, Privacy is Power: How and Why You Should Take Back Control of Your Data, addresses issues in digital privacy and surveillance, and how internet companies are harvesting more and more of our personal data. What data are these companies harvesting, and for what purpose? What ethical issues does their activity raise?

CARISSA VÉLIZ: All the data you can possibly imagine: what you search for, what you eat, how fast you drive, who you sleep with, your weight, your car and other possessions, how much you earn, how much you spend, your health record, your location data, and much, much more. They collect so much data to earn money. Sometimes they sell that data to insurance companies, banks, prospective employers, governments, or marketing companies. Sometimes they use that data to sell access to you through personalized ads.

The data economy raises all kinds of ethical issues. Arguably, you are not consenting to that data collection, because much of it happens without you knowing about it, and even when you formally ‘consent’, it is not really informed consent, because you can’t possibly know what kinds of inferences will be made from that data or where it might end up. And data collection is not harmless. It can have grave consequences, from your being denied a loan, a job, or housing, to social consequences such as damage to our democracies, as when data firms like Cambridge Analytica try to sway elections using personalized propaganda. Having so much personal data stored is also a national security risk, as it can be used for intelligence purposes.

SL: What is Artificial Intelligence? Should we be particularly concerned about the application of Artificial Intelligence to the harvesting of personal data? Could you give a concrete example of how AI is being used?

CV: Artificial intelligence (AI), roughly, is when algorithms display behaviour that either is intelligent or mimics intelligence.

One of the reasons to be concerned about AI is how it’s being used to make inferences about people. For instance, AI can be used to infer sexual orientation or other sensitive information about people from data that doesn’t seem all that sensitive, like music taste (a toy illustration follows below). Other concerns about AI using personal data to make decisions have less to do with privacy and more to do with bias, discrimination, and unfairness.
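
To make that inference mechanism concrete, here is a minimal Python sketch (using numpy and scikit-learn) under entirely invented assumptions: a synthetic dataset in which a hidden binary attribute merely correlates with five "music genre" preference flags, and an off-the-shelf classifier recovers the attribute anyway. The data, probabilities, and feature names are all hypothetical; nothing here reflects any real dataset or deployed system.

```python
# Toy illustration (synthetic data, not any real system): sensitive
# traits can be statistically inferred from innocuous-looking data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
sensitive = rng.integers(0, 2, size=n)        # hidden attribute (0/1)

# Five "likes genre X" flags: each is only *correlated* with the hidden
# attribute (70% vs 30% likelihood), never a direct disclosure of it.
p_like = np.where(sensitive[:, None] == 1, 0.7, 0.3)
genres = (rng.random((n, 5)) < p_like).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    genres, sensitive, test_size=0.2, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"inferred hidden attribute with accuracy {clf.score(X_test, y_test):.2f}")
# Well above the 0.5 chance level (roughly 0.84 with these settings):
# no one "shared" the attribute, yet it leaks from ordinary preferences.
```

The point generalizes: the more correlated signals a data holder accumulates, the more accurate such inferences become, which is why seemingly trivial data can end up revealing sensitive facts.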

SL: What should we, as individuals, do to protect ourselves against invasion of our privacy? And what should governments do?

CV: We can use privacy-friendly devices and apps. Instead of Google Search, use DuckDuckGo; instead of WhatsApp, use Signal; instead of Gmail, use ProtonMail. We can ask companies to delete our data. We can respect other people’s privacy to create a respectful culture. Governments should ban the trade in personal data. We don’t buy or sell votes, and for many of the same reasons, we shouldn’t buy or sell personal data.

SL: Can you illustrate how bias, discrimination and unfairness might result from applying AI to our personal data?

CV: There are many examples. A few years back, Amazon designed an algorithm to screen job applicants, and the algorithm turned out to be sexist; it was biased against women. What happened was that the algorithm was trained on historical data, and over the previous ten years Amazon had mostly hired men, so anything on a CV that marked it as a woman’s (e.g. having been part of a women’s soccer team) signalled to the algorithm that this was not the kind of person who had been a successful Amazon employee, as sketched below.
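
A minimal sketch of that mechanism, assuming entirely synthetic data (purely illustrative, not Amazon’s actual system or data): a model trained on historical decisions that favoured men ends up penalising a proxy feature that marks a CV as a woman’s, even though gender itself is never given as an input.

```python
# Toy sketch of learned hiring bias (synthetic data; purely illustrative,
# not Amazon's actual system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
is_woman = rng.random(n) < 0.3                  # hidden; NOT a model input
womens_team = is_woman & (rng.random(n) < 0.6)  # proxy feature on the CV
skill = rng.random(n)                           # genuinely job-relevant

# Historical labels: skill mattered, but past recruiters rarely hired women.
hired = (skill > 0.5) & (~is_woman | (rng.random(n) < 0.2))

X = np.column_stack([skill, womens_team.astype(float)])
clf = LogisticRegression().fit(X, hired)

print("weight on skill:        %+.2f" % clf.coef_[0][0])
print("weight on women's-team: %+.2f" % clf.coef_[0][1])
# The proxy feature gets a large negative weight: the model has faithfully
# reproduced the historical discrimination rather than assessed job fitness.
```

Note that removing an explicit gender column does not help: the bias re-enters through correlated proxies, which is why ‘we never feed it gender’ is not a defence.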

SL: Taking a large step back, what’s distinctive about the contribution that you, as a philosopher, bring to the discussion of these issues?

CV: A few things. Philosophers can offer conceptual analyses that can be useful in making ethical decisions about public matters. Philosophical analysis can lead to better decisions, and to better explaining (and justifying) a decision once it’s made. Conceptual analysis can sharpen debates, shorten them, and sometimes make them less repetitive and inconclusive.

Conceptual analyses include:

  • Clarifying concepts: making sure people are talking about the same thing. On occasion, such clarification may dissolve a problem altogether (as Wittgenstein suggested), since some disagreements amount to mere misunderstandings.
  • Providing nuance: like other disciplines, academic ethics has developed a precise technical language that can provide more nuance than ordinary language about morality (e.g. permissible, impermissible, required, supererogatory).
  • Tracing implications: some proposals seem like a good idea until we cash out their undesirable theoretical or practical implications.
  • Identifying contradictions: public discourse, from the media to Parliament, is filled with fallacies. Philosophers can identify faulty arguments.
  • Distinguishing questions of fact from questions of value: a continuing source of confusion in public debates is whether something is a question of fact. Consider the example of death. We used to think that whether someone is dead was a medical or biological question. Then bioethics came along and successfully argued that it is partly a question of value (What do we mean by death? The death of the body? Of the person? Of consciousness?). From the point of view of ethics, the most important question has become: when does someone lose the rights and interests typical of a living person?

On the theoretical side:

  • Ethical theories can be helpful guidelines when thinking about new practical cases. In turn, practical cases sometimes expose the limits or mistakes of our theories and help us improve them, and those improved theories can be useful for future cases. One visible result is progress throughout the history of philosophy: consensus is reached on some issues, and even where it is not, the theories that emerge from decades of debate are much more polished than their original versions. Today’s consequentialism is much more nuanced than, say, Bentham’s.

Philosophers can also be good at identifying moral problems. Before the development of bioethics, many medical practices that are today analysed under the lens of ethics were not thought to be ethically problematic: not informing patients of their diagnosis, randomizing patients to treatment or placebo without telling them they were involved in research, and allowing students to practise invasive examinations on anaesthetized patients without their consent. All these things used to be done by the medical profession without a second thought. The first step in improving ethical practices is identifying moral problems in the first place.

Philosophers can also inspire moral thought by encouraging public debates on important questions. And philosophy can also offer its experience in matters of ethics, from normative ethics to medical ethics, business ethics, and beyond.

SL: As AI develops further, what would you be most concerned about? What are the most significant moral issues AI raises, beyond digital privacy?

CV: In a nutshell, we have to think about how to design AI in a way that, both in the short and the long run, we can look back and be happy that we developed it in the first place. And by ‘we’ I mean society. It’s not enough for AI to be profitable for a few people. AI has to benefit humankind. Without good governance, we could be worse off with AI than if we had never invented it. It could lead to growing inequality, unfairness (including racism and sexism), and the destruction of our natural resources, among other problems. It could even bring down democracy.

  • Authors

    Carissa Véliz is an award-winning author, an Associate Professor at the Institute for Ethics in AI, and a Fellow at Hertford College, University of Oxford.

    Her work focuses on digital ethics, with an emphasis on privacy and AI ethics, as well as practical ethics, political philosophy, and public policy.

    Dr Stephen Law is the editor of Think. Previously Reader in Philosophy at Heythrop College, University of London, he is now based at Oxford University’s Department of Continuing Education and researches in philosophy of mind, philosophy of language, metaphysics, and philosophy of religion.

  • First Published

    Think, Volume 22, Issue 63, Spring 2023, pp. 59-62
    DOI: https://doi.org/10.1017/S1477175622000215