AI and Digital Ethics

Data increasingly fuels the economy, politics, and everyday life. Our financial transactions, movements, communications, relationships, and interactions with governments and businesses all generate data that is collected, stored, bought, sold, and otherwise acquired by data brokers, governments, and corporations interested in profiling individuals. As data sets grow into Big Data, and as artificial intelligence becomes more sophisticated at collecting and analysing data, the opportunities ahead seem infinite. From climate science to healthcare and policing, AI could significantly enhance our problem-solving capacities. The risks, however, are also great, as the information handled about individuals is sometimes extremely sensitive. Governments and companies need to address a number of ethical questions and find ways to capitalise on information while designing best practices that respect people's privacy and maintain their trust.

OUC researchers are currently working on ethical issues surrounding the development of AI and the collection and management of data.

Resources

Constantinescu, M. and Crisp, R., (2022), 'Can Robotic AI Systems Be Virtuous and Why Does This Matter?', International Journal of Social Robotics, Vol: 14, pp. 1547–1557

Douglas, T., (2022), '(When) Is Adblocking Wrongful?', in Véliz, C. (Ed.), Oxford Handbook of Digital Ethics (Oxford University Press)

Giubilini, A. and Savulescu, J., (2018), 'The artificial moral advisor: The 'ideal observer' meets artificial intelligence', Philosophy and Technology, Vol: 31(2), pp. 169–188 [Open Access PMC6004274]

Maslen, H. and Savulescu, J., (forthcoming), 'The Ethics of Virtual Reality and Telepresence', in Prescott, T., Lepora, N. and Verschure, P. (Eds.), Living Machines: A Handbook of Research in Biomimetic and Biohybrid Systems (Oxford University Press)

Minerva, F. and Giubilini, A., (2023), 'Is AI the future of mental healthcare?', Topoi: An International Review of Philosophy, Vol: 42, pp. 809–817 [PMC10230127]

Véliz, C., (2021), 'Moral Zombies: Why Algorithms Are Not Moral Agents', AI and Society, Vol: 36(2), pp. 487–497 [PMC7613994]

Véliz, C., (2020), 'Not the doctor’s business: Privacy, personal responsibility, and data rights in medical settings', Bioethics, Vol: 34(7), pp. 712–718 [PMC7587002]

Véliz, C. and Grunewald, P., (2018), 'Protecting data privacy is key to a smart energy future', Nature Energy, Vol: 3, pp. 702–704 (freely available)

Savulescu, J. and Maslen, H., (2014), 'Moral Artificial Intelligence: Moral AI?', in Romportl, J., Zackova, E. and Kelemen, J. (Eds.), Beyond Artificial Intelligence: The Disappearing Human-Machine Divide (Springer)

Walsh, T., Levy, N., Bell, G., Elliot, A., Wood, F., Maclaurin, J. and Mareels, I., (2019), Report: The Effective and Ethical Development of Artificial Intelligence. This project examined the potential that artificial intelligence (AI) technologies have in enhancing Australia’s wellbeing, lifting the economy, improving environmental sustainability and creating a more equitable, inclusive and fair society. Placing society at the core of AI development, the report analyses the opportunities, challenges and prospects that AI technologies present, and explores considerations such as workforce, education, human rights and the regulatory environment.

Rainey, S., (2021), 'Ambient Intelligence', Practical Ethics in the News (18 May)

Véliz, C., (2019), 'Privacy is a collective concern', New Statesman (22 October).

Véliz, C., (2019), 'Privacy is power', Aeon (2 September). [reprinted in El País]

Levy, N., (2019), 'Will Australia miss the opportunity to cash in on the AI revolution?', ABC Radio (30 July).

Levy, N., (2019), 'AI is coming, whether Australia has the policies to deal with it or not, report warns', ABC.net (29 July).

Edmonds, D., (2019), 'Can computer profiles cut crime?', BBC Radio 4 Analysis. (30 June).

Véliz, C., (2019), 'Inteligencia artificial: ¿progreso o retroceso? (Artificial intelligence: progress or setback?)', El País (14 June).

Edmonds, D., (2018), 'Cars without drivers still need a moral compass. But what kind?', The Guardian (14 November).

Maslen, H., (2018), 'Ethics and the brave new brain', All In The Mind - ABC Radio National (23 September). Transcript and audio available.

Zohny, H. and Savulescu, J., (2018), 'Ethical AI Kills Too: An Assessment of the Lords report on AI in the UK', Oxford Martin School Online News (19 April)

Véliz, C., (2018), 'Tus datos son tóxicos' [Your data is toxic], El País (6 April). The trail of information that users leave on the Internet can be used against them. In the digital age, protecting privacy is the only way to achieve a free society.

Véliz, C., (2018), 'Common Sense for A.I. Is a Great Idea - but it’s harder than it sounds', slate.com (19 March).

Véliz, C., (2018), Al Jazeera Media View (29 January). Interviewed in connection with privacy issues relating to Strava (a fitness-tracking app) after the discovery of a major flaw in its global heatmap: highly sensitive data, including location, collected on military personnel was found to be very easy to de-anonymize.

Rainey, S., (2018), 'Artificial Intelligence, the Singularity, and the Future', Panel Discussion, Philosophy Now Festival 2018 (20 January).

Véliz, C., (2018), '¿Confiar tus desnudos a Facebook?' [Would you trust Facebook with nude photos?] (7 January)

Pugh, J., (2017), 'Using AI to Predict Criminal Offending: What Makes it ‘Accurate’, and What Makes it ‘Ethical’' (16 June)

Véliz, C., (2017), 'Why you might want to think twice about surrendering online privacy for the sake of convenience' (11 January)

Véliz, C., (2016), 'The Challenge of Determining Whether an A.I. Is Sentient' (14 April)

Véliz, C. and Powles, J., (2016), 'How Europe is fighting to change tech companies' 'wrecking ball' ethics' (30 January)

Maslen, H., (2015), 'Virtually reality? The value of virtual activities and remote interaction' (28 October)

Véliz, C., (2015), 'What to do with Google—nothing, break it up, nationalise it, turn it into a public utility, treat it as a public space, or something else?' (2 June)

Véliz, C., (2015), 'Should airline pilots have less medical privacy?' (15 April)

Véliz, C., (2015), 'Facebook’s new Terms of Service: Choosing between your privacy and your relationships' (2 February)

Maslen, H., (2014), 'On the ‘right to be forgotten’' (16 May)

Maslen, H., (2014), 'Computer vision and emotional privacy' (26 March)

Maslen, H., (2013), 'Strict-ish liability? An experiment in the law as algorithm' (12 April)

Maslen, H., (2013), 'A reply to ‘Facebook: You are your ‘Likes’’' (15 March)

The Practical Ethics Video Series makes the most important and complex debates in practical ethics accessible to a wide audience through brief interviews with high-profile philosophers in Oxford. Video interviews on this and other topics can be found on our YouTube channel.


Professor Mark Coeckelbergh (2023), 'Is AI bad for democracy? Analyzing AI’s impact on epistemic agency', St Cross Special Ethics Seminar, St Cross College, Oxford (9 March 2023)

Cases such as Cambridge Analytica or the use of AI by the Chinese government suggest that the use of artificial intelligence (AI) creates risks for democracy. This paper analyzes these risks using the concept of epistemic agency and argues that the use of AI risks influencing the formation and revision of beliefs in at least three ways: through the direct, intended manipulation of beliefs; through the type of knowledge offered; and through the creation and maintenance of epistemic bubbles. It then suggests some implications for research and policy.


Marcello Ienca (2021), 'Do We Need Mental Privacy? The Ethics of Mind Reading Reloaded', Work-in-Progress talk to members of the Oxford Uehiro Centre (8 November)


Pugh, J., (2020), Ethics in AI Seminar: 'Does AI threaten Human Autonomy?' (with Dr Carina Prunkl and Jessica Morley, chaired by Professor Peter Millican), Humanities Cultural Programme (26 November 2020)


Practical Ethics Video Series: Is AI dangerous? Interview with Professor Colin Gavaghan (6 December 2017)

AI will soon be omnipresent in our everyday lives, but it raises all sorts of legal and ethical questions: Who should be responsible when it causes harm? Should a human being always take the final decision? Should it be more reliable than humans before we use it? Professor Colin Gavaghan (Otago) highlights the most important challenges raised by AI, especially in the context of criminal justice and policing, and self-driving cars.


Practical Ethics Video Series: Could we use an app to act morally? Interview with Walter Sinnott-Armstrong (23 February 2016)

Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…