Data is increasingly fuelling the economy, politics, and everyday life. Our financial transactions, movements, communications, relationships, and interactions with governments and businesses all generate data that is collected, stored, sold, bought, and otherwise acquired by data brokers, governments, and corporations interested in profiling individuals. As data sets grow into Big Data, and as artificial intelligence becomes more sophisticated in collecting and analysing data, the opportunities ahead seem infinite. From climate science to healthcare and policing, AI could significantly enhance our problem-solving capacities. The risks, however, are also great, as the information being handled about individuals is sometimes extremely sensitive. Governments and companies need to address a number of ethical questions and find ways to capitalise on data while designing best practices that respect people's privacy and maintain their trust.
OUC researchers are currently working on ethical issues surrounding the development of AI and the collection and management of data.
Maslen, H. & Savulescu, J. (forthcoming), 'The Ethics of Virtual Reality and Telepresence', in Tony Prescott, Nathan Lepora and Paul Verschure (eds.), Living Machines: A Handbook of Research in Biomimetic and Biohybrid Systems, Oxford University Press.
Walsh, T., Levy, N., Bell, G., Elliot, A., Wood, F., Maclaurin, J. and Mareels, I., (2019), Report: The Effective and Ethical Development of Artificial Intelligence. This project examined the potential that artificial intelligence (AI) technologies have in enhancing Australia’s wellbeing, lifting the economy, improving environmental sustainability and creating a more equitable, inclusive and fair society. Placing society at the core of AI development, the report analyses the opportunities, challenges and prospects that AI technologies present, and explores considerations such as workforce, education, human rights and our regulatory environment.
Véliz, C., (2018), 'Tus datos son tóxicos' [Your data is toxic]. El Pais. The trail of information that users leave on the Internet can be used against them. In the digital age, protecting privacy is the only way to achieve a free society (6 April).
Véliz, C., (2018), Al Jazeera Media View. Interviewed about privacy issues relating to Strava (a fitness-tracking app) after the discovery of a major flaw in its global heatmap: highly sensitive data, including location data collected on military personnel, was found to be very easy to de-anonymise (29 January).
The Practical Ethics Video Series makes the most important and complex debates in practical ethics accessible to a wide audience through brief interviews with high profile philosophers in Oxford. Video interviews on this and other topics can be found on our YouTube channel.
Cases such as Cambridge Analytica and the Chinese government's deployment of AI suggest that the use of artificial intelligence (AI) creates risks for democracy. This paper analyses these risks using the concept of epistemic agency and argues that the use of AI risks influencing the formation and revision of beliefs in at least three ways: the direct, intended manipulation of beliefs; the type of knowledge offered; and the creation and maintenance of epistemic bubbles. It then suggests some implications for research and policy.
Practical Ethics Video Series: Is AI dangerous? Interview with Professor Colin Gavaghan (6 December 2017)
AI will soon be omnipresent in our everyday lives, but it raises all sorts of legal and ethical questions: Who should be responsible when it causes harm? Should a human being always take the final decision? Should it have to be more reliable than humans before we use it? Professor Colin Gavaghan (Otago) highlights the most important challenges raised by AI, especially in the context of criminal justice, policing, and self-driving cars.
Practical Ethics Video Series: Could we use an app to act morally? Interview with Walter Sinnott-Armstrong (23 February 2016)
Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…