Practically Prompted: An Experiment in LLM‑Generated Practical Ethics Blogs
Description
Rationale
We launched Practically Prompted to test, in public, what large language models (LLMs) can contribute to timely, thoughtful ethical analysis:
- Test a new tool. Can an LLM do more than pastiche? Can it surface real moral tensions and illuminate trade‑offs in fast‑moving stories?
- Keep pace with the news. Ethics benefits from reflection, but the public also needs clear analysis while events unfold. LLMs might help academics respond in near real‑time.
- Invite reflection on method. Publishing AI‑written text alongside brief human commentary lets readers see both promise and pitfalls – what the model notices, misses, or mishandles.
- Stimulate debate about authorship and expertise. The project asks where collaboration between human philosophers and AI is fruitful – and where it risks confusion or complacency.
How the experiment worked
- Each post began with a single, stable prompt instructing the model to choose a recent, ethically rich news item and write a Practical‑Ethics‑style analysis.
- We used OpenAI’s research‑oriented “o3” model. The LLM’s output was published as‑is; human input was limited to a short, clearly separated commentary.
- Where helpful, the model linked to news and official sources for key factual claims.
- We released four posts across six weeks. The raw prompt and model outputs are linked in each entry below.
Ethical Questions Raised by the New Miscarriage‑Risk Test
Published
30 June 2025
Read the post
What it covers
A new laboratory test aims to predict miscarriage risk by analysing the endometrium before conception. The post distinguishes diagnostic use (after losses) from population screening (first‑time pregnancies), arguing that the latter requires higher evidential thresholds and a different consent standard.
Why it’s interesting
The analysis foregrounds how shifting attention from embryo genetics to maternal tissue can re‑pathologise pregnancy, potentially amplifying blame and anxiety. It also raises questions about false positives, commercialisation in fertility markets, and the ethics of “peace‑of‑mind” screening.
Key questions
- When does a personalised diagnostic quietly slide into screening, and who bears the harms of uncertainty?
- Could risk‑labelling the womb reinforce gendered blame for miscarriage?
- What conditions would make such screening ethically proportionate?
Europe’s New AI “Code of Practice” and the Ethics of Voluntary Compliance
Published
14 July 2025
Read the post
What it covers
As the EU’s AI Act phases in, the Commission floated a voluntary Code of Practice for general‑purpose AI. The post probes the legitimacy of inviting industry to help author rules that will soon constrain it.
Why it’s interesting
It articulates three defensibility conditions for voluntarism: (1) a strong statutory baseline already protects rights; (2) meaningful accountability for signatories; (3) no displacement of future binding duties. It warns against the Code drifting into a de facto licence for business‑as‑usual or early regulatory capture.
Key questions
- When (if ever) is co‑regulation ethically superior to waiting for full enforcement?
- How can transparency incentives avoid becoming opt‑in PR rather than public protection?
After UK Age‑Checks Kick In—What Does “Protecting Children” Justify?
Published
28 July 2025
Read the post
What it covers
With UK age‑verification rules enforced for pornography and other “harmful” content, VPN downloads surged. The post asks whether preventing youth exposure warrants normalising intrusive ID checks (facial scans, credit cards, government IDs) for everyone.
Why it’s interesting
It frames a clear proportionality and slippery‑slope dilemma: if workarounds proliferate, policymakers either tolerate a law that mostly inconveniences the rule‑abiding or escalate surveillance (e.g., VPN blocking, deep packet inspection) with knock‑on risks for civil liberties.
Key questions
- What counts as “harm,” and are we over‑generalising one governance tool to very different risk profiles?
- Do broad age‑gates create privacy harms disproportionate to their benefits?
The Ethics of Live Facial Recognition at Notting Hill
Published
12 August 2025
Read the post
What it covers
Amid renewed plans to deploy live facial recognition (LFR) around Notting Hill Carnival, the post tests three lenses—necessity, proportionality, equality—and interrogates what arrest/charge counts actually show (and don’t) about LFR’s added value.
Why it’s interesting
It highlights Carnival as a stress‑test for public‑space surveillance: scanning many to find a few, with collateral impacts on privacy and discrimination. It foregrounds counterfactual thinking (could less intrusive tools achieve the same safety aims?).
Key questions
- When is LFR necessary, rather than merely helpful?
- Does scanning thousands of bystanders meet proportionality standards for biometric data?
- How should equality concerns (e.g., error disparities) weigh against public‑safety claims?
What we learned
Strengths observed
- News sense. Week after week, the model surfaced genuinely morally salient stories from the prior week’s news cycle rather than defaulting to click‑bait.
- Synthesis at speed. In a single pass it pulled multiple sources together, kept the facts straight, and framed the debate with serviceable ethical scaffolds (e.g., proportionality, screening vs. diagnosis, co‑regulation conditions) — all in ~650 words.
- Form that fits the public brief. It consistently produced short pieces that both report the news and advance an argument – a combination that is rare at this length.
Limitations
- Referencing discipline. Ethical arguments went unreferenced, and paragraphs often recycled the same sources without clearly drawing on them.
- Over‑tidy reasoning. The model favoured neat three‑part frameworks without justifying why those lenses were the right ones for the case at hand.
- Engagement without a voice. LLMs lack the personal biases and experiences that often draw readers to commentary in the first place. Many come to such pieces for a person’s view as much as for the analysis. This leaves an open question for public ethics: what should we want from pieces like these – the argument itself, or the author’s take? We treat this as a live, empirical question.
Project lead
Hazem Zohny
Series home
Introducing the experiment → https://blog.practicalethics.ox.ac.uk/2025/06/practically-prompted-introducing-an-experiment-in-llm-generated-blog-posts/