Why this matters¶
It would be easy to spend an entire course on the wonders of Creative AI without ever asking the harder question: at whose cost? This chapter is the place where we ask it directly, with examples grounded in the tools you have been using all semester.
The aim is not to make you cynical or paralysed. It is to give you the vocabulary, the cases, and a small toolkit of personal decisions, so that when an AI tool lands on your work or in your inbox, you can think clearly about it instead of in slogans.
A simple frame: harm, benefit, and to whom¶
For every Creative AI system, four questions get you most of the way to a useful ethical position:
- Who benefits? (Users? The company? Specific groups? Society at large?)
- Who is harmed, or could be? (Workers in the training data? Users? Bystanders? The planet?)
- Is the benefit proportional to the harm?
- Are the harms consented to by those bearing them?
You will see versions of these four questions in every serious AI ethics framework, from the Belmont Report to the EU AI Act (European Parliament & Council of the European Union, 2024) to UNESCO’s Recommendation on the Ethics of AI.
Now we apply them across five concrete topics.
Topic 1 — Copyright, consent, and training data¶
The biggest open legal question in 2026 is whether training a model on copyrighted material is fair use. Different jurisdictions have given different answers, and many cases are still being litigated.
The factual situation:
- Most foundation models were trained on massive web-scraped datasets that include copyrighted text, images, music, and code, without explicit permission from the authors.
- Companies argue this is fair use / fair dealing / “text and data mining” exception.
- Authors, artists, photographers, musicians, and game studios argue it is mass infringement.
- The legal landscape: in the EU, there is a text-and-data-mining exception with an opt-out (the DSM Directive). In the US, ongoing cases (e.g., Andersen v. Stability AI, NYT v. OpenAI) will set the doctrine for the next decade.
What you can do:
- Honour opt-outs. If you train or fine-tune, respect robots.txt, ai.txt, and platform-level opt-outs.
- Use licensed data for commercial or sensitive work where possible. Adobe Firefly, Getty AI, and a growing number of music-AI startups train on licensed corpora.
- Be honest about provenance in your own outputs.
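Opt-out checks can be partly automated. A minimal sketch using Python's standard-library robots.txt parser, with a made-up crawler name and site policy (ai.txt and platform-level opt-outs are separate conventions and need their own checks):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that disallows one named AI crawler
# from the whole site while allowing everyone else.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

def may_scrape(agent: str, url: str) -> bool:
    """Return True if the robots.txt policy permits `agent` to fetch `url`."""
    rp = RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())
    return rp.can_fetch(agent, url)

print(may_scrape("ExampleAIBot", "https://example.com/gallery"))  # False
print(may_scrape("SomeOtherBot", "https://example.com/gallery"))  # True
```

In practice you would fetch each site's live robots.txt (RobotFileParser also supports set_url() and read()) rather than a hard-coded string; the point is that honouring opt-outs is cheap to build into a pipeline.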
Topic 2 — Bias and representation¶
Models inherit the distribution of their training data. If the data over-represents English, Western, urban, web-published, well-photographed, predominantly male material, the model will reflect that — sometimes obviously, sometimes invisibly.
Concrete examples:
- Ask an image model for “a CEO” without further qualifiers. Count the proportion of men.
- Ask a chat model for “a typical Norwegian breakfast”. How often does it list things actually eaten in Norway?
- Ask a code model to write a sorting function. Which programming language does it default to?
- Ask a music model for “a wedding song”. Which culture does it draw on?
These are legible biases. There are also subtler ones: stereotyped associations, missing dialects, accents that get mis-transcribed, faces that are not detected, languages that produce strictly worse output.
Bender et al.'s “Stochastic Parrots” paper (Bender et al., 2021) is the canonical critical text on this. Crawford (Crawford, 2021) situates the question within wider structural inequalities.
What you can do:
- Audit your outputs for representation, especially in work that will be public.
- Use refined prompts and reference images when generating people.
- Document failures when you see them. Many companies fix flagged biases in the next training cycle.
- Choose tools that report bias evaluations — increasingly common in 2026.
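Auditing outputs for representation does not need special tooling: generate a batch, hand-label the attribute you care about, and tally. A minimal sketch with invented labels for ten hypothetical “a CEO” generations:

```python
from collections import Counter

# Hand-labelled gender presentation for 10 hypothetical "a CEO"
# image generations (illustrative data, not a real measurement).
labels = ["man", "man", "man", "woman", "man",
          "man", "man", "man", "woman", "man"]

counts = Counter(labels)
total = len(labels)
for attr, n in counts.most_common():
    print(f"{attr}: {n}/{total} ({100 * n / total:.0f}%)")
```

Ten samples is only enough to spot gross skew; a serious audit repeats this across prompts, attributes, and larger batches, and reports the prompts alongside the counts.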
Topic 3 — Labour¶
Generative AI sits on a foundation of human labour that is rarely visible:
- Data labellers — millions of workers, often in low-wage countries, label images, rank model outputs, and write fine-tuning examples. They make the model behave; they also bear the psychological cost of moderating violent and abusive material (Broussard, 2018).
- Artists whose work was used for training — often without consent, payment, or attribution.
- Voice actors asked to record samples that are later cloned to do work they would have been paid to do.
- Creative professionals whose markets are reshaped by tools trained on their previous work.
This is not unique to AI — every wave of automation reorganises labour. But the speed and the source of the training data make it sharper than past waves.
What you can do:
- Pay for tools that pay their labellers fairly and license their data.
- Credit and pay human collaborators when you publish AI-assisted work.
- Push your employer or institution to adopt AI policies that protect contractors and workers.
- Read your contracts. Many platforms now insert clauses about training on user content.
Topic 4 — Sustainability¶
Training a large model uses a lot of electricity, water, and rare materials. Inference (everyday use) uses less per query but vastly more in aggregate. Strubell et al. (Strubell et al., 2019) put this on the map with their NLP-focused estimates; more recent estimates have widened the scope to include water for data-centre cooling and the embodied carbon of GPUs.
In 2026:
- Major data centres in the Nordics, including some in Norway, are being built specifically for AI workloads. This is partly because of our cheap clean electricity — a mixed blessing.
- A single image generation can use roughly the energy of a smartphone charge for inference; training a frontier model uses on the order of a small town’s electricity for a year.
- Reasoning-mode “thinking” models multiply inference compute by 10×–100× per query.
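The aggregate effect of small per-query costs is easy to underestimate. A back-of-envelope calculation, where both the per-image energy figure and the daily volume are illustrative assumptions, not measurements:

```python
# Back-of-envelope inference energy. Both constants are rough
# assumptions for illustration, not measured values.
WH_PER_IMAGE = 12        # ~ one smartphone charge, in watt-hours
IMAGES_PER_DAY = 1e7     # hypothetical daily volume for one service

daily_kwh = WH_PER_IMAGE * IMAGES_PER_DAY / 1000
yearly_mwh = daily_kwh * 365 / 1000
print(f"{daily_kwh:,.0f} kWh/day ≈ {yearly_mwh:,.0f} MWh/year")
```

With these assumptions the service draws about 120,000 kWh per day, roughly 43,800 MWh a year — the point is how quickly "one smartphone charge" aggregates, not the exact numbers.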
What you can do:
- Use smaller, distilled models for routine tasks. Most jobs do not need the frontier.
- Batch your work. Iterate locally before paying for the big cloud run.
- Choose tools that publish their compute usage and their energy mix.
- Ask, in policy debates, whether the marginal benefit is worth the marginal energy.
Topic 5 — Authorship, authenticity, and the public sphere¶
Generative AI strains the basic categories of cultural life:
- Authorship. If a song is written 70% by Suno and 30% by you, who is the author? Spotify, the courts, and your conscience may give different answers.
- Authenticity. A photograph of “the prime minister at a demonstration” is no longer evidence. Audio of “a friend asking for money” is no longer evidence. The default assumption of recorded media as truth is over (European Parliament & Council of the European Union, 2024).
- The public sphere. Social platforms in 2026 are full of AI-generated content competing for human attention. Some of it is benign; some of it floods elections and public debate with low-quality, plausible-sounding noise.
- Education. Submitting AI-written essays as your own is academic dishonesty — but a generation of students has grown up with these tools, and the policies are still catching up.
What you can do:
- Label. When you publish AI-assisted work, say so. C2PA-style metadata is emerging as a standard.
- Verify. When you receive a striking video, audio, or quote, check the provenance before sharing.
- Resist over-claiming, in both directions. AI is not creating the apocalypse, and it is not just a fancy autocomplete.
A small personal toolkit¶
Three habits worth committing to as a creator in 2026:
- Keep a decisions log. Every project: which tools, which prompts, which edits, which versions you kept and why. This protects you legally and is honest.
- Treat a model like a freelancer. Ask for credentials, check the work, give credit, do not assume good faith on copyright.
- Refuse cheerfully. It is fine — and increasingly important — to say “I am not using AI for this part.” Not as ideology; as craft.
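The decisions log can be as lightweight as an append-only file. A minimal sketch, one JSON line per decision (the field names are just one possible layout):

```python
import datetime
import json

def log_decision(path: str, tool: str, prompt: str, note: str) -> None:
    """Append one project decision as a JSON line to the log file."""
    entry = {
        "when": datetime.date.today().isoformat(),
        "tool": tool,
        "prompt": prompt,
        "note": note,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_decision("decisions.jsonl", "image-model",
             "a CEO, neutral office lighting",
             "kept v3; rejected v1-v2 for stereotyped framing")
```

Append-only JSON lines are easy to grep, easy to diff, and hard to quietly rewrite — which is exactly what you want from a record that may one day back up a provenance claim.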
Death of the Artist or Birth of the Curator?¶
A useful — and deliberately provocative — framing for the cultural argument: in 1967, Roland Barthes wrote The Death of the Author and shifted authority from the writer onto the reader. In the AI era, a parallel debate has opened up: does generative AI dissolve the artist into the model and the dataset, or does it elevate a new figure — the curator who selects, prompts, edits, refuses, and stands behind the work?
Both readings are partly true, and they are usefully in tension. The argument matters because it shapes:
- what we call authorship (and what we put in copyright registers);
- what we credit (and how we pay people whose labour entered the dataset);
- what we ask of students and professionals when we say “do this with AI”;
- what the public will accept as a published creative artefact.
Salma, Hijón-Neira, and Pizarro (Salma et al., 2025) sharpen the second reading by re-describing the human role in co-creative systems as a shift from craftsperson to creative director. The director is the person who articulates the vision, briefs collaborators (human or AI), curates the outputs, and stands behind the work. Where the “Death of the Artist” framing risks erasing the human, the “Birth of the Curator” framing — read through the creative-director lens — gives the human role a positive, responsible shape. The author of an AI-assisted work is not “whoever pressed generate” but whoever can plausibly own the brief and the choices.
This week’s ethics essay (see below) is your chance to take a real position on this tension — or on a different one — and defend it.
This week’s lab: Reflect, Explore, Create¶
This is the most Reflect-heavy chapter of the course — appropriately, since the mid-term ethics essay is due this week. The Explore and Create activities exist to give the essay something concrete to point at.
Reflect (≈ 60 min, in lab + your weekly log)¶
Group debate (30 min). Two teams of 3–4 students each. Each team randomly draws a position:
- “Training generative models on copyrighted material is acceptable as fair use.”
- “Training generative models on copyrighted material is not acceptable without per-rights-holder consent.”
You will defend the position you drew, regardless of your prior view. (This is deliberate — being able to make the strongest case for a view you disagree with is the most useful skill in ethics.)
Pick one of the following and write 150–300 words in your weekly log:
- Pick a Creative AI use you find uncomfortable. Explain why, then steelman the other side.
- The EU AI Act (European Parliament & Council of the European Union, 2024) requires labelling of AI-generated content “interacting with humans”. How would you implement that for your own work?
- Your future job will be done partly with AI. What conditions would have to hold for you to feel that this is good for you and good for others?
- Re-read the Death of the Artist or Birth of the Curator? framing above. Where does it map onto a specific project you have worked on this semester?
Mid-term ethics essay (1 page, Pass / Fail) — due this week. Choose one of the following prompts and write a tight, well-argued one-page essay (≈ 600 words) with at least three references:
- Death of the Artist or Birth of the Curator? — Take a clear position on what generative AI does to authorship.
- Should AI-generated work be eligible for copyright protection? — Argue one side, with worked counter-arguments.
- Where should AI stay out of my discipline, and why? — Pick your field and draw a defensible line.
- Whose voice, whose face? — The ethics of voice and likeness cloning in the age of consent.
- A topic of your own, proposed in your weekly log by the end of week 9.
Submit through the LMS. The essay is graded Pass / Fail (10 % of the course); a Fail can be revised once.
Explore (≈ 30 min, in lab) — audit one tool¶
Pick a Creative AI tool you have used this semester and write a short ethical audit of it, using the four questions from the top of this chapter. Cover:
- Provenance of training data (what the company says publicly).
- Bias — run a small probing set (e.g., 10 prompts that touch gender, geography, language).
- Labour — what do you know about the labellers and moderators?
- Sustainability — does the company publish anything?
- Authorship and labelling — does the tool offer C2PA or watermarking?
Aim for 600–1 000 words. This is empirical work (you are investigating a real system), and it is the seed material for the ethics essay above.
Create (≈ 30 min, in lab + carry-over to your portfolio) — a one-page AI policy¶
Imagine you are the head of a small UiO department, a music ensemble, a newsroom, or a design studio. Draft a one-page AI use policy for your imaginary organisation. Cover:
- Three things you would mandate (e.g., declaration, provenance metadata, energy budget, opt-out checks).
- Three things you would prohibit (e.g., voice cloning without explicit consent, training on private student work, undeclared agent use in publications).
- One open question you would ask an expert before finalising the policy.
Keep it to one page. Real organisational policies are short; that is the point. Commit ai-policy.md to your portfolio — this is exactly the kind of artefact the Co-Creative AI ethos of this course wants you to leave with.
Going further¶
- Bender et al., On the Dangers of Stochastic Parrots (Bender et al., 2021)
- Crawford, Atlas of AI (Crawford, 2021) — the most readable structural critique
- O’Neil, Weapons of Math Destruction (O'Neil, 2016)
- Broussard, Artificial Unintelligence (Broussard, 2018)
- Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence (Pasquinelli, 2023) — a readable cultural-historical lens that pairs well with Crawford
- McCormack et al., Autonomy, Authenticity, Authorship and Intention in Computer Generated Art (McCormack et al., 2019) — a useful short paper directly on this chapter’s authorship question
- Benjamin, The Work of Art in the Age of Mechanical Reproduction (Benjamin, 1968) — the 1935 essay whose questions our debates inherit
- UNESCO, Recommendation on the Ethics of Artificial Intelligence (UNESCO, 2022)
- The EU AI Act and its official summary (European Parliament & Council of the European Union, 2024; European Commission, 2024)
- The Spawning Coalition (Spawning, 2024) — for opt-out tools and arguments
- European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 — The AI Act. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). 10.1145/3442188.3445922
- Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/
- Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. MIT Press. https://mitpress.mit.edu/9780262537018/artificial-unintelligence/
- Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), 3645–3650. 10.18653/v1/P19-1355
- Salma, Z., Hijón-Neira, R., & Pizarro, C. (2025). Designing Co-Creative Systems: Five Paradoxes in Human-AI Collaboration. Information, 16(10), 909. 10.3390/info16100909
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group. https://www.penguinrandomhouse.com/books/241363/weapons-of-math-destruction-by-cathy-oneil/
- Pasquinelli, M. (2023). The Eye of the Master: A Social History of Artificial Intelligence. Verso. https://www.versobooks.com/products/2814-the-eye-of-the-master
- McCormack, J., Gifford, T., Hutchings, P., Llano Rodriguez, M. T., Yee-King, M., & d’Inverno, M. (2019). Autonomy, Authenticity, Authorship and Intention in Computer Generated Art. Computational Intelligence in Music, Sound, Art and Design (EvoMUSART), 11453. 10.1007/978-3-030-16667-0_3
- Benjamin, W. (1968). The Work of Art in the Age of Mechanical Reproduction. In H. Arendt (Ed.), & H. Zohn (Trans.), Illuminations. Schocken Books. https://web.mit.edu/allanmc/www/benjamin.pdf
- UNESCO. (2022). Recommendation on the Ethics of Artificial Intelligence. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000381137
- European Commission, Directorate-General for Communications Networks, Content and Technology. (2024). Regulatory Framework on AI — Official Summary. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- Spawning. (2024). Spawning — Opt-out and Consent Tools for AI Training Data. Spawning Inc. https://spawning.ai/