Why this matters¶
We have spent eleven weeks looking at what Creative AI is. We close the semester by looking at what it might become, what you will make, and what to do with all the work and prompts and decisions you have accumulated.
This is the lightest reading of the semester. The 2 hours of practice this week are spent presenting your final project to the class.
Three futures, none of which is the future¶
It is worth being honest: no one knows where this is going. In 2022, almost no one predicted 2026 accurately; predicting 2030 from 2026 will be at least as hard.
Still, three “futures” sketches are useful, not because any of them will come true, but because they map out the space of where we might end up. (This kind of “scenarios” exercise comes from strategic foresight and is widely used in policy-making.)
Future A — “AI as electricity”¶
Generative AI fades into the background, the way the internet did in the 2000s and the cloud did in the 2010s. Every tool has AI features; nobody talks about them as AI features any more. The technology becomes infrastructure. The interesting work moves up the stack to design, story, taste, and ethics.
Creators in this future spend less time on production craft (which is automated) and more on direction, curation, and editorial judgement. Smaller studios produce work that used to require larger teams. New genres emerge from the cheapness of iteration.
Future B — “AI as collaborator”¶
Generative AI stays in the foreground as a distinct kind of collaborator. Tools have personalities, opinions, and styles. Studios hire specific AI characters the same way they hire specific actors. Reputation systems emerge for both human and AI contributors. The legal and labour frameworks accommodate this hybrid mode.
Creators learn a new craft: casting. You know which model has which sensibility, which can do which kind of work, which to pair with which human collaborator. The line between making and directing becomes more obviously continuous.
Future C — “AI as flood”¶
Generative AI scales beyond human capacity to attend to or evaluate its output. Most online content is AI-generated; most of it is mediocre and most of it is competing for attention. Search degrades, social platforms degrade, public discourse degrades. Human-made and human-curated work commands a premium in the same way handmade goods do today.
Creators in this future organise into trust networks, where provenance, slowness, and verifiable authenticity are the value proposition. Institutions like libraries, universities, and public broadcasters become more important, not less.
These three futures are not exclusive. Bits of all three are visible already in 2026. The point of the exercise is not to bet on one; it is to ask, in each: what does your discipline look like?
What stays human¶
Whatever the future, some things stay human longer than others. A non-exhaustive list of things that current and near-future AI cannot do well:
- Sit in a room with another person and read their face for ten minutes. Therapists, teachers, social workers, nurses, mentors.
- Live performance that depends on the audience being in the same physical space.
- Long, situated, embodied research — fieldwork, ethnography, clinical care.
- Care for the very young, the very old, the very sick.
- Take moral responsibility for a piece of work or a decision.
- Be held liable in court.
These are not “AI-proof” categories in the sense that AI cannot affect them — it can and will — but they are categories where the centre of gravity stays human for at least the medium term.
Your final project does not need to be in any of these categories. But it is worth thinking, this week, about which parts of your own work and life sit closest to them.
Reading the next decade¶
A few habits that help, regardless of what happens:
- Read primary sources. A model card, a research paper, an EU regulation — these are easier to read than the takes about them, and they are more reliable.
- Watch benchmarks fall. Papers with Code, Stanford’s AI Index, and Epoch AI track capabilities over time. Look at the trend, not the snapshot.
- Build small things. A weekend project tells you more than a year of essays.
- Talk to people in other disciplines. Lawyers, doctors, librarians, designers, teachers — they each see a different face of the same technology.
- Stay sceptical of both extremes. Both “AI changes nothing” and “AI changes everything” are almost always wrong.
Final projects — The Synthetic Gallery¶
The remaining 2 hours of class are spent on the final project showcase, which we call The Synthetic Gallery. The project counts for 50% of the course grade and is the central artefact of your semester. The Synthetic Gallery is a public mini-exhibition open to other UiO students, staff, and invited guests — held both in a physical room at UiO and as a static gallery on the course’s GitHub Pages site.
What a project can look like¶
Anything that meaningfully uses Creative AI for a creative purpose:
- a short story or chapter,
- a poster series,
- a 30–90 second short film,
- a song or short EP,
- a small interactive web piece,
- a game prototype,
- a podcast episode,
- a redesign of a real organisation’s brand,
- a curated exhibition of generated work,
- a written critical essay using AI both as a tool for the work and as its object of study,
- a teaching resource for a younger sibling.
Solo or in groups of 2–3. The project should be ambitious enough to be hard, and small enough to finish.
Technical requirements¶
- The work must use at least two different AI modalities — for example text + image, image + video, audio + code, 3D + text. This is the technical bar of the course.
- All prompts, generations, and decisions must be logged and submitted with the work; a sketch of what one log entry might look like follows this list.
- The work and reflection must explicitly acknowledge which AI tools were used, with versions or dates.
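If your log does not already have a settled shape, here is a minimal sketch of what one entry might look like, assuming a plain markdown file; the field names, tool version, and values are illustrative suggestions, not a course-mandated schema:

```markdown
<!-- One entry per generation session; all names and values below are examples. -->
## 2026-04-28 · Poster series, draft 3

- **Tool:** Midjourney (web app, accessed 2026-04-28)
- **Prompt:** "constructivist poster, two-colour, off-register print texture, ..."
- **Outputs:** generations/poster-draft3-a.png through poster-draft3-d.png
- **Decision:** kept (c); rejected (a), (b), (d) as too clean.
- **Surprise / will:** surprised by the paper texture; exerted will in the crop and the typography.
```

An entry in this shape answers the two process-memo questions as you go, so the week 12 reflection becomes largely a matter of assembly.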
What you must deliver¶
- The work itself — file, link, video, deck, or performance.
- A reflection (1,500–2,500 words) covering:
  - The brief and the audience.
  - The tools used, with versions.
  - A timeline of decisions, including the moments where the AI surprised you and the moments where you exerted your will.
  - One ethical question you ran into and how you resolved it.
  - What you would do differently.
- A 5-minute presentation at the Synthetic Gallery in week 12, with a 5-minute Q&A.
- A gallery page — a single HTML/markdown page (template provided) for the public online gallery, with consent options for inclusion in future cohorts’ material; an illustrative sketch follows this list.
- Your full prompt log (the file you have been keeping all semester). Yes, this matters.
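The official page template is provided with the course; purely to illustrate scope, a gallery page can be as small as a markdown file with a short front matter block. Everything below (the title, the name, the field names) is hypothetical, not the template’s actual schema:

```markdown
---
title: "Off-Register"                # project title (example)
authors: ["Kari Nordmann"]           # solo or a group of 2–3
modalities: [image, text]            # documents the two-modality requirement
tools: "Midjourney (2026-04), Recraft (2026-04)"
consent: gallery-and-future-cohorts  # which consent option you chose
---

# Off-Register

One-paragraph brief: who the work is for, and what it is trying to do.

![Hero image](media/hero.png)

Links: full work · reflection · prompt log
```

Keep it to one screen: the page is the label on the gallery wall, not the exhibition itself.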
What “good” looks like¶
Examples of strong projects from prior offerings:
- A 40-second AI-generated music video for an original song the student wrote, with stems generated by Suno and re-recorded vocals on top, and a hand-edited storyboard in Runway. The reflection compared early Bob Dylan music videos with the new affordances of cheap motion.
- A redesign of the visual identity for a Norwegian charity, using Midjourney for moodboards, Recraft for vector marks, and Figma for the final system. The reflection walked the reader through every prompt and editorial decision.
- A short interactive piece in p5.js where the user types a memory and a generated soundscape plays back. The reflection focused on what the AI got wrong, and why those errors became part of the piece.
What makes these projects strong is not the polish but the fit between brief, tool, and reflection. A modest project with a clear, honest brief beats a flashy project with no spine.
Process memo — once more, with feeling¶
When you submit, your reflection must answer the two questions we have used all semester:
- Where did the AI surprise you?
- Where did you exert your own creative will?
These will be the first things the audience at the Synthetic Gallery asks you in the Q&A. Be ready.
Closing¶
When this course was designed in 2026, the field was moving so fast that two of the tools listed in chapter 1 had merged and one had been bought before the syllabus was approved. By the time you read this in your future career, every tool name will have changed.
What will not have changed is the structure of the questions:
- How do these systems work?
- How do I use them well?
- Who benefits, who is harmed, and what does my practice owe them?
You leave this course with a small toolkit, a personal log of decisions, and one finished project. Take all three with you.
This week’s lab: Reflect, Explore, Create — The Synthetic Gallery¶
The three tracks converge in the final session. The Synthetic Gallery is itself a piece of Create (you are exhibiting an artefact), of Explore (you have probed the limits of every tool used to make it), and of Reflect (you frame it for an audience that did not live the project).
Create — present your project¶
- 5 minutes per project + 5 minutes of Q&A.
- Audience is the rest of the class plus invited guests from elsewhere at UiO.
- Bring your laptop. Test the AV during the break.
- Submit your reflection and prompt log by the start of the session.
- A version of your artefact lives in the GitHub Pages Synthetic Gallery alongside this textbook for at least one year (with your consent).
Explore — read the room¶
Watch every other project. For at least three of them, note in your log:
- one technical thing that surprised you (what the tool managed, or failed at);
- one artistic choice you would have made differently;
- one question you would steal for your own next project.
Reflect — a retrospective across the three tracks¶
After the showcase, write a final 300–500 word entry in your log, organised explicitly around the three tracks (a minimal skeleton follows this list):
- Reflect: what changed in your view of AI between week 1 and week 12? Quote your week 1 log entry if you can.
- Explore: which tool taught you the most, and what was the lesson?
- Create: which artefact are you most proud of, and which one are you most ready to leave behind?
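If a skeleton helps, the entry can be three labelled paragraphs in the same log file you have kept all semester; the headings mirror the three tracks and nothing else is prescribed:

```markdown
## Week 12 retrospective

**Reflect.** In week 1 I wrote: "…". What has changed since then: …

**Explore.** The tool that taught me the most was …; the lesson was …

**Create.** I am most proud of …; I am most ready to leave behind …
```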
These three short paragraphs are also the seed of next year’s textbook — open a pull request at https://github.com/fourMs/Creative-AI.
Going further¶
After the course:
- Ways of Being (Bridle, 2022) — an accessible book-length reframing of intelligence beyond the human/machine binary; it reads beautifully right after this chapter.
- The Coming Wave (Suleyman & Bhaskar, 2023) — an industry-insider policy book that takes the risks seriously without giving up on the technology.
- The Stanford AI Index Report (Maslej et al., 2024) — the best single-volume snapshot of the field, published yearly.
- The Distill archive (Distill, 2021) and Lilian Weng’s Lil’Log (Weng, 2024) — the best long-form technical writing on machine learning ideas, free.
- The Hugging Face Hub (Hugging Face, 2024) — the closest thing the open AI world has to a town square. Follow people there, not on Twitter.
- The next iteration of this course. The textbook is open and updated yearly. Issues, pull requests, and corrections welcome at https://github.com/fourMs/Creative-AI.
Thank you for spending the semester here. Make things.
- Bridle, J. (2022). Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence. Allen Lane. https://www.penguin.co.uk/books/441267/ways-of-being-by-bridle-james/9780141994017
- Suleyman, M., & Bhaskar, M. (2023). The Coming Wave: AI, Power, and the Twenty-First Century’s Greatest Dilemma. Crown. https://www.the-coming-wave.com/
- Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C., Shoham, Y., Wald, R., & Clark, J. (2024). The AI Index Report. Stanford Institute for Human-Centered Artificial Intelligence. https://aiindex.stanford.edu/report/
- Distill. (2021). Distill — Articles About Machine Learning. Distill Working Group. https://distill.pub/
- Weng, L. (2024). Lil’Log — Notes on Machine Learning. https://lilianweng.github.io/
- Hugging Face. (2024). The Hugging Face Hub — Models, Datasets, and Spaces. Hugging Face. https://huggingface.co/