
The films and conversations of the panel "Solidarische Intelligenz" consider artificial intelligence not only as a technological tool but as a social, aesthetic, and political condition that is fundamentally changing how images are produced, interpreted, and circulated. The following texts offer points of entry to some of the questions raised by the filmmakers and discussed in the panel: How do generative systems change authorship and filmmaking practice? What forms of labor, infrastructure, and power relations underlie AI-generated images? And how can artists critically interrogate these systems and make their workings visible rather than taking them as given?

This bibliographic selection brings together perspectives from media studies, sociology, feminist theory, political economy, and art theory, situating AI within the broader history of image production, automation, and collective knowledge generation. Rather than treating AI as an autonomous intelligence, these works emphasize its dependence on human labor, cultural archives, and material infrastructures, and invite us to rethink what producing and editing images means today.

Irani, Lilly. 2016. “The Hidden Faces of Automation.” XRDS: Crossroads, The ACM Magazine for Students 23 (2): 34–37. 
https://doi.org/10.1145/3014390

→ A concise, widely cited intervention explaining how “automation” often depends on hidden human labor (e.g., gig work), challenging narratives of fully autonomous systems.

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–623.
https://doi.org/10.1145/3442188.3445922

→ A foundational critique of large language models: it argues that scaling up can intensify harms (bias, environmental costs, opacity, and power concentration) while offering a misleading sense of “understanding.”

Crawford, Kate. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, CT: Yale University Press.

→ Reframes AI as an extractive, material infrastructure (minerals, energy, labor, data) rather than a purely “virtual” technology, tying AI to geopolitics and environmental damage.

Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.

→ Uses case studies in welfare and public services to show how automated decision systems can intensify poverty governance, surveillance, and exclusion under a veneer of objectivity.

Laba, Nataliia. 2025. “Whose Imagination? Conflicting Narratives and Sociotechnical Futures of Visual Generative AI.” AI & Society.
https://doi.org/10.1007/s00146-025-02675-2

→ Uses sociotechnical imaginaries to analyze competing public narratives around visual genAI (creativity/democratization vs. extraction/market power), based on empirical discourse analysis.

Toupin, Sophie. 2024. “Shaping Feminist Artificial Intelligence.” New Media & Society 26 (1): 580–595.
https://doi.org/10.1177/14614448221150776

→ Traces feminist engagements with AI historically and conceptually, showing how feminist theory and activism can reorient what counts as “intelligence,” whose values get built in, and what futures are pursued.

Whittaker, Meredith. 2023. “Origin Stories: Plantations, Computers, and Industrial Control.” Logic(s), no. 19 (May 17).
https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

→ Connects early computing and managerial control to plantation and industrial logics, arguing that contemporary computational systems inherit histories of labor discipline and extraction.

Further reading:

Ericson, Petter, Joel Modin, and Johan Söderberg. 2024. “Tracing Class and Capital in Critical AI Research.” tripleC: Communication, Capitalism & Critique 22 (1): 307–328.
https://www.triple-c.at/index.php/tripleC/article/view/1464

→ Maps how “Critical AI Studies” addresses class and capitalism, pushing the field toward more explicit political-economy analysis of AI research, deployment, and labor relations.

Freedom House. 2023. "The Repressive Power of Artificial Intelligence." In Freedom on the Net 2023.
https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence

→ Documents how states use AI to scale censorship, surveillance, and disinformation, emphasizing that AI often amplifies (rather than replaces) existing repression infrastructures.

Katz, Yarden. 2017. “Manufacturing an Artificial Intelligence Revolution.” SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.3078224

→ Argues that “the AI revolution” is not just technical progress but a strategically produced narrative used to legitimate investment, institutional authority, and particular models of intelligence.

Klimczak, Peter, and Christer Petersen, eds. 2023. AI – Limits and Prospects of Artificial Intelligence. Bielefeld: transcript Verlag.
https://library.oapen.org/bitstream/id/a6924c76-0041-49ed-94ab-808289e9eacf/9783839457320.pdf

→ An edited volume that surveys conceptual and practical limits of AI while discussing implications for critique, governance, and societal expectations of “intelligent” systems.

Konzack, Lars. 2025. “Generative AI, Simulacra, and the Transformation of Media Production.” Athens Journal of Mass Media and Communications 11 (3): 177–196.
https://www.athensjournals.gr/media/2025-11-3-3-Konzack.pdf

→ Reads generative AI in advertising/film/TV through Baudrillard’s simulacra, arguing that synthetic media destabilizes authenticity, authorship, and audience trust in “real” referents.

Lindgren, Simon, ed. 2023. Handbook of Critical Studies of Artificial Intelligence. Cheltenham, UK: Edward Elgar Publishing.
https://doi.org/10.4337/9781803928562

→ A comprehensive reference work assembling interdisciplinary critical approaches (STS, media studies, sociology, political economy) to analyze AI’s cultural, social, and political effects.

McQuillan, Dan. 2015. “Algorithmic States of Exception.” European Journal of Cultural Studies 18 (4–5): 564–576.
https://doi.org/10.1177/1367549415577389

→ Argues that data mining and pervasive tracking can produce governance “exceptions” where automated classification enables new forms of political exclusion and control beyond ordinary legal accountability.

Pasquinelli, Matteo. 2023. The Eye of the Master: A Social History of Artificial Intelligence. London: Verso.
https://www.versobooks.com/products/735-the-eye-of-the-master

→ Historicizes AI through labor, measurement, and industrial control, linking “machine intelligence” to the organization of work and the politics of knowledge production.

Steyerl, Hito. 2025. Medium Hot: Images in the Age of Heat. London: Verso. 
https://www.versobooks.com/products/3329-medium-hot

→ A recent essay collection arguing that images are inseparable from “heat” (energy-intensive computation, logistics, warfare, and climate), and asking whether art is increasingly made by and for machines.
