Reimagining AI as Commons
While AI is often described as “the future” or the “fourth industrial revolution”, there’s still a significant knowledge gap between AI experts and everyday users, many of whom don’t understand how AI works or even what “open-source AI” means. In today’s global tech landscape, AI is far from neutral; it acts as a cultural force, shaping how we perceive and interpret the world. Current models rely on probabilistic pattern-matching rather than true understanding. Because they reproduce the most common patterns in their training data, they replicate dominant cultural narratives and amplify existing biases. While this can reinforce well-established facts, it becomes problematic in policymaking, education and public discourse, where nuanced or marginalized perspectives are already underrepresented.
Tech companies and policymakers often try to come up with frameworks to “fix” bias, but definitions of sexism, racism and discrimination vary across cultures. In a globalized world in which datasets mix contexts, can bias ever be universally defined? Is a universal AI even possible – or are we moving toward multiple “pluriversal” AIs informed by different value systems?
Furthermore, AI is deeply enmeshed with capitalism: it both drives and is driven by industry. Often overlooked are not only the material costs of AI – electricity, water, rare earth minerals – but also the human costs, including the exploitation of workers in the Global South. AI development is concentrated in the hands of a few powerful corporations and nation-states, reinforcing existing social hierarchies and often harming vulnerable groups such as women of color, non-binary people, immigrants and refugees. Generative AI makes this imbalance visible, since image-generation tools frequently default to Western-centric aesthetics and disregard diverse lived experiences. These limitations highlight the urgent need for alternative, commons-based approaches that diversify what AI can see and say.
These considerations raise fundamental questions: Who benefits from AI? Who decides what data is legitimate for AI training purposes? Whose safety matters when governments and corporations promise “ethical AI”? If AI is more than just a tool, are humans ethically responsible for AI? These tensions show that AI is not just about technology: it’s about humanity, justice and the kind of world we want to create. Training AI requires global conversations about ethics, power and responsibility.
Global AI Governance
Concerns about AI safety have driven governments toward international cooperation. The first AI Safety Summit in the UK (2023) produced the Bletchley Declaration (signed by 29 countries) to promote safe and responsible AI. South Korea emerged as a key player by co-hosting the AI Seoul Summit in 2024, where ten countries along with the EU pledged funding for state-backed AI Safety Institutes and global coordination. Since then, South Korea has launched its own AI Safety Institute, signaling a commitment to safety and inclusivity. However, the third summit in France (2025) shifted the focus from safety to competition, raising doubts about whether global governance can withstand the pressures of an AI arms race. As nations race to dominate AI markets, ethical considerations risk being sidelined; countries with limited infrastructure often end up as sources of cheap labor rather than hubs of innovation.
South Korea: Sovereign AI and Ethical Challenges
South Korea has rapidly become a major AI hub, following a strategy of “adopt technology first, regulate later”. This fast-track approach reflects the pressure on countries outside the EU, US and China to keep up in the global AI race, where AI is often seen as a marker of being a “developed” nation.
Korean tech giants are now pushing for AI and data sovereignty. Naver’s HyperCLOVA X, trained on vastly more Korean data than GPT-4, aims to create an AI that “sees, hears and speaks like a Korean”. But this raises some critical questions: Whose voices and values does such an AI represent? Korean online spaces are often misogynistic, while feminist movements actively challenge these norms. Hence the urgent need for ethical and inclusive approaches as South Korea accelerates widespread AI adoption.
Gender Bias and Ethical Concerns
Gender bias and ethical issues loom large in South Korea’s AI landscape. Feminist groups and legal experts deplore the absence of strong ethical frameworks, and recent controversies make these lacunae impossible to ignore.
In 2019, Korea Telecom’s interactive AI voice assistant GiGA Genie drew public backlash for its sexist responses. Two years later, the chatbot Iruda sparked outrage for using private conversations without consent and generating racist, sexist and homophobic outputs. More recently, a deepfake pornography epidemic in Korean schools has given added urgency to calls for stricter oversight.
South Korea has recently introduced measures to provide a unified AI framework, including the so-called “AI Basic Act” legislation first proposed in 2020 and the AI Digital Textbook Promotion Plan in 2023 (subsequently scrapped in the wake of the ouster of disgraced ex-president Yoon Suk-yeol). But these initiatives drew criticism over bias and the infringement of privacy and constitutional rights. Public concern escalated when an AI textbook misrepresented Dokdo, an island that South Korea claims as sovereign territory, as being “subject to territorial dispute” with Japan, echoing the Japanese government’s framing of the matter. This incident underscores the risks of misinformation and cultural bias in AI-driven education.
These cases send a clear-cut message: As South Korea amps up AI adoption, ethical governance cannot be just an afterthought. Without robust safeguards, AI risks reinforcing harmful stereotypes and spreading biased narratives – both online and in classrooms.
Why Community-Based AI Matters
After years of debate, South Korea passed the AI Basic Act in December 2024, becoming the second jurisdiction after the EU to adopt comprehensive AI legislation. The Act reflects South Korea’s ambition to lead on AI, framing it as a matter of national security and economic competitiveness. While it does contain provisions for safety, transparency and privacy protection, glaring shortcomings remain, including the failure to regulate online platforms on which harmful AI-generated content (e.g. deepfake porn) continues to circulate. Globally, there is still no consensus on what data should be regulated or how governments should engage with AI, leaving these questions unresolved despite the rampant AI boom.
This situation raises some deeper questions about AI: What would be the nature of a sovereign, non-Western AI, such as those promoted by companies like Naver? Can such systems avoid reproducing colonial and patriarchal structures embedded in local and global data systems? Besides avoiding Western datasets, sovereign AI projects must address the ways in which colonial histories have informed local knowledge systems and social hierarchies. Korean online spaces, many of which are tainted by misogyny, operate within patriarchal structures influenced by Japanese colonial rule and US neocolonialism. As decolonial feminists argue, colonialism and patriarchy are intertwined systems that continue to shape power relations.
Can an AI trained on Korean online discourse truly decolonize representation – or does it risk reinforcing discrimination? This problem calls for an intersectional approach to AI governance, one that takes into account issues of gender, race and global inequalities, while empowering communities to shape technology in inclusive and ethical ways.
AI as Commons
AI ethics increasingly focuses on rethinking model training and the sourcing of training data. Most current models rely on massive internet datasets – often “scraped” without consent, which raises serious ethical concerns. Researchers distinguish between data that is given and “capta”, i.e. data that is taken and constructed, to underscore the need for AI transparency and permission to use data.
In response, the “AI as Commons” movement proposes treating AI as a shared cultural resource shaped by the public. This approach empowers underrepresented voices and local communities to influence technology in inclusive and transparent ways, rather than leaving control to corporations or states.
The LoRA Training Toolkit* is a practical step toward this vision. LoRA (low-rank adaptation) is a lightweight fine-tuning technique that adapts a large pre-trained model by training only small add-on weight matrices, which makes it feasible for communities to inject curated, ethically sound local data into AI systems, reducing bias and diversifying outputs. While this won’t change the economic system, it will raise awareness of AI’s social implications and foster democratic participation by including non-experts in the model-training process.
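To make this concrete, here is a minimal sketch of what such community LoRA training can look like in code. It is an illustration under stated assumptions – the open Stable Diffusion 1.5 base model, a placeholder community_loader and generic hyperparameters – not the workshop’s actual toolkit:

```python
# A minimal sketch of community LoRA fine-tuning for an image model.
# The base model, folder names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig, get_peft_model

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

# Attach small trainable low-rank matrices to the UNet's attention layers;
# the base weights stay frozen, so the adapter remains tiny and shareable.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(pipe.unet, lora_config)

noise_scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
optimizer = torch.optim.AdamW(
    [p for p in unet.parameters() if p.requires_grad], lr=1e-4
)

# Placeholder: replace with a DataLoader that yields VAE-encoded latents
# and caption embeddings for each consensually contributed image.
community_loader = []

for batch in community_loader:
    latents = batch["latents"].to(device)
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=device,
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    # Standard denoising objective: predict the noise that was added.
    pred = unet(noisy_latents, timesteps, batch["text_embeds"].to(device)).sample
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Only the adapter weights are saved: a few megabytes that a community can
# publish, audit and retrain, rather than a multi-gigabyte model.
unet.save_pretrained("./seoul-community-lora")
```

The design point is that only the small adapter is trained and shared, not the base model – which is what makes community-scale participation in model training feasible in the first place.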
Yonsei University Workshop
This practical step builds on initiatives like Open Future’s AI and the Commons platform and makes use of ethnographic perspectives on shared resources, as proposed by anthropologist Massimiliano Mollona in Art/Commons: Anthropology Beyond Capitalism (2021). Mollona’s essay shows how art can become a collaborative practice in which knowledge is co-created and creativity serves to sustain the commons rather than generating profits.
Inspired by this thinking, a three-day workshop held in December 2024 at Seoul’s Yonsei University brought artists, academics and activists together to explore the use of open-source participatory methods for local interventions in AI. Researchers Seoyoung Choi and Miro Leon Bucher introduced “critical data injection” as a way for participants to challenge dominant narratives by training their own models with culturally specific data from South Korea.
In the workshop’s first experiment, participants used this method to generate a more realistic “photo of Seoul” than the usual AI-generated images of the Korean capital. The exercise challenged the glossy, tourism-driven images of Seoul’s skyscrapers and clean, modern skyline typically produced by AI – images that echo the government’s branding of Seoul as an orderly, prosperous, hyper-modern metropolis.
Such standard images obscure lived realities and the social, political and urban-planning decisions that shape the city. Research reveals that Seoul’s urban space is profoundly gendered, classed, racialized and ableist. AI-generated imagery seldom shows the city’s inequalities: how it favors heteronormativity and marginalizes queer and disabled communities – groups that have always been part of public life here, as evidenced by events like Seoul Pride and recent mass protests.
Reimagining Seoul meant giving its inhabitants pride of place. During the workshop, participants reflected on how diverse relationships to the city, informed by differences between neighborhoods and different experiences of daily life, produce several Seouls, not just one. They shared personal narratives and created datasets based on lived experiences using the LoRA Training Toolkit designed for the study.
The resulting images revealed Seoul as a city full of contradictions and political possibilities, a city that means different things to different people, thereby challenging prevailing portrayals of a uniformly globalized, hyper-modern capital and tourist city. While conventional AI generates sanitized images, community-trained models produced scenes featuring neighborhood corners and other markers of everyday life, protest banners and rainbow flags, children and even urban wildlife. Using the participants’ images for LoRA training purposes, the workshop generated AI outputs that diverged sharply from standard representations.
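A comparison like the one the workshop produced can be sketched as follows, assuming the community adapter above was saved in a diffusers-compatible format at “./seoul-community-lora” (path and prompt are illustrative):

```python
# A sketch of the comparison step: the same prompt rendered by the frozen
# base model and then with a community-trained LoRA adapter applied.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Baseline: the base model's default, often tourism-flavored view of the city.
baseline = pipe("a photo of Seoul", num_inference_steps=30).images[0]
baseline.save("seoul_baseline.png")

# Community-adapted: the same prompt filtered through locally curated data.
pipe.load_lora_weights("./seoul-community-lora")  # hypothetical adapter path
community = pipe("a photo of Seoul", num_inference_steps=30).images[0]
community.save("seoul_community.png")
```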
Conditions and Limitations
The workshop showed that community-driven AI training can produce visual narratives strikingly different from those generated by corporate systems, offering a glimpse of what commons-based AI images might look like. While such interventions remain constrained by biases in large base models, they demonstrate that communities can meaningfully shape representation – even on a small scale.
True “AI as Commons” requires more than shared datasets: it calls for reciprocal, consensual relationships between contributors. Most commercial and even many open-source models fail to meet this criterion, relying on data scraped without consent and monetizing outputs built on the work of creators whose material they use. A commons-based approach must be ongoing, with datasets and models evolving alongside communities through continual participation and retraining.
Small-scale efforts will not suffice to overhaul the AI ecosystem, but they do create spaces for alternative ways of seeing – and being seen – within AI-mediated cultures. These initiatives point to the systemic measures needed: open, modular AI infrastructure, public investment and technical frameworks that allow multiple small models to be integrated without relying on a single base system.
Practical Steps Towards AI as Commons
An “AI of the Commons” sounds ideal, but it’s not without risks. Even consensually shared local data can carry bias, so community-built models can still produce harmful outputs. Yet this approach does offer clear-cut advantages: unlike proprietary systems, commons-based AI can be subject to legal oversight, and it enables users to avoid unethical models.
For this approach to succeed, inclusive AI governance is essential. Councils representing diverse social groups should oversee local models, allocate resources and ensure accountability – along the lines of Germany’s public broadcasting system, which is independent but overseen by representative boards. By enabling people to build and govern their own models, AI can be turned from an extractive industry into a participatory process in which users become co-creators shaping both outputs and ethics.
Values and norms evolve, and so must AI. A commons-based system embraces this reality through ongoing participation, continual dataset renewal and retraining, making AI responsive to changing identities and ethics. Technically, this vision aligns with Mixture-of-Experts (MoE) architectures, in which a routing mechanism combines specialized community-built models – for tasks like image generation and language processing – without relying on commercial gatekeepers.
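As a rough illustration of that routing idea, the sketch below combines placeholder “experts” with a small learned router; all names and dimensions are toy assumptions rather than any existing community system:

```python
# A toy Mixture-of-Experts: a small learned router weighs the outputs of
# several independent "expert" models, so separately trained community
# models can act as one system. Production MoEs usually route sparsely
# (top-k experts per input); this dense variant keeps the sketch short.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommunityMoE(nn.Module):
    def __init__(self, experts: list[nn.Module], d_model: int):
        super().__init__()
        self.experts = nn.ModuleList(experts)           # community-trained models
        self.router = nn.Linear(d_model, len(experts))  # learned gating network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.router(x), dim=-1)               # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, n_experts, d_model)
        # Each input receives a weighted blend of all experts' outputs.
        return (weights.unsqueeze(-1) * outs).sum(dim=1)

# Toy usage: two placeholder "community experts", each a small feed-forward net.
experts = [nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
           for _ in range(2)]
moe = CommunityMoE(experts, d_model=16)
print(moe(torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```

The router, not a central gatekeeper, decides which community models contribute to each output – which is the architectural point of the commons-based vision described above.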
Governments and cultural institutions should play a crucial role by investing in open, modular infrastructure, funding community capacity-building and supporting ethical dataset creation through cultural cooperation agreements, exchange programs and grants for “AI as Commons” projects. Practical tools such as mobile storytelling labs and LoRA workshops can make participation accessible to the public at large.
Education is equally vital: educational programs should empower non-technical users to understand, train and audit AI models, while embedding feminist and decolonial perspectives in AI governance. Lastly, robust accountability mechanisms would be needed to enable communities to flag harmful components without censorship, and governments, multilateral bodies and cultural institutions should act as intermediaries and create safe spaces for ethical dialogue.
Inclusive AI is not just a technological goal; it’s a cultural and political imperative. Only through transparency, consent and shared governance can AI reflect plural experiences, counter systemic discrimination and foster a more sustainable and peaceful global future.
* This article is based on “Democratising Artificial Intelligence through Culture” (Seoyoung Choi, Miro Leon Bucher, © ifa 2025), a study carried out within the framework of the ifa’s “Culture and Foreign Policy” research program and published under a Creative Commons license (CC BY 4.0). Read the original study (in English) here.
Authors: Seoyoung Choi, Miro Leon Bucher
Edit: Leslie Klatte
English Proofreading: Eric Rosencrantz
German Translation: Kathrin Hadeler
Korean Translation: Young-Rong Choo