We envisioned the Openness & Equity in AI Policy Leadership Cohort as a space to bring together practitioners across the emerging technology policy ecosystem to reflect on a key question: What would it mean to write AI policy that empowers diverse communities, respects planetary boundaries, and truly serves the public benefit?
Our cohort - Ana, Camilla, Daniel, Isha, Pen, Samantha, and Ximena - comprises a diverse group of leaders from different parts of the AI policy landscape, spanning labour rights, market power, open science and open source technologies, climate justice, grassroots advocacy, and corporate accountability. Together, we have set out to draft a set of policy recommendations for EU decision-makers that explore what openness can and should mean in advancing equity, accountability, and public benefit. These recommendations are still a work in progress, and we will continue refining them in our next session this September, in conversation with Max Gahntz from the Mozilla Foundation.
Over the course of the sessions, we examined openness through multiple lenses: technical standards, public AI, climate justice, and security, with our upcoming September session set to cover the evolving and challenging EU AI policy landscape. These conversations offered a rare and valuable space to break down silos, connect across disciplines, and critically engage with the status quo.
Here are some of our key takeaways and learnings from the cohort:
Openness exists on a gradient
Many of us entered the cohort with assumptions: that openness is synonymous with open source, that open source inherently produces good outcomes like greater equity and countervailing market power, or that fully open models are always preferable to closed ones.
Over the course of our sessions, we began to challenge these assumptions and recognise the complexity behind the idea of openness. We came to understand that ‘open source’ and ‘openness’ may be interrelated, but they remain distinct concepts. While an AI model may be classified as ‘open source’ or ‘closed source’ based on technical attributes such as usability and replicability, ‘openness’ has more to do with power, transparency and accountability; it must be viewed as a gradient rather than a binary.
We wrestled with fundamental questions: when should systems be open, and why? Who benefits from openness, and who bears the risks? These questions pushed us to confront the limits of openness in contexts shaped by extraction, exploitation, and power imbalances.
- Ana highlighted how publicly available images of real people can be used, without consent or knowledge, to train AI models that generate porn, a reminder that making training data open can enable harm.
- Daniel pointed out that while open source AI models may be freely accessible, often only well-resourced companies have the infrastructure and expertise to meaningfully use or shape them, raising questions about who openness truly benefits in practice.
- Our guest Udbhav Tiwari from the Signal Foundation highlighted how some companies engage in “open washing”, using the language of openness to advance commercial interests and avoid regulation without delivering meaningful public benefit.
- Isha emphasised that even when AI systems are open about their risks and harms, affected communities often lack the avenues and means to challenge decisions or seek meaningful redress.
Through these conversations, we began to see openness not as a fixed category, but as a spectrum encompassing transparency, accessibility, knowledge sharing, and collaboration. This reframing helped us move beyond (without wholly ignoring) formal definitions of “open source” to consider how openness functions in practice. We also discussed how, in certain cases, full access to training data across the AI stack may not serve the public interest and may lead to unintended consequences, such as the creation of child sexual abuse material or intimate image abuse.
As Daniel put it, “openness is not just about licensing, but the power AI tools and their infrastructures wield.” In that spirit, Udbhav underscored that while open source has formal definitions, put into practice through open source licences, openness exists on a gradient and must be evaluated in context, with equity and accountability at the centre.
Reimagining the “Public” in Public AI
One recurring theme was the complexity of the term “the public.” AI is often described as serving the “public good” or delivering “public benefit”, but what public are we referring to? Who defines that good, and who gets left out?
In one of our early sessions, Camilla challenged us to interrogate the assumptions behind calls for “Public AI”. She asked a simple but important question: “Why Public AI? Is that really the answer to serve good?” Together with Alek Tarkowski and Zuza Warso of the Open Future Foundation, we understood the “public” not as a static group or a state-owned entity, but as a dynamic, ongoing negotiation between diverse communities. Ultimately, we recognised that the value of public AI lies not in who builds it, but whether and how it empowers communities.
This question of “who defines the public” becomes even more complex across geopolitical and cultural contexts. Ximena shared examples from Mexico, where the government faces pressure to adopt AI solutions to appear modern and innovative. Often, large tech companies offer these tools “for free,” framing them as inherently beneficial. But as we discussed, these narratives often obscure power dynamics and sideline questions of accountability and long-term public interest. As Samantha put it, “If the product is free, you [the public] are the product”, while Pen noted that, sometimes, “you’re still the product even when you pay!” Both can be true at once, and together they illustrate the complexities we must confront.
Across these conversations, we agreed that claims of “public AI” demand deeper scrutiny. We must ask: who is defining the public, who benefits, and on whose terms?
Openness is only valuable if it redistributes power
Discussions of open source in AI often focus narrowly on licensing or code availability, treating openness as a technical checkbox rather than a pathway to equity, participation, and shared ownership. But technical openness alone does not guarantee inclusive or just outcomes. For AI systems to truly serve the public interest, we must ask deeper questions:
Does openness enable meaningful participation? Does it shift control away from centralised actors? Does it create conditions for shared benefit across communities?
As Pen noted, “While open source tools are a necessary component in our toolbox, they are often not sufficient for equitable outcomes on their own.” The real measure of openness lies in whether it decentralises power and enables meaningful public participation. Guest speaker Michelle Thorne from the Green Web Foundation echoed this, emphasising that openness only matters if it actively challenges existing ownership structures. That means going beyond code availability towards building processes that foster trust, transparency, and shared governance.
Isha added an important nuance: “Trusted is not the same as trustworthy.” A system may be trusted out of necessity or familiarity even though it poses risks, but trustworthiness must be earned through openness that is grounded in fairness, responsiveness, and accountability.
Ultimately, we came to understand openness not as an end in itself, but as a means, valuable only when it redistributes power and strengthens the public’s ability to shape the technologies that shape them.
Public AI is a necessity: Openness, security, and the need for systemic reform
AI is often framed as a solution to deeply rooted systemic issues: from climate change and job insecurity to loneliness and inequality. This dominant narrative, which we came to describe as “potentialitis,” repackages long-standing, systemic social problems as ones that can be solved by AI. In reality, this framing often deflects attention from the root problems (often capitalism) and the structural changes those problems actually require.
During our session on AI and climate justice, we examined how superficial gestures of redress often serve more as performative optics than genuine accountability. As Samantha noted, “Companies should be held accountable if they abide by the laws in the EU, but then continue that behaviour elsewhere - usually the Global Majority.” Her insight pointed to the dangers of a ‘not in my backyard’ approach to regulation, where companies comply locally while continuing harmful practices globally. It underscored the importance of regulation that goes beyond checkboxes to challenge and transform the systems that enable harm in the first place.
As a cohort, we agreed that open and equitable AI will not emerge primarily from market incentives. It demands structural reform, including keeping AI development within planetary boundaries, dismantling harmful data extraction practices, and confronting concentrated corporate power. We explored how “Public AI” could be an alternative: systems built with public funding, governed by public values, and designed by and for the empowerment of communities.
Security was another key theme in our conversations, especially in our session with Bruce Schneier. He challenged the common assumption that secrecy makes AI systems safer. Instead, he argued that openness and security are not at odds; they are intertwined. Open systems, when thoughtfully designed, can expose vulnerabilities, encourage learning and correct harms, and build resilience. As Bruce emphasised, it is not a technical inevitability that AI systems are closed; it is an intentional policy choice.
We left this conversation with a shared understanding that openness, accountability, and public oversight are not just ideals; they are necessary conditions for building AI systems that are safe, equitable, and worthy of public trust.
The value of slowing down, listening, and being in dialogue
One of the most meaningful parts of the cohort experience was the space it created: a space where it felt safe to say “I’m not sure” or “I don’t know.” That openness to uncertainty, met with generosity and curiosity, became a foundation for deeper learning.
In a policy and tech landscape that often rewards speed and certainty, the chance to slow down and reflect together, across disciplines and lived experiences, proved rare and powerful. The cohort gave us permission to ask bigger questions, to hold complexity, and to resist easy answers.
Cohort members shared how transformative it was to explore AI through intersectional and interdisciplinary lenses. Not just because it sparked richer insights, but because it built a sense of solidarity and shared responsibility. We were reminded that meaningful progress on openness and equity requires not only good ideas, but also trust, time, and active listening.
What’s next?
While open approaches like open source code, data, and standards can support transparency and public oversight, technical openness on its own is not enough. Without being grounded in equity, justice, and societal need, openness risks becoming symbolic or even reinforcing existing power imbalances.
As the final output of the AI Openness and Equity Policy Leadership Cohort, and with input from a wide network of AI and digital justice practitioners, we are drafting a set of policy recommendations to guide more accountable and equitable AI policymaking. These recommendations are designed to help decision-makers evaluate AI systems in regulation, procurement, and public investment.
We’re excited to continue this work. In September, we’ll convene again with Max Gahntz from the Mozilla Foundation to explore concrete policy reforms needed across the EU that center openness not as a buzzword, but as a tool for equity, justice, and the public interest.
This post was co-written by Nitya Kuthiala.