The AI Commons Licence: A Pattern Language For Commons-Based AI Deployment Terms
The second installment in our Civilizational AI series is a guest editorial by Em Lenartowicz (CLEA/NuNet Foundation), who proposes specific ways to ‘commonify’ our AI infrastructures: six specific commons, initiated through a new type of licence.
The text is more technical than our usual fare, but it is important because it proposes a new approach to protecting AI as a public and/or common good.
The author’s affiliations are:
CLEA, Free University of Brussels (VUB)
United Nations, “AI for Good” Impact Initiative – Steering Committee
NuNet Foundation Fellowship (Governance Council)
Statement of interest: I am a colleague of Marta both at CLEA, as a guest researcher, and in the NuNet Foundation Fellowship, but this article is not directly related to those distributed computing activities.
Guest editorial: The Boring Moment That Decides Whether AI Becomes A Commons. By Em Lenartowicz.
Imagine an AI that tracks your biology so well that you barely ever need a doctor. Imagine another that follows and informs your learning patterns so closely that school becomes obsolete. We might ask the obvious question: is this good or bad?
I want to shift the question by half a step, because the decisive moment is rarely the model. It is the arrangement into which the model is inserted: the rule-set that decides how it is offered, sold, and governed, and therefore the game it is made to play.
That rule-set arrives in suspiciously boring dress: platform terms, API agreements, procurement clauses, vendor addenda, cloud contracts. We treat them as paperwork because they look like paperwork. Yet they behave like infrastructure. They coordinate behaviour among actors who will never sit in the same room, and they decide what becomes visible, what becomes rewarded, what becomes normal, and what becomes conveniently “nobody’s problem”.
This is how enclosure happens now. Not only by taking something away, but by setting defaults that turn shared resources into private rent streams while social, ecological, and governance costs drift downstream.
If you come from Michel Bauwens’ world, none of this should feel exotic. Commons either stabilise through good interaction rules, or they get hollowed out. A commons is a grammar.
My proposal is to treat AI deployment terms as the place where that grammar can be written deliberately.
A Calibration Point: Licences Already Rewired Whole Ecosystems
We have already seen small legal templates reshape system dynamics without waiting for legislatures to catch up. Creative Commons did it for culture and knowledge. Open source did it for code, and now also for a growing share of AI tooling and model weights.
They worked because they made certain choices repeatable, even defaultable. A funder could say “publish under CC BY” and make reuse frictionless. A developer could pick GPL or Apache and know, in advance, what reuse and reciprocity will look like downstream. Many local decisions then accumulated into shared, interoperable ecosystems.
Yet AI impact is increasingly set elsewhere: not only in upstream artefacts, but in deployment-as-a-service—APIs, platforms, procurement bundles, downstream wrappers, integration layers. Upstream openness still matters, but it does not decide who captures value, what becomes visible in use, how ecological costs are handled, who retains access at scale, whether communities are replenished, and who has standing when harms emerge.
That is the gap I want to address.
A Pattern Language For Deployment Terms
Over the last year, I have been developing a proposal I call the “AI Commons (AIC) Licence Suite”, intended as a design space for AI governance at the deployment layer (Lenartowicz, 2025a, 2025b, 2025c). The core intuition is simple: licences and “terms” are among the places where social rules get written, and where those rules can travel through real supply chains.
AIC is a pattern language for those terms: reusable, composable clause patterns that projects can insert into agreements they already use (API terms, platform conditions, procurement clauses, cloud and vendor contracts, integration agreements), so commons-shaped expectations survive handovers between actors.
In the papers, I describe six moves that can be combined depending on context. They function as contractual levers that change how a deployment behaves over time.
It starts with value (Value Commons).
When automation creates surplus, does it circulate or concentrate? AIC makes room for terms that distribute a predictable share directly to the people and communities who generate the value, sustain the system, or carry its costs. In practice, this means clear definitions of who qualifies, what counts as surplus, when distribution happens, and how payment flows are verified.
Then comes visibility (Transparency Commons).
What must be knowable during operation? AIC treats transparency as a stable, ongoing surface: evaluation summaries that can be compared across versions, change logs that explain what shifted and why, incident reporting, and enough operational facts for downstream users to act responsibly and for upstream providers to be held to account.
Then comes footprint (Sustainability Commons).
Compute has ecology. AIC supports basic, comparable accounting for energy and emissions with defined boundaries, so claims can be checked and improved. It also creates room to link scaling up to scaling better: stronger efficiency commitments, cleaner energy requirements, or tighter reporting once usage passes agreed thresholds.
Then comes access (Access Commons).
AIC supports terms that secure durable use rights as systems scale: reserved capacity or service tiers for public-interest users, predictable pricing or caps where needed, continuity commitments, and rights of exit and portability. (A small sketch of such tiers follows this list of moves.)
Then comes reciprocity (Reciprocity Commons).
When commons-based resources are commercialised, what flows back? AIC supports legible mechanisms that return value to the commons that sustains the system: contributor funds, stewardship budgets, shared registries, or maintenance pools. It sits in the copyfair neighbourhood, expressed in deployment terms that can travel across supply chains.
Finally comes standing (Governance Commons).
When stakes rise, who has the right to intervene? AIC gives governance contractual hooks that persist over time: review triggers for major updates or expanded use, escalation paths, audit or evaluation rights, and named processes or bodies with real standing to pause, require remediation, or impose conditions when risks and harms cross agreed thresholds.
These moves translate a familiar commons lesson into the deployment layer: shared resources hold when they are replenished, when extraction faces enforceable reciprocity, and when affected groups have standing in the rules of the system.
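Here is the promised sketch of the access move: a minimal rendering, in Python, of how an Access Commons clause could be expressed as machine-checkable service tiers. The tier names, caps, and prices are invented for illustration, not proposed values.

```python
# Illustrative access-tier table for an Access Commons clause.
# All names, caps, and prices are invented for the example.
ACCESS_TIERS = {
    "public_interest": {
        "reserved_capacity_pct": 10,     # share of capacity held back for this tier
        "price_cap_per_1k_calls": 0.50,  # predictable-pricing commitment
        "continuity_notice_days": 180,   # minimum notice before discontinuation
        "portability": True,             # right of exit: export data and configurations
    },
    "commercial": {
        "reserved_capacity_pct": 0,
        "price_cap_per_1k_calls": None,  # market pricing
        "continuity_notice_days": 30,
        "portability": True,
    },
}

def reserved_quota(tier: str, total_capacity: int) -> int:
    """Capacity that must remain available to a tier as the system scales."""
    return total_capacity * ACCESS_TIERS[tier]["reserved_capacity_pct"] // 100

print(reserved_quota("public_interest", total_capacity=1_000_000))  # -> 100000
```

The contractual point is that the reserved share is a percentage: public-interest access grows as capacity grows, rather than being frozen at launch size.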
The main structural idea is composability. You can adopt only the parts you actually need, and they still connect cleanly with the parts others are using. A project can adopt one move, several, or a full bundle, and still remain legible because the same named patterns and expectations recur across deployments. That is how a grammar becomes real in practice: clauses that can be recognised, obligations that can be carried across handovers, and terms that can be compared across projects without starting from zero each time. Over time, those repeated, interoperable choices accumulate into a shared order.
A key part of this proposed grammar is a supply-chain syntax. AIC is designed so obligations can run in two directions: downstream inheritability and upstream conditioning. Downstream, certain commitments “stick” to the deployment as it is integrated, resold, wrapped, or redistributed, so downstream operators inherit clear duties. Upstream, certain duties become conditions of participation: core providers can require suppliers to provide the artefacts, data, logs, or assurances needed to keep commons commitments meaningful in operation. In other words, AIC makes it possible to express what must propagate downstream and what must be satisfied upstream for the deployment to remain coherent.
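To show what this composability and two-way propagation could look like in tooling, here is a minimal Python sketch. Everything in it (the ClausePattern and Profile types, the Direction flag, the example obligations) is an illustrative assumption, not part of any AIC specification.

```python
from dataclasses import dataclass, field
from enum import Enum

class Direction(Enum):
    DOWNSTREAM = "downstream"  # sticks to the deployment as it is wrapped or resold
    UPSTREAM = "upstream"      # a condition that suppliers must satisfy

@dataclass(frozen=True)
class ClausePattern:
    name: str             # recognisable pattern name, e.g. "transparency-pack"
    commons: str          # which of the six moves it belongs to
    direction: Direction  # which way it propagates across handovers
    obligation: str       # plain-language duty the clause encodes

@dataclass
class Profile:
    """A composable bundle of clause patterns adopted by one deployment."""
    patterns: list[ClausePattern] = field(default_factory=list)

    def inherited_downstream(self) -> list[ClausePattern]:
        return [p for p in self.patterns if p.direction is Direction.DOWNSTREAM]

    def required_upstream(self) -> list[ClausePattern]:
        return [p for p in self.patterns if p.direction is Direction.UPSTREAM]

# Illustrative profile: one deployment adopts two of the six moves.
profile = Profile([
    ClausePattern("transparency-pack", "Transparency Commons", Direction.UPSTREAM,
                  "Supplier provides evaluation summaries and change logs."),
    ClausePattern("value-share", "Value Commons", Direction.DOWNSTREAM,
                  "Resellers preserve the agreed surplus-sharing terms."),
])

for p in profile.inherited_downstream():
    print(f"Downstream operators inherit: {p.name}")
for p in profile.required_upstream():
    print(f"Suppliers must provide: {p.name}")
```

The point is the shape, not the code: once clause patterns have stable names and declared directions, a handover can be checked mechanically, and terms can be compared across projects.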
What It Could Look Like
1) A major automation deal that writes redistribution into the contract
A global consultancy wins a multi-year modernisation programme with a major European infrastructure provider. The brief is to automate large parts of the client’s software development and maintenance pipeline. What used to take several hundred person-years now runs through an AI toolchain orchestrated by a small human team.
Under a Value Commons profile, the master services agreement treats the productivity gain as a governance question, not only a pricing question. Above an agreed historical baseline, a fixed share of documented cost savings flows into a transition and innovation fund for affected staff across both organisations. The contract defines eligibility, what counts as savings, when transfers occur, and what evidence counts. It also sets governance: staff representatives sit on the fund’s steering group, and annual summaries of contributions and disbursements circulate internally. The contract stops “efficiency” from becoming a euphemism for displacement. It turns automation gains into a managed flow—back into people, skills, and the next round of collective capacity.
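As a back-of-the-envelope illustration of how such a clause computes the flow, consider the following sketch; the baseline, cost, and sharing rate are invented figures, not terms from any real contract.

```python
# Hypothetical Value Commons calculation; all figures are invented for the example.
historical_baseline = 40_000_000  # EUR/year: agreed pre-automation cost baseline
actual_cost = 28_000_000          # EUR/year: documented cost with the AI toolchain
shared_rate = 0.25                # fixed share of documented savings owed to the fund

documented_savings = max(historical_baseline - actual_cost, 0)
fund_contribution = documented_savings * shared_rate

print(f"Documented savings: EUR {documented_savings:,.0f}")            # EUR 12,000,000
print(f"Transition fund contribution: EUR {fund_contribution:,.0f}")   # EUR 3,000,000
```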
2) A transparency profile that creates a shared evidential base across a platform and its vendors
A large social platform relies on AI for content moderation: flagging harassment, hate speech and disinformation. Some systems are built in-house, others are licensed from vendors. In practice, the crucial choices live in private contracts and internal tooling. Outsiders see outcomes and policy pages.
A Transparency Commons profile turns “appropriate transparency” into a repeatable contract pattern. To deploy at high volume under the profile, the platform maintains a standard transparency pack: a structured description of the model’s role in the moderation stack, summaries of training and evaluation sources, evaluation and red-team results with basic breakdowns, operational metrics (for example, volumes of flags and appeal/reversal rates), and notable failure modes. Serious incidents are logged and linked to model versions in a change log. The same profile also becomes a procurement filter: vendors who want large platform contracts offer a Transparency Commons version that supplies the artefacts the platform needs to stay accountable. The result is a reusable record of behaviour that can be shared under defined access rules, rather than reinvented as bespoke compliance paperwork in every deal.
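One way to picture the transparency pack is as a structured record with a stable schema, so packs can be compared across versions and vendors. The field names and figures below are invented for illustration; they are not a proposed standard.

```python
# Illustrative transparency pack; all field names and figures are invented.
transparency_pack = {
    "model_role": "Ranks user reports of harassment for human review",
    "model_version": "moderation-ranker v3.2",
    "training_sources_summary": "Licensed moderation corpora; platform reports 2019-2024",
    "evaluations": {
        "red_team_findings": "3 high-severity failure modes, 2 mitigated",
        "accuracy_by_language": {"en": 0.91, "de": 0.88, "tr": 0.79},
    },
    "operational_metrics": {
        "flags_per_month": 1_200_000,
        "appeal_rate": 0.06,
        "reversal_rate": 0.18,
    },
    "known_failure_modes": ["satire misclassified as harassment"],
    "incident_log_ref": "linked to the change log by model version",
}
```

Because the schema is stable, a new model version produces a new pack that can be diffed against the last one, which is what makes the change log meaningful.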
3) A compute network where sustainability claims become comparable, and workloads can be routed by rule
A decentralised compute network matches buyers of AI workloads with a global pool of node operators. Under baseline terms, jobs chase price, latency and reliability. Energy sources and carbon intensity remain thinly described, so “green” claims drift into branding.
A Sustainability Commons onboarding profile turns sustainability into an operational discipline with a ratchet. Node operators who want the profile publish a standard footprint template (grid region, energy mix or on-site renewables, hardware type, typical utilisation) and accept simple metering or estimation rules. The network maps this into comparable categories and makes boundaries explicit, so the numbers mean something.
Then the profile ties access and incentives to ongoing improvement. Buyers who opt into the profile accept routing rules that favour lower-impact nodes, and they accept tightening targets: above defined volume thresholds, a growing share of their workloads must run in preferred categories, and intensity ceilings progressively tighten over time. On the supply side, node operators get better placement and better terms when they improve their footprint category (for example by moving to cleaner power, upgrading to more efficient hardware, or meeting utilisation benchmarks), and they drop in priority if they do not.
Reporting stays practical, but it is not passive. A shared formula estimates energy per job from hardware, utilisation and runtime; the network publishes periodic performance summaries that show intensity trends, not only totals; and a registry allows benchmarking across networks and cohorts. The same clause pattern governs both sides, so the improvement curve remains coherent as workloads move through the marketplace.
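The shared estimation formula and the routing rule can both be stated in a few lines. Here is a minimal sketch with placeholder coefficients, categories, and thresholds rather than proposed values.

```python
# Illustrative energy estimate and routing rule; all numbers are placeholders.

def estimated_energy_kwh(power_draw_kw: float, utilisation: float,
                         runtime_hours: float) -> float:
    """Shared estimation rule: average hardware draw x utilisation x runtime."""
    return power_draw_kw * utilisation * runtime_hours

def carbon_intensity(node: dict) -> float:
    """gCO2 per kWh for the node, with on-site renewables netted out."""
    return node["grid_g_co2_per_kwh"] * (1 - node["renewable_fraction"])

def route(nodes: list[dict], intensity_ceiling: float) -> dict:
    """Favour lower-impact nodes; exclude any above the (tightening) ceiling."""
    eligible = [n for n in nodes if carbon_intensity(n) <= intensity_ceiling]
    return min(eligible, key=carbon_intensity)

nodes = [
    {"name": "node-a", "grid_g_co2_per_kwh": 420.0, "renewable_fraction": 0.1},
    {"name": "node-b", "grid_g_co2_per_kwh": 300.0, "renewable_fraction": 0.6},
]

chosen = route(nodes, intensity_ceiling=400.0)  # ceiling tightens over time by contract
kwh = estimated_energy_kwh(power_draw_kw=0.7, utilisation=0.85, runtime_hours=12.0)
print(f"Routed to {chosen['name']}; estimated job energy: {kwh:.2f} kWh")
# Routed to node-b; estimated job energy: 7.14 kWh
```

The ratchet lives in the ceiling: the contract lowers the intensity ceiling on an agreed schedule, so the same routing rule progressively excludes higher-impact nodes.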
4) A neighbourhood camera system where residents gain standing by contract
A company sells an AI-enabled camera system to housing associations and residential communities. It spots intrusions, damaged property, fires. It can also, if configured, recognise faces and follow people across entrances. Under standard service terms, settings are decided between the housing association and the vendor. Residents usually discover what the system does when something goes wrong, or after feature creep has already set in.
A Governance Commons profile keeps the product commercial, but changes who has standing over high-impact choices. The service contract requires a residents’ board and names its role in plain language. The licence lists the decisions that cannot be taken without the board: whether facial recognition is allowed at all, whether footage can be shared with police, how long video is kept, which areas may be monitored, what counts as a “security incident”. Changes in these settings must be proposed, discussed, and minuted. The vendor commits to implement high-impact changes only after a recorded decision, and the contract gives the board a right to trigger an external review and require a temporary switch-off of specific features after serious incidents.
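In tooling terms, the vendor’s configuration layer could enforce the board’s standing directly. A minimal sketch, with invented setting names, might look like this.

```python
# Illustrative guard for board-protected settings; setting names are invented.
BOARD_PROTECTED = {
    "facial_recognition_enabled",
    "police_footage_sharing",
    "retention_days",
    "monitored_areas",
    "security_incident_definition",
}

def apply_setting(name: str, value, board_decision_id: str | None = None) -> None:
    """Refuse high-impact changes unless a minuted board decision is attached."""
    if name in BOARD_PROTECTED and board_decision_id is None:
        raise PermissionError(
            f"'{name}' requires a recorded residents' board decision."
        )
    decision = f" (decision {board_decision_id})" if board_decision_id else ""
    print(f"Applied {name}={value}{decision}")

apply_setting("retention_days", 30, board_decision_id="RB-2025-07")  # allowed
try:
    apply_setting("facial_recognition_enabled", True)  # no decision -> refused
except PermissionError as err:
    print(err)
```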
Across these examples, the technical system can stay the same. Outcomes shift because the handover rules shift: who must provide artefacts upstream, what obligations stick downstream, what gets measured in operation, where value flows, and who has standing when stakes rise. (For more examples, see Lenartowicz, 2025b, 2025c.)
A Small Invitation
I am now convening a small working group to pilot-test this proposal in real deployments and to turn the pattern language into a commons: shared, improved through use, and reusable by others.
The work is practical. We will take a handful of live projects, map their actual supply chains, and draft a few clause patterns that make beneficial obligations travel across those handovers—upstream and downstream—without dissolving at each contract boundary.
If you are building or operating an AI system with a real stack behind it, and you care about value, transparency, sustainability, access, reciprocity, or governance as you scale, I would love to hear from you. Send me a short note on what you are deploying, who sits upstream and downstream, and which outcomes you want the contract layer to hold in place.
Email: emetlalune@icloud.com
References
Lenartowicz, E.M. (2025a). Impact-Oriented Licensing for Artificial Intelligence: A Conceptual Framework for a New Domain of AI Governance. SSRN id=5794362
Lenartowicz, E.M. (2025b). Shaping AI Impacts Through Licensing: Illustrative Scenarios for the Design Space. SSRN id=5835702
Lenartowicz, E.M. (2025c). AI Commons (AIC) Licence Suite: A Modular Framework for Impact-Oriented AI Governance. SSRN id=584852


Reader comment:
A very illuminating text; thank you for this contribution.
I see an important step forward here: shifting the question of AI from the model to the arrangements in which it is deployed, and recognising that it is indeed the rules of use, circulation, and governance that determine systems’ real effects.
Intuitively, I would situate the AI Commons Licence as a grammar of accompaniment and stabilisation: a very apt way of inscribing, at the contractual and operational level, requirements of shared value, transparency, sustainability, reciprocity, and governance, once usages and chains of actors already exist.
For our part, we work further upstream, on what precedes the contract: the conditions of capacity, threshold, and passage that allow actors, collectives, or emerging markets to exist without being immediately captured or frozen. In other words, where the rules cannot yet be written, but where risk, limits, and transformations must already be made legible.
In this reading, AIC and this kind of approach are not opposed: they operate at different levels of the same cycle, between the emergence, passage, and stabilisation of commons.
Thank you again for this work, which opens a very fertile space for thinking about the articulation between governance, use, and transformation.
🙏🦾🌐🧬