Creating Agentic AI Cooperatives for the Recovery of Economic Self-Sovereignty
This fourth installment in our ‘Civilizational AI’ series is a guest article from Young Yoon in South Korea. It argues for the need for a data commons aided by AI agents.
Young Yoon, Ph.D., is Associate Professor in the Department of Computer Engineering at Hongik University, Seoul, South Korea.
< Resistance to Big Tech will not come from isolated individuals but from networks of such avatars organized as cooperatives. >
Young Yoon:
The impact of AI’s rapid development on society has been enormous. Its shockwaves have persisted for years and continue to intensify. When I see my elderly mother finding comfort in conversations with ChatGPT, receiving words of consolation for her worries and even treating the AI as a trusted companion, I cannot help but feel that we are living in a truly uncanny world. AI has now settled into the fabric of everyday life. Students complete their homework using large language models, and it seems plausible that teachers may also rely on LLMs to evaluate that work. In offices, AI agents automate workflows and give employees a tangible sense of increased productivity.
I have even witnessed cases where a single individual, through prompt engineering alone, developed an arcade game machine in a couple of months at less than one-hundredth of the traditional cost—work that would once have required several human developers and substantial capital. As a university educator, such examples lead me to question the very purpose of conventional coding education.
This palpable sense of rising productivity invites a deeper question: Is AI-driven innovation truly ushering in a historic industrial revolution? Are we, in fact, enjoying unprecedented prosperity?
Recently, a former student visited me. Sharp and diligent, he had secured a position at a prominent AI computer vision company, only to be laid off at thirty after only two years. We hear constant laments about the scarcity of jobs. Small businesses are collapsing; vacant storefronts are increasingly common. The economy does not feel strong on the ground. AI’s breakthroughs are undeniably impressive, yet it remains unclear whether our lives have materially improved.
In 2024, Geoffrey Hinton, often called a godfather of deep learning, received the Nobel Prize in Physics, a rare honor for a computer scientist. Interest in AI has since grown without limit. Yet far less attention has been paid to one of that same year’s Nobel laureates in economics, MIT professor Daron Acemoglu, who developed influential theories on why institutions succeed or fail. Of particular relevance to the AI era is his recent macroeconomic analysis of AI’s impact. Acemoglu examined Total Factor Productivity (TFP), the portion of productivity growth attributable to technological innovation and social policy beyond capital and labor, and found that TFP growth over the past decade was only 0.7 percent. Given that AI has been at the center of technological innovation, this figure can reasonably be interpreted as reflecting AI’s macroeconomic contribution.
While not negligible, 0.7 percent TFP growth is hardly “revolutionary.” Acemoglu characterizes current AI as “so-so automation,” likening it to self-checkout kiosks at grocery stores. His outlook for the coming decade is even more pessimistic, projecting TFP growth of only 0.55 percent. More troubling, however, is the acceleration of wealth inequality and the social instability it may engender. In Power and Progress, he warns that without appropriate social policies, the consequences for humanity could be severe.
By contrast, over the same decade, Big Tech firms have increased their profits by hundreds of percent. Their shared strategy is straightforward: model user data through AI and deploy it for business gain. We provide the data, yet it is unclear whether we truly receive commensurate value. Consider YouTube. It analyzes viewing patterns internally and directs users toward content that maximizes advertising revenue. At dinner tables, people laugh about the same viral Shorts videos—naturally so, since driving traffic to trending content optimizes ad profits. With the introduction of the “Like” button, independent bands’ videos are buried, weakening their connection to fans and directly affecting their livelihoods. The result is a symbiotic relationship between large YouTubers and the platform itself, while others serve merely as unpaid extras. Even access to YouTube’s viewing-pattern APIs is restricted to channel owners; external entrepreneurs who seek some share in the ad market are excluded.
We have given away our data, yet what returns to us are dopamine-driven, goofy videos. Having surrendered our self-sovereignty, we watch trivial content while Big Tech consolidates wealth. Jaron Lanier, in Who Owns the Future?, warns that unless one becomes part of Big Tech, opportunities for wealth creation will shrink, potentially leading to the collapse of the middle class. He describes “siren servers”: platforms that entice users with alluring services while extracting their data, much like sirens luring sailors to their demise. We are drawn in, often unaware of the extraction underway, enthusiastically applauding each new AI breakthrough. Lengthy service agreements, miniature constitutions in fine print, are hastily accepted, even when they effectively transfer all personal data rights to the provider.
If we recognize this problem, we must reconsider the casual, uncompensated provision of our data. Ted Nelson foresaw this issue early on. His Xanadu project envisioned bidirectional links that could track references and enable micropayments. That design was never realized, and we consequently lost sovereignty over our own data.
There are attempts to surface and address this issue. Platforms such as Databricks Marketplace, DataHive, SingularityNET, Ocean Protocol, AEVIR, and Sahara AI treat data as a tradable asset. Yet their marketplaces remain far from vibrant. One key reason appears to be persistently high transaction costs. Ronald Coase and Oliver Williamson argued that when transaction costs are excessive, markets fail to form. If prices lack transparency, negotiations become burdensome and deals collapse. Today’s digital data marketplaces reflect this reality. Datasets and AI models rarely have clear price tags; prospective buyers are told to “call for pricing.” If a single apple costs $500, no one will buy it. But if a 1GB training dataset is offered at $500, determining whether it is a bargain is far more complex. Pricing digital data is intrinsically difficult. Simply listing assets on a marketplace and expecting organic bargaining is naïve. Even when transactions occur, governance mechanisms to prevent misuse or ensure fair compensation for derivative works remain immature.
Consequently, these marketplaces have not flourished, even among enterprises and skilled developers, let alone ordinary individuals. For individuals, the marginal value of a few Instagram comments is negligible and difficult to price. Indeed, compensation in class-action lawsuits in Korea over personal data breaches, often around $10 per individual, offers a rough proxy for how little isolated personal data is valued.
Lowering transaction costs, as Coase and Williamson suggested, often requires institutional intervention. While digital marketplace platforms have emerged organically, transaction costs have not meaningfully declined. In fact, the polycentric fragmentation of these platforms may raise the cost of searching for relevant data and models.
Individuals who seek to reclaim data sovereignty and generate income find little practical support. Models such as DataHive’s, where individuals contribute data for potential monetization, remain marginal. If transactions are not active among corporate players and developers, opportunities for individuals are even slimmer.
However, it would be incorrect to conclude that individual data has no value. A single person’s data may not generate high-value AI services. Yet when combined, its value may scale nonlinearly. Consider clinical research on treatment efficacy across life stages. Results from short-lived lab mice do not easily extrapolate to humans. But if individuals with similar biological characteristics are grouped by age cohorts, lifecycle effects can be inferred more effectively. The resulting benefits could be substantial.
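The cohort argument can be sketched as a simple aggregation: individually, each record says little, but pooled into age bands a lifecycle pattern becomes visible. The fields and values below are invented purely for illustration.

```python
from collections import defaultdict

# Invented individual records: each member contributes one (age, treatment response) pair.
records = [
    {"age": 23, "response": 0.81}, {"age": 27, "response": 0.78},
    {"age": 41, "response": 0.62}, {"age": 45, "response": 0.66},
    {"age": 63, "response": 0.44}, {"age": 68, "response": 0.40},
]

def cohort_means(records, width=20):
    """Group records into age bands of `width` years and average the response per band."""
    buckets = defaultdict(list)
    for r in records:
        buckets[(r["age"] // width) * width].append(r["response"])
    return {f"{lo}-{lo + width - 1}": sum(v) / len(v)
            for lo, v in sorted(buckets.items())}

means = cohort_means(records)
print(means)  # per-cohort averages reveal a declining response with age
```

No single record supports the lifecycle inference; only the aggregate does — which is precisely why pooled data can be worth more than the sum of its parts.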
Thus, we cannot simply build a marketplace and hope that Adam Smith’s invisible hand will magically create transactions. We need an intermediary with intentionality, an agent endowed with artificial intelligence that represents our interests. Individuals would contribute their data into cooperatives and delegate to AI agents the entire value chain: from data aggregation and federated learning to AI service production and revenue generation. Dividends would be distributed according to transparent contribution metrics, also managed by AI agents.
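As a minimal sketch of the dividend step, assume each member’s contribution has already been reduced to a single score (the names, scores, and revenue figure below are my own illustrative assumptions, not a prescribed mechanism):

```python
def distribute_dividends(contributions: dict[str, float], revenue: float) -> dict[str, float]:
    """Split a revenue pool among members in proportion to their contribution scores."""
    total = sum(contributions.values())
    if total <= 0:
        raise ValueError("no positive contributions to distribute against")
    return {member: revenue * score / total
            for member, score in contributions.items()}

# Hypothetical scores, e.g. data-quality-weighted volume contributed by each member.
members = {"alice": 120.0, "bob": 80.0, "carol": 200.0}
payouts = distribute_dividends(members, revenue=1_000.0)
print(payouts)  # {'alice': 300.0, 'bob': 200.0, 'carol': 500.0}
```

In practice the scoring function itself (data volume, quality, recency, downstream usage) is where the real design work lies; proportional splitting is only the final, transparent step an agent could verify.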
Privacy concerns can be mitigated through federated learning and distributed computation, which ensure that raw data never leaves its original location. On the demand side, “custobots,” AI agents acting on behalf of consumers, are emerging, as Gartner has recently analyzed. Corporations may also deploy such agents. Cooperative-representing agents and custobots could then transact with one another autonomously.
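A toy federated-averaging loop illustrates the privacy property: each member fits a shared linear model on its own private samples, and only the model parameters travel to the aggregator. The model, learning rate, and synthetic datasets here are my own illustrative assumptions, not a prescribed design.

```python
import random

def local_step(w, b, data, lr=0.01):
    """One gradient-descent step for y = w*x + b on a member's private data."""
    n = len(data)
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / n
    gb = sum(2 * (w * x + b - y) for x, y in data) / n
    return w - lr * gw, b - lr * gb

def fed_avg(params, sizes):
    """Average locally updated parameters, weighted by each member's dataset size."""
    total = sum(sizes)
    w = sum(p[0] * n for p, n in zip(params, sizes)) / total
    b = sum(p[1] * n for p, n in zip(params, sizes)) / total
    return w, b

random.seed(0)
# Three members each hold private, noisy samples of the same trend y ≈ 2x + 1.
members = [[(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(1, 6)]
           for _ in range(3)]
w, b = 0.0, 0.0
for _ in range(2000):  # each round: one local step per member, then a weighted average
    updates = [local_step(w, b, d) for d in members]
    w, b = fed_avg(updates, [len(d) for d in members])
print(f"w={w:.2f}, b={b:.2f}")  # converges to roughly w ≈ 2, b ≈ 1
```

The aggregator recovers the pooled trend without ever seeing a single raw sample — the same principle, at scale and with secure aggregation, underlies production federated-learning systems.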
As the saying goes in Korea, above God stands the landlord, and above the landlord stands the platform owner. Even so, individuals may yet generate income from their own data assets. Of course, transactions remain complex: dividend shares must reflect contribution levels, pricing for AI services must be negotiated, and compliance must be monitored. Yet these complexities can be delegated to intelligent AI agents, lowering the overall transaction cost.
While some, like Yuval Harari, contemplate AI as a potential legal entity, I envision AI strictly as my proxy and subordinate, an agent fully aligned with my economic goals and values. This aligns with Jaron Lanier’s concept of the “Economic Avatar.” Resistance to Big Tech will not come from isolated individuals but from networks of such avatars organized as cooperatives.
Skeptics may question whether cooperatives can produce meaningful AI services. Yet we are entering the era of vertical AI and small models. With high-quality, domain-specific data, impactful AI services are feasible. Big Tech does not monopolize all valuable data. Vast repositories remain untapped, including sensitive domains like medical data, which Big Tech cannot freely access. If cooperatives prioritize fair revenue distribution for ordinary individuals, public support may follow. Big Tech is structurally driven to maximize profit, and without strong social pressure, it is unlikely to share gains equitably.
Alvin Toffler’s notion of the “prosumer” aptly describes us today: we produce data while consuming data-driven services. Toffler believed prosumers’ economic impact would be immeasurable. That remains true, though today’s data prosumers are passive. Agentic AI may overcome this passivity and open the door to a new form of revolutionary wealth.
Sam Altman has declared 2025 the “Year of Agents,” perhaps envisioning individuals equipped with super-intelligent personal agents achieving economic independence. Yet this risks dependence on frontier AI platforms, recreating a YouTube-like equilibrium: a winner-take-all structure sustained by the symbiosis between the platform and a very small number of dominant individual unicorns. Kevin Kelly’s “1,000 True Fans” theory offers a counterpoint: one does not need millions of followers to thrive. Inspired by this idea, Jack Conte founded Patreon after struggling as an indie musician on YouTube. Patreon continues to provide sustainable income opportunities for creators. Likewise, even modest datasets, when aggregated and deployed through federated learning, could generate viable income streams.
There is no need to belong to a single cooperative. One could participate in multiple purpose-driven cooperatives, diversifying income sources. Such cooperatives could bridge the digital and real economies and scale trans-locally in the digital sphere.
Perhaps economic independence through cooperative data aggregation represents one of the last avenues of fulfillment for Homo sapiens. Pandora’s box has already been opened, and dystopian possibilities remain real. A dying star often expands in a final burst of brilliance before it vanishes. Humanity may be in a similar phase—poised between evolution and extinction—seeking, perhaps, one last sphere of comfort within a decentralized, self-sovereign order built on our collective data.
Bio of the author
The author came to visit us in Chiang Mai during the Wamotopia festival; see the picture below.
“Young Yoon, Ph.D., is an Associate Professor of Computer Engineering at Hongik University in Seoul, where he leads the Distributed Intelligence and Autonomy (DINA) Lab. His research focuses on distributed intelligence, AI-driven security systems, and autonomous digital infrastructures, particularly middleware and orchestration platforms that enable large-scale systems to self-optimize, self-protect, and self-learn.
He received his Ph.D. in Computer Engineering from the University of Toronto and earned both his B.A. and M.S. in Computer Science from the University of Texas at Austin. Before entering academia, he held industry and research roles, including at Samsung Electronics and in distributed systems development in the cloud computing sector.
Professor Yoon’s work spans distributed systems, AI-enabled security operations, intelligent mobility, and decentralized data infrastructures. His research explores how emerging AI architectures, such as autonomous agents and cooperative data ecosystems, can reshape digital infrastructures and support new models of economic sovereignty in the AI era.
He is also the founder and CTO of Neouly Inc., a technology venture exploring next-generation distributed AI platforms.”
Note from Michel Bauwens: if you come to Chiang Mai and want to converse about commons-driven social and civilizational change, meet me in Living the Dream, Chang Phuak, as Young Yoon did:



It is a great pleasure to have this article published here.
Mr. Bauwens’ interview at DEVCON 2024 regarding cosmo-local commoning with Web3 immediately intrigued me, prompting me to reach out to him for an in-person meeting. The conversation with him in Chang Phuek was a delight. He warmly engaged with my ideas and offered valuable advice, including references to successful real-world cooperatives such as Seikatsu and SMART for EU artists, which may provide useful clues for realizing an AI commons cooperative in the digital realm as well.
I strongly recommend watching his epic interview at the following link:
https://youtu.be/UCkLHj6r7y8
https://conordesmond.substack.com/p/the-visible-hand?r=24v233&utm_medium=ios