A Literary Festival, an AI Revelation
A few weeks ago, the vibrant Jaipur Literature Festival (JLF) in India became an unexpected forum for a crucial discussion about the future of artificial intelligence. During a panel ostensibly focused on the legacy of empire, the conversation took a sharp turn. The audience, captivated by Pankaj Mishra’s ‘From the Ruins of Empire: The Revolt Against the West and the Remaking of Asia,’ posed a series of pointed questions, not about literature, but about DeepSeek, a new generative AI model from China.
These questions – How did we get here? How do we craft the best path possible for the future of AI? Why is open source key in AI development? – resonated far beyond the festival grounds. They touched upon a deep-seated historical rivalry, a yearning for self-reliance, and a growing global movement advocating for a more open and collaborative approach to AI development.
The Historical Roots of DeepSeek’s Reception
DeepSeek’s emergence at a literature festival might seem peculiar. However, its prominence is deeply intertwined with historical events and a long-standing rivalry, particularly between Asia and the West. While European AI labs have garnered acclaim for their open-source breakthroughs, DeepSeek’s reception in Asia carries a far more profound historical resonance.
The launch of DeepSeek was met with intense media coverage. Its reception at JLF revealed a sentiment that transcended mere discussions of AI performance. Indian writers and journalists, often critical of China, found themselves united by a shared struggle against the dominance of American AI Corporations (AICs). This enthusiasm for DeepSeek across Asia is rooted in colonial history and, more recently, in provocative corporate pronouncements.
AI: A Modern Struggle for Self-Reliance
For Stephen Platt, author of ‘Imperial Twilight: The Opium War and The End of China’s Last Golden Age,’ China’s technological ambitions are inseparable from its historical scars. The Opium Wars (1839–1860) serve as a potent symbol of how Britain’s technological and military superiority humiliated China. This ‘Century of Humiliation’ fuels China’s current drive for self-reliance, its aggressive investments in AI, semiconductors, and other critical technologies. It’s a determination to avoid dependence on Western technology, a lesson etched into the national consciousness.
The Indian panelists at JLF found common ground in this narrative. Like China, India bears the dark mark of the East India Company’s influence. Furthermore, British journalist Anita Anand highlighted a controversial video of OpenAI CEO Sam Altman dismissing India’s potential to compete with AICs in training foundation models, stating it was ‘totally hopeless.’ Such remarks have only strengthened the resolve for self-reliance in the region.
Open Source AI: A Symbol of Resistance
DeepSeek, and European labs that preceded it, have offered a beacon of hope in the AI race. Their choice to embrace open source has become a powerful symbol of resistance against the dominance of proprietary AI models.
DeepSeek R1’s release must be understood within the context of a deeply entrenched rivalry, particularly with the United States. This rivalry is so profound that Europe is often overlooked in discussions of competition with US technology.
The dominance of AICs has even triggered comparisons to colonialism in the West. In an August 2024 op-ed titled ‘The Rise of Techno-Colonialism,’ Hermann Hauser, a member of the European Innovation Council, and Hazem Danny Nakib, a Senior Researcher at University College London (UCL), wrote: ‘Unlike the colonialism of old, techno-colonialism is not about seizing territory but about controlling the technologies that underpin the world economy and our daily lives. To achieve this, the US and China are increasingly onshoring the most innovative and complex segments of global supply chains, thereby creating strategic chokepoints.’
The pioneering open-source approach of European AI labs like Mistral, kyutai, and Meta’s FAIR Paris team, and now DeepSeek, has presented a compelling alternative to the proprietary AI model strategy of the AICs. These open-source contributions are resonating globally and have further solidified the embrace of open-source AI as a symbol of resistance against American AI dominance.
The Case for Open Source: History Rhymes
Technological collaboration thrives on energy and speed, something that is inherent in the evolution of software code.
French Nobel Economics laureate Jean Tirole, initially puzzled by the emergence of open source, questioned in his 2000 paper with Josh Lerner, ‘The Simple Economics of Open Source’: ‘Why should thousands of top-notch programmers contribute freely to the provisions of a public good? Any explanation based on altruism only goes so far.’
While understandable at the time, anyone following AI’s progress in recent years, especially after the DeepSeek R1 release, would find the answer self-evident. The impact of FAIR Paris at Meta’s open-sourcing of Llama, the meteoric rise of Mistral and its founders through open-sourcing a 7B language learning model (LLM), and DeepSeek R1 demonstrate the compelling reasons behind these programmers’ and scientists’ dedication to open source.
It also clarifies why Sam Altman and his co-founders chose the name ‘OpenAI’ to attract talent. Would any of these frontier labs have achieved such resounding publicity and built such strong personal brands within the AI community had they opted for a proprietary approach? The answer is a resounding no.
Two powerful quotes from 1999, by programmer Richard Stallman and developer Eric Raymond, respectively, included at the beginning of the paper, illuminate the reception of DeepSeek at JLF and underscore the deeper ideological forces at play:
‘The idea that the proprietary software social system—the system that says you are not allowed to share or change software—is unsocial, that it is unethical, that it is simply wrong may come as a surprise to some people. But what else can we say about a system based on dividing the public and keeping users helpless?’ - Richard Stallman
‘The utility function Linux hackers is maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers. … Voluntary cultures that work this way are actually not uncommon; one other in which I have long participated is science fiction fandom, which unlike hackerdom explicitly recognizes egoboo (the enhancement of one’s reputation among other fans).’ - Eric Raymond
The trajectory of Unix in the 1970s and 1980s provides a compelling analogy for the current state of AI. AT&T’s initial promotion and free distribution of Unix within academia fostered innovation and adoption. However, when AT&T imposed a proprietary license in the late 1970s, it inevitably led Berkeley University to launch BSD Unix, an open alternative, and ultimately Linus Torvalds to create Linux. Torvalds’ development of Linux in Europe shifted the epicenter of open-source software away from the US.
The parallels are striking, even geographically, with the evolution of AI. This time, however, new geographies have emerged: Abu Dhabi’s TII with its Falcon Models, China’s DeepSeek, Alibaba’s Qwen, and more recently, India’s Krutrim AI Lab with its open-source models for Indic languages.
The Meta FAIR Paris team, along with leading European AI labs and newer frontier labs (DeepSeek, Falcon, Qwen, Krutrim), have significantly accelerated AI innovation. By openly sharing research papers and code, they have:
- Trained a new generation of AI engineers and researchers in state-of-the-art AI techniques.
- Created an ecosystem of open collaboration, enabling rapid advancements outside of proprietary AI labs.
- Provided alternative AI models, ensuring that AI is not monopolized by American AI Corporations.
These four ecosystems (Europe, India, Abu Dhabi, and China) could forge a powerful open-source AI alliance to challenge the dominant AICs still operating under a proprietary AI mindset.
In an Ask Me Anything (AMA) questionnaire on January 31, 2025, following the release of DeepSeek R1, Altman acknowledged that the proprietary AI model approach had been on the wrong side of history.
In time, AI labs worldwide may choose to join this alliance to collectively advance the field. This wouldn’t be the first instance of a scientific field transcending boundaries and political ideologies through a non-profit initiative. It offers a mode of competition that avoids triggering the anti-colonial grievances often expressed by the Global South.
Historical Precedents: The Human Genome Project as a Model for AI
As a biologist, I am particularly aware of the achievements of the Human Genome Project (HGP) and how it ultimately surpassed the for-profit initiative of Celera Genomics, benefiting the field and humanity as a whole.
The HGP was a groundbreaking international research initiative that mapped and sequenced the entire human genome. Completed in 2003 after 13 years of collaboration, it has generated nearly $800 billion in economic impact from an investment of $3 billion, according to a 2011 report updated in 2013 (a return on investment to the US economy of 141 to one – every $1 of federal HGP investment has contributed to the generation of $141 in the economy). It has revolutionized medicine, biotechnology, and genetics, enabling advancements in personalized medicine, disease prevention, and genomic research. The sequencing work and research were conducted by 20 laboratories across six countries: the US, UK, France, Germany, Japan, and China.
While Celera Genomics attempted to sequence genomic sequences for profit, the HGP prioritized open data sharing, enshrined in its Bermuda Principles. Established during the International Strategy Meeting on Human Genome Sequencing in Bermuda in February 1996, these principles were crucial in shaping data-sharing policies for the HGP and have had a lasting impact on genomic research practices globally. Its key tenets were:
- Immediate Data Release: All human genomic sequence data generated by the HGP were to be released into public databases, preferably within 24 hours of generation. This rapid dissemination aimed to accelerate scientific discovery and maximize societal benefits.
- Free and Unrestricted Access: The data were to be made freely available to the global scientific community and the public, with no restrictions on their use for research or development purposes.
- Prevention of Intellectual Property Claims: Participants agreed that no intellectual property rights would be claimed on the primary genomic sequence data, promoting an open-science ethos and preventing potential hindrances to research due to patenting.
In terms of governance, the HGP was a collaborative and coordinated scientific initiative, not a standalone organization or corporation. It was a decentralized effort funded through government grants and contracts to various research institutions. A portion of its budget (3–5%) was dedicated to studying and addressing ethical, legal, and social concerns related to human genome sequencing.
Bridging AI Safety and Open Source AI
Another crucial advantage of open-source AI is its role in AI safety research.
The AI Seoul Summit in 2024 focused exclusively on existential risks at a time when AICs held a significant lead over the rest of the world. As recently as May 2024, former Google CEO Eric Schmidt claimed the US was 2–3 years ahead of China in AI, while Europe was too preoccupied with regulation to be relevant. Had the Summit succeeded, it would have effectively ceded control of AI safety decisions to these corporations. Fortunately, it did not.
Now that open-source AI is bridging the technological gap, safety discussions will no longer be dictated solely by a handful of dominant players. Instead, a broader and more diverse group of stakeholders – including researchers, policymakers, and AI labs from Europe, India, China, and Abu Dhabi – have the opportunity to shape the discussion alongside the AICs.
Furthermore, open-source AI enhances global deterrence capabilities, ensuring that no single actor can monopolize or misuse advanced AI systems without accountability. This decentralized approach to AI safety will help mitigate potential existential threats by distributing both capabilities and oversight more equitably across the global AI ecosystem.
A Human AI Project with the Paris Principles
What role can the AI Action Summit in Paris next week play in shaping the future of AI?
This presents a crucial opportunity to establish a Human AI Project, modeled after the Human Genome Project, to advance and support open-source AI development on a global scale. The current open-source contributions, from pioneering European AI labs to DeepSeek, are already accelerating the field and helping close the gap with AICs.
AI’s capabilities are significantly enhanced by the maturity of the general open-source ecosystem, with thousands of mature projects, dedicated governance models, and deep integration into enterprise, academia, and government.
The AI open-source ecosystem also benefits from platforms like Github and Gitlab. More recently, dedicated platforms for open-source AI, such as Hugging Face – a US corporation co-founded by three French entrepreneurs – have begun playing a vital role as distribution platforms for the community.
Given the relative maturity of the open-source AI ecosystem compared to human genome sequencing in the early 1990s, how could open-source AI benefit from a Human AI Project?
For one, the European Union is often criticized by AICs and its own frontier AI Labs for its regulation of open source. A Human AI Project could dedicate a joint effort to develop regulatory alignment and standards across participating countries and regions. A coordinated approach, with initial contributions from Europe, India, Abu Dhabi, and China, could facilitate the dissemination of open-source models across this shared regulatory region (a kind of free trade area for open source).
While not definitively proven, there are parallels to the rivalry-driven dynamics that shaped the reaction to DeepSeek at JLF. Similarly, AI regulation could be crafted with a focus on fostering innovation and maximizing public benefit – both for enterprises and consumers – rather than serving as a potential mechanism to impede the progress of AICs or hinder homegrown AI champions striving to close the gap.
The project could also facilitate talent exchange and fund a shared compute infrastructure (linked to energy infrastructure) for open-source AI. It’s evident from the chart below that talented STEM graduates in some parts of the world might currently struggle to access the world-class AI infrastructure their country lacks.
Another area of collaboration would be to establish best practices on open access standards for models and data sets, encompassing weights, code, and documentation.
The project could also foster global collaboration on AI Safety Research. Instead of racing in secret to fix alignment issues, researchers from Paris to Beijing to Bangalore could work together on evaluating models and mitigating risks. All safety findings (e.g., methods to reduce harmful outputs or tools for interpretability) could be shared promptly in the open domain.
This principle would recognize that AI safety is a global public good – a breakthrough in one lab (say, a new algorithm to make AI reasoning transparent) should benefit all, not be kept proprietary. Joint safety benchmarks and challenge events could be organized to encourage a culture of collective responsibility. By pooling safety research, the project would aim to stay ahead of potential AI misuse or accidents, reassuring the public that powerful AI systems are being stewarded with care.
The focus on existential risk at the 2023 UK AI Safety Summit at Bletchley Park, by overemphasizing the Nuclear Proliferation analogy, missed an opportunity to examine other areas where safety is considered a public good: cybersecurity, antibiotics and immunology (with several interesting initiatives post-Covid-19), and aviation safety.
The project could also partner with and further the work currently carried out by the private ARC Prize Foundation to foster the development of safe and advanced AI systems. The ARC Prize, co-founded by François Chollet, creator of the Keras open-source library, and Mike Knoop, co-founder of the Zapier software company, is a nonprofit organization that hosts public competitions to advance artificial general intelligence (AGI) research. Their flagship event, the ARC Prize competition, offers over $1 million to participants who can develop and open-source solutions to the ARC-AGI benchmark – a test designed to evaluate an AI system’s ability to generalize and acquire new skills efficiently.
The ARC Prize Foundation’s emphasis on open-source solutions and public competitions aligns seamlessly with the Human AI Project’s goals of fostering international collaboration and transparency in AI development, as stated on the ARC Prize Foundation website under ‘AGI’:
‘LLMs are trained on unimaginably vast amounts of data, yet remain unable to adapt to simple problems they haven’t been trained on, or make novel inventions, no matter how basic. Strong market incentives have pushed frontier AI research to go closed source. Research attention and resources are being pulled toward a dead end. ARC Prize is designed to inspire researchers to discover new technical approaches that push open AGI progress forward.’
Like the HGP, the Human AI Project would dedicate part of its funding to ethical governance and oversight. This would include discussions about copyright. The Project could help society consider the ethics of accessing the best source of information in training for free while developing proprietary models on top of it. In the biology space, it’s well known that the Protein Data Bank, which was critical for Google DeepMind’s AlphaFold model to predict protein structure, likely required the equivalent of $10 billion of funding over a period of 50 years. The Project could help in thinking about how we continue to fund AI development or how the proprietary AICs should share revenue with original work creators.
Together, these Paris Principles and the Human AI Project would help advance AI globally in a more open, collaborative, and ethical manner. They would build upon the achievements of leading open-source contributors from Europe to the Middle East, India, and now China, within the existing open-source software and AI-specific frameworks and platforms.
History Rhymes with AI
The opportunity before us is immense. Mistral AI, kyutai, BFL, Stability, and more recently DeepSeek have given the public hope that a future where cooperation rivals or even surpasses the proprietary AICs is possible.
We are still in the early stages of this technological breakthrough. We should be grateful for the contributions AICs have made to the field. The AI Action Summit should be an opportunity to foster cooperative innovation on an unprecedented scale and bring as many players as possible to the right side of history.
It’s 1789 all over again. We are witnessing a fight for technological sovereignty, a decentralization of power, and a call for AI as a public good. And just like in 1789, this revolution will not be contained.