In a scene reminiscent of the intricate power struggles in ‘Game of Thrones,’ the AI industry is currently witnessing its own high-stakes drama. While the world’s attention is focused on the competition surrounding model parameters and performance, a silent battle is brewing over AI and agent standards, protocols, and ecosystems.
In November 2024, Anthropic introduced the Model Context Protocol (MCP), an open standard for intelligent agents, aiming to unify the communication protocols between large language models and external data sources and tools. In the months that followed, OpenAI announced MCP support in its Agents SDK. Google DeepMind CEO Demis Hassabis also confirmed that Google’s Gemini model and software development kits would integrate this open standard, calling MCP ‘fast becoming the open standard for the AI agent era.’
Concurrently, Google announced the open-source Agent2Agent Protocol (A2A) at the Google Cloud Next 2025 conference. This protocol aims to break down barriers between existing frameworks and vendors, enabling secure and efficient collaboration between agents in different ecosystems.
These actions by tech giants have unveiled a competition across AI and intelligent agents in terms of connection standards, interface protocols, and ecosystems. The principle of ‘protocol equals power’ is evident. As the global AI landscape takes shape, whoever controls the definition of basic protocol standards in the AI era has the opportunity to reshape the power structure and value distribution order of the global AI industry chain.
The Future AI Ecosystem’s ‘USB-C Port’
With the rapid advancement of AI technology, large language models such as GPT and Claude have showcased impressive capabilities. The real value of these models lies in their ability to interact with the external world’s data and tools to solve real-world problems.
However, this interaction capability has long faced issues of fragmentation and a lack of standardization, requiring developers to implement specific integration logic for different AI models and platforms.
To address this issue, MCP has emerged. As a bridge connecting AI models with the external world, MCP solves several key problems faced during AI interaction.
Before MCP, if an AI model needed to connect to a local database (such as SQLite) to obtain data or call remote tools (such as Slack for team communication, GitHub API to manage code), developers had to write specific connection code for each data source or tool. This process was not only cumbersome and error-prone, but also expensive to develop, difficult to maintain, and hard to scale due to the lack of a unified standard.
When launching MCP, Anthropic made an analogy: MCP is like the USB-C port for AI applications. MCP aims to create a common standard, allowing various models and external systems to use the same protocol for access instead of writing a separate set of integration solutions each time. This makes the development and integration of AI applications simpler and more unified.
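The ‘one protocol instead of N custom integrations’ idea can be sketched in plain Python. This is a hypothetical illustration of the pattern, not MCP’s real wire format: the `ToolServer` class and both toy servers are invented for the example.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class ToolServer:
    """Hypothetical stand-in for an MCP-style server: it exposes
    named tools behind one uniform call interface."""
    name: str
    tools: Dict[str, Callable[..., Any]]

    def call(self, tool: str, **args: Any) -> Any:
        # Every server is invoked the same way, regardless of what
        # sits behind it (a local database, a remote API, ...).
        return self.tools[tool](**args)


# Toy "database" server: answers queries from an in-memory table.
issues = {101: "fix login bug", 102: "update docs"}
db_server = ToolServer("issue-db", {"get_issue": lambda id: issues[id]})

# Toy "chat" server: records messages instead of calling a real API.
sent = []
chat_server = ToolServer("chat", {"post": lambda text: sent.append(text) or "ok"})

# The AI application talks to both through the same protocol surface:
title = db_server.call("get_issue", id=101)
status = chat_server.call("post", text=f"Working on: {title}")
print(title, status)  # fix login bug ok
```

The point of the sketch is the shape of the interface: adding a third tool means writing one more server, not one more bespoke integration inside the AI application.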
For example, in a software development project, an MCP-based AI tool can directly delve into the project code repository, analyze the code structure, understand historical commit records, and then provide developers with code suggestions that are more in line with the actual needs of the project, significantly improving development efficiency and code quality.
In the past, to enable large models and other AI applications to use data, it was usually necessary to copy and paste or upload and download. Even the most powerful models were limited by data isolation, trapped in information silos. Every new data source required a custom integration, making truly interconnected systems difficult to scale and imposing many limitations.
By providing a unified interface, MCP directly bridges AI and data (including local and internet data). Through the MCP server and MCP client, as long as both follow this protocol, ‘everything can be connected.’ This allows AI applications to securely access and operate local and remote data, providing AI applications with an interface to connect to everything.
From an architectural perspective, MCP mainly includes two core parts: the MCP server and the MCP client. Developers can expose their data through the MCP server, which can come from local file systems, databases, or remote services such as Slack and GitHub APIs. AI applications built to connect to these servers are called MCP clients. Simply put, the MCP server is responsible for exposing data, and the MCP client is responsible for accessing the data.
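The server/client split described above can be sketched as a minimal resource-discovery exchange. This is a conceptual sketch, not the MCP SDK: the class names, method names, and URIs below are all invented for illustration.

```python
class ResourceServer:
    """Hypothetical MCP-style server: exposes read-only resources by URI."""

    def __init__(self, resources: dict):
        self._resources = resources

    def list_resources(self) -> list:
        # Discovery: tell clients what data is available.
        return sorted(self._resources)

    def read_resource(self, uri: str) -> str:
        if uri not in self._resources:
            raise KeyError(f"unknown resource: {uri}")
        return self._resources[uri]


class Client:
    """Hypothetical MCP-style client: discovers what a server offers, then reads it."""

    def __init__(self, server: ResourceServer):
        self.server = server

    def gather(self) -> dict:
        return {uri: self.server.read_resource(uri)
                for uri in self.server.list_resources()}


# One server may front local files, another a database or remote service;
# the client code is identical either way.
server = ResourceServer({
    "file:///notes/todo.txt": "ship v2",
    "db://crm/accounts": "42 active accounts",
})
client = Client(server)
print(client.gather())
```

In real MCP the roles are the same: the server decides what to expose and the client consumes it through the shared protocol, so either side can be swapped out without rewriting the other.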
When AI models access external data and tools, security is an important consideration. By providing standardized data access interfaces, MCP significantly reduces direct exposure to sensitive data, lowering the risk of data leakage.
MCP has built-in security mechanisms that let data sources share data with AI in a controlled manner, and let AI feed processing results back securely. Only verified requests can access specific resources. This adds another layer of defense to data security, easing corporate concerns and laying a solid foundation for deep AI adoption in enterprise-level scenarios.
For example, the MCP server controls its own resources and does not need to provide sensitive information such as API keys to large model technology providers. This way, even if the large model is attacked, attackers will not be able to obtain this sensitive information, effectively isolating risks.
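The isolation argument can be sketched in a few lines: the server keeps its upstream credentials private, verifies each caller, and returns only results. Everything here, the token scheme, the placeholder key, the handler shape, is a hypothetical illustration, not MCP’s actual auth model.

```python
# Invented placeholder: in a real deployment this would come from the
# server's own secret store and would never be sent to the model provider.
UPSTREAM_API_KEY = "sk-kept-on-the-server"
ALLOWED_TOKENS = {"client-token-123"}


def call_upstream(query: str) -> str:
    # Stand-in for a real API call that would authenticate
    # with UPSTREAM_API_KEY internally.
    return f"results for {query!r}"


def handle(request: dict) -> dict:
    """Only verified requests reach the tool; the key never appears
    in any response, so a compromised client cannot exfiltrate it."""
    if request.get("token") not in ALLOWED_TOKENS:
        return {"error": "unauthorized"}
    return {"data": call_upstream(request["query"])}


print(handle({"token": "client-token-123", "query": "open invoices"}))
print(handle({"token": "stolen", "query": "open invoices"}))
```

The design choice is that credentials live only on the resource-owning side; the model (and anyone who attacks it) sees results, never keys.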
It can be said that MCP is a natural product of AI technology development and an important milestone. It not only simplifies the development process of AI applications, but also creates conditions for the prosperity of the AI ecosystem.
As an open standard, MCP greatly stimulates the vitality of the developer community. Global developers can contribute code and develop new connectors around MCP, continuously expanding its application boundaries, forming a virtuous ecological cycle, and promoting the deep integration of AI and data in various industries. This openness makes it easier for AI applications to connect to various services and tools, forming a rich ecosystem, ultimately benefiting users and the entire industry.
MCP’s advantages are not only reflected at the technical level; more important is the practical value it brings to different fields. In the AI era, the ability to acquire and process information is decisive, and MCP lets multiple intelligent agents draw on shared data and tools and collaborate, playing to each other’s strengths.
For example, in the medical field, intelligent agents can connect to patient electronic medical records and medical databases through MCP, and combined with doctors’ professional judgments, can provide initial diagnostic suggestions more quickly. In the financial industry, intelligent agents can collaborate to analyze financial data, track market changes, and even automatically conduct stock trading. This division of labor and cooperation between intelligent agents makes data processing more efficient and decision-making more accurate.
Reviewing MCP’s development since its release, it is not difficult to find that its growth has been remarkably fast. At launch in late 2024, MCP defined its core communication protocol, providing basic client-server registration and message exchange. This is like creating a universal language for AI applications and data sources, allowing them to communicate with each other instead of each speaking its own language.

Soon afterwards, MCP expanded its functions to support calling external APIs and sharing data, so that connected systems could not only chat, but also exchange information and jointly process tasks.

By early 2025, the MCP ecosystem reached a new level. Developer toolkits and sample projects were launched, and the number of community-contributed plug-ins exceeded 100, achieving a ‘blooming’ situation.
Recently, Microsoft integrated MCP into its Azure OpenAI service, and Google DeepMind also announced that it will provide support for MCP and integrate it into the Gemini model and SDK. Not only large technology companies, but also AI startups and development tool providers have joined MCP, such as Block, Apollo, Zed, Replit, Codeium, and Sourcegraph.
The rise of MCP has attracted rapid follow-up and competition from Chinese technology companies such as Tencent and Alibaba, regarding it as an important step in the AI ecosystem strategy. For example, recently Alibaba Cloud’s Bailian platform launched a full life cycle MCP service, eliminating the need for users to manage resources, develop and deploy, and engineer operations and maintenance, reducing the intelligent agent development cycle to minutes. Tencent Cloud released the ‘AI Development Kit,’ which supports MCP plug-in hosting services to help developers quickly build business-oriented intelligent agents.
The ‘Invisible Bridge’ for Multi-Agent Collaboration
As the MCP protocol transforms intelligent agents from chat tools into action assistants, tech giants are beginning to build ‘small courtyards and high walls’ of standards and ecosystems on this new battlefield.
Compared with MCP, which focuses on connecting AI models with external tools and data, A2A goes a step further, focusing on efficient collaboration between intelligent agents.
The original intention of the A2A protocol is simple: to enable intelligent agents from different sources and manufacturers to understand and collaborate with each other, bringing greater autonomy to the collaboration of multiple intelligent agents.
This is like the WTO aiming to reduce tariff barriers between countries. Intelligent agents from different suppliers and frameworks are like independent countries. Once A2A is adopted, it is equivalent to joining a free trade zone, where they can communicate in a common language, collaborate seamlessly, and jointly complete complex workflows that a single intelligent agent cannot complete independently.
The specific interoperability form of the A2A protocol is achieved by facilitating communication between the Client Agent and the Remote Agent. The client agent is responsible for formulating and communicating tasks, and the remote agent takes action based on these tasks to provide the correct information or perform corresponding operations.
In this process, the A2A protocol has the following key capabilities:
First, intelligent agents can advertise their capabilities through ‘agent cards’: machine-readable JSON documents that allow client agents to identify which remote agent is best suited to perform a specific task.
Once the appropriate remote agent is identified, the client agent can use the A2A protocol to communicate with it and assign the task to it.
Task management is an important part of the A2A protocol. Communication between the client and remote agents revolves around completing tasks. The protocol defines a ‘task’ object. For simple tasks, it can be completed immediately; for complex and long-term tasks, intelligent agents can communicate with each other to maintain synchronization on the task completion status.
In addition, A2A also supports collaboration between intelligent agents. Multiple intelligent agents can send messages to each other, which can contain contextual information, replies, or user instructions. In this way, multiple intelligent agents can work together better to complete complex tasks together.
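The discovery-and-delegation flow above can be sketched in a few lines of Python. This is a conceptual illustration only: the card fields, skill names, endpoints, and task states below are invented for the example and are not taken from the A2A specification.

```python
# Hypothetical agent cards: JSON-style descriptions of what each
# remote agent can do (all field names invented for illustration).
cards = [
    {"name": "translator", "skills": ["translate"], "endpoint": "https://a.example/a2a"},
    {"name": "scheduler", "skills": ["book_meeting"], "endpoint": "https://b.example/a2a"},
]


def pick_agent(cards: list, skill: str) -> dict:
    """Client-side discovery: choose a remote agent whose card advertises the skill."""
    for card in cards:
        if skill in card["skills"]:
            return card
    raise LookupError(f"no agent advertises {skill!r}")


def run_task(skill: str, payload: str) -> dict:
    """Sketch of a task lifecycle: submitted -> working -> completed."""
    agent = pick_agent(cards, skill)
    task = {"id": "task-1", "skill": skill, "state": "submitted", "agent": agent["name"]}
    task["state"] = "working"                                 # remote agent accepted the task
    task["result"] = f"{agent['name']} handled {payload!r}"   # stand-in for the real work
    task["state"] = "completed"
    return task


task = run_task("translate", "bonjour")
print(task["agent"], task["state"])  # translator completed
```

For a long-running job, the ‘working’ stage would span many status updates pushed back to the client; the card-then-task shape stays the same.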
When designing this protocol, Google followed five key principles. First, A2A focuses on enabling intelligent agents to collaborate in their natural, unstructured modes, even if they do not share memory, tools, and context.
Second, the protocol is built on existing, popular standards, including HTTP, Server-Sent Events (SSE), and JSON-RPC, meaning it is easier to integrate with existing IT stacks that companies already use on a daily basis.
For example, an e-commerce company uses the HTTP protocol daily to handle web data transmission and JSON-RPC to transmit data instructions between the front and back ends. After introducing the A2A protocol, the company’s order management system can quickly obtain logistics data updates provided by relevant intelligent agents through HTTP and A2A protocol docking, without having to rebuild complex data transmission channels, making it easy to integrate into the existing IT architecture and making the collaboration of various systems smoother.
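Because A2A layers on JSON-RPC, its requests ride the same envelope format companies already parse every day. The sketch below builds a standard JSON-RPC 2.0 request; the method name and params are hypothetical placeholders for illustration, not taken from the A2A spec.

```python
import json


def jsonrpc_request(method: str, params: dict, id: int = 1) -> str:
    """Build a standard JSON-RPC 2.0 request envelope, the wire
    format that A2A layers on top of HTTP."""
    return json.dumps({
        "jsonrpc": "2.0",   # fixed protocol version marker
        "id": id,           # lets the caller match responses to requests
        "method": method,
        "params": params,
    })


# Hypothetical method and params, chosen for illustration.
payload = jsonrpc_request(
    "tasks/send",
    {"task": {"skill": "track_shipment", "order_id": "A-1001"}},
)
parsed = json.loads(payload)
print(parsed["jsonrpc"], parsed["method"])  # 2.0 tasks/send
```

An existing HTTP stack can carry this payload unchanged, which is exactly the integration point the e-commerce example relies on.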
Third, A2A is designed to support enterprise-level authentication and authorization. Using the A2A protocol can quickly authenticate and securely obtain data, ensuring the security and compliance of data transmission and preventing data leakage risks.
Fourth, A2A is flexible enough to support various scenarios, from quick tasks to in-depth research that may take hours or even days, including workflows with humans in the loop. Throughout the process, A2A can provide users with real-time feedback, notifications, and status updates.
Take a research institution as an example. Researchers use intelligent agents under the A2A protocol to conduct research related to new drug development. Simple tasks, such as quickly retrieving existing drug molecule structure information in the database, can be completed and fed back to the researchers within seconds. However, for complex tasks, such as simulating the reaction of new drug molecules in the human body environment, it may take several days.
During this period, the A2A protocol will continuously push simulation progress to the researchers, such as how many steps have been completed, the current problems encountered, etc., allowing researchers to keep abreast of the situation, just like having an assistant reporting work progress at all times.
Fifth, the world of intelligent agents is not limited to text, so A2A supports various modalities, including audio, images, and video streams.
Imagine that in the future, your intelligent assistant, the company’s CRM system, supply chain management AI, and even intelligent agents on different cloud platforms can ‘chat about tasks and divide work’ like old friends, efficiently completing various needs from simple queries to complex processes, thus opening the era of machine intelligence.
Currently, more than 50 mainstream technology companies already support the protocol, including Atlassian, Box, Cohere, Intuit, MongoDB, PayPal, Salesforce, and SAP.
It is worth noting that these are all companies that have subtle relationships with the Google ecosystem. For example, Cohere, an independent AI startup, was founded in 2019 by three researchers who previously worked at Google Brain; they have a long-term technical partnership with Google Cloud, and Google Cloud provides Cohere with the computing power needed to train models.
Atlassian, whose team collaboration tools such as Jira and Confluence are used by many, also has a partnership with Google, and some of its applications can be used within Google products.
Although Google says that A2A complements Anthropic’s Model Context Protocol, the move recalls how Google once led more than 80 companies in developing the Android system. As more companies join, the commercial value of A2A will grow substantially, and it will accelerate the development of the entire intelligent agent ecosystem.
From ‘Connecting Tools’ to ‘Dominating Ecosystems’
MCP and A2A represent two different paths for AI interconnection. MCP, as the underlying model interaction protocol, ensures seamless interconnection between applications and different models; A2A provides a collaboration framework between intelligent agents on top of that, emphasizing autonomous discovery and flexible collaboration between agents. This layered structure can simultaneously meet the needs of model standardization and intelligent agent collaboration.
At the same time, each has carved out a leading position in its own niche. MCP has advantages in enterprise-level applications, cross-model services, and standardization scenarios; A2A has gained more traction in open-source communities, research projects, and innovative applications.
From a macro perspective, the rise of MCP and A2A is not only related to future AI technology standards, but also heralds a major change in the AI industry landscape. We are witnessing a historic turning point in AI from ‘stand-alone intelligence’ to ‘collaborative networks.’ As the development history of the Internet shows, the establishment of open and standardized protocols will become a key force in promoting industry development.
But from a deeper level, MCP and A2A hide huge commercial interests and the competition for future AI technology discourse power.
In terms of business models, the two are opening up different profit paths. Anthropic plans to launch an enterprise version service based on MCP, charging companies based on API call volume. Companies use MCP to deeply integrate internal data with AI, improve business efficiency, and need to pay for this convenient service.
Google is using the A2A protocol to promote cloud service subscriptions. When companies use A2A to build intelligent agent collaboration networks, they are guided to use Google Cloud’s powerful computing power and related services, thereby increasing Google Cloud business revenue.
In terms of data monopoly, mastering protocol standards means controlling the flow of AI data. Through the A2A protocol, Google collects massive amounts of data during the collaboration of many enterprise intelligent agents. This data feeds back into its core advertising algorithms, further consolidating its dominance in the advertising market. Anthropic wants to use MCP to allow AI to penetrate the core of enterprise data. If it forms a scale advantage, it will also accumulate a large amount of industry data, providing data support for expanding business and developing AI products that are more in line with enterprise needs.
In terms of open-source strategy, although both claim to be open source, they have their own plans. The MCP core protocol is open source, attracting developers to participate in ecosystem construction, but enterprise-level key functions (such as remote connection advanced functions and in-depth processing of multi-modal data) need to be unlocked for a fee, balancing open source and commercial interests. While the A2A protocol is open source, it guides more than 50 enterprise partners to give priority to using Google Cloud services, closely binding the open-source ecosystem with its own commercial system and enhancing user stickiness and platform competitiveness.
Technology itself has no good or evil, but when it is embedded in the chain of interests, it becomes a carrier of power and control. Every technological revolution is reshaping the world’s chain of interests. The industrial revolution shifted the chain of interests from land and labor to capital and machines, while the digital revolution pushed it to data and algorithms.
Open-source tools can certainly explore innovative paths, but don’t expect to use data and algorithm keys to open all doors, because each string of keys is engraved with the platform’s interest password.
While technology companies appear to be opening up the AI ecosystem, they are actually building high and thick ecological walls around application scenarios that are more conducive to themselves, preventing data gold mines from being poached, after all, the ultimate competitiveness in the AI era is still data. The battle is for control of the AI ‘stack’ and the ecosystem it supports.
Whether MCP and A2A will eventually converge is still uncertain. If each camp acts independently, the technology giants are very likely to build walled ‘AI small courtyards.’ Data silos would then deepen: data flows between companies in different protocol camps would be blocked, limiting the scope of AI innovation. Developers would need to master multiple protocols, raising learning costs and development workload and suppressing innovation vitality. The direction of industry innovation would be steered by the giants’ protocols, and start-ups, hard-pressed to support multiple protocols at once, would compete at a disadvantage, hindering the industry’s overall pace of innovation. A fractured ecosystem stifles creativity: multiple competing standards add significant complexity and slow the adoption of powerful AI technologies.
We hope that the rise of MCP and A2A will promote the global AI industry to evolve in the direction of collaboration rather than confrontation.
Just like the railway gauge dispute in the 19th century and the mobile communication standard war in the 20th century, every technological split is accompanied by huge social costs. The consequences of the AI standard and protocol dispute may be more far-reaching. It will determine whether we are moving towards an ‘Internet of Everything’ star federation or falling into a dark forest where the ‘chain of suspicion’ prevails. The stakes are high as we navigate this crucial juncture in AI history. The question isn’t just about protocols; it’s about shaping the very future of intelligent interaction and collaboration. A truly open and interoperable AI ecosystem is vital for fostering innovation and ensuring that the benefits of AI are widely distributed.
The future of AI isn’t just about better models; it’s about how those models connect and work together. MCP and A2A represent crucial pieces of that puzzle, and the choices made now will determine the shape of the AI landscape for years to come. It is vital that collaboration triumphs over competition in the long run.