MCP: The Key to AI Agent Productivity?

MCP: A USB-C for AI Applications

Integrating AI models with external tools has long been a significant hurdle, marked by costly custom work and fragile system architectures. Traditionally, developers had to build a bespoke interface for every new tool or data source: M models and N tools meant up to M×N separate integrations, whereas a shared protocol cuts the work to roughly M+N adapters. This ‘M×N’ integration problem has plagued the industry for years, hindering the seamless adoption of AI across applications.

MCP is designed to tackle these pain points by standardizing the rules of interaction. With MCP, AI models and tools only need to adhere to the protocol’s specification to achieve plug-and-play compatibility. This cuts integration complexity, letting AI models access databases, cloud services, and even local applications directly, without an individual adaptation layer for each tool. The promise of MCP lies in its ability to unlock a more fluid and efficient ecosystem, where AI agents can interact with a wide range of resources without bespoke integrations.
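To make ‘plug-and-play’ concrete, here is a minimal sketch of a tool server written with the official MCP Python SDK (the mcp package); the server name and the add tool are illustrative placeholders, not anything defined by the protocol itself. Any MCP-compatible client can discover and call the tool without a bespoke adapter.

```python
# A minimal MCP tool server using the official MCP Python SDK ("mcp" package).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")  # illustrative server name

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    # Speaks MCP over stdio; any MCP-compatible client can now discover
    # and call `add` without a custom integration layer.
    mcp.run()
```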

MCP’s ability to integrate ecosystems is already evident. For example, Anthropic’s Claude desktop application, when connected to a local file system via an MCP server, allows the AI assistant to directly read document content and generate context-aware responses. This direct access to local files enables Claude to provide more informed and relevant assistance, significantly enhancing its usability. Meanwhile, the Cursor development tool, by installing multiple MCP servers (such as Slack and Postgres), enables seamless multitasking within the IDE. Developers can now interact with different services and data sources directly from their coding environment, streamlining their workflow and boosting productivity.
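For reference, wiring Claude Desktop to a local directory is a configuration exercise rather than a coding one. A typical claude_desktop_config.json entry looks like the following sketch, using the reference filesystem server; the directory path is a placeholder.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/documents"]
    }
  }
}
```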

MCP is becoming what its co-creator Justin Spahr-Summers envisioned: a USB-C for AI applications, a universal interface connecting the entire ecosystem. The idea is simple yet powerful: create a common language that lets AI agents interact with various tools and services without complex, custom integrations. This standardization promises to unlock a new era of AI productivity, making it easier for developers to build and deploy AI-powered applications.

The Journey to Popularity

The journey from MCP’s release to its current popularity is an instructive one, highlighting the interplay between technological innovation, market adoption, and the fast-changing landscape of the AI industry.

When MCP was released in November 2024, it quickly caught the attention of developers and businesses, but it didn’t explode in popularity right away. At the time, the value of intelligent agents wasn’t yet clear: even if the ‘M×N’ integration complexity of Agents was solved, no one knew whether AI productivity would take off. The initial response to MCP was cautious, with many industry observers waiting to see whether the technology could deliver on its promises.

This uncertainty stemmed from the difficulty of translating rapidly evolving LLM technology into practical applications. The internet was filled with conflicting opinions about intelligent agents, leading to low confidence in AI’s ability to make a real impact. Even with some promising applications emerging, it was difficult to tell whether AI was truly boosting productivity or just scratching the surface. It would take time to find out. The initial lack of widespread adoption underscores the challenges of introducing new technologies into a market still grappling with the implications of AI.

The turning point came with the release of Manus’s framework and OpenAI’s announcement of support for MCP. These two events acted as catalysts, propelling MCP from a promising concept to a widely adopted standard.

Manus demonstrated the collaborative capabilities of multiple Agents, perfectly capturing what users expected from AI productivity. When MCP enabled a ‘dialogue-as-operation’ experience through a chat interface, allowing users to trigger system-level actions like file management and data retrieval simply by entering commands, a shift in perception began: AI could actually help with real work. Manus showed how MCP could be used to create intelligent agents that could seamlessly interact with different systems, automating tasks and improving user workflows.
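Under the hood, that ‘dialogue-as-operation’ flow is an MCP client session: the model decides which tool fits the user’s request, and the client issues a standardized call. A rough sketch with the MCP Python SDK, assuming a local server script and a list_directory tool (both placeholders):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch a local MCP server as a subprocess; "server.py" is a placeholder.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the server's tools, then trigger a system-level action
            # through the same standardized call path a chat UI would use.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("list_directory", arguments={"path": "."})
            print(result)

asyncio.run(main())
```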

This groundbreaking user experience made Manus’s release a key factor in MCP’s rise: the demonstration of tangible benefits and real-world applications did much to convince developers and businesses of MCP’s value.

OpenAI’s support further elevated MCP to the status of a ‘universal interface’. The endorsement from a major player in the AI industry provided MCP with the credibility and visibility it needed to achieve widespread adoption.

On March 27, 2025, OpenAI announced a major update to its core development tool, the Agents SDK, adding official support for the MCP protocol. With this move by the tech giant, which controls roughly 40% of the global model market, MCP began to resemble foundational infrastructure like HTTP. MCP officially entered the public eye, and its popularity soared. OpenAI’s decision to embrace MCP signaled a significant shift in the industry, paving the way for other companies to follow suit.
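In the Agents SDK, attaching an MCP server to an agent takes only a few lines. The following sketch is based on the SDK’s documented MCP support; the directory path, agent instructions, and prompt are placeholders.

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main() -> None:
    # Run the reference filesystem MCP server as a subprocess.
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"],
        }
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the filesystem tools to answer questions.",
            mcp_servers=[server],  # tools are discovered via MCP, not hand-wired
        )
        result = await Runner.run(agent, "List the files in the project.")
        print(result.final_output)

asyncio.run(main())
```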

This made the dream of an ‘HTTP for AI’ seem possible. Platforms like Cursor, Windsurf, and Cline followed suit and adopted the MCP protocol, and the Agent ecosystem built around MCP grew. The momentum behind MCP continued to build, with more and more companies recognizing its potential to revolutionize the way AI agents interact with the world.

MCP: Is an Agent Ecosystem on the Horizon?

Can MCP really become the de facto standard for AI interaction? Whether it can achieve widespread adoption and dominate the industry remains a subject of ongoing debate and speculation.

On March 11, LangChain co-founder Harrison Chase and LangGraph head Nuno Campos debated whether MCP would become the future standard for AI interaction. Although they didn’t reach a conclusion, the debate sparked a lot of imagination around MCP. The discussion between Chase and Campos highlighted the potential benefits and challenges of MCP, sparking further interest and discussion within the AI community.

LangChain also launched an online poll during the debate. Surprisingly, 40% of participants supported MCP becoming the future standard. The results of the poll reflect the divided opinions within the industry, with a significant portion of participants believing that MCP has the potential to become the dominant standard.

That the remaining 60% didn’t vote for MCP suggests the path to becoming the future standard for AI interaction won’t be easy. The challenges facing MCP are significant, and there is no guarantee that it will ultimately achieve widespread adoption.

One major concern is the disconnect between technical standards and commercial interests, as evidenced by the actions of domestic and international players after MCP’s release. The tension between open standards and proprietary interests is a recurring theme in the technology industry, and it poses a significant challenge to the widespread adoption of MCP.

Shortly after Anthropic released MCP, Google introduced A2A (Agent2Agent). The emergence of competing standards highlights the difficulty of achieving consensus in a rapidly evolving industry.

If MCP paved the way for individual intelligent agents to easily access ‘resource points,’ A2A aimed to build a vast communication network connecting these agents, enabling them to ‘talk’ to each other and work together. While MCP focuses on standardizing the way agents interact with external tools and services, A2A aims to standardize the way agents interact with each other.

From an underlying perspective, both MCP and A2A are vying for control of the Agent ecosystem. The competition between these two standards reflects the broader power struggle within the AI industry, as companies vie for control of the emerging agent ecosystem.

So, what’s happening in the Chinese market? The Chinese market presents a unique dynamic, with its own set of players and priorities.

More activity is concentrated among LLM companies. Since April, Alibaba, Tencent, and Baidu have all announced their support for the MCP protocol. The endorsement from these major Chinese tech companies represents a significant boost for MCP, signaling its growing acceptance within the Chinese market.

Alibaba Cloud’s Bailian platform launched the industry’s first full-lifecycle MCP service on April 9, integrating over 50 tools, including Amap and Wuying Cloud Desktop, and letting users assemble a custom Agent in about five minutes. On April 16, Alipay partnered with the ModelScope community to launch a ‘Payment MCP Server’ service in China, allowing AI agents to access payment capabilities with one click and developers to invoke payment functions through natural-language commands, closing the commercial loop for AI services. Alibaba’s embrace of MCP reflects its broader strategy of building a comprehensive AI ecosystem that spans industries and applications.

On April 14, Tencent Cloud upgraded its LLM knowledge engine to support MCP plug-ins, connecting to ecosystem tools such as Tencent Location Service and WeChat Reading. The integration of MCP into its cloud platform demonstrates Tencent’s commitment to fostering innovation and driving AI adoption across its vast user base.

On April 25, Baidu announced full compatibility with the MCP protocol, launching what it called the world’s first e-commerce transaction MCP and search MCP services. Its Intelligent Cloud Qianfan platform has integrated third-party MCP Servers, indexing resources across the network to reduce development costs. Baidu’s embrace of MCP reflects its effort to leverage its strengths in search and e-commerce to build a leading AI platform in China.

The MCP approach of Chinese LLM companies is a ‘closed loop.’ From Alibaba Cloud’s Bailian platform integrating Amap, to Tencent Cloud supporting MCP plug-ins and connecting to ecosystems like WeChat Reading, to Baidu launching a search MCP service, all are using MCP to leverage their strengths and strengthen their ecosystem barriers. The focus on building closed ecosystems reflects the unique dynamics of the Chinese market, where companies often prioritize control and integration over open standards.

There’s a deep business logic behind this strategic choice, driven by a clear reading of the competitive landscape and the need to protect proprietary assets.

Imagine if Alibaba Cloud allowed users to call Baidu Maps or if Tencent’s ecosystem opened up core data interfaces to external models. The differentiated advantages created by each company’s data and ecosystem moats would collapse. The desire to maintain control over data and ecosystem assets is a key factor driving the adoption of MCP by Chinese LLM companies.

It’s this need for absolute control over ‘connectivity’ that turns MCP, beneath its veneer of technical standardization, into a silent redistribution of infrastructure control in the age of AI. The power dynamics at play underscore the importance of understanding the business logic behind technical standards.

This tension is becoming clear: On the surface, MCP is promoting the standardization of technical protocols through unified interface specifications. In reality, each platform is defining its own connection rules through private protocols. The divergence between the stated goals of standardization and the reality of proprietary implementations poses a significant challenge to the widespread adoption of MCP.

This division between open protocols and ecosystems will inevitably become a major obstacle to MCP becoming a truly universal standard. The fragmentation of the AI ecosystem could hinder the development of interoperable AI agents and limit the potential benefits of standardization.

The Real Value of MCP in the Wave of AI Industrialization

Even if there’s no absolute ‘unified protocol’ in the future, the standard revolution sparked by MCP has opened the floodgates for AI productivity. The impact of MCP extends beyond the specific technical details of the protocol itself, signaling a broader shift towards standardization and interoperability in the AI industry.

Currently, each LLM company is building its own ‘ecological enclave’ through the MCP protocol. This ‘closed-loop’ strategy will expose the deep contradictions of Agent ecosystem fragmentation. While the creation of closed ecosystems may limit the potential for widespread interoperability, it also has the potential to accelerate the development and deployment of AI applications.

However, it will also unlock the capabilities ecosystem builders have accumulated, rapidly forming matrices of applications and pushing AI into real-world deployment. The focus on building complete ecosystems lets companies leverage their existing assets and expertise to create compelling AI solutions for specific industries and applications.

For example, the advantages large companies built in the past (such as Alipay’s payment technology, user scale, and risk-control capabilities) were confined to their own businesses. By exposing them through a standardized interface such as MCP, these capabilities can be called by many more external developers, unlocking new revenue streams and extending their reach to a wider audience.

For example, other companies’ AI Agents don’t need to build their own payment systems; they can call Alipay’s interfaces directly. This attracts more participants onto the large company’s infrastructure, creating dependence and network effects and expanding its ecosystem influence. Such network effects, created by open APIs and standardized interfaces, can be a powerful driver of growth and innovation.
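As an illustration of what ‘calling Alipay instead of building payments’ could look like from the agent side, here is a hypothetical sketch: the server command, tool name, and parameters below are invented for this example and do not describe Alipay’s actual Payment MCP Server interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical payment server: the command, tool name, and arguments below
# are invented for illustration and do not reflect Alipay's real interface.
server_params = StdioServerParameters(command="payment-mcp-server", args=[])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The agent reuses the provider's payment infrastructure through
            # a standard MCP tool call instead of building its own stack.
            result = await session.call_tool(
                "create_payment",
                arguments={"amount": "9.90", "currency": "CNY", "order_id": "demo-001"},
            )
            print(result)

asyncio.run(main())
```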

This ‘enclosure innovation’ is accelerating the industrial penetration of AI technology. The combination of closed ecosystems and open interfaces is driving the adoption of AI across various industries, transforming the way businesses operate and interact with their customers.

From this perspective, it may drive the future Agent ecosystem to present a pattern of ‘limited openness.’ The future of the AI ecosystem is likely to be characterized by a balance between proprietary control and open collaboration.

Specifically, core data interfaces will still be firmly controlled by large companies, but in non-core areas, through the promotion of technical communities and the intervention of regulatory agencies, cross-platform ‘micro-standards’ may gradually form. The emergence of micro-standards could help to address the challenges of fragmentation and interoperability, while still allowing companies to maintain control over their core assets.

This ‘limited openness’ can protect the ecological interests of manufacturers and avoid a completely fragmented technical ecosystem. The balance between proprietary control and open collaboration is crucial for fostering innovation and driving the widespread adoption of AI.

In this process, MCP’s value will also shift from a ‘universal interface’ to an ‘ecological connector.’ The role of MCP is likely to evolve from a focus on standardization to a focus on enabling interoperability between different ecosystems.

It will no longer seek to become the only standardized protocol, but will serve as a bridge for dialogue between different ecosystems. MCP can play a crucial role in facilitating communication and collaboration between different AI platforms and ecosystems.

When developers can easily achieve cross-ecological Agent collaboration through MCP, and when users can seamlessly switch intelligent agent services between different platforms, the Agent ecosystem will truly usher in its golden age. The ultimate goal is to create a seamless and interoperable AI ecosystem that benefits both developers and users.

The prerequisite for all of this is whether the industry can strike a delicate balance between commercial interests and technical ideals. That balance, beyond the value of the tool itself, is the change MCP has set in motion; its success depends on the AI industry reconciling proprietary control with open collaboration.

The construction of the Agent ecosystem doesn’t hinge on the emergence of any single standard protocol, and the implementation of AI doesn’t hinge on any single connection; both hinge on consensus. The key to a thriving AI ecosystem is a shared understanding of the principles and values that will guide its development.

As Anthropic engineer David Soria Parra originally envisioned, ‘We need not only a “universal socket,” but also a “power grid” that allows the sockets to be compatible with each other.’ This power grid requires both technical consensus and a global dialogue about the rules of AI-era infrastructure, a conversation that is as much ethical and societal as it is technical.

In the current era of rapid AI iteration, MCP is pushing vendors to converge on that technical consensus, and its ongoing development and refinement reflect a growing industry agreement on the importance of standardization and interoperability.