The world of Artificial Intelligence (AI) is constantly evolving, with new terms and technologies emerging at a rapid pace. One such term that has recently gained significant attention is “MCP,” or Model Context Protocol. This concept has sparked considerable excitement within the AI community, drawing parallels to the early days of mobile app development.
As Baidu Chairman Li Yanhong stated at the Baidu Create conference on April 25th, “Developing intelligent agents based on MCP is like developing mobile apps in 2010.” This analogy highlights the potential impact of MCP on the future of AI applications.
Understanding MCP
Even if you’re not yet familiar with MCP, you’ve likely encountered the term “Agent” (or intelligent agent). The surge in popularity of Manus, a Chinese AI startup, in early 2025 brought this concept to the forefront.
The appeal of Agents lies in their ability to get things done. Unlike earlier large language models (LLMs), which primarily served as conversational interfaces, Agents are designed to actively execute tasks by leveraging external tools and data sources; a standalone LLM is limited to its training data and needs complex integration work to reach external resources.
MCP is crucial to realizing the Agent vision, allowing LLMs to seamlessly interact with external tools that support the MCP protocol. This enables them to perform more specific and complex tasks.
Several applications, including Amap and WeChat Read, have already launched official MCP Servers. A developer can now build an AI application simply by choosing a preferred LLM and connecting it to those servers, letting the model run map queries through Amap or retrieve book information from WeChat Read.
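To make that interaction concrete, the sketch below shows, in simplified form, the kind of JSON-RPC 2.0 messages an MCP client exchanges with a map-style MCP server: first listing the available tools, then calling one by name. The tool name and arguments are hypothetical; real servers such as Amap’s define their own.

```python
import json

# MCP is built on JSON-RPC 2.0. A client first asks the server which tools
# it offers, then calls one by name. Message shapes are simplified here;
# the tool name "maps_geocode" and its arguments are hypothetical.

list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "maps_geocode",  # hypothetical map tool
        "arguments": {"address": "Beijing West Railway Station"},
    },
}

# In a real application these payloads are sent over stdio or HTTP by an
# MCP client library rather than assembled by hand.
print(json.dumps(list_tools_request, indent=2))
print(json.dumps(call_tool_request, indent=2))
```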
The MCP wave began in February 2025 and has quickly gained momentum worldwide.
Major players like OpenAI, Google, Meta, Alibaba, Tencent, ByteDance, and Baidu have all announced support for the MCP protocol and launched their own MCP platforms, inviting developers and application service providers to join.
MCP: Unifying the AI Ecosystem
The concept of “super apps” was a hot topic in the AI field in 2024, with expectations of a rapid proliferation of AI applications. However, the AI innovation ecosystem remained fragmented.
The emergence of MCP can be compared to the unification of China under Qin Shi Huang, who standardized writing, transportation, and measurement systems. This standardization greatly facilitated economic activity and trade.
Many market analysts believe that the adoption of MCP and similar protocols will pave the way for a significant surge in AI applications in 2025.
In essence, MCP acts as a “super plug-in” for AI, enabling seamless integration with various external tools and data sources.
The Technical Foundation of MCP
MCP, or Model Context Protocol, was first introduced by Anthropic in November 2024.
As an open standard, MCP lets AI applications communicate with external data sources and tools. Think of it as a universal adapter for LLMs: much like a standard USB port, it defines a single interface through which developers can connect models to various data sources, tools, and workflows in a consistent, organized way.
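To show how small that “adapter” can be on the provider side, here is a minimal server sketch. It assumes the official `mcp` Python SDK and its FastMCP helper (the package and API shown are an assumption about the current SDK), and the weather lookup itself is a stub.

```python
# A minimal MCP server sketch. Assumes the official Python SDK
# (`pip install mcp`) and its FastMCP helper; the weather lookup is a stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-weather")

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a short weather summary for the given city (stubbed)."""
    # A real server would query a weather API here.
    return f"Sunny, 22 degrees C in {city}"

if __name__ == "__main__":
    # Serve the tool over stdio so any MCP-capable client can connect.
    mcp.run()
```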
Overcoming Barriers to AI Application Development
Before the rise of MCP, developing AI applications was a challenging and complex process.
For example, developing an AI travel assistant required an LLM to perform tasks such as accessing maps, searching for travel guides, and creating personalized itineraries based on user preferences.
To enable the LLM to query maps and search for guides, developers faced the following challenges:
- Each AI provider (OpenAI, Anthropic, etc.) implemented Function Calling differently, so switching LLMs meant rewriting the adaptation code that tells the model how to use each external tool (its “user manual,” in effect); without it, the accuracy of the model’s output dropped sharply. A sketch of this per-vendor glue appears after this list.
- There was no unified standard for how LLMs interact with the outside world, so code reusability was low, which held back the development of the AI application ecosystem.
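The fragmentation point can be illustrated with a toy adapter: the same tool has to be re-described for each vendor’s Function Calling format, whereas an MCP server describes it once and any MCP-capable client can discover it. The formats below are simplified versions of OpenAI’s and Anthropic’s tool schemas.

```python
# One tool, two vendor-specific descriptions. The formats are simplified
# approximations of OpenAI's and Anthropic's tool-calling schemas.
tool_schema = {
    "name": "search_guides",
    "description": "Search travel guides for a destination.",
    "parameters": {
        "type": "object",
        "properties": {"destination": {"type": "string"}},
        "required": ["destination"],
    },
}

def to_openai_format(tool: dict) -> dict:
    """OpenAI-style: the schema is wrapped in a 'function' object."""
    return {"type": "function", "function": tool}

def to_anthropic_format(tool: dict) -> dict:
    """Anthropic-style: the JSON Schema lives under 'input_schema'."""
    return {
        "name": tool["name"],
        "description": tool["description"],
        "input_schema": tool["parameters"],
    }

# With MCP, the server publishes one description via tools/list and the
# client-side SDK handles the translation, so this per-vendor glue goes away.
```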
According to Chen Ziqian, an algorithm technology expert at Alibaba Cloud ModelScope, “Before MCP, developers needed to understand LLMs and perform secondary development to embed external tools into their applications. If the performance of the tools was poor, developers had to investigate whether the issue was with the application itself or the tools.”
Manus, the aforementioned AI startup, is a prime example. In an earlier evaluation, Manus was found to call more than ten tools just to write a simple news article: opening a browser, browsing and scraping web pages, writing, verifying, and delivering the final result.
Calling external tools at every step meant writing a “function” to orchestrate how those tools ran; as a result, Manus often terminated tasks under the load and consumed excessive tokens.
The Benefits of MCP
With MCP, developers no longer need to be responsible for the performance of external tools. Instead, they can focus on maintaining and debugging the application itself, significantly reducing development workload.
Individual service providers in the ecosystem, such as Alipay and Amap, maintain their own MCP servers, keep them updated to the latest versions, and simply wait for developers to connect.
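From the application side, consuming such a maintained server can look like the sketch below. It assumes the official `mcp` Python SDK’s stdio client API; the server command and the tool name are placeholders tied to the earlier toy weather server.

```python
# A minimal MCP client sketch. Assumes the official Python SDK's stdio
# client API; "server.py" and the tool name "get_weather" are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch an MCP server process and talk to it over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover available tools
            print([t.name for t in tools.tools])
            result = await session.call_tool("get_weather", {"city": "Beijing"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```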
Limitations and Challenges of MCP
Despite its potential, the MCP ecosystem is still in its early stages and faces several challenges.
Some developers argue that MCP adds an unnecessary layer of complexity and that plain APIs are the simpler solution: in their view, LLMs can already call APIs through existing mechanisms, which makes MCP look redundant.
Most MCP services released by large companies are currently defined by the companies themselves, which decide which functions an LLM may call and how those calls are scheduled. This raises the concern that companies will not expose their most critical, real-time data.
Furthermore, if MCP servers are not officially launched or well-maintained, the security and stability of MCP connections may be questionable.
Tang Shuang, an independent developer, gave the example of a map MCP Server exposing fewer than 20 tools: five of them required latitude and longitude coordinates, and a weather tool required an administrative division ID, yet none of them explained how to obtain those values. The only workaround was to go back into the service provider’s own ecosystem and follow its steps to obtain the information and permissions.
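Problems like this are largely a matter of documentation inside the tool’s own schema. A hypothetical version of such a weather tool, with the parameter description spelling out where the required ID comes from, might look like the following.

```python
# A hypothetical tool description that avoids the problem described above:
# the parameter description tells the model (and the developer) how to get
# the required administrative division ID instead of assuming it is known.
weather_tool = {
    "name": "get_city_weather",
    "description": "Look up current weather for a Chinese administrative division.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "adcode": {
                "type": "string",
                "description": (
                    "6-digit administrative division code. Obtain it first by "
                    "calling the companion 'lookup_adcode' tool with a city "
                    "name; do not guess it."
                ),
            }
        },
        "required": ["adcode"],
    },
}
```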
While MCP’s popularity is evident, the underlying dynamics are complex. Although LLM vendors are willing to provide MCP services, they retain control and are hesitant to benefit other ecosystems. If services are not properly maintained, developers may face increased workload, undermining the purpose of the ecosystem.
The Victory of Open Source
Why is MCP gaining traction now?
Initially, MCP received little attention after its launch by Anthropic. Only a limited number of applications, such as Anthropic’s Claude Desktop, supported the MCP protocol. Developers lacked a unified AI development ecosystem and primarily worked in isolation.
The adoption of MCP by developers has gradually brought it to the forefront. Starting in February 2025, several popular AI programming applications, including Cursor, VSCode, and Cline, announced support for the MCP protocol, significantly raising its profile.
After the developer community embraced it, integration by the major LLM vendors became the decisive factor in MCP’s widespread adoption.
OpenAI’s announcement of support for MCP on March 27th was a crucial step, and Google soon followed.
Google CEO Sundar Pichai had expressed ambivalence toward MCP on X, writing, “To MCP or not to MCP, that is the question.” Yet just four days after that post, Google also announced its support for MCP.
The rapid adoption of MCP by major players in the AI industry highlights its potential to transform the way AI applications are developed and deployed.
MCP: A Deeper Dive into Its Impact and Future
The rise of Model Context Protocol (MCP) signifies more than just a new technical standard; it represents a fundamental shift in how AI applications are conceived, developed, and deployed. By creating a universal communication layer between Large Language Models (LLMs) and external tools, MCP promises to democratize AI development, empowering a broader range of creators to build innovative solutions.
The implications of this shift are far-reaching. Consider the potential for personalized medicine. With MCP, an LLM could seamlessly access patient records, research databases, and diagnostic tools to generate tailored treatment plans. Or imagine a more sophisticated customer service experience, where AI agents can resolve complex issues by interacting with multiple systems, from billing platforms to product databases.
MCP also addresses a critical challenge in the AI space: data silos. By providing a standardized way to access and integrate diverse data sources, MCP can unlock valuable insights that were previously inaccessible. This can lead to more accurate predictions, more effective interventions, and a deeper understanding of complex phenomena.
The “super app” concept, once a distant aspiration, is now within reach, thanks to MCP. Developers can create AI-powered applications that seamlessly integrate a wide range of functionalities, offering users a unified and intuitive experience. This could revolutionize industries such as finance, education, and entertainment.
The standardization driven by MCP is also expected to fuel innovation. By reducing the complexity of AI development, MCP frees up developers to focus on creating novel features and functionalities. This can lead to a Cambrian explosion of AI applications, transforming virtually every aspect of our lives.
However, the success of MCP hinges on addressing the limitations and challenges that currently exist. Ensuring the security and privacy of data accessed through MCP is paramount. Robust security protocols and governance frameworks are needed to prevent misuse and protect sensitive information.
Interoperability is another key concern. While MCP aims to create a universal standard, ensuring that different implementations are compatible and work seamlessly together is essential. This requires collaboration and coordination among vendors, developers, and industry stakeholders.
The governance of the MCP ecosystem is also crucial. Clear guidelines and standards are needed to ensure fairness, transparency, and accountability. This includes defining rules for data sharing, access control, and dispute resolution.
Despite these challenges, the potential benefits of MCP are too significant to ignore. By fostering a more open, collaborative, and standardized AI ecosystem, MCP can unlock a new era of innovation and create a future where AI is more accessible, beneficial, and aligned with human values.
The Path Forward for MCP
As the MCP ecosystem continues to evolve, it will be crucial to address the existing limitations and challenges. This includes:
- Standardization: Developing a more standardized MCP protocol that is independent of individual vendors. This will foster greater interoperability and prevent vendor lock-in. Open-source initiatives and industry consortia can play a vital role in defining and promoting such standards.
- Security: Implementing robust security measures to ensure the safety and reliability of MCP connections. This includes authentication, authorization, encryption, and intrusion detection. Regular security audits and penetration testing are also essential.
- Maintainability: Encouraging the development and maintenance of high-quality MCP servers. This requires providing developers with the tools and resources they need to build and maintain robust and reliable MCP services. Incentives for quality and performance can also be effective.
- Accessibility: Making MCP more accessible to developers of all skill levels. This includes providing clear documentation, tutorials, and sample code. Low-code/no-code platforms and AI-powered development tools can also help lower the barrier to entry.
Addressing these challenges requires a collaborative effort involving vendors, developers, researchers, and policymakers. Open dialogue and knowledge sharing are essential for fostering innovation and ensuring that MCP is developed and deployed in a responsible and ethical manner.
By addressing these challenges, MCP has the potential to unlock a new era of AI innovation, enabling the creation of more powerful, versatile, and user-friendly AI applications. These applications can transform industries, improve lives, and address some of the world’s most pressing challenges.
In conclusion, while MCP is still in its early stages, its potential to transform the AI landscape is undeniable. By fostering a more open, standardized, and collaborative ecosystem, MCP can pave the way for a future where AI is more accessible and beneficial to everyone. The journey may be complex and challenging, but the rewards are well worth the effort.