Windows AI Development: New Build 2025 Features

We are thrilled to be at Build again, and every year it is such a special moment to connect with the global developer community. It’s incredibly energizing to share what we’ve been working on and to learn how developers are using Microsoft platforms to build the next generation of innovation.

At Microsoft, we believe the future of AI is being built across the cloud, the edge, and Windows. Windows is, and will remain, an open platform that empowers developers to do their best work and provides ultimate flexibility.

Our North Star is clear: to make Windows the best platform for developers, purpose-built for the new era of AI, where intelligence is integrated into software, silicon, and hardware. From Windows 11 on the client to Windows 365 in the cloud, we are building a platform that supports a wide range of scenarios, from AI development to core IT workflows, all with security-first principles.

Over the past year, we have spent time listening to developers, understanding what they value most and where we have opportunities to continue making Windows an even better development environment, particularly in the age of AI development. This feedback has shaped our view of the Windows developer platform and the updates we are introducing today.

New on Windows at Build:

  • Windows AI Foundry, the evolution of the Windows Copilot Runtime, offers a unified and reliable platform supporting the AI development lifecycle, spanning model selection, optimization, fine-tuning, and deployment across client and cloud. Windows AI Foundry includes these capabilities:

  • Windows ML, the foundation of the AI platform and the built-in AI inference runtime on Windows. This allows developers to bring their own models and deploy them efficiently across an ecosystem of silicon partners including AMD, Intel, NVIDIA, and Qualcomm, spanning CPUs, GPUs, and NPUs.

  • Windows AI Foundry integrates Foundry Local and other model catalogs, like Ollama and NVIDIA NIMs, giving developers fast access to a variety of ready-to-use, open-source models on Windows silicon. This empowers developers to browse, test, interact with, and deploy models within their local applications.

  • Additionally, Windows AI Foundry provides ready-to-use AI APIs powered by Windows built-in models on Copilot+ PCs for key language and vision tasks such as text intelligence, image captioning, text recognition, custom prompting, and object eraser. We are announcing new features such as LoRA (low-rank adaptation) for fine-tuning our built-in SLM, Phi Silica, with custom data. We are also announcing new APIs for semantic search and knowledge retrieval so developers can use their custom data to build natural language search and RAG (Retrieval Augmented Generation) scenarios in their applications.

  • Evolving Windows 11 for an agentic future with native support for the Model Context Protocol (MCP). The integration of MCP with Windows will provide a standardized framework for AI agents to connect to native Windows applications, enabling those applications to participate seamlessly in agentic interactions. Windows applications can expose specific functionality to augment the skills and capabilities of agents installed on Windows PCs. We will provide a private developer preview with select partners over the coming months to begin gathering feedback.

  • App Actions on Windows, a new capability that lets developers surface specific functionality within their applications as actions, unlocking new entry points to engage new users and increasing discoverability.

  • New Windows security features, such as the Virtualization-Based Security (VBS) Enclave SDK and Post-Quantum Cryptography (PQC), give developers additional tools to more easily develop secure solutions as the threat landscape continues to evolve.

  • Windows Subsystem for Linux (WSL) is now open source, inviting developers to contribute, customize, and help us integrate Linux more seamlessly into Windows.

  • New improvements to popular Windows developer tools, including Terminal, WinGet, and PowerToys, empower developers to increase productivity and focus on what they do best – coding.

  • New Microsoft Store growth features, now including free developer registration, Web Installer for Win32 apps, analytics reporting, app promotion programs, and more to help app developers increase user acquisition, discovery, and engagement on Windows.

Windows AI Foundry

We want to democratize the ability for developers to build, experiment, and reach users with breakthrough AI experiences. We have heard from developers who are just starting out with AI development that they prefer ready-to-go solutions for specific tasks to accelerate AI integration into their apps. Developers have also told us they need an easy way to browse, test, and integrate open-source models in their apps. And developers building their own advanced models have told us they want fast, powerful solutions to efficiently deploy models across various kinds of silicon. To meet these varied needs, we have evolved Windows Copilot Runtime into Windows AI Foundry, which offers many powerful capabilities.

Developers can more easily access ready-to-use open-source models

Windows AI Foundry integrates Foundry Local and other model catalogs, like Ollama and NVIDIA NIMs, giving developers fast access to a variety of ready-to-use open-source models on Windows silicon. With the Foundry Local model catalog, we have done the heavy lifting of optimizing these models across CPU, GPU, and NPU, making them ready to use out of the box.

During the preview, developers can access Foundry Local by installing it from WinGet (winget install Microsoft.FoundryLocal) and using the Foundry Local CLI to browse, download, and test models. Foundry Local automatically detects the device hardware (CPU, GPU, and NPU) and lists compatible models that developers can try. Developers can also leverage the Foundry Local SDK to easily integrate Foundry Local into their applications. In the coming months, we will offer these capabilities directly in Windows 11 and the Windows App SDK, which will optimize the developer experience for publishing production applications that use Foundry Local.
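
During the preview, models started through Foundry Local are served from a local, OpenAI-compatible endpoint, so existing client libraries can call them. The Python sketch below is a minimal illustration under that assumption; the port and model alias are placeholders, and the real values come from the Foundry Local CLI and SDK.

```python
# Minimal sketch: calling a locally served model from Python, assuming Foundry Local
# exposes an OpenAI-compatible endpoint on localhost. The URL, key, and model alias
# below are illustrative placeholders; check the Foundry Local CLI for actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5273/v1",  # placeholder endpoint reported by Foundry Local
    api_key="local",                      # local service; the key is not validated
)

response = client.chat.completions.create(
    model="phi-4-mini",                   # placeholder alias from the local model catalog
    messages=[{"role": "user", "content": "Summarize Windows AI Foundry in one sentence."}],
)
print(response.choices[0].message.content)
```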

While we provide ready-to-use open-source models, an increasing number of developers are building their own models and bringing breakthrough experiences to end users. Windows ML is the foundation of the AI platform and the built-in AI inference runtime that streamlines efficient model deployment across CPUs, GPUs, and NPUs.

Windows ML is a high-performance, local inferencing runtime built directly into Windows that makes it straightforward to ship production applications using open-source or proprietary models, including our own Copilot+ PC experiences. It has been built from the ground up to be optimized for model performance and agility, and to remain responsive to the rate of innovation in model architectures, operators, and optimizations across all layers of the stack. Windows ML is an evolution of DirectML (DML), built on our learnings from the past year of listening to numerous developers, our silicon partners, and our own teams building AI experiences for Copilot+ PCs. Windows ML takes this feedback into account, enabling our silicon partners (AMD, Intel, NVIDIA, Qualcomm) to leverage an execution provider contract to optimize model performance and keep pace with the rate of innovation.

Windows ML provides a number of benefits:

Simplified Deployment: Enables developers to ship production applications without bundling ML runtimes, hardware execution providers, or drivers with their applications. Windows ML detects the hardware on the client device, retrieves the corresponding execution provider, and selects the right one for inference based on the configuration provided by the developer.

Automatic adaptation to future generations of AI silicon: Windows ML enables developers to confidently build AI applications in a rapidly evolving silicon ecosystem. As new hardware is released, Windows ML keeps all the necessary dependencies up to date, and adapts to new silicon, while maintaining model accuracy and hardware compatibility.

Tools to prepare and publish high-performance models: Tools included in the AI Toolkit for VS Code, covering model conversion, quantization, and optimization, streamline the process of preparing and publishing high-performance models.
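
To make the execution-provider idea concrete, the sketch below shows the same pattern in plain ONNX Runtime for Python. This is an illustration of provider selection and CPU fallback, not the Windows ML API surface itself (Windows ML performs this selection automatically); "model.onnx" and the input shape are placeholders.

```python
# Illustration only: execution-provider selection and CPU fallback in plain ONNX Runtime.
# Requires the onnxruntime-directml package on Windows for the DirectML provider.
import numpy as np
import onnxruntime as ort

# Prefer a hardware-accelerated provider when present, otherwise fall back to CPU.
providers = ["DmlExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("model.onnx", providers=providers)  # "model.onnx" is a placeholder

# Build a dummy input; replace the shape and dtype with what the model actually expects.
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {input_name: dummy})
print("Providers in use:", session.get_providers())
```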

We are working closely with all silicon partners (AMD, Intel, NVIDIA, Qualcomm) to seamlessly integrate their execution providers with Windows ML to provide the best model performance for their specific silicon.

Many application developers such as Adobe, Bufferzone, McAfee, Reincubate, Topaz Labs, Powder, and Wondershare have already partnered with us to utilize Windows ML for deploying models across AMD, Intel, NVIDIA, and Qualcomm silicon. To learn more about Windows ML, please visit this blog.

Quickly and easily integrate AI with APIs powered by Windows built-in models

We are providing ready-to-use AI APIs powered by Windows built-in models for key tasks such as text intelligence and image processing. These include language APIs such as text summarization and rewriting, and vision APIs such as image captioning, text recognition (OCR), image super-resolution, and image segmentation, all available as stable APIs in the latest release of Windows App SDK (1.7.2). These APIs eliminate the overhead of model building and deployment. They run locally on the device, helping to provide privacy, security, and compliance at no additional cost, and are optimized for NPUs on Copilot+ PCs. Application developers such as Dot Vista, Wondershare's Filmora, Pieces for Developers, Powder, iQIYI, and more are already leveraging our ready-to-use AI APIs in their applications.

We have also heard from developers that they need to fine-tune LLMs with their custom data to get the desired outputs for specific scenarios. Many have also shared that fine-tuning foundation models is a daunting task. That’s why we are announcing LoRA (low-rank adaptation) support for Phi Silica.

Introducing LoRA (low-rank adaptation) for Phi Silica to fine-tune our built-in SLM with custom data

LoRA improves the efficiency of fine-tuning by updating only a small subset of the model’s parameters with custom data. This allows for improved performance on a desired task without compromising the model’s overall capabilities. LoRA is in public preview on Snapdragon X Series NPUs starting today and will be available for Intel and AMD Copilot+ PCs over the coming months. Developers can access LoRA for Phi Silica in Windows App SDK 1.8 Experimental 2.
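
For context, the low-rank idea behind LoRA can be written compactly: the pretrained weight matrix stays frozen, and only a small pair of factor matrices is trained on the custom data.

```latex
% LoRA: the pretrained weights W_0 are frozen; only the low-rank factors A and B are trained.
W = W_0 + \Delta W = W_0 + B A,
\qquad B \in \mathbb{R}^{d \times r},\;
A \in \mathbb{R}^{r \times k},\;
r \ll \min(d, k)
```

Because r is small, the adapter adds only a tiny fraction of trainable parameters, which is what makes per-task fine-tuning of an on-device SLM such as Phi Silica practical.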

Developers can get started with LoRA training for Phi Silica through the AI Toolkit for VS Code: select the fine-tuning tool, choose the Phi Silica model, configure the project, and launch training in Azure with a custom dataset. Once training completes, developers can download the LoRA adapter, apply it on top of the Phi Silica API, and compare responses with and without the adapter.

Introducing Semantic Search and Knowledge Retrieval for LLMs

We are introducing new Semantic Search APIs to help developers create powerful search experiences with their own application data. These APIs provide support for semantic search (searching by meaning, including image search) and lexical search (searching by exact words), allowing users to find what they need in a more intuitive and flexible way.

These search APIs run locally on all device types, offering seamless performance and privacy. On Copilot+ PCs, semantic capabilities are enabled for a premium experience.

In addition to traditional search, these APIs support RAG (Retrieval Augmented Generation), allowing developers to use their own custom data to ground LLM outputs.

These APIs are currently available as a private preview.
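
Since these APIs are still in private preview, the sketch below illustrates the underlying flow generically in Python rather than the Windows API surface: embed application data, retrieve the closest passages by meaning, and place them in an LLM prompt. Every name in it is a hypothetical placeholder, and the toy embed() stands in for a real embedding model.

```python
# Conceptual sketch of semantic search feeding RAG. This is NOT the Windows API
# (which is in private preview); every name here is a hypothetical placeholder.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model, just so the sketch runs end to end.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

documents = [
    "Invoices are stored under Settings > Billing.",
    "Dark mode can be toggled from the View menu.",
]
doc_vectors = [embed(d) for d in documents]

def semantic_search(query: str, k: int = 1) -> list[str]:
    # Rank documents by cosine similarity to the query: search by meaning, not exact words.
    q = embed(query)
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
              for v in doc_vectors]
    top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:k]
    return [documents[i] for i in top]

# RAG: retrieved passages are placed in the prompt so the LLM answers from custom data.
question = "How do I switch to dark mode?"
context = "\n".join(semantic_search(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```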

In summary, Windows AI Foundry provides capabilities that meet the needs of developers at any stage of their AI journey: ready-to-use APIs powered by built-in models, tools for customizing Windows built-in models, and a high-performance inference runtime that helps developers ship their own models and deploy them across silicon. With Foundry Local integrated into Windows AI Foundry, developers can also access a rich catalog of open-source models.

Windows AI Foundry ISV Adoption

We are excited to celebrate our incredible community of developers building experiences with on-device AI on Windows 11 today, and we cannot wait to see what else developers will build with these rich capabilities provided by Windows AI Foundry.

Introducing Native Model Context Protocol (MCP) Support to Power Agentic Ecosystem on Windows 11

As the world moves toward an agentic future, Windows is evolving to provide the tools, capabilities, and security paradigms that let agents operate and augment their skills within Windows, providing meaningful value to customers.

The MCP platform on Windows will provide a standardized framework for AI agents to connect to native Windows applications, which can expose specific functionality to augment the skills and capabilities of those agents on Windows 11 PCs. This infrastructure will be available as a private developer preview with select partners over the coming months to begin gathering feedback.

Security and Privacy First: With the new MCP capabilities, we recognize that as we continue to expand MCP and other Agentic features, we will be continuously learning, and our first priority is ensuring we’re building on a secure foundation. Here are some principles that guide our responsible development of MCP on Windows 11:

  • We are committed to making the MCP registry for Windows a trusted ecosystem of MCP servers that meet strong security benchmark standards.

  • User control is the guiding principle in our development of this integration. By default, agent access to MCP servers is turned off. When enabled, all sensitive actions that the agent performs on behalf of the user will be auditable and transparent.

  • MCP server access will be governed following the principle of least privilege, enforced through declarative capabilities and isolation where applicable, ensuring users can control the privileges granted to MCP servers and helping to limit the impact of any attack on any particular server.

Security isn’t a one-time feature, but a continuous commitment. As we expand MCP and other Agentic capabilities, we’ll continue to evolve our defenses. To learn more about our security approach, please visit Securing the Model Context Protocol: Building a secure Agentic future on Windows.

We are introducing the following components in the MCP platform on Windows:

MCP registry for Windows: This will be a single, secure, and trusted source for AI agents to discover MCP servers accessible on Windows. Agents can leverage the MCP registry for Windows to discover MCP servers installed on the client device, leverage their expertise, and provide meaningful value for end users.

MCP servers for Windows: Windows system functionality such as the file system, windowing, and Windows Subsystem for Linux will act as MCP servers that agents can interact with.

Developers can wrap desired functions and capabilities in their applications as MCP servers and make them available through the MCP registry for Windows. We are introducing App Actions on Windows, a new developer capability that can also act as a built-in MCP server, enabling applications to expose their functionalities to agents.
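
For a sense of what this looks like in practice today, here is a minimal sketch of exposing a single app function as an MCP tool using the open-source reference MCP Python SDK. This uses the cross-platform SDK rather than the Windows MCP registry or App Actions integration, which remain in private developer preview, and the tool itself is a hypothetical example.

```python
# Minimal MCP server sketch using the reference MCP Python SDK ("mcp" package).
# It exposes one hypothetical app function as a tool that an agent can discover and call.
# This is not the Windows MCP registry integration, which is in private developer preview.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-app")

@mcp.tool()
def search_notes(query: str) -> list[str]:
    """Return note titles that match the query (placeholder implementation)."""
    notes = ["Build 2025 ideas", "Grocery list", "Windows ML follow-ups"]
    return [n for n in notes if query.lower() in n.lower()]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default so a local agent can connect
```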

MCP Architecture on Windows

We are partnering with application developers such as Anthropic, Perplexity, OpenAI, and Figma to build this platform as they integrate their MCP functionality into applications on Windows.

As Rich O’Connell, Strategic Alliances Lead at Anthropic shared, “We’re excited to see the continued adoption of the Model Context Protocol and the thriving ecosystem of integrations built by popular services and the community. LLMs benefit from connecting to your world of data and tools, and we look forward to the value users will experience from connecting Claude to Windows.”

Aravind Srinivas, Co-founder and CEO of Perplexity shared: “At Perplexity, like Microsoft, we are focused on truly useful trusted experiences. MCP in Windows brings assistive AI experiences to one of the most impactful operating systems in the world.”

Kevin Weil, Chief Product Officer at OpenAI shared: “We are excited to see Windows embrace AI agent experiences through its adoption of the Model Context Protocol. This paves the way for ChatGPT to seamlessly connect with the Windows tools and services that people use every day. We look forward to empowering developers and users to create powerful, context-rich experiences through this integration.”

These early collaborations are setting the foundation for our commitment to keeping Windows an open platform and evolving it for an agentic future. The momentum behind MCP provides a great opportunity for developers to increase app discovery and engagement.

Introducing Windows App Actions, a New Developer Feature to Increase App Discoverability

We have heard from developers that keeping their apps top of mind for users and driving increased usage is paramount for their growth. As a developer company ourselves, we deeply understand this core need. We are therefore introducing Windows App Actions. App Actions gives developers a new way to increase the discoverability of their apps’ capabilities, unlocking new entry points to engage new users.

Leading apps across industries including productivity, creativity, and communication are already using App Actions to unlock new engagement surfaces. Zoom, Filmora, Goodnotes, Todoist, Raycast, Pieces for Developers, and Spark Mail are among the first developers to adopt this feature.

Developers can use:

  • App Actions APIs to author actions for their desired capabilities. Developers can also utilize actions developed by other related apps to provide complementary functionality, thereby increasing their apps’ stickiness. These APIs are accessible via Windows SDK 10.0.26100.4188 or above.

  • App Actions Test Environment to test the functionality and user experience of their App Actions. Developers can download the testing tool via the Microsoft Store.

Powerful AI Developer Workstations for High-Compute and Local Inference Workloads

Developers building high compute AI workloads have told us that they not only need reliable software but also powerful hardware to support local AI development. We have partnered with a range of OEM and silicon partners to provide powerful AI developer workstations.

OEM partners such as Dell, HP, and Lenovo offer a range of Windows-based systems, providing flexibility in hardware specifications and budget. The Dell Pro Max Tower delivers impressive hardware specifications for powerful performance, making it a great choice for AI model authoring on a GPU or CPU and for local model fine-tuning. For processing power with space efficiency, the HP Z2 Mini G1a is a powerful mini workstation. The new Dell Pro Max 16 Premium, HP ZBook Ultra G1a, and Lenovo P14s/P16s are all Copilot+ PCs, offering incredible mobility for developers.

New in Windows Platform Security

Announcing VBS Enclave SDK (preview) for secure computing needs

Security is at the forefront of innovation and everything that Microsoft does. In the age of AI, more and more applications need to protect their data against malware and even malicious users and admins. In 2024, we introduced Virtualization-Based Security (VBS) Enclave technology to provide a trusted execution environment where applications can perform secure computing, including cryptographic operations, protected against admin-level attacks. This is the same foundation that protects our Recall experience on Copilot+ PCs. We are now enabling developers to leverage this security foundation. The VBS Enclave SDK is now available in public preview; it includes a collection of libraries and tools that make programming the secure region a more natural experience, and developers can clone the repository to get started.

It starts with a tool that creates the API projection layer. Developers define the interface between the host application and the secure region, and the tool does the heavy lifting of validating parameters and handling memory management and security checks. This allows developers to focus on their business logic while the secure region protects the parameters, data, and memory. In addition, the libraries make it easy for developers to handle common tasks such as secure region creation, encrypting and decrypting data, managing a thread pool, and reporting telemetry.

Post-Quantum Cryptography arrives in Windows Insiders and Linux

We have previously discussed the security challenges presented by advances in quantum computing, and have taken steps to contribute to quantum security for the industry as a whole, including the addition of PQC algorithms to our core cryptographic library, SymCrypt.

We will soon be providing PQC capabilities to Windows Insiders and Linux (SymCrypt-OpenSSL version 1.9.0). This integration is an important first step in enabling developers to experiment with PQC in their environments and evaluate compatibility, performance, and integration with existing security systems. Early access to PQC capabilities helps security teams identify challenges, optimize strategies, and streamline transitions as industry standards evolve. By proactively addressing the security concerns of current cryptography standards, we are working to pave the way for a digital future that embraces the benefits of quantum computing and mitigates security risks.

New experiences designed to empower every developer to be more productive on Windows 11

Windows Subsystem for Linux (WSL) provides a robust platform for AI development on Windows by making it easy to run both Windows and Linux workloads side by side. Developers can easily share files, GUI applications, GPUs, and more between Windows and Linux environments without additional setup.

Announcing Windows Subsystem for Linux is now Open Source

We are excited to announce that we are open sourcing the Windows Subsystem for Linux. With this, we are opening up the code that creates and supports the virtual machine behind WSL distributions and integrates it with Windows features and resources, so the community can contribute. This will unlock new performance and extensibility gains. This is an open invitation for the developer community to help us integrate Linux more seamlessly into Windows and make Windows the preferred platform for modern, cross-platform development.

In fact, making WSL open source was the very first issue submitted in that repository. At the time, the project’s logic couldn’t be separated from the Windows image itself, but since then we’ve moved to WSL 2 distributions and now ship WSL as its own separate application. With that, we are able to close that very first request! Thank you to the wonderful WSL community for all the feedback, ideas, and effort.

We know that building great AI experiences starts with developer productivity, from setting up devices and environments faster to getting all the tools you need in one place. Therefore, we are announcing improvements to popular Windows developer tools like WinGet, PowerToys, and Terminal.

Get Code Ready Faster with WinGet Configuration

Developers can easily set up and replicate development environments using a single, reliable WinGet configuration command. Developers can now capture the current state of their devices, including their applications, packages, and tools (where available in configured WinGet sources), into a WinGet Configuration file. WinGet Configuration has been updated to support Microsoft DSC v3; if installed applications and packages are DSC v3 enabled, their settings will be included in the generated configuration file as well. This will be generally available next month. Please visit the winget-dsc GitHub repository for more information.

Introducing Advanced Windows Settings to help developers control and personalize their Windows experiences

Developers and power users often face challenges customizing Windows to meet their unique needs because some settings are hidden or obscure. Advanced Windows Settings enables developers to easily control and personalize their Windows experience. They can access and configure powerful advanced settings with just a few clicks, all from a central location within Windows Settings. This includes powerful settings such as enabling File Explorer with GitHub version control details. This will be available soon in preview through the Windows Insider Program.

Advanced Windows Settings

Introducing Command Palette in PowerToys

The Command Palette is the next evolution of PowerToys Run and helps developers reduce context switching by providing a single, easily accessible place for their frequently used commands, applications, and workflows. It is customizable, fully extensible, and performant, empowering developers to effectively manage interactions with their favorite tools. It is generally available today.

Edit, a New Command Line Text Editor on Windows

We are introducing Edit, a command-line text editor on Windows, accessible by running “edit” within the command line. This allows developers to edit files directly within the command line, staying within their current workflow and minimizing context switching. It is currently open source and will be available in preview in the Windows Insider Program in the coming months. Head over to the GitHub repository to learn more.

Microsoft Store: Strategic Growth Opportunities for App Developers

The Microsoft Store is a safe and extensible channel for distributing Windows apps. With over 250 million monthly active users and a rapidly expanding catalog — including recent additions like ChatGPT, Perplexity, Fantastical, Day One, Docker, and coming soon, Notion — the Store is becoming Windows’ largest app marketplace. And, with the reimagined AI Hub, we’re making the Microsoft Store on Windows the go-to destination for people to discover how to leverage AI on their devices. For those with Copilot+ PCs, we have launched the new AI Hub experience and AI Badges to highlight experiences across Windows and the developer ecosystem.

Today, we are introducing exciting new capabilities for developers:

  • Free account registration for individual developers — making it easier than ever for everyone to publish apps.

  • Microsoft Store FastTrack, a new free preview program for eligible companies submitting their first Win32 app.

  • App Campaigns public beta, a new developer program