The post Building Your First MCP Server with .NET and Publishing to NuGet appeared first on .NET Blog.
The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely connect to external data sources and tools. Think of it as a bridge between AI models and the real world — letting assistants access databases, APIs, file systems, and custom business logic.
With .NET 10 and the new MCP templates, you can create powerful servers that extend AI capabilities — and now publish them to NuGet for the entire .NET community to discover and use!
Here’s the exciting part: NuGet.org now supports hosting and consuming MCP servers built with the ModelContextProtocol C# SDK. This means:
Search for MCP servers on NuGet.org using the MCP Server package type filter, and you’ll see what the community is building!
Let’s build a simple MCP server that provides weather information and random numbers. You’ll see how easy it is to get started with the new .NET 10 MCP templates.
Before we start, make sure you have:
First, install the MCP Server template (version 9.7.0-preview.2.25356.2 or newer):
dotnet new install Microsoft.Extensions.AI.Templates
Create a new MCP server with the template:
dotnet new mcpserver -n SampleMcpServer
cd SampleMcpServer
dotnet build
The template gives you a working MCP server with a sample get_random_number tool. But let’s make it more interesting!
Let’s enhance our MCP server with a weather tool that uses environment variables for configuration. Add a new WeatherTools.cs class to the Tools directory with the following method:
using System.ComponentModel;
using ModelContextProtocol.Server;

// Namespace follows the project name; adjust to match your own project.
namespace SampleMcpServer.Tools;

// The [McpServerToolType] attribute marks this class as a container of MCP tools.
[McpServerToolType]
internal class WeatherTools
{
    [McpServerTool]
    [Description("Describes random weather in the provided city.")]
    public string GetCityWeather(
        [Description("Name of the city to return weather for")] string city)
    {
        // Read the environment variable during tool execution.
        // Alternatively, this could be read during startup and passed via IOptions dependency injection.
        var weather = Environment.GetEnvironmentVariable("WEATHER_CHOICES");
        if (string.IsNullOrWhiteSpace(weather))
        {
            weather = "balmy,rainy,stormy";
        }

        var weatherChoices = weather.Split(",");
        var selectedWeatherIndex = Random.Shared.Next(0, weatherChoices.Length);
        return $"The weather in {city} is {weatherChoices[selectedWeatherIndex]}.";
    }
}
Next, update your Program.cs to include .WithTools<WeatherTools>() after the previous WithTools call.
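For reference, the resulting Program.cs might look roughly like the following sketch. The RandomNumberTools class name is assumed here based on the template’s sample tool, and your generated file may differ in details such as logging setup.

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol.Server;
using SampleMcpServer.Tools;

var builder = Host.CreateApplicationBuilder(args);

// Register the MCP server over stdio and expose both tool classes.
builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()
    .WithTools<RandomNumberTools>()
    .WithTools<WeatherTools>();

await builder.Build().RunAsync();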
This tool demonstrates how to read configuration from environment variables and how to use [Description] attributes so AI clients can discover what the tool and its parameters do.
Configure GitHub Copilot to use your MCP server by creating .vscode/mcp.json:
{
"servers": {
"SampleMcpServer": {
"type": "stdio",
"command": "dotnet",
"args": [
"run",
"--project",
"."
],
"env": {
"WEATHER_CHOICES": "sunny,humid,freezing,perfect"
}
}
}
}
Now test it in GitHub Copilot with prompts like asking for the weather in your favorite city.
Update your .mcp/server.json file to declare inputs and metadata:
{
"description": "A sample MCP server with weather and random number tools",
"name": "io.github.yourusername/SampleMcpServer",
"packages": [
{
"registry_name": "nuget",
"name": "YourUsername.SampleMcpServer",
"version": "1.0.0",
"package_arguments": [],
"environment_variables": [
{
"name": "WEATHER_CHOICES",
"description": "Comma separated list of weather descriptions",
"is_required": true,
"is_secret": false
}
]
}
],
"repository": {
"url": "https://github.com/yourusername/SampleMcpServer",
"source": "github"
},
"version_detail": {
"version": "1.0.0"
}
}
Also update your .csproj file with a unique <PackageId>:
<PackageId>YourUsername.SampleMcpServer</PackageId>
Now for the exciting part — publishing to NuGet!
dotnet pack -c Release
dotnet nuget push bin/Release/*.nupkg --api-key <your-api-key> --source https://api.nuget.org/v3/index.json
Tip: Want to test first? Use the NuGet test environment at int.nugettest.org before publishing to production.
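A push to the test environment looks the same apart from the source URL; note that int.nugettest.org uses its own accounts and API keys, and the key below is a placeholder:

dotnet nuget push bin/Release/*.nupkg --api-key <your-test-api-key> --source https://apiint.nugettest.org/v3/index.json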
Once published, your MCP server becomes discoverable on NuGet.org: anyone can find it by filtering search results to the mcpserver package type, and the package page provides a ready-made .vscode/mcp.json file to copy into a project. The generated configuration looks like this:
{
"inputs": [
{
"type": "promptString",
"id": "weather-choices",
"description": "Comma separated list of weather descriptions",
"password": false
}
],
"servers": {
"YourUsername.SampleMcpServer": {
"type": "stdio",
"command": "dnx",
"args": [
"YourUsername.SampleMcpServer",
"--version",
"1.0.0",
"--yes"
],
"env": {
"WEATHER_CHOICES": "${input:weather-choices}"
}
}
}
}
VS Code will prompt for input values when you first use the MCP server, making configuration seamless for users.
With .NET 10 and official NuGet support for .NET MCP servers, you’re now part of a growing ecosystem that’s transforming how AI assistants interact with the world. The combination of .NET’s robust libraries and NuGet’s package management creates endless possibilities for AI extensibility.
This is our first release of the .NET MCP Server project template, and we’ve started with a very simple scenario. We’d love to hear what you’re building, and what you’d like to see in future releases of the template. Let us know at https://aka.ms/dotnet-mcp-template-survey.
Here are some powerful MCP servers you could build next:
Each of these represents a unique opportunity to bridge AI capabilities with real business needs — and share your solutions with the entire .NET community through NuGet.
To go further:
.NET + MCP + NuGet = The future of extensible AI
Happy building, and welcome to the growing community of MCP server creators!
The post .NET 10 Preview 6 is now available! appeared first on .NET Blog.
This release contains the following improvements, including the dnx tool execution script and a new --cli-schema option for CLI introspection.
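As a quick illustration of dnx, it runs a tool straight from a NuGet package; the package name and version below are placeholders, following the same pattern VS Code generates in mcp.json for NuGet-hosted MCP servers:

dnx YourUsername.SampleMcpServer --version 1.0.0 --yes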
This preview release does not contain new C# features.
This preview release does not contain new F# features.
This preview release does not contain new Visual Basic features.
This release includes support for Android API levels 35 and 36, along with enhancements to interop performance, binary size reduction, and diagnostics. A detailed list can be found on dotnet/android GitHub releases.
This release includes updates to Apple platform SDKs aligned with Xcode 16.4 and introduces improvements to binding generation, build reliability, and runtime behavior. A detailed list can be found on dotnet/macios GitHub releases including a list of Known issues.
This release was focused on quality improvements and build performance. A detailed list can be found in release notes.
This release was focused on quality improvements and build performance. A detailed list can be found in release notes.
This release was focused on quality improvements and build performance. A detailed list can be found in release notes.
This release was focused on quality improvements and build performance. A detailed list can be found in release notes.
To get started with .NET 10, install the .NET 10 SDK.
If you’re on Windows using Visual Studio, we recommend installing the latest Visual Studio 2022 preview, which now includes GitHub Copilot agent mode and MCP server support. You can also use Visual Studio Code and the C# Dev Kit extension with .NET 10.
Join us each week and engage with the developers and product managers behind .NET for community standups.
Join us for a live stream unboxing with the team to discuss what’s new in this preview release, with live demos from the dev team!
The team has been making monthly announcements alongside full release notes on the dotnet/core GitHub Discussions and has seen great engagement and feedback from the community.
You can stay up-to-date with all the features of .NET 10 with:
Additionally, be sure to subscribe to the GitHub Discussions RSS news feed for all release announcements.
We want your feedback, so head over to the .NET 10 Preview 6 GitHub Discussion to discuss features and give feedback for this release.
The post Customize AI responses from GitHub Copilot appeared first on .NET Blog.
Note
GitHub Copilot Agent mode is more than code completion; it’s an AI-powered agent that can take actions on your behalf within your development environment. It can create entire applications, including all the necessary files, and then compile the app to verify there are no compile-time errors. When you hear people talk about vibe coding, they’re talking about Agent mode.
One thing remains the same no matter how advanced the AI gets: you have to supply it with clear, detailed, and at times verbose instructions – or prompts – to get the response you want.
There’s a big difference between:
Create a console app that manages the user's todos.
And
Create a console application that manages the user's todos.
The todos will remain only in memory and thus will only be available while the app is running, no persistence.
The application should allow users to add a new item, mark an item as done and delete an item.
Use a loop so the user can continuously choose an action until they explicitly want to quit.
Use the latest version of C# - which is C# 13.
Make sure to apply exception handling.
The code should be legible, favor readability over terseness.
Insert a newline before the opening curly brace of any codeblock (e.g. after if, for, while, etc).
Write clear and concise documentation.
Put a funny emoji at the end of every comment.
It sure would be nice if there were a way to keep the terseness of the first prompt – just get to the point and do what we want! – without having to include the boilerplate formatting and best-practices instructions every single time we interact with the AI.
There sure is – custom instructions.
Custom instructions are how you give the AI specific context about your team’s workflow, your particular style preferences, coding standards, etc. Custom instructions are contained in a markdown file and in it you can provide the AI instructions on how to understand your project better.
Custom instructions define how tasks should be performed. Like which technologies are preferred. Or the physical project structure that should be followed. Or what to check for in a code review.
Instead of having to include the common guidelines or rules to get the responses you want in every single chat query, custom instructions automatically incorporate that information with every chat request.
So if your team has specific formatting, error-handling, or naming conventions – custom instructions ensures the AI responses align with them from the very start. And custom instructions help reduce friction when using Copilot. You don’t have to remember to update its code responses to align with your team’s coding standards – or prompt it every single time to produce output that does.
Important
Custom instructions are not taken into account for code completions.
Sounds good, right? So let’s get started with putting custom instructions into our project. I’m going to walk you through a way that works with both Visual Studio and VS Code. At this time VS Code has a couple of ways to handle instructions, which you can read about.
In your project create a .github folder at the root of your solution.
Then create a file named copilot-instructions.md in that folder.
Now, in markdown, enter the instructions. Need some ideas? There’s a great repository of community provided custom instructions. I’m going to use the C# instructions from that repo. Here’s a snippet:
---
description: 'Guidelines for building C# applications'
applyTo: '**/*.cs'
---
# C# Development
## C# Instructions
- Always use the latest version C#, currently C# 13 features.
- Write clear and concise comments for each function.
## General Instructions
- Make only high confidence suggestions when reviewing code changes.
- Write code with good maintainability practices, including comments on why certain design decisions were made.
- Handle edge cases and write clear exception handling.
- For libraries or external dependencies, mention their usage and purpose in comments.
That’s it! Have a project you’re working on loaded, open up GitHub Copilot, switch to Agent mode and ask it to do something.
And after agent mode has finished running, you will see that it used the custom instructions file to inform its responses. If you read through the details, you’ll notice the AI’s output references the topics included in the custom instructions file. So it’s just like you passed all the content in copilot-instructions.md along with the prompt!
You can get responses from GitHub Copilot that are customized for your team’s needs and project requirements by providing that context with every request. Of course, you could provide all of that context manually by typing it into the chat every single time. More realistically though, custom instructions are here for you. Custom instructions let you tell the AI how to do something by defining your team’s common guidelines and rules that need to be followed across the project.
Take one of your projects, grab one of the community-provided instructions from the Awesome GitHub Copilot repository, and start using them today. Let us know what you think; we can’t wait to see what you build.
The post How the .NET MAUI Team uses GitHub Copilot for Productivity appeared first on .NET Blog.
This would dramatically reduce time and effort, freeing human developers to focus on higher-value, complex problems, while Copilot takes care of the repetitive work.
But would this dream be a reality? In this post, we aim to share a balanced perspective: highlighting both where Copilot has been genuinely helpful, and when it completely missed the mark.
In many ways, it already is. The .NET MAUI team has been actively using Copilot to boost our productivity, and we’re excited to share some practical tips for getting the most out of it.
At the time of writing, Copilot has risen to be the #64 all-time contributor in dotnet/maui, a ranking we expect to climb in the coming months.
.github/copilot-instructions.md
To start with the basics, we provide GitHub Copilot with some general context and guidance for our repository by including a copilot-instructions.md file. By convention this file should be kept in the .github folder at the root of your repository or workspace. This works both locally when using Copilot in Visual Studio or VS Code and when using the GitHub Copilot Coding Agent on GitHub.
The types of useful instructions we keep in this file include:
A great way to start is to simply ask the Copilot Coding Agent to generate this file for you.
An example GitHub issue would be:
Go through this repo, review structure of project, source code, etc.
Additional docs to review about the product: https://learn.microsoft.com/dotnet/maui
Update `.github/copilot-instructions.md` to make Copilot more helpful going forward.
Include a note to update `copilot-instructions.md` with new instructions as the project evolves.
See: https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot
Assign to Copilot and see what it comes up with. You can review and remove any instructions that are not relevant.
See examples of copilot-instructions.md files on GitHub at:
When the GitHub Copilot Coding Agent completes your first assigned issue, you may notice a warning on the pull request description, such as:
This warning is actually a key security feature:
Imagine you’re working on a “top secret” private repository.
Copilot decides to make a web request to a public website to retrieve some data.
Copilot could have inadvertently leaked your private code to the public website!
While this is less of a concern in public repositories, this could still happen with GitHub secrets or other credentials.
If you want Copilot to be able to make web requests, you can review each firewall warning and configure rules to allow specific domains.
Go to the Settings > Environments > copilot page on your GitHub repository.
Scroll to the bottom, to the Environment variables section.
Add a new COPILOT_AGENT_FIREWALL_ALLOW_LIST variable with a comma-separated list of domains you want to allow.
Some useful domains to allow include:
- learn.microsoft.com – for Microsoft and .NET documentation
- nuget.org – to add new NuGet packages
- developer.apple.com – for iOS and Apple-specific documentation
- developer.android.com – for Android-specific documentation
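For example, with exactly those four domains, the variable’s value is a single comma-separated string:

learn.microsoft.com,nuget.org,developer.apple.com,developer.android.com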
Note
$COPILOT_AGENT_FIREWALL_ALLOW_LIST should not have a trailing comma. See copilot-coding-agent/user-feedback#41 for more details.
.github/workflows/copilot-setup-steps.yml
Because it is built on GitHub as a platform, the Copilot Coding Agent literally runs within GitHub Actions. You have complete control over the environment in which it runs, giving you the ability to run steps before the firewall restrictions are applied:
Download and install additional tools or dependencies.
Restore NuGet packages or other steps that require network access.
Do an initial (working) build of the project.
This way Copilot has a working source tree, can make changes, build again (incrementally), run tests, etc.
As with the previous step, you can file a GitHub issue and let Copilot generate the copilot-setup-steps.yml file. Here’s an example:
# Setup copilot development environment
Setup a `.github/workflows/copilot-setup-steps.yml` file, such as:
name: "Copilot Setup Steps"
on: workflow_dispatch
jobs:
copilot-setup-steps:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
submodules: recursive
- name: Setup .NET
uses: actions/setup-dotnet@v4
with:
dotnet-version: '9.x'
# TODO: Build the project
See `.github/DEVELOPMENT.md` for more details on how to build this project.
See: https://docs.github.com/en/copilot/customizing-copilot/customizing-the-development-environment-for-copilot-coding-agent
Note that Copilot currently only supports ubuntu-latest as its runtime OS. If your project needs Windows or macOS builds, Copilot may be somewhat restricted in what it can do. See (or upvote!) copilot-coding-agent/user-feedback#40 for more details.
One important detail: make sure that copilot-setup-steps.yml is configured to always complete, even if your build fails. This is done using continue-on-error: true.
- name: Run dotnet build
id: copilot-build
continue-on-error: true
run: dotnet build -bl
- name: Upload logs
uses: actions/upload-artifact@v4
if: steps.copilot-build.outcome == 'failure'
with:
name: copilot-artifacts
path: '**/*.binlog'
Copilot (or a human) might push a commit that breaks the build. If you leave a comment like @copilot fix error XYZ, it needs to be able to get past its setup steps and actually fix the problem. Having a copy of the failed logs can also be useful for humans troubleshooting in the future.
A Model Context Protocol (MCP) server provides “tools” that Copilot can draw from to accomplish specific goals. In our experience, the Microsoft Learn Docs MCP Server is one of the most powerful tools available. This gives Copilot important context on many topics before modifying your code.
To set this up, go to your repository Settings > Copilot > Coding Agent > MCP Configuration:
{
"mcpServers": {
"microsoft-docs-mcp": {
"type": "http",
"url": "https://learn.microsoft.com/api/mcp",
"tools": [ "*" ]
}
}
}
It is also a good idea to add the following guidance at the top of copilot-instructions.md:
## Development Guidelines
**Always search Microsoft documentation (MS Learn) when working with .NET, Windows, or Microsoft features, or APIs.** Use the `microsoft_docs_search` tool to find the most current information about capabilities, best practices, and implementation patterns before making changes.
Security Note
Just like the firewall allow list, the MCP server configuration is a potential security risk. Be sure to review any MCP server’s source code and/or documentation to decide if it is safe to use.
To set up the GitHub Copilot Coding Agent end-to-end, combine the pieces described above: a copilot-instructions.md file, a copilot-setup-steps.yml workflow, a firewall allow list, and any MCP servers your project needs.
Note that much of this is optional, but Copilot can complete tasks faster and more reliably with a complete setup.
As a fun experiment, I used the Copilot Coding Agent to create a new .NET MAUI app called CatSwipe. It displays cat images from thecatapi.com, and lets you swipe left/no or right/yes (similar to a popular dating app).
The trouble began when I asked Copilot to take a screenshot of the running Android app. I was hoping it would do something like:
emulator -avd Pixel_7_API_35
adb shell screencap /sdcard/screenshot.png
adb pull /sdcard/screenshot.png
Unfortunately, due to a configuration issue, it couldn’t start the Android emulator, and things quickly went off the rails! It instead decided to use System.Drawing to generate an image of an Android emulator and what it imagined the app would look like.
This experience led us to come up with strategies for keeping Copilot on track:
When assigning issues, always add links to documentation. Write the issue in a way that a junior engineer (human) could pick up the task.
Be terse and direct but not necessarily rude. Mention how you’d like something done and don’t expect Copilot to discover novel solutions on its own.
Anticipate what might go wrong for Copilot. Tell it to “give up” if it cannot complete a task and report the error message.
Write scripts for common tasks and put examples in copilot-instructions.md that Copilot can use as a reference (see the sketch after this list).
When Copilot does the wrong thing, it likely needs more context or more instructions. Think of this as “debugging” your copilot-instructions.md file. Keep this in mind as you review pull requests from Copilot, and ask it to update copilot-instructions.md during code review.
The GitHub Copilot Coding Agent is already showing strong potential in our day-to-day development. We’ve found it particularly effective for handling well-scoped, low-risk tasks. By leveraging Copilot for these “easy” issues, we save valuable engineering time and keep our team focused on more complex, high-impact work.
Copilot has been most successful in the dotnet/android repository, where we’ve specifically assigned it simpler refactoring-related tasks. In the past 28 days we’ve seen the following results with pull requests:
Author | Count | Merge % | P50 time to merge |
---|---|---|---|
@copilot | 17 | 82.4% | 10:15:34 |
All others | 49 | 87.8% | 11:36:35 |
P50 is the 50th percentile, or the median time it takes for a PR to be merged. In dotnet/android, Copilot has been pretty successful compared to all other PRs.
In dotnet/maui, we’ve been more optimistic: assigning PRs we knew might be too tough for Copilot to complete:
Author | Count | Merge % | P50 time to merge |
---|---|---|---|
@copilot | 54 | 16.7% | 10:15:22 |
All others | 255 | 52.9% | 14:36:47 |
Take these numbers with a grain of salt, as we have certainly been focusing a decent amount of time on Copilot. We could easily be giving Copilot PRs more attention as a result! Over the next several months, we should have a better picture of Copilot Coding Agent’s full impact on the product.
A few things that GitHub Copilot Coding Agent doesn’t do yet:
Comment @copilot do this on an existing PR opened by a human: this would be beneficial to fix tiny “nitpicks” instead of waiting for one of our contributors (potentially a customer) to make the change.
Support running on Windows or macOS: This is unfortunately a big need for .NET MAUI, as the product targets Windows and iOS. Ideally, we could detect the issue or pull request is specific to a platform, and programmatically choose which OS to run on.
As the tool improves, we expect to expand its role in our development process.
The post .NET and .NET Framework July 2025 servicing releases updates appeared first on .NET Blog.
.NET and .NET Framework have been refreshed with the latest update as of July 08, 2025. This update contains non-security fixes.
This month you will find non-security fixes:
| | .NET 8.0 | .NET 9.0 |
|---|---|---|
| Release Notes | 8.0.18 | 9.0.7 |
| Installers and binaries | 8.0.18 | 9.0.7 |
| Container Images | images | images |
| Linux packages | 8.0.18 | 9.0.7 |
| Known Issues | 8.0 | 9.0 |
This month, there are no new security updates, but there are new non-security updates available. For recent .NET Framework servicing updates, be sure to browse our release notes for .NET Framework for more details.
That’s it for this month. Make sure you update to the latest service release today.
The post Local AI + .NET = AltText Magic in One C# Script appeared first on .NET Blog.
dotnet run app.cs
Accessibility matters — and one simple but powerful way to improve it is by adding AltText to images. Alternative text helps screen readers describe images to visually impaired users, improves SEO, and enhances overall UX. But writing descriptive alt text for every image can be repetitive. That’s where AI comes in!
Local models are a game changer. No rate limits, no cloud latency, and full control over the models you use.
In this sample, we will use Ollama to run a vision model like gemma3, llama3.2-vision, or mistral-small3.2. These models are great at understanding image content and generating rich natural language descriptions.
ollama run gemma3
# or ollama run llama3.2-vision
# or ollama run mistral-small3.2
Once Ollama is running locally (usually on http://localhost:11434), you can send requests to analyze an image and receive a natural language description.
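If you want to sanity-check the endpoint before writing any C#, a raw request to Ollama’s generate API looks roughly like this; the prompt is illustrative and the image must be supplied as a base64-encoded string:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Describe this image for use as alt text.",
  "images": ["<base64-encoded image>"],
  "stream": false
}'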
Coming Soon
AI Foundry Local will provide similar capabilities with local models. Stay tuned!
dotnet run app.cs
.NET 10 introduced a cool new feature: you can now run a C# file directly with dotnet run. No project scaffolding, no .csproj files — just clean, script-like execution.
Ref: https://devblogs.microsoft.com/dotnet/announcing-dotnet-run-app/
dotnet run app.cs
This is incredibly convenient for scripting tasks like image processing, quick automation, or dev tooling.
Let’s see it in action!
Source: https://gist.github.com/elbruno/4396c9ee3e56d1c86d280faa33b8f9fe
Save this code as alttext.cs and run it using dotnet run alttext.cs <your image path>. Make sure Ollama is running and the image path is correct.
// alttext.cs
#:package OllamaSharp@5.1.19
using OllamaSharp;
// set up the client
var uri = new Uri("http://localhost:11434");
var ollama = new OllamaApiClient(uri);
ollama.SelectedModel = "gemma3";
var chat = new Chat(ollama);
// read the image file from arguments
byte[] imageBytes = File.ReadAllBytes(args[0]);
var imageBytesEnumerable = new List<IEnumerable<byte>> { imageBytes };
// generate the alt text
var message = "Generate a complete alt text description for the attached image. The description should be detailed and suitable for visually impaired users. Do not include any information about the image file name or format.";
await foreach (var answerToken in chat.SendAsync(message: message, imagesAsBytes: imageBytesEnumerable))
Console.Write(answerToken);
// done
Console.WriteLine(">> Ollama done");
Tip: If your image is large, consider resizing it before encoding. This reduces request size and speeds up inference.
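Here is a minimal sketch of that pre-processing step as another file-based script. It assumes SixLabors.ImageSharp (the package directive and version are placeholders; any imaging library works) and caps the longest side before you hand the bytes to the model.

// resize.cs - optional image pre-processing sketch
#:package SixLabors.ImageSharp@3.1.5
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

// Load the original image from the first argument
using var image = Image.Load(args[0]);

// Constrain the longest side to 1024px while preserving aspect ratio
image.Mutate(x => x.Resize(new ResizeOptions
{
    Mode = ResizeMode.Max,
    Size = new Size(1024, 1024)
}));

// Save a smaller copy next to the original
var resizedPath = Path.ChangeExtension(args[0], ".resized.png");
image.Save(resizedPath);
Console.WriteLine($"Saved resized image to {resizedPath}");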
With .NET 10 and the power of local AI models from Azure AI Foundry Local or Ollama, we’re no longer limited to “chat with AI” scenarios. We can now wire vision models into small, everyday scripts and tools.
This is also a great way to learn! Try switching to different local models, modifying the prompt, or adding a few lines to copy the result to your clipboard. These small tweaks help you experiment and understand how to integrate AI into real-world apps.
To go further:
.NET keeps getting more fun, and AI keeps getting more powerful — especially when you run it locally.
Have fun generating smart AltText and making your apps more accessible!
The post Simpler XAML in .NET MAUI 10 appeared first on .NET Blog.
.NET 6 introduced global and implicit usings for C# which greatly reduced the using statements at the head of many C# files. Now in .NET 10 starting with Preview 5 we are introducing the same for XAML so you can declare your namespaces and prefixes in a single file and use them throughout. In fact, you can now omit the use of prefixes altogether!
This update begins by switching the global namespace that all XAML files use in .NET MAUI from xmlns="http://schemas.microsoft.com/dotnet/2021/maui" to xmlns="http://schemas.microsoft.com/dotnet/maui/global". Now there is a truly global namespace unique to your application where we can pack other namespaces for use throughout the codebase.
Opt-in to the implicit namespaces by adding this configuration to your project file.
<PropertyGroup>
<DefineConstants>$(DefineConstants);MauiAllowImplicitXmlnsDeclaration</DefineConstants>
<EnablePreviewFeatures>true</EnablePreviewFeatures>
</PropertyGroup>
Now your project will implicitly include these 2 namespaces which you’ve been accustomed to seeing in every XAML file since .NET MAUI first shipped.
xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
Because x: is used by the XAML inflator, you still need to use that prefix. With this change alone, your XAML for a view gets much tighter.
<ContentPage
xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
x:Class="MyApp.Pages.MyContentPage">
</ContentPage>
<ContentPage x:Class="MyApp.Pages.MyContentPage">
</ContentPage>
As you start to include classes of your own and from any of the many useful NuGet packages for .NET MAUI, the stack of xmlns in your XAML starts to grow like a layer cake. This is the MainPage from my app Telepathic:
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:pageModels="clr-namespace:Telepathic.PageModels"
xmlns:viewModels="clr-namespace:Telepathic.ViewModels"
xmlns:models="clr-namespace:Telepathic.Models"
xmlns:converters="clr-namespace:Telepathic.Converters"
xmlns:controls="clr-namespace:Telepathic.Pages.Controls"
xmlns:sf="clr-namespace:Syncfusion.Maui.Toolkit.TextInputLayout;assembly=Syncfusion.Maui.Toolkit"
xmlns:cards="clr-namespace:Syncfusion.Maui.Toolkit.Cards;assembly=Syncfusion.Maui.Toolkit"
xmlns:b="clr-namespace:Syncfusion.Maui.Toolkit.Buttons;assembly=Syncfusion.Maui.Toolkit"
xmlns:pullToRefresh="clr-namespace:Syncfusion.Maui.Toolkit.PullToRefresh;assembly=Syncfusion.Maui.Toolkit"
xmlns:bottomSheet="clr-namespace:Syncfusion.Maui.Toolkit.BottomSheet;assembly=Syncfusion.Maui.Toolkit"
xmlns:toolkit="http://schemas.microsoft.com/dotnet/2022/maui/toolkit"
xmlns:aloha="clr-namespace:AlohaKit.Animations;assembly=AlohaKit.Animations"
xmlns:effectsView="http://schemas.syncfusion.com/maui/toolkit"
x:Class="Telepathic.Pages.MainPage"
x:DataType="pageModels:MainPageModel"
Title="{Binding Today}">
</ContentPage>
Yikes! Many of my XAML files use the same namespaces over and over, such as pageModels, models, converters, controls, and so on. To globalize these, I created a GlobalXmlns.cs file where I can register these namespaces using XmlnsDefinition.
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Telepathic.PageModels")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Telepathic.Models")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Telepathic.Converters")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Telepathic.Pages.Controls")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Telepathic.ViewModels")]
I can also use the same approach to globalize third-party controls like the Syncfusion Toolkit for .NET MAUI, the Community Toolkit for .NET MAUI, and the AlohaKit Animations used in this file.
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Syncfusion.Maui.Toolkit.TextInputLayout", AssemblyName = "Syncfusion.Maui.Toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Syncfusion.Maui.Toolkit.Cards", AssemblyName = "Syncfusion.Maui.Toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Syncfusion.Maui.Toolkit.Buttons", AssemblyName = "Syncfusion.Maui.Toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Syncfusion.Maui.Toolkit.PullToRefresh", AssemblyName = "Syncfusion.Maui.Toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"Syncfusion.Maui.Toolkit.BottomSheet", AssemblyName = "Syncfusion.Maui.Toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"http://schemas.syncfusion.com/maui/toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"http://schemas.microsoft.com/dotnet/2022/maui/toolkit")]
[assembly: XmlnsDefinition(
"http://schemas.microsoft.com/dotnet/maui/global",
"AlohaKit.Animations", AssemblyName = "AlohaKit.Animations")]
After this I can remove all those declarations from the head of the file, resulting in a much cleaner read.
<ContentPage x:Class="Telepathic.Pages.MainPage"
x:DataType="MainPageModel"
Title="{Binding Today}">
</ContentPage>
Now that the xmlns are all gone, you may be wondering about the prefixes that had been defined in order to reference those controls in XAML. Well, you no longer need them! Notice in the previous example the x:DataType omits the pageModels: prefix previously required.
Let’s look at another example from this same page.
<BindableLayout.ItemTemplate>
<DataTemplate x:DataType="models:ProjectTask">
<controls:TaskView
TaskCompletedCommand="{Binding CompletedCommand,
Source={RelativeSource AncestorType={x:Type pageModels:MainPageModel}},
x:DataType=pageModels:MainPageModel}"/>
</DataTemplate>
</BindableLayout.ItemTemplate>
After:
<BindableLayout.ItemTemplate>
<DataTemplate x:DataType="ProjectTask">
<TaskView
TaskCompletedCommand="{Binding CompletedCommand,
Source={RelativeSource AncestorType={x:Type MainPageModel}},
x:DataType=MainPageModel}"/>
</DataTemplate>
</BindableLayout.ItemTemplate>
There will be times when you have types that collide and you’ll need to disambiguate them. I ran into this in another application where I have a custom control called FlyoutItem in my namespace ControlGallery.Views. In the XAML file it was being referenced like <views:FlyoutItem /> and all was well. When I added ControlGallery.Views to the global namespace and removed the views: prefix, I encountered a collision because .NET MAUI already has a type FlyoutItem!
One way I could solve this would be to use the full path with namespace in the XAML like <ControlGallery.Views.FlyoutItem />. That is pretty long and not my preference.
Instead, there’s another attribute I can use alongside XmlnsDefinition, and that’s XmlnsPrefix. Back in GlobalXmlns.cs I can add a prefix for this namespace that’ll be usable globally in my application.
[assembly: XmlnsPrefix(
"clr-namespace:ControlGallery.Views",
"views")]
Now once again in XAML I can use that prefix like <views:FlyoutItem />.
Note
Before you go all in on this, be aware it’s a preview and there are issues to be ironed out. For example, the XAML Language Service needs some love to make it aware of what’s happening so you don’t get red squiggles everywhere. Additionally, work needs to be done to address the negative startup and runtime performance impact.
You can get started by installing .NET 10 Preview 5 with the .NET MAUI workload or Visual Studio 17.14 Preview, and then creating a new project.
dotnet new maui -n LessXamlPlease
We need your feedback! Give this a go and let us know what you think by opening an issue on GitHub or sending me a note directly at david.ortinau@microsoft.com.
The post Multimodal Vision Intelligence with .NET MAUI appeared first on .NET Blog.
Previously I covered adding voice support to the “to do” app from our Microsoft Build 2025 session. Now I’ll review the vision side of multimodal intelligence. I want to let users capture or select an image and have AI extract actionable information from it to create a project and tasks in the Telepathic sample app. This goes well beyond OCR scanning by using an AI agent to use context and prompting to produce meaningful input.
From the floating action button menu on MainPage the user selects the camera button, immediately transitioning to the PhotoPage where MediaPicker takes over. MediaPicker provides a single cross-platform API for working with the photo gallery, media picking, and taking photos. It was recently modernized in .NET 10 Preview 4.
The PhotoPageModel handles both photo capture and file picking, starting from the PageAppearing lifecycle event that I’ve easily tapped into using the EventToCommandBehavior from the Community Toolkit for .NET MAUI.
<ContentPage.Behaviors>
<toolkit:EventToCommandBehavior
EventName="Appearing"
Command="{Binding PageAppearingCommand}"/>
</ContentPage.Behaviors>
The PageAppearing method is decorated with [RelayCommand], which generates a command thanks to the Community Toolkit for MVVM (yes, toolkits are a recurring theme of adoration that you’ll hear from me). I then check for the type of device being used and choose to pick or take a photo. .NET MAUI’s cross-platform APIs for DeviceInfo and MediaPicker save me a ton of time navigating through platform-specific idiosyncrasies.
if (DeviceInfo.Idiom == DeviceIdiom.Desktop)
{
result = await MediaPicker.PickPhotoAsync(new MediaPickerOptions
{
Title = "Select a photo"
});
}
else
{
if (!MediaPicker.IsCaptureSupported)
{
return;
}
result = await MediaPicker.CapturePhotoAsync(new MediaPickerOptions
{
Title = "Take a photo"
});
}
Another advantage of using the built-in MediaPicker is giving users the native experience for photo input they are already accustomed to. When you’re implementing this, be sure to perform the necessary platform-specific setup as documented.
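As a rough guide (the MediaPicker documentation is the authoritative, per-platform list), that setup typically includes a camera permission on Android and usage description strings on iOS; the description text below is illustrative:

<!-- Platforms/Android/AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />

<!-- Platforms/iOS/Info.plist -->
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture photos of your task lists.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app selects photos to extract tasks from.</string>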
Once an image is received, it’s displayed on screen along with an optional Editor field to capture any additional context and instructions the user might want to provide. I build the prompt with StringBuilder (in other apps I like to use Scriban templates), grab an instance of Microsoft.Extensions.AI’s IChatClient from a service, get the image bytes, and supply everything to the chat client using a ChatMessage that packs TextContent and DataContent.
private async Task ExtractTasksFromImageAsync()
{
// more code
var prompt = new System.Text.StringBuilder();
prompt.AppendLine("# Image Analysis Task");
prompt.AppendLine("Analyze the image for task lists, to-do items, notes, or any content that could be organized into projects and tasks.");
prompt.AppendLine();
prompt.AppendLine("## Instructions:");
prompt.AppendLine("1. Identify any projects and tasks (to-do items) visible in the image");
prompt.AppendLine("2. Format handwritten text, screenshots, or photos of physical notes into structured data");
prompt.AppendLine("3. Group related tasks into projects when appropriate");
if (!string.IsNullOrEmpty(AnalysisInstructions))
{
prompt.AppendLine($"4. {AnalysisInstructions}");
}
prompt.AppendLine();
prompt.AppendLine("If no projects/tasks are found, return an empty projects array.");
var client = _chatClientService.GetClient();
byte[] imageBytes = File.ReadAllBytes(ImagePath);
var msg = new Microsoft.Extensions.AI.ChatMessage(ChatRole.User,
[
new TextContent(prompt.ToString()),
new DataContent(imageBytes, mediaType: "image/png")
]);
var apiResponse = await client.GetResponseAsync<ProjectsJson>(msg);
if (apiResponse?.Result?.Projects != null)
{
Projects = apiResponse.Result.Projects.ToList();
}
// more code
}
Just like with the voice experience, the photo flow doesn’t blindly assume the agent got everything right. After processing, the user is shown a proposed set of projects and tasks for review and confirmation.
This ensures users remain in control while benefiting from AI-augmented assistance. You can learn more about designing these kinds of flows using best practices in the HAX Toolkit.
We’ve now extended our .NET MAUI app to see as well as hear. With just a few lines of code and a clear UX pattern, the app can take in images, analyze them using vision-capable AI models, and return structured, actionable data like tasks and projects.
Multimodal experiences are more accessible and powerful than ever. With cross-platform support from .NET MAUI and the modularity of Microsoft.Extensions.AI, you can rapidly evolve your apps to meet your users where they are, whether that’s typing, speaking, or snapping a photo.
The post Improve Your Productivity with New GitHub Copilot Features for .NET! appeared first on .NET Blog.
The overall AI-assisted development experience has seen huge improvements recently, from agent mode in Visual Studio, VS Code, and other editors, to MCP support, to next edit suggestions and beyond. Each of these features was designed to boost your day-to-day development experience with your powerful pair programmer, Copilot.
Have you ever received a code suggestion from GitHub Copilot that seems to be unaware of your project and doesn’t reflect your codebase? The .NET team is addressing this issue by providing added context to GitHub Copilot for each query, now available in Visual Studio 17.14 and VS Code with C# Dev Kit. It will now scan your code for supported coding scenarios that Copilot can store for better context, resulting in more consistent, relevant responses. Supported scenarios include the following:
We’ll be adding more scenarios to the context awareness provider soon!
.NET is constantly being updated and improved on…but the models used by Copilot are trained using data from a certain point in time. When you ask Copilot questions about something that was released after the model was trained, it can provide you with unverified or out-of-date information. With the new MS Learn Integration, now available in VS 17.14, when you ask Copilot about a topic it doesn’t know, it will pull info from existing MS Learn docs to provide you with the most up to date information!
To enable this feature, go to Tools > Options > Copilot > Feature Flags > Enable Microsoft Learn Function in chat. You’ll also need to be logged in with a Microsoft account in VS to use this feature, but GitHub authentication support is on the way!
Two popular refactorings many .NET developers use are Implement Method and Implement Interface, both of which let you automatically generate methods that are referenced but not implemented yet. In VS 17.14, once you’ve used one of these refactorings, you can select the lightbulb (CTRL + .) and choose the new Implement with Copilot refactoring, which will add the body to your method!
Sometimes, you are scrolling through code that you didn’t write or need a refresher on, and you just want a quick description of the variable, method, or class and what it does. Now, when you hover over a variable, method, or class and pull up its Quick Info tooltip in VS, you will see an option to Describe with Copilot. If you select that option, you will receive a Copilot-generated summary of that element for as long as you hover over it. This gives you a quick and temporary way to view information about your codebase.
If you want more permanent comments for your classes and methods, GitHub Copilot does that work for you! In VS 17.14, when you add the doc comment notation (///) to the top of your class or method, GitHub Copilot will generate the full comment, complete with summary and descriptions for each parameter! This will appear as ghost text. Just hit tab to accept it!
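The generated text varies with your code, but the shape is a standard XML doc comment; for a hypothetical method it might look something like this:

/// <summary>
/// Calculates the order total, including tax and any applied discounts.
/// </summary>
/// <param name="order">The order to total.</param>
/// <param name="taxRate">The tax rate to apply, expressed as a decimal (for example, 0.08 for 8%).</param>
/// <returns>The final order total.</returns>
public decimal CalculateTotal(Order order, decimal taxRate)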
We’ve introduced a lot of Copilot-powered features to improve your productivity, and we want to hear your feedback on how we can make them even better! Please try them all out, share your feedback in the Developer Community, and stay tuned for even more updates!
The post Multimodal Voice Intelligence with .NET MAUI appeared first on .NET Blog.
At Microsoft Build 2025 I demonstrated expanding the .NET MAUI sample “to do” app from text input to supporting voice and vision when those capabilities are detected. Let me show you how .NET MAUI and our fantastic ecosystem of plugins makes this rather painless to do with a single implementation that works across all platforms starting with voice.
Being able to talk to an app isn’t anything revolutionary. We’ve all spoken to Siri, Alexa, and our dear Cortana a time or two, and the key is in knowing the keywords and recipes of things they can comprehend and act on. “Start a timer”, “turn down the volume”, “tell me a joke”, and everyone’s favorite “I wasn’t talking to you”.
The new and powerful capability we now have with large language models is having them take our unstructured ramblings and make sense of that in order to fit the structured format our apps expect and require.
The first thing to do is add the Plugin.Maui.Audio NuGet package, which helps us request permissions to the microphone and start capturing a stream. The plugin is also capable of playback.
dotnet add package Plugin.Maui.Audio --version 4.0.0
In MauiProgram.cs, configure the recording settings and add the IAudioService from the plugin to the services container.
public static class MauiProgram
{
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.AddAudio(
recordingOptions =>
{
#if IOS || MACCATALYST
recordingOptions.Category = AVFoundation.AVAudioSessionCategory.Record;
recordingOptions.Mode = AVFoundation.AVAudioSessionMode.Default;
recordingOptions.CategoryOptions = AVFoundation.AVAudioSessionCategoryOptions.MixWithOthers;
#endif
});
builder.Services.AddSingleton<IAudioService, AudioService>();
// more code
}
}
Be sure to also review and implement any additional configuration steps in the documentation.
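As a quick reference (the plugin’s documentation is the authoritative source), microphone access generally requires a manifest entry on Android and a usage description on iOS and Mac Catalyst; the description text below is illustrative:

<!-- Platforms/Android/AndroidManifest.xml -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />

<!-- Platforms/iOS/Info.plist and Platforms/MacCatalyst/Info.plist -->
<key>NSMicrophoneUsageDescription</key>
<string>This app records short voice memos to turn them into projects and tasks.</string>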
Now the app is ready to capture some audio. In VoicePage, the user will tap the microphone button, start speaking, and tap again to end the recording.
This is a trimmed version of the actual code for starting and stopping the recording.
[RelayCommand]
private async Task ToggleRecordingAsync()
{
if (!IsRecording)
{
var status = await Permissions.CheckStatusAsync<Permissions.Microphone>();
if (status != PermissionStatus.Granted)
{
status = await Permissions.RequestAsync<Permissions.Microphone>();
if (status != PermissionStatus.Granted)
{
// more code
return;
}
}
_recorder = _audioManager.CreateRecorder();
await _recorder.StartAsync();
IsRecording = true;
RecordButtonText = "⏹ Stop";
}
else
{
_audioSource = await _recorder.StopAsync();
IsRecording = false;
RecordButtonText = "🎤 Record";
// more code
TranscribeAsync();
}
}
Once it has the audio stream it can start transcribing and processing it. (source)
private async Task TranscribeAsync()
{
string audioFilePath = Path.Combine(FileSystem.CacheDirectory, $"recording_{DateTime.Now:yyyyMMddHHmmss}.wav");
if (_audioSource != null)
{
await using (var fileStream = File.Create(audioFilePath))
{
var audioStream = _audioSource.GetAudioStream();
await audioStream.CopyToAsync(fileStream);
}
Transcript = await _transcriber.TranscribeAsync(audioFilePath, CancellationToken.None);
await ExtractTasksAsync();
}
}
In this sample app, I used Microsoft.Extensions.AI with OpenAI to perform the transcription with the whisper-1 model trained specifically for this use case. There are certainly other methods of doing this, including on-device with SpeechToText in the .NET MAUI Community Toolkit.
By using Microsoft.Extensions.AI, I can easily swap out another cloud-based AI service, use a local LLM with ONNX, or later choose another on-device solution.
using Microsoft.Extensions.AI;
using OpenAI;
namespace Telepathic.Services;
public class WhisperTranscriptionService : ITranscriptionService
{
public async Task<string> TranscribeAsync(string path, CancellationToken ct)
{
var openAiApiKey = Preferences.Default.Get("openai_api_key", string.Empty);
var client = new OpenAIClient(openAiApiKey);
try
{
await using var stream = File.OpenRead(path);
var result = await client
.GetAudioClient("whisper-1")
.TranscribeAudioAsync(stream, "file.wav", cancellationToken: ct);
return result.Value.Text.Trim();
}
catch (Exception ex)
{
// Will add better error handling in Phase 5
throw new Exception($"Failed to transcribe audio: {ex.Message}", ex);
}
}
}
Once I have the transcript, I can have my AI service make sense of it to return projects and tasks using the same client. This happens in the ExtractTasksAsync method referenced above. The key parts of this method are below. (source)
private async Task ExtractTasksAsync()
{
var prompt = $@"
Extract projects and tasks from this voice memo transcript.
Analyze the text to identify actionable tasks I need to keep track of. Use the following instructions:
1. Tasks are actionable items that can be completed, such as 'Buy groceries' or 'Call Mom'.
2. Projects are larger tasks that may contain multiple smaller tasks, such as 'Plan birthday party' or 'Organize closet'.
3. Tasks must be grouped under a project and cannot be grouped under multiple projects.
4. Any mentioned due dates use the YYYY-MM-DD format
Here's the transcript: {Transcript}";
var chatClient = _chatClientService.GetClient();
var response = await chatClient.GetResponseAsync<ProjectsJson>(prompt);
if (response?.Result != null)
{
Projects = response.Result.Projects;
}
}
The _chatClientService is an injected service class that handles the creation and retrieval of the IChatClient instance provided by Microsoft.Extensions.AI. Here I use the GetResponseAsync method along with passing a strong type and a prompt, and the LLM (gpt-4o-mini in this case) returns a ProjectsJson response. The response includes a Projects list with which I can proceed.
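The ProjectsJson type itself isn’t shown in the snippets above; a minimal sketch of what such a response model could look like follows. The property names and shapes here are assumptions for illustration, not the sample’s actual definitions.

// Hypothetical response model for structured output from GetResponseAsync<ProjectsJson>
public class ProjectsJson
{
    public List<Project> Projects { get; set; } = [];
}

public class Project
{
    public string Name { get; set; } = string.Empty;
    public List<ProjectTask> Tasks { get; set; } = [];
}

public class ProjectTask
{
    public string Title { get; set; } = string.Empty;
    public DateTime? DueDate { get; set; }
}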
Now I’ve gone from having an app that only took data entry input via a form, to an app that can also take unstructured voice input and produce structure data. While I was tempted to just insert the results into the database and claim success, there was yet more to do to make this a truly satisfying experience.
There’s a reasonable chance that the project name needs to be adjusted for clarity, or some task was misheard or, worse yet, omitted. To address this, I add an approval step where the user can see the projects and tasks as recommendations and choose to accept them as-is or with changes. This is not much different than the experience we have now in Copilot when changes are made but we have the option to iterate further, keep, or discard.
For more guidance like this for designing great AI experiences in your apps, consider checking out the HAX Toolkit and Microsoft AI Principles.
Here are key resources mentioned in this article to help you implement multimodal AI capabilities in your .NET MAUI apps:
In this article, we explored how to enhance .NET MAUI applications with multimodal AI capabilities, focusing on voice interaction. We covered how to implement audio recording using Plugin.Maui.Audio, transcribe speech using Microsoft.Extensions.AI with OpenAI’s Whisper model, and extract structured data from unstructured voice input.
By combining these technologies, you can transform a traditional form-based app into one that accepts voice commands and intelligently processes them into actionable data. The implementation works across all platforms with a single codebase, making it accessible for any .NET MAUI developer.
With these techniques, you can significantly enhance user experience by supporting multiple interaction modes, making your applications more accessible and intuitive, especially on mobile devices where voice input can be much more convenient than typing.