Microsoft for Developers
https://devblogs.microsoft.com/landing
Get the latest information, insights, and news from Microsoft.

Exploring new Agent Quality and NLP evaluators for .NET AI applications
https://devblogs.microsoft.com/dotnet/exploring-agent-quality-and-nlp-evaluators
Tue, 05 Aug 2025 17:05:00 +0000 | Shyam Namboodiripad

When building AI applications, comprehensive evaluation is crucial to ensure your systems deliver accurate, reliable, and contextually appropriate responses. We're excited to announce key enhancements to the Microsoft.Extensions.AI.Evaluation libraries with new evaluators that expand evaluation capabilities in two key areas: agent quality assessment and natural language processing (NLP) metrics.

Agent Quality evaluators

The Microsoft.Extensions.AI.Evaluation.Quality package now includes three new evaluators specifically designed to assess how well AI agents perform in conversational scenarios involving tool use:

  • ToolCallAccuracyEvaluator: Assesses how accurately the agent selects and invokes the tools available to it
  • TaskAdherenceEvaluator: Assesses how well the agent's responses adhere to the assigned task
  • IntentResolutionEvaluator: Assesses how well the agent identifies and resolves the user's intent

NLP (Natural Language Processing) evaluators

We've also introduced a new package, Microsoft.Extensions.AI.Evaluation.NLP, containing evaluators that implement common NLP algorithms for evaluating text similarity:

  • BLEUEvaluator: Implements the BLEU (Bilingual Evaluation Understudy) metric for measuring text similarity
  • GLEUEvaluator: Provides the GLEU (Google BLEU) metric, a variant optimized for sentence-level evaluation
  • F1Evaluator: Calculates F1 scores for text similarity and information retrieval tasks

[alert type="note" heading="Note"]Unlike other evaluators in the Microsoft.Extensions.AI.Evaluation libraries, the NLP evaluators do not require an AI model to perform evaluations. Instead, they use traditional NLP techniques such as text tokenization and n-gram analysis to compute similarity scores.[/alert]

These new evaluators complement the quality and safety-focused evaluators we covered in earlier posts. Together with custom, domain-specific evaluators that you can create using the Microsoft.Extensions.AI.Evaluation libraries, they provide a robust evaluation toolkit for your .NET AI applications.

Setting up your LLM connection

The agent quality evaluators require an LLM to perform evaluation. The code example that follows shows how to create an IChatClient that connects to a model deployed on Azure OpenAI. For instructions on how to deploy an OpenAI model in Azure, see Create and deploy an Azure OpenAI in Azure AI Foundry Models resource.

[alert type="note" heading="Note"]We recommend using the GPT-4o or GPT-4.1 series of models when running the below example.

While the Microsoft.Extensions.AI.Evaluation libraries and the underlying core abstractions in Microsoft.Extensions.AI support a variety of different models and LLM providers, the evaluation prompts used within the evaluators in the Microsoft.Extensions.AI.Evaluation.Quality package have been tuned and tested against OpenAI models such as GPT-4o and GPT-4.1. It is possible to use other models by supplying an IChatClient that can connect to your model of choice. However, the performance of those models against the evaluation prompts may vary and may be especially poor for smaller / local models.[/alert]

First, set the required environment variables. For this, you will need the endpoint for your Azure OpenAI resource and the deployment name for your deployed model. You can copy these values from the Azure portal and paste them into the environment variables below.

SET EVAL_SAMPLE_AZURE_OPENAI_ENDPOINT=https://<your azure openai resource name>.openai.azure.com/
SET EVAL_SAMPLE_AZURE_OPENAI_MODEL=<your model deployment name (e.g., gpt-4o)>

The example uses DefaultAzureCredential for authentication. You can sign in to Azure using developer tooling such as Visual Studio or the Azure CLI.
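
For example, if you have the Azure CLI installed, you can sign in from a terminal and DefaultAzureCredential will pick up those credentials:

az login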

Setting up a test project to run the example code

Next, let's create a new test project to demonstrate the new evaluators. You can use any of the following approaches:

Using Visual Studio

  1. Open Visual Studio
  2. Select File > New > Project...
  3. Search for and select MSTest Test Project
  4. Choose a name and location, then click Create

Using Visual Studio Code with C# Dev Kit

  1. Open Visual Studio Code
  2. Open Command Palette and select .NET: New Project...
  3. Select MSTest Test Project
  4. Choose a name and location, then select Create Project

Using the .NET CLI

dotnet new mstest -n EvaluationTests
cd EvaluationTests

After creating the project, add the necessary NuGet packages:

dotnet add package Azure.AI.OpenAI
dotnet add package Azure.Identity
dotnet add package Microsoft.Extensions.AI.Evaluation
dotnet add package Microsoft.Extensions.AI.Evaluation.Quality
dotnet add package Microsoft.Extensions.AI.Evaluation.NLP --prerelease
dotnet add package Microsoft.Extensions.AI.Evaluation.Reporting
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease

Next, copy the following code into the project (inside Test1.cs). The example demonstrates how to run agent quality and NLP evaluators via two separate unit tests defined in the same test class.

using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.NLP;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;
using DescriptionAttribute = System.ComponentModel.DescriptionAttribute;

namespace EvaluationTests;

#pragma warning disable AIEVAL001 // The agent quality evaluators used below are currently marked as [Experimental].

[TestClass]
public class Test1
{
    private static readonly ReportingConfiguration s_agentQualityConfig = CreateAgentQualityReportingConfiguration();
    private static readonly ReportingConfiguration s_nlpConfig = CreateNLPReportingConfiguration();

    [TestMethod]
    public async Task EvaluateAgentQuality()
    {
        // This example demonstrates how to run agent quality evaluators (ToolCallAccuracyEvaluator,
        // TaskAdherenceEvaluator, and IntentResolutionEvaluator) that assess how well an AI agent performs tasks
        // involving tool use and conversational interactions.

        await using ScenarioRun scenarioRun = await s_agentQualityConfig.CreateScenarioRunAsync("Agent Quality");

        // Get a conversation that simulates a customer service agent using tools to assist a customer.
        (List<ChatMessage> messages, ChatResponse response, List<AITool> toolDefinitions) =
            await GetCustomerServiceConversationAsync(chatClient: scenarioRun.ChatConfiguration!.ChatClient);

        // The agent quality evaluators require tool definitions to assess tool-related behaviors.
        List<EvaluationContext> additionalContext =
        [
            new ToolCallAccuracyEvaluatorContext(toolDefinitions),
            new TaskAdherenceEvaluatorContext(toolDefinitions),
            new IntentResolutionEvaluatorContext(toolDefinitions)
        ];

        // Run the agent quality evaluators against the response.
        EvaluationResult result = await scenarioRun.EvaluateAsync(messages, response, additionalContext);

        // Retrieve one of the metrics (example: Intent Resolution).
        NumericMetric intentResolution = result.Get<NumericMetric>(IntentResolutionEvaluator.IntentResolutionMetricName);

        // By default, a Value < 4 is interpreted as a failing score for the Intent Resolution metric.
        Assert.IsFalse(intentResolution.Interpretation!.Failed);

        // Results are also persisted to disk under the storageRootPath specified below. You can use the dotnet aieval
        // command line tool to generate an HTML report and view these results.
    }

    [TestMethod]
    public async Task EvaluateNLPMetrics()
    {
        // This example demonstrates how to run NLP (Natural Language Processing) evaluators (BLEUEvaluator,
        // GLEUEvaluator and F1Evaluator) that measure text similarity between a model's output and supplied reference
        // text.

        await using ScenarioRun scenarioRun = await s_nlpConfig.CreateScenarioRunAsync("NLP");

        // Set up the text similarity evaluation inputs. Response represents an example model output, and
        // referenceResponses represent a set of ideal responses that the model's output will be compared against.
        const string Response =
            "Paris is the capital of France. It's famous for the Eiffel Tower, Louvre Museum, and rich cultural heritage";

        List<string> referenceResponses =
        [
            "Paris is the capital of France. It is renowned for the Eiffel Tower, Louvre Museum, and cultural traditions.",
            "Paris, the capital of France, is famous for its landmarks like the Eiffel Tower and vibrant culture.",
            "The capital of France is Paris, known for its history, art, and iconic landmarks like the Eiffel Tower."
        ];

        // The NLP evaluators require one or more reference responses to compare against the model's output.
        List<EvaluationContext> additionalContext =
        [
            new BLEUEvaluatorContext(referenceResponses),
            new GLEUEvaluatorContext(referenceResponses),
            new F1EvaluatorContext(groundTruth: referenceResponses.First())
        ];

        // Run the NLP evaluators.
        EvaluationResult result = await scenarioRun.EvaluateAsync(Response, additionalContext);

        // Retrieve one of the metrics (example: F1).
        NumericMetric f1 = result.Get<NumericMetric>(F1Evaluator.F1MetricName);

        // By default, a Value < 0.5 is interpreted as a failing score for the F1 metric.
        Assert.IsFalse(f1.Interpretation!.Failed);

        // Results are also persisted to disk under the storageRootPath specified below. You can use the dotnet aieval
        // command line tool to generate an HTML report and view these results.
    }

    private static ReportingConfiguration CreateAgentQualityReportingConfiguration()
    {
        // Create an IChatClient to interact with a model deployed on Azure OpenAI.
        string endpoint = Environment.GetEnvironmentVariable("EVAL_SAMPLE_AZURE_OPENAI_ENDPOINT")!;
        string model = Environment.GetEnvironmentVariable("EVAL_SAMPLE_AZURE_OPENAI_MODEL")!;
        var client = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential());
        IChatClient chatClient = client.GetChatClient(deploymentName: model).AsIChatClient();

        // Enable function invocation support on the chat client. This allows the chat client to invoke AIFunctions
        // (tools) defined in the conversation.
        chatClient = chatClient.AsBuilder().UseFunctionInvocation().Build();

        // Create a ReportingConfiguration for the agent quality evaluation scenario.
        return DiskBasedReportingConfiguration.Create(
            storageRootPath: "./eval-results", // The evaluation results will be persisted to disk under this folder.
            evaluators: [new ToolCallAccuracyEvaluator(), new TaskAdherenceEvaluator(), new IntentResolutionEvaluator()],
            chatConfiguration: new ChatConfiguration(chatClient),
            enableResponseCaching: true);

        // Since response caching is enabled above, all LLM responses produced via the chatClient above will also be
        // cached under the storageRootPath so long as the inputs being evaluated stay unchanged, and so long as the
        // cache entries do not expire (cache expiry is set at 14 days by default).
    }

    private static ReportingConfiguration CreateNLPReportingConfiguration()
    {
        // Create a ReportingConfiguration for the NLP evaluation scenario.
        // Note that the NLP evaluators do not require an LLM to perform the evaluation. Instead, they use traditional
        // NLP techniques (text tokenization, n-gram analysis, etc.) to compute text similarity scores.

        return DiskBasedReportingConfiguration.Create(
            storageRootPath: "./eval-results", // The evaluation results will be persisted to disk under this folder.
            evaluators: [new BLEUEvaluator(), new GLEUEvaluator(), new F1Evaluator()]);
    }

    private static async Task<(List<ChatMessage> messages, ChatResponse response, List<AITool> toolDefinitions)>
        GetCustomerServiceConversationAsync(IChatClient chatClient)
    {
        // Get a conversation that simulates a customer service agent using tools (such as GetOrders() and
        // GetOrderStatus() below) to assist a customer.

        List<ChatMessage> messages =
        [
            new ChatMessage(ChatRole.System, "You are a helpful customer service agent. Use tools to assist customers."),
            new ChatMessage(ChatRole.User, "Could you tell me the status of the last 2 orders on my account #888?")
        ];

        List<AITool> toolDefinitions = [AIFunctionFactory.Create(GetOrders), AIFunctionFactory.Create(GetOrderStatus)];
        var options = new ChatOptions() { Tools = toolDefinitions, Temperature = 0.0f };

        ChatResponse response = await chatClient.GetResponseAsync(messages, options);

        return (messages, response, toolDefinitions);
    }

    [Description("Gets the orders for a customer")]
    private static IReadOnlyList<CustomerOrder> GetOrders(
        [Description("The customer account number")] int accountNumber)
    {
        return accountNumber switch
        {
            888 => [new CustomerOrder(123), new CustomerOrder(124)],
            _ => throw new InvalidOperationException($"Account number {accountNumber} is not valid.")
        };
    }

    [Description("Gets the delivery status of an order")]
    private static CustomerOrderStatus GetOrderStatus(
        [Description("The order ID to check")] int orderId)
    {
        return orderId switch
        {
            123 => new CustomerOrderStatus(orderId, "shipped", DateTime.Now.AddDays(1)),
            124 => new CustomerOrderStatus(orderId, "delayed", DateTime.Now.AddDays(10)),
            _ => throw new InvalidOperationException($"Order with ID {orderId} not found.")
        };
    }

    private record CustomerOrder(int OrderId);
    private record CustomerOrderStatus(int OrderId, string Status, DateTime ExpectedDelivery);
}

Running the tests and generating the evaluation report

Next, let's run the above unit tests. You can run them from Test Explorer in Visual Studio or Visual Studio Code, or run dotnet test from the command line.

After running the tests, you can generate an HTML report containing results for both the "Agent Quality" and "NLP" scenarios in the example above using the dotnet aieval tool.

First, install the tool locally in your project:

dotnet tool install Microsoft.Extensions.AI.Evaluation.Console --create-manifest-if-needed

Then generate and open the report:

dotnet aieval report -p <path to 'eval-results' folder under the build output directory for the above project> -o .\report.html --open

The --open flag will automatically open the generated report in your default browser, allowing you to explore the evaluation results interactively. Here’s a peek at the generated report – this screenshot shows the details revealed when you click on the "Intent Resolution" metric under the "Agent Quality" scenario.

A screenshot depicting the generated evaluation report

Learn more and provide feedback

For more comprehensive examples that demonstrate various API concepts, functionality, best practices, and common usage patterns for the Microsoft.Extensions.AI.Evaluation libraries, explore the API Usage Examples in the dotnet/ai-samples repository. Documentation and tutorials for the evaluation libraries are also available under The Microsoft.Extensions.AI.Evaluation libraries.

We encourage you to try out these evaluators in your AI applications and share your feedback. If you encounter any issues or have suggestions for improvements, please report them on GitHub. Your feedback helps us continue to enhance the evaluation libraries and build better tools for the .NET AI development community.

Happy evaluating!

Visual Studio 2015 Retirement: Support reminder for older versions of Visual Studio
https://devblogs.microsoft.com/visualstudio/visual-studio-2015-retirement-support-reminder-for-older-versions-of-visual-studio
Tue, 05 Aug 2025 16:16:53 +0000 | Paul Chapman

Support Timeframe Reminders for older versions

If you're still using an older version of Visual Studio, here's a reminder of key support lifecycle dates.
  1. Visual Studio 2015 extended support ends October 14, 2025.
  2. Visual Studio 2017 version 15.9 remains in extended support until April 13, 2027. During extended support we provide fixes only for security issues. You must be using version 15.9 to receive security updates and support.
  3. Visual Studio 2019 version 16.11 remains in extended support through April 10, 2029. During extended support we provide fixes only for security issues. You must be using version 16.11 to receive security updates and support.
  4. Visual Studio 2022 version 17.14 is in mainstream support through January 12, 2027, and then transitions to extended support through January 13, 2032. Additionally, version 17.10 LTSC is supported until January 13, 2026, and version 17.12 is supported until July 14, 2026. You must be on one of these versions to receive updates and support.

Visual Studio 2015 Retirement

On October 14, 2025, support will end for all Visual Studio 2015 editions, associated products, runtimes, and components, and they will no longer receive security or any other updates. These include:
  • Visual Studio 2015 Enterprise, Professional, Community, Build Tools, Team Explorer, Test Professional editions, and Visual Studio 2015 Shell (Integrated and Isolated).
  • Visual Studio SDK, Remote Tools, Agents, Feedback Client for Visual Studio Team Foundation Server 2015, Deployment Agent, Release Management, and Azure Tools.
  • The MSVC Tools v140 – Visual Studio 2015. When using Visual Studio 2017 or later, update any project using MSVC v140 to use the latest MSVC toolset.
  • Visual C++ Redistributable for Visual Studio 2015. Update your applications to use the latest release of Visual C++ Redistributable.

Check out the Visual Studio Hub

Stay connected with everything Visual Studio in one place! Visit the Visual Studio Hub for the latest release notes, YouTube videos, social updates, and community discussions.

Appreciation for your feedback

Your feedback helps us improve Visual Studio, making it an even more powerful tool for developers. We are immensely grateful for your contributions and look forward to your continued support. By sharing your thoughts, ideas, and any issues you encounter through Developer Community, you help us improve and shape the future of Visual Studio.
.NET Conf 2025 - Announcing the Call for Content
https://devblogs.microsoft.com/dotnet/dotnet-conf-2025-announcing-the-call-for-content
Tue, 05 Aug 2025 16:00:00 +0000 | Jeffrey T. Fritz

Hey .NET developers! 🎉 The moment we've all been waiting for is here - the Call for Content for .NET Conf 2025 is officially open! We want YOU to be part of this amazing celebration of everything .NET. Mark your calendars: November 11-13, 2025, will be three days of pure .NET goodness as we launch .NET 10 and dive into all the cool stuff happening in our ecosystem. Head over to dotnetconf.net to add it to your calendar so you don't miss it!

Why .NET Conf Rocks

.NET Conf is our free, three-day virtual celebration that brings together .NET developers from around the world. It's a community effort - we partner with Microsoft, the .NET Foundation, and awesome sponsors to make it happen. But honestly? The best part is YOU - the incredible .NET community. This is where we get together to share what we're building, learn from each other, and get genuinely excited about the future of .NET development. This year we're launching .NET 10, diving deep into .NET Aspire's latest features, and exploring how AI is changing the game for .NET developers.

We Want to Hear from YOU

Here's the thing - .NET Conf has always been about our amazing community. We're looking for passionate developers who want to share their stories, show off their cool projects, and teach others what they've learned along the way. Whether you've been speaking at conferences for years or you've never presented before but have something awesome to share - we want to hear from you! Seriously, some of our best sessions have come from first-time speakers who just had something cool they wanted to show the world.

What Kind of Sessions Are We Looking For?

We want 30-minute sessions (including Q&A time) that show off what's possible with .NET. Here's what gets us excited:
  • Web stuff: Cool ASP.NET Core projects, Blazor adventures, or that awesome web architecture you built
  • Mobile & Desktop: .NET MAUI apps, cross-platform tricks, or how you modernized that old desktop app
  • AI & Machine Learning: ML.NET projects, AI integrations, or how you're using AI in your .NET apps
  • IoT & Edge: .NET running on tiny devices, IoT solutions, or embedded projects
  • Games: Building games with .NET, Unity integrations, or game development tips
  • Cloud & Containers: Your containerization journey, microservices patterns, or cloud-native adventures
  • DevOps: CI/CD pipelines that actually work, deployment strategies, or how you made your team more productive
  • Open Source: That library you built, your contribution story, or community projects you love
We'd love to see what you're doing with .NET 9 or 10, but honestly, if you've got something cool and .NET-related that'll make developers go "wow, I want to try that!" - we want to hear about it!

What Makes a Session Stand Out?

[alert type="tip" heading="Insider Tip"] The Microsoft .NET team will be showing off the shiny new features and big announcements. To give your session the best shot, focus on real-world content—your experiences, your projects, and your "aha!" moments. [/alert] Think about sessions like:
  • Your war stories: What you learned building that challenging project
  • Architecture deep-dives: How you solved complex problems in your apps
  • Open source adventures: That library you created or contributed to
  • Best practices you discovered: Patterns and techniques that actually work
  • Side projects: That fun thing you built that shows off .NET in a cool way
  • Productivity hacks: Tools and techniques that make you a more effective developer
The community wants to hear about what you're building with .NET, not rehash what they can already read in the release notes. Show us your creativity!

The Important Stuff

[alert type="important" heading="Don't Procrastinate!"] The Call for Content closes on August 31, 2025, at 11:59 PM PDT. Trust us, you don't want to be scrambling at the last minute! [/alert]

Here's What You Need to Know

  • When: November 11-13, 2025
  • Where: Online (present from wherever you are!)
  • How long: 30 minutes including Q&A
  • Time zones: Present in your own time zone - we'll figure out the schedule magic
  • Sessions per speaker: We're limiting it to 1 session per person so more folks can participate

Ready to Submit? Here's How

  1. Head over to: sessionize.com/net-conf-2025
  2. Write an awesome proposal: Give us a catchy title and tell us why your session will be amazing
  3. Tell us about you: Share your background and any speaking experience (but don't stress if you're new!)
  4. Show your creds: Got videos of past talks? Links to your projects? Throw them in there!
[alert type="tip" heading="Insider Tip"] Include videos of past talks or demos of your projects in your proposal. It helps us see your presentation style and gets us excited about your session! [/alert]

Come Celebrate with the .NET Family

Look, .NET Conf isn't just another conference - it's our yearly family reunion! It's where developers from every corner of the planet come together to geek out about code, share those "I can't believe that worked!" moments, and push each other to build even cooler stuff. Whether you're the person building the next big web app, creating mobile experiences that users love, or figuring out how to make AI work in your business apps - your story matters. The .NET community wants to celebrate your wins, learn from your mistakes, and cheer you on as you tackle your next challenge.

Your Turn to Shine

Here's the thing - the .NET ecosystem is amazing because of people like you. That unique way you solved a problem, that library you built in your spare time, that "what if I tried this?" experiment that actually worked - that stuff is pure gold to other developers. Don't let that voice in your head tell you "someone else probably knows this better." Nope! Your perspective, your journey, your hard-won insights could be exactly what another developer needs to hear right now. [cta-button align='center' text='Submit Your Session Now!' url='https://sessionize.com/net-conf-2025' color='#5c33b8']

Let's Make This the Best .NET Conf Yet

.NET Conf 2025 is going to be incredible, but it won't be complete without voices from our amazing community. We've got the submission portal open through August 31st, and we're genuinely excited to see what awesome sessions you'll propose. Here's what we know: .NET Conf is where magic happens. It's where a casual conversation in the chat leads to your next big project idea. It's where that demo you're nervous about becomes the solution someone's been searching for. It's where you realize you're part of something way bigger than just writing code. Your session could be the one that sparks the next breakthrough, solves a problem thousands of developers are facing, or just makes someone's day a little brighter with a cool demo. Don't wait - submit your session today and let's make .NET Conf 2025 absolutely unforgettable! And while you're at it, head over to dotnetconf.net to add the event to your calendar so you don't miss the big day. Happy coding, friends! We can't wait to see you there! 🚀✨
Scalable AI with Azure Cosmos DB - Video Series
https://devblogs.microsoft.com/cosmosdb/scalable-ai-with-azure-cosmos-db-video-series
Tue, 05 Aug 2025 14:13:08 +0000 | Manish Sharma

Scalable AI in Action with Azure Cosmos DB – A Monthly Partner Showcase

As AI continues to reshape industries, customers are seeking scalable, real-time solutions that integrate seamlessly with their existing data platforms. Azure Cosmos DB, with its global distribution, low latency, and multi-model support, is uniquely positioned to power intelligent applications at scale. To help customers explore what’s possible, we’re launching the Scalable AI in Action with Azure Cosmos DB series—a monthly video session that highlights how partners are building transformative AI solutions using Azure Cosmos DB and Azure AI services. This initiative is designed to:
  • Showcase real-world architectures and use cases from trusted partners
  • Help customers understand how to scale AI workloads with Azure Cosmos DB
  • Provide actionable insights and demos that accelerate solution development

What’s Included in Each Session

Each session runs for 1 hour and follows a consistent format to maximize learning and engagement:

Session Agenda

  • Welcome & Theme Introduction - Kick things off with an overview of why scalable AI matters and how Azure Cosmos DB helps make it really work.
  • Leadership Fireside Chat - Hear directly from partner leadership about their AI vision, product strategy, and how Azure Cosmos DB powers scale and performance.
  • Architecture Spotlight & Demo - Dive into the technical details of the partner’s solution—see Azure Cosmos DB in action with vector embeddings, hybrid search, model serving, and Azure AI integration.
  • What’s Hot in Azure Cosmos DB - Get the latest updates, roadmap insights, and performance tips across NoSQL, MongoDB vCore, and Cassandra APIs.
  • Partner Call to Action - Learn how to engage with the featured partner—accelerators, reference architectures, and next steps to get started.
  • Live Q&A - Ask your questions live or submit them in advance—our speakers are ready to answer!

Execution Format

  • Live Timing: Sessions are held every month; refer to the Upcoming Sessions section for the current schedule.
  • Registration: Attendees can register via https://aka.ms/scalable-ai-cosmosdb and submit questions in advance to be addressed during the live Q&A.
  • On-Demand Access: Recordings will be available for those who cannot attend live.

Completed Sessions

  • 31st July 2025: We hosted Arpit Gupta, founder of MLAI, who shared the challenges and opportunities for GenAI applications in the BFSI world during the leadership fireside chat, followed by a deep-dive architecture walkthrough and a demonstration of his intelligent conversational platform for BFSI, powered by Azure Cosmos DB. Manish Sharma from Azure Cosmos DB Engineering, who also hosts the series, then walked through the latest features and addressed the audience's submitted and live Q&A. Click here to watch on demand: https://aka.ms/scalableai-live-july25

Upcoming Sessions

  • 28th August 2025: Featuring MAQ Software, experts in analytical solutions and Azure Cosmos DB. Join us via the YouTube live link: https://aka.ms/scalableai-live-aug25
  • 25th September 2025: Featuring Affine.ai, experts in Gen AI and Azure Cosmos DB. Join us via the YouTube live link: https://aka.ms/scalableai-live-sept25
  • 30th October 2025 and beyond: Spotlighting other capable partners with deep expertise in Azure Cosmos DB and AI.

Who Should Attend

This series is ideal for:
  • Developers and architects building AI-powered applications
  • Pre-sales professionals and technical decision-makers
  • ISVs and solution partners exploring GenAI, MongoDB, Cassandra, and DocumentDB
  • Anyone interested in real-time AI, scalable databases, and intelligent cloud solutions

Key Takeaways for Readers

  • Learn how Azure Cosmos DB powers real-time, scalable AI solutions across industries.
  • Discover partner-led architectures and demos that you can adapt to your own projects.
  • Stay updated on the latest Azure Cosmos DB features and roadmap.
  • Engage directly with experts and get your questions answered live.
  • Find new ways to collaborate with partners and accelerate your AI journey.

Register Now

Register yourself at https://aka.ms/scalable-ai-cosmosdb and join us each month to see how scalable AI is being brought to life with Azure Cosmos DB. Whether you're just starting your AI journey or scaling production workloads, this series is designed to inspire and inform.

About Azure Cosmos DB

Azure Cosmos DB is a fully managed and serverless distributed database for modern app development, with SLA-backed speed and availability, automatic and instant scalability, and support for open-source PostgreSQL, MongoDB, and Apache Cassandra. To stay in the loop on Azure Cosmos DB updates, follow us on X, YouTube, and LinkedIn. To easily build your first database, watch our Get Started videos on YouTube and explore ways to dev/test free.
Why are Windows semiannual updates named H1 and H2?
https://devblogs.microsoft.com/oldnewthing/20250805-00/?p=111435
Tue, 05 Aug 2025 14:00:00 +0000 | Raymond Chen

Windows issues monthly updates, but the bigger updates happen twice a year. The one that happens in the first half of the year is called the H1 release, and the one in the second half is the H2 release. The letter H refers to "half", which is the same pattern used by finance people to refer to the first and second halves of fiscal years. (Quarters are abbreviated Q, so Q2 means "second quarter", for example.)

You may remember that the semiannual updates used to be called the Spring and Fall releases. For example, we had the 2017 Fall Creators Update and the 2018 Spring Update. Why the name change?

It was during an all-hands meeting that a senior executive asked if the organization had any unconscious biases. One of my colleagues raised his hand. He grew up in the Southern Hemisphere, where the seasons are opposite from those in the Northern Hemisphere. He pointed out that naming the updates Spring and Fall shows a Northern Hemisphere bias and is not inclusive of our customers in the Southern Hemisphere.

The names of the semiannual releases were changed the next day to be hemisphere-neutral.

Dynamically Update C++ syntax using Next Edit Suggestions
https://devblogs.microsoft.com/cppblog/dynamically-update-c-syntax-using-next-edit-suggestions
Mon, 04 Aug 2025 16:36:10 +0000 | Sinem Akinci

Imagine a refactor that moves a member variable between the public and private sections of the class, adding getter/setter methods, and updating all references to respect this new access level. GitHub Copilot now supports Next Edit Suggestions (or NES for short) to predict the next edits to come. NES in GitHub Copilot helps you stay in flow by not only helping predict where you’ll need to make updates, but also what you’ll need to change next.

Example: Converting C code to C++

At Microsoft Build, we showed how NES can dynamically update C++ code, including an example of updating code syntax that was using C functions to use the C++ Standard Template Library (STL). For example, when converting code that reads from stdin from C-style raw character arrays to C++ code that uses the std::string type, NES predicts and suggests updates across all applicable areas near the cursor. NES replaces calls to fgets with calls to std::getline and replaces atoi with the C++ std::stoi, which has better error handling.

NES dynamically updating fgets to getline across a C++ file

You can then review and make any relevant updates – for example, in this case, any other areas that call on strings. Next Edit Suggestions is now available in both Visual Studio and VS Code. As you try out NES, we'd love to hear your feedback. Share your feedback on Developer Community for Visual Studio or on GitHub for VS Code to help shape what’s next and how we can improve. If NES streamlines your workflow or saves you time, let us know - drop a comment or email us at visualcpp@microsoft.com. We’re excited to see how you’re using NES for C++!
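
To make the conversion concrete, here is a small before/after sketch of the kind of edit described above (not the exact demo code from the Build session; the function names are only illustrative):

// Before: C-style input using raw character arrays
#include <cstdio>
#include <cstdlib>

int ReadValueBefore()
{
    char buffer[64];
    if (std::fgets(buffer, sizeof(buffer), stdin) == nullptr) return 0;
    return std::atoi(buffer); // silently returns 0 on malformed input
}

// After: the kind of follow-up edits NES suggests, using the C++ standard library
#include <iostream>
#include <string>

int ReadValueAfter()
{
    std::string line;
    if (!std::getline(std::cin, line)) return 0;
    return std::stoi(line); // throws on malformed or out-of-range input
}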

Agent Mode and other new Copilot features

GitHub Copilot is evolving beyond typical code completion, with features like Next Edit Suggestions, Agent Mode, and MCP transforming how developers interact with AI. The Visual Studio session at Build showcased not only NES in action but also Agent Mode and MCP, and how each revolutionizes the traditional Copilot interfaces. While NES predicts your next code edits in the editor, agent mode can work as an iterative AI assistant that understands your intent to provide dynamic edits and information. To learn more about the C++ NES use cases detailed above and these other new features available for developers in Visual Studio, watch “Top GitHub Copilot features you missed in Visual Studio 2022”.

What do you think?

Try out the latest Copilot features for your C++ workflows today. To access these updates to Copilot, you’ll need an active GitHub Copilot subscription and the latest version of Visual Studio. Our team is working hard on improving C++ integrations with Copilot, so please let us know any other enhancements you’d like to see. Share your thoughts with us on Developer Community for Visual Studio or on GitHub for VS Code to help shape what’s next and how we can improve. You can also reach us via email at visualcpp@microsoft.com, via X at @VisualC, or via Bluesky at @msftcpp.bsky.social.
Automate your open-source dependency scanning with Advanced Security
https://devblogs.microsoft.com/devops/automate-your-open-source-dependency-scanning-with-advanced-security
Mon, 04 Aug 2025 17:17:37 +0000 | Laura Jiang

Any experience that requires additional setup is cumbersome, especially when multiple people need to be involved. In GitHub Advanced Security for Azure DevOps, we're working to make it easier to enable features and scale out enablement across your enterprise.

You can now automatically inject the dependency scanning task into any pipeline run targeting your default branch. This is a quick way to ensure that your production code (and any code being merged into your production branch) are evaluated for open-source dependency vulnerabilities.

Enabling one-click dependency scanning for your repository

You'll need to have the Advanced Security: manage settings permission to make changes to your repository's Advanced Security enablement. Navigate to a specific repository's settings page: Project settings > Repositories > Select your repository.

If you're using the standalone products, you first need Code Security enabled. Then, navigate to Options and confirm your selection of Dependency alerts default setup.

Advanced Security repository options for Code Security plan

If you're using the bundled Advanced Security, enable the checkbox to Scan default branch for vulnerable dependencies.

Advanced Security repository enablement options

Receiving results from dependency scanning

Upon the next execution of a pipeline run targeting your repository's default branch, the Advanced Security dependency scanning task will be injected near the end of your pipeline. Dependency scanning completes evaluation of your dependencies and any associated vulnerabilities within a few minutes. For repositories where you may not have consistent CI/CD running, we recommend scheduled pipeline runs.
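
For example, a scheduled trigger in your azure-pipelines.yml can keep the default branch scanned even during quiet periods; this is only a sketch, and the branch name and cadence are assumptions:

schedules:
- cron: "0 6 * * 1"          # every Monday at 06:00 UTC
  displayName: Weekly dependency scan
  branches:
    include:
    - main
  always: true               # run even when there are no new commits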

If the task is already in your pipeline or you've set up your pipelines to skip the dependency scanning task via the DependencyScanning.Skip: true environment variable, the injected task will be skipped. The environment variable is a great option if there are certain pipelines you don't want to include in your scanning surface area. Alternatively, if there are certain pipeline jobs you wish to skip automated scanning in, you can also set the pipeline variable dependencyScanningInjectionEnabled to false.
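
Both variable names come from the behavior described above; the surrounding YAML here is only illustrative of where you might set them in an azure-pipelines.yml:

variables:
  DependencyScanning.Skip: true                 # skip the dependency scanning task across this pipeline

jobs:
- job: Build
  variables:
    dependencyScanningInjectionEnabled: false   # or opt a specific job out of automatic injection
  steps:
  - script: dotnet build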

Upon successful execution of the task, results are uploaded to Advanced Security and available in the Repos > Advanced Security tab for developers to fix any findings.

Advanced Security dependency scanning alerts in repository

You can also use this to easily set up pull request annotations for dependency scanning. If you have a build validation policy configured for your repository, dependency scanning will also automatically inject into any pull requests that target your default branch. Annotations for new findings appear directly on your pull request after you've scanned your default branch at least once, while any findings that exist in both branches will show up in the Advanced Security tab as well.

Next steps

Give this feature a try! Our team is also working on more experiences to smooth out the enablement process across Advanced Security. Have any feedback? Please share that with us directly or on Developer Community.

Learn more about Advanced Security and dependency scanning.

What's new in Azure AI Foundry | July 2025
https://devblogs.microsoft.com/foundry/whats-new-in-azure-ai-foundry-july-2025
Mon, 04 Aug 2025 15:00:27 +0000 | Nick Brady

TL;DR

  • Deep Research Agent (Public Preview): Automate complex, multi-step web research with OpenAI’s o3-deep-research model, now available in Azure AI Foundry Agent Service.
  • GPT-image-1 model: Adds input fidelity controls and partial image streaming for advanced image editing and generation.
  • AgentOps, Red Teaming, and Tracing: Enhanced agent monitoring, evaluation, and compliance tools.
  • Platform & Security: Pay-as-you-go compute, Prompt Shields GA, Entra Agent ID, and Purview integration for secure, scalable AI.
  • SDK & Docs: Updates for Python, .NET, Java, and JavaScript SDKs; new guides for Deep Research and agent development.

🚀 Join the Azure AI Foundry Developer Community

Never build alone: join the community! Connect with 25,000+ developers via Discord, GitHub Discussions, and live events. Access open-source courses, code samples, and weekly office hours for AI Agents, MCP, and Generative AI.

Models

GPT-image-1 model adds two new features (in preview):

  • Input fidelity parameter: The new input_fidelity parameter in the image edits API lets you control how closely the model preserves the style and features of the original image. This is useful for editing photos (e.g., facial features, avatars), maintaining brand identity, and realistic product imagery.
  • Partial image streaming: The image generation and edits APIs now support partial image streaming, providing users with early visual feedback as images are rendered.

Agents

Deep Research Agent (Public Preview)

Azure AI Foundry Agent Service now offers the Deep Research Agent in a limited public preview. Sign up for preview access to Deep Research, which enables developers to automate complex, multi-step web research using OpenAI’s advanced agentic research model (o3-deep-research), tightly integrated with Bing Search for authoritative, up-to-date results. Read the full announcement.

Tools

AgentOps: Tracing, Evaluation, and Monitoring

AgentOps brings enhanced tracing, evaluation, and monitoring for agent performance, enabling robust validation and optimization in production environments.

AI Red Teaming Agent (Preview)

Run the AI Red Teaming Agent in the cloud, with a new UI for reviewing red teaming results directly in Azure AI Foundry projects.

Platform (API, SDK, UI, and more)

Python SDK Updates

AI Agents, Evals, Projects, and Search
  • AI Agents 1.0.2/1.1.0b4: Bug fixes for tracing, new Deep Research and MCP tool support, and a new tool_resources parameter for custom tool resource overrides. Changelog
  • AI Evaluation 1.9.0: Major improvements to evaluators (faster, less variance), new Azure OpenAI score model grader, and new risk categories for red teaming. Changelog
  • AI Projects 1.0.0b12: Breaking changes—removal/renaming of inference client methods, argument renames, and telemetry fixes. Changelog
  • Azure AI Search 11.5.3: Bug fix for search operation handling. Changelog

.NET SDK Updates

AI Agents, Evals, Projects, and Search
  • AI Agents Persistent: New features for Deep Research, MCP, and tracing. Breaking changes to method/property names and agent management APIs.
  • AI Foundry: AIProjectClient now requires a Project endpoint; connection classes consolidated and renamed; deprecated UploadFileRequest in favor of UploadFile under Datasets; OpenAI chat client now supports authenticated use in projects.
  • Azure AI Search: Bug fixes and minor enhancements.

Java SDK Updates

AI Agents, Evals, Projects, and Search
  • AI Agents Persistent (beta): Client merges, new InferenceClient, and breaking changes to client structure for improved usability.
  • AI Projects (beta): Major refactor—clients merged, new authentication patterns, and improved sample coverage.

JavaScript/TypeScript SDK Updates

AI Agents, Evals, Projects, and Search
  • AI Agents/Projects: Bug fixes, breaking changes to constructor parameter types and method signatures, and new features for agent orchestration and evaluation.

Fine-tuning & Evaluation Updates

  • RFT Observability (Public Preview): Gain real-time visibility into Reinforcement Fine-tuning (RFT) jobs with automatic evaluations at each checkpoint. Monitor progress, debug issues early, and optimize training without guesswork.
  • Quick Evals (Public Preview): Rapidly assess model outputs from Stored Completions with a single click, without launching a full evaluation job.
  • Python Grader (Public Preview): Build custom evaluation logic using Python, enabling more flexible and domain-specific evaluations.
  • Development Tier (Generally Available): Experiment with fine-tuning and model customization at zero hosting cost for 24 hours, now generally available for all developers. Ideal for prototyping and testing before scaling.

Platform & Service API Enhancements

Model Context Protocol (MCP) connections are in public preview. Model Router and Model Leaderboard features are now available via SDKs and APIs. All SDKs now support reinforcement fine-tuning for select advanced models. The new Developer Tier eliminates hosting costs for experimentation.

Documentation Updates

  • Guide: Use Terraform to create/configure Azure AI Foundry resources (link)
  • Limitation: Agent subnet must be dedicated per Foundry resource (link)
  • Deep Research: New detailed guide and code samples (link)
  • Known Issue: GPT-4.1 models fail with tool/function calls >300K tokens (link)
  • Deprecation: Connection strings deprecated for Agent Service (link)
  • Role: "Azure AI Administrator" role is now GA (link)
  • Quota: Increased agent file upload limit to 300GB, new per-agent/tool limits (link)
  • Region: Updated Llama, Phi, Mistral model region availability (link)
  • CLI: Azure CLI is for control plane ops only (link)
  • Portal: New instructions for deleting projects and hubs (link)
  • New: Introduction of capability hosts (link)
  • Python: Extensive MCP agent service code samples (link)
  • Model: Major updates to model docs, new models, and region support (link)
  • Security: New PII filter documentation (link)
  • Cost: Guide to managing fine-tuning costs (link)
  • Guide: Updated guide to getting started with fine-tuning (link)

Happy building—and let us know what you ship with #AzureAIFoundry!
The new Dependabot NuGet updater: 65% faster with native .NET
https://devblogs.microsoft.com/dotnet/the-new-dependabot-nuget-updater
Mon, 04 Aug 2025 15:00:00 +0000 | Jamie Magee

If you've ever waited impatiently for Dependabot to update your .NET dependencies, or worse, watched it fail with cryptic errors, we have some great news. Over the past year, the Dependabot team has worked on a refactor of the NuGet updater, and the results are impressive.

From hybrid to native

The previous NuGet updater used a hybrid solution that relied heavily on manual XML parsing and string replacement operations written in Ruby. While this approach worked for basic scenarios, it struggled with the complexity and nuances of modern .NET projects. The new updater takes a completely different approach by using .NET's native tooling directly.

Instead of trying to reverse-engineer what NuGet and MSBuild do, the new updater leverages actual .NET tooling:

This shift from manual XML manipulation to using the actual .NET toolchain means the updater now behaves exactly like the tools developers use every day.

Performance and reliability improvements

The improvements in the new updater are dramatic. The test suite that previously took 26 minutes now completes in just 9 minutes—a 65% reduction in runtime. But speed is only part of the story. The success rate for updates has jumped from 82% to 94%, meaning significantly fewer failed updates that require manual intervention.

These improvements work together to deliver a faster, more reliable experience. When Dependabot runs on your repository, it spends less time processing updates and succeeds more often—reducing both the wait time and the manual intervention needed to keep your dependencies current.

Real dependency detection with MSBuild

One of the most significant improvements is how the updater discovers and analyzes dependencies. Previously, the Ruby-based parser would attempt to parse project files as XML and guess what the final dependency graph would look like. This approach was fragile and missed complex scenarios.

The new updater uses MSBuild's project evaluation engine to properly understand your project's true dependency structure. This means it can now handle complex scenarios that previously caused problems.

For example, the old parser missed conditional package references like this:

<ItemGroup Condition="'$(TargetFramework)' == 'net8.0'">
  <PackageReference Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
</ItemGroup>

With the new MSBuild-based approach, the updater can handle

  • Conditional package references based on target framework or build configuration
  • Directory.Build.props and Directory.Build.targets that modify dependencies
  • MSBuild variables and property evaluation throughout the project hierarchy
  • Complex package reference patterns that weren't reliably detected before

Dependency resolution solving

One of the most impressive features of the new updater is its sophisticated dependency resolution engine. Instead of updating packages in isolation, it now performs comprehensive conflict resolution. This includes two key capabilities:

Transitive dependency updates

When you have a vulnerable transitive dependency that can't be directly updated, the updater will now automatically find the best way to resolve the vulnerability. Let's look at a real scenario where your app depends on a package that has a vulnerable transitive dependency:

YourApp
└── PackageA v1.0.0
    └── TransitivePackage v2.0.0 (CVE-2024-12345)

The new updater follows a smart resolution strategy:

  1. First, it checks if PackageA has a newer version available that depends on a non-vulnerable version of TransitivePackage. If PackageA v2.0.0 depends on TransitivePackage v3.0.0 (which fixes the vulnerability), Dependabot will update PackageA to v2.0.0.

  2. If no updated version of PackageA is available, Dependabot will add a direct dependency on a non-vulnerable version of TransitivePackage to your project. This leverages NuGet's 'direct dependency wins' rule, where direct dependencies take precedence over transitive ones:

<PackageReference Include="PackageA" Version="1.0.0" />
<PackageReference Include="TransitivePackage" Version="3.0.0" />

With this approach, even though PackageA v1.0.0 still references TransitivePackage v2.0.0, NuGet will use v3.0.0 because it's a direct dependency of your project. This ensures your application uses the secure version without waiting for PackageA to be updated.

Related package updates

The updater also identifies and updates related packages to avoid version conflicts. If updating one package in a family (like Microsoft.Extensions.* packages) would create version mismatches with related packages, the updater automatically updates the entire family to compatible versions.

This intelligent conflict resolution dramatically reduces the number of failed updates and eliminates the manual work of resolving package conflicts.

Honoring global.json

The new updater now properly respects global.json files, a feature that was inconsistently supported in the previous version. If your project specifies a particular .NET SDK version, the updater will install the exact SDK version specified in your global.json. This ensures that the updater evaluates dependency updates using the same .NET SDK version that your development team and CI/CD pipelines use, eliminating a common source of inconsistencies.
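
For reference, a minimal global.json that pins the SDK looks like the following; the version number here is just an example:

{
  "sdk": {
    "version": "8.0.303"
  }
}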

This improvement complements Dependabot's recently added capability to update .NET SDK versions in global.json files. While the SDK updater keeps your .NET SDK version current with security patches and improvements, the NuGet updater respects whatever SDK version you've chosen—whether manually specified or automatically updated by Dependabot. This seamless integration means you get the best of both worlds: automated SDK updates when you want them, and consistent package dependency resolution that honors your SDK choices.
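
If you want Dependabot to manage that pinned SDK version as well, you can add an entry for the dedicated dotnet-sdk ecosystem under updates: in your dependabot.yml; this fragment is a sketch that assumes global.json sits at the repository root:

  - package-ecosystem: "dotnet-sdk"
    directory: "/"
    schedule:
      interval: "weekly"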

Full Central Package Management support

Central Package Management (CPM) has become increasingly popular in .NET projects for managing package versions across multiple projects. The previous updater had limited support for CPM scenarios, often requiring manual intervention.

The new updater provides comprehensive CPM support. It automatically detects Directory.Packages.props files, properly updates versions in centralized version files, supports package overrides in individual projects, and handles transitive dependencies managed through CPM. Whether you're using CPM for version management, security vulnerability management, or both, the new updater handles these scenarios seamlessly.
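
As a quick reminder of the shape of a CPM setup, here is a minimal Directory.Packages.props; the package names and versions are only examples, and this is where Dependabot applies centralized version updates:

<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- Individual projects reference these packages without a Version attribute -->
    <PackageVersion Include="Microsoft.Extensions.Hosting" Version="8.0.0" />
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>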

Support for all compliant NuGet feeds

The previous updater struggled with private NuGet feeds, especially those with non-standard authentication or API implementations. The new updater uses NuGet's official client libraries. This means it automatically supports all NuGet v2 and v3 feeds, including nuget.org, Azure Artifacts, and GitHub Packages. It also:

  • Works with standard authentication mechanisms like API keys or personal access tokens
  • Handles feed-specific behaviors and quirks that the NuGet client manages
  • Supports package source mapping configurations for enterprise scenarios

If your .NET tools can access a feed, Dependabot can too.
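
For example, a nuget.config that combines nuget.org with a private feed and package source mapping might look like this sketch; the feed URL and package patterns are placeholders:

<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="internal" value="https://pkgs.dev.azure.com/contoso/_packaging/contoso-feed/nuget/v3/index.json" />
  </packageSources>
  <packageSourceMapping>
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
    <packageSource key="internal">
      <package pattern="Contoso.*" />
    </packageSource>
  </packageSourceMapping>
</configuration>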

What this means for you

If you're using Dependabot for .NET projects, you should notice these improvements immediately. Faster updates mean dependency scans and update generation happen more quickly. More successful updates result in fewer failed updates that require manual intervention. Better accuracy ensures updates that properly respect your project's configuration and constraints. And when updates do fail, you'll get clearer errors with actionable error messages.

You don't need to change anything in your dependabot.yml configuration—you automatically get these improvements for all .NET projects.

Looking forward

This rewrite represents more than just performance improvements—it's a foundation for future enhancements. By building on .NET's native tooling, the Dependabot team will be able to add support for new .NET features as they're released, improve integration with .NET developer workflows, extend capabilities to handle more complex enterprise scenarios, and provide better diagnostics and debugging information.

The new architecture also makes it easier for the community to contribute improvements and fixes, as we rewrote the codebase in C# and leverage the same tools and libraries that .NET developers use every day. This means that developers can make contributions using familiar .NET development practices, making it easier for the community to help shape the future of Dependabot's NuGet support.

Try it out

The new NuGet updater is already live and processing updates for .NET repositories across GitHub. If you haven't enabled Dependabot for your .NET projects yet, now is a great time to start. Here's a minimal configuration to get you started:

version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"

And if you're already using Dependabot, you should already be seeing the improvements. Faster updates, fewer failures, and clearer error messages—all without changing a single line of configuration.

The rewrite demonstrates how modern dependency management should work: fast, accurate, and transparent. By leveraging the same tools that developers use every day, Dependabot can now provide an experience that feels native to the .NET ecosystem while delivering the automation and security benefits that make dependency management less of a chore.

How can I read more than 4GB of data from a file in a single call to ReadFile?
https://devblogs.microsoft.com/oldnewthing/20250804-00/?p=111432
Mon, 04 Aug 2025 14:00:00 +0000 | Raymond Chen

The nNumberOfBytesToRead parameter to ReadFile is a 32-bit unsigned integer, which limits the number of bytes that could be read at once to 4GB. What if you need to read more than 4GB?

The ReadFile function cannot read more than 4GB of data at a time. At the time the function was originally written, all Win32 platforms were 32-bit, so reading more than 4GB of data into memory was impossible because the address space didn't have room for a buffer that large.

When Windows was expanded from 32-bit to 64-bit, the byte count was not expanded. I don't know the reason for certain, but it was probably a combination of (1) not wanting to change the ABI more than necessary, so that it would be easier to port 32-bit device drivers to 64-bit, and (2) having no practical demand for reading that much data in a single call.

You can work around the problem by writing a helper function that breaks the large read into chunks of less than 4GB each.
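
Here is a minimal sketch of such a helper for a synchronous (non-overlapped) handle; the 1GB chunk size is arbitrary, and the name ReadLargeFile is only for illustration:

#include <windows.h>
#include <cstddef>
#include <cstdint>

// Reads exactly `size` bytes into `buffer` by issuing ReadFile calls of at most 1GB each.
// Returns false on I/O failure or premature end of file; call GetLastError() for details.
bool ReadLargeFile(HANDLE file, void* buffer, uint64_t size)
{
    auto* destination = static_cast<std::byte*>(buffer);
    uint64_t remaining = size;
    while (remaining > 0)
    {
        uint64_t request = remaining < (1ull << 30) ? remaining : (1ull << 30);
        DWORD bytesRead = 0;
        if (!ReadFile(file, destination, static_cast<DWORD>(request), &bytesRead, nullptr))
        {
            return false;
        }
        if (bytesRead == 0)
        {
            return false; // hit end of file before reading everything
        }
        destination += bytesRead;
        remaining -= bytesRead;
    }
    return true;
}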

But reading 4GB of data into memory seems awfully unusual. Do you really need all of it in memory at once? Maybe you can just read the parts you need as you need them. Or you can use a memory-mapped file to make this on-demand reading transparent. (Though at a cost of having to deal with in-page exceptions if the read cannot be satisfied.)
