
Tuesday, June 1, 2021

Microservices in ASP.NET

Microservices are the latest significant shift in modern development. Let's learn some of the tools and related design patterns by building a simplified e-commerce website with modern techniques such as ASP.NET Core and Docker.
Photo by Adi Goldstein on Unsplash

For some time we've been discussing tools and technologies adjacent to microservices on this blog. Not randomly, though: most of these posts derive from my open-source project aspnet-microservices, a simple (yet complicated 😉) distributed application built primarily with .NET Core and Docker. While still a work in progress, the project demos important concepts in distributed architectures.

What's included in the project

This project uses popular tools such as:
On the administrative side, the project also includes:

Disclaimer

When you create a sample microservice-based application, you need to deal with complexity and make tough choices. For the aspnet-microservices application, I deliberately chose to balance complexity and architecture by reducing the emphasis on design patterns and focusing on the development of the services themselves. The project was built to serve as an introduction and a starting point for those looking to work with Docker, Compose and microservices.

This project is not production-ready! Check Areas for Improvement for more information.

Microservices included in this project

So far, the project consists of the following services:

  • Web: the frontend for our e-commerce application;
  • Catalog: provides catalog information for the web store;
  • Newsletter: accepts user emails and stores them in the newsletter database for future use;
  • Order: provides order features for the web store;
  • Account: provides account services (login, account creation, etc) for the web store;
  • Recommendation: provides simple recommendations based on previous purchases;
  • Notification: sends email notifications upon certain events in the system;
  • Payment: simulates a fake payment store;
  • Shipping: simulates a fake shipping store;

Technologies Used

The technologies used were cherry-picked from the most commonly used by the community. I chose to favour open-source alternatives over proprietary (or commercially-oriented) ones. You'll find in this bundle:
  • ASP.NET Core: as the base of our microservices;
  • Docker and Docker Compose: to build and run containers;
  • MySQL: serving as a relational database for some microservices;
  • MongoDB: serving as the catalog database for the Catalog microservice;
  • Redis: serving as a distributed caching store for the Web microservice;
  • RabbitMQ: serving as the queue/communication layer over which our services will communicate;
  • MassTransit: the interface between our apps and RabbitMQ supporting asynchronous communications between them;
  • Dapper: lightweight ORM used to simplify interaction with the MySQL database;
  • SendGrid: used to send emails from our Notification service, as described in a previous post;
  • Vue.js and Axios: to abstract the frontend of the Web microservice with a simple and powerful JavaScript framework.

Conventions and Design Considerations

Among others, you'll find in this project that:
  • The Web microservice serves as the frontend for our e-commerce application and implements the API Gateway / BFF design patterns, routing requests from the user to other services over an internal Docker network;
  • Web caches catalog data in a Redis data store. Feel free to use Redis Commander to delete cached entries if you wish or need to.
  • Each microservice has its own database isolating its state from external services. MongoDB and MySQL were chosen as the main databases due to their popularity.
  • All services were implemented as ASP.NET Core web apps exposing the endpoints /help and /ping so they can be inspected and observed automatically by the running engine (a sketch follows this list).
  • No special logging infrastructure was added. Logs can be easily accessed via docker logs or indexed by a different application if you so desire.
  • Microservices communicate between themselves via Pub/Sub and asynchronous request/response using MassTransit and RabbitMQ.
  • The Notification microservice will eventually send emails. This project was tested with SendGrid, but other SMTP servers should work from inside or outside the containers.
  • Monitoring is experimental and includes Grafana sourcing its data from a Prometheus backend.
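For illustration, a minimal version of those endpoints could look like the sketch below. This is an assumption for clarity, not the project's actual code:

using Microsoft.AspNetCore.Mvc;

// Hypothetical controller: the services expose /help and /ping by convention,
// but this exact implementation is illustrative only.
[ApiController]
public class StatusController : ControllerBase
{
    // liveness probe used for automatic observation
    [HttpGet("/ping")]
    public IActionResult Ping() => Ok("pong");

    // a short, human-friendly description of the service
    [HttpGet("/help")]
    public IActionResult Help() => Ok("Catalog service. See /ping for liveness.");
}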

Technical Requirements

To run this project on your machine, please make sure you have installed:

If you want to develop/extend/modify it, then I'd suggest you also have:

Running the microservices

So let's quickly learn how to load and build our own microservices.

Initializing the project

Get your copy by cloning the project:
git clone https://github.com/hd9/aspnet-microservices

Next, open the solution src/AspNetContainers.sln with Visual Studio 2019. Since code is always the best documentation, the easiest way to understand the containers and their configurations is by reading the src/docker-compose.yml file.

Debugging with Visual Studio

Building and debugging with Visual Studio 2019 is straightforward. Simply open the AspNetMicroservices.sln solution from the src folder, build and run the project as debug (F5). Next, run the dependencies (Redis, MongoDB, RabbitMQ and MySQL) by issuing the below command from the src folder:

docker-compose -f docker-compose.debug.yml up

Running the services with Docker Compose

In order to run the services you'll need Docker and Docker Compose installed on your machine. Type the command below from the src folder on a terminal to start all services:
docker-compose up
Then to stop them:
docker-compose down
To remove everything, run:
docker-compose down -v
To run a specific service, do:
docker-compose up <service-name>
As soon as you run your services, Compose should start emitting logs for each service on the console:
The output of our docker-compose command

You can also query individual logs for services as usual with docker logs <svc-name>. For example:

~> docker logs src_catalog_1
info: CatalogSvc.Startup[0]
      DB Settings: ConnStr: mongodb://catalog-db:27017, Db: catalog, Collection: products
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

Database Initialization

Database initialization is automatically handled by Compose. Check the docker-compose.yml file to understand how that happens. You'll find examples on how to initialize both MySQL and MongoDB.

Dockerfiles

Each microservice contains a Dockerfile in its respective root, and understanding them should be straightforward. If you've never written a Dockerfile before, consider reading the official documentation.

Docker Compose

There are two docker-compose files in the solution. Their use is described below:
  • docker-compose.yml: this is the main Compose file. Running this file means you won't be able to access some of the services as they'll not be exposed.
  • docker-compose.debug.yml: this is the file you should run if you want to debug the microservices from Visual Studio. This file only contains the dependencies (Redis, MySQL, RabbitMQ, Mongo + admin interfaces) you'll need to use when debugging.

Accessing our App

If the application booted up correctly, go to http://localhost:8000 to access it. You should see a simple catalog and some other widgets. Go ahead and try to create an account. Just make sure that you have the settings correctly configured on your docker-compose.yml file:
Our simple e-commerce website. As with most things, its beauty is in the details 😊.

    Admin Interfaces

Admin interfaces for our services are also available on:
    I won't go over the details about each of these apps. Feel free to explore on your own.

    Monitoring

Experimental monitoring is available with Grafana, Prometheus and cAdvisor. Open Grafana at http://localhost:3000/, log in with admin | admin, select the Docker dashboard and you should see metrics for the services similar to:

    Grafana capturing and emitting telemetry about our microservices.

    Quick Reference

    As a summary, the microservices are configured to run at:

    The management tools are available on:

    And you can access the databases at:
• MySQL databases: use Adminer at http://localhost:8010/, enter the server name (ex. order-db for the order microservice) and use root | todo as username/password.
    • MongoDB: use MongoExpress at: http://localhost:8011/. No username/password is required.

    Final Thoughts

In this post I introduced my open-source project aspnet-microservices. This application was built to present the foundations of Docker, Compose and microservices to the .NET community and hopefully serves as an intuitive guide for those starting in this area.

Microservices are the latest significant shift in modern development and require learning lots (really, lots!) of new technologies and design patterns. This project is by no means complete and should not be used in production, as it lacks basic cross-cutting concerns any production-ready project would need. I deliberately omitted those for simplicity; otherwise this would no longer be a simple introduction. For more information, check the project's README on GitHub.

    Feel free to play with it and above all, learn and have fun!

    Source Code

As always, the source code is available on GitHub at: github.com/hd9/aspnet-microservices.

    Monday, November 2, 2020

    Async Request/Response with MassTransit, RabbitMQ, Docker and .NET core

Let's review how to implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit
    Photo by Pavan Trikutam on Unsplash

Undoubtedly the most popular design pattern when writing distributed applications is Pub/Sub. It turns out there's another important, less frequently mentioned design pattern that can also be implemented with queues: async request/response. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the requested service to throttle incoming requests via a queue and prevent its own exhaustion.

In this tutorial, we'll implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit. We'll also wire everything up using Docker and Docker Compose.

In this post we will:
    • Scaffold two ASP.NET Core websites
    • Configure each website to use MassTransit to communicate via a local RabbitMQ queue
    • Explain how to write the async request/response logic
    • Run a RabbitMQ container using Docker
    • Test and validate the results

    Understanding MassTransit Async Requests

Setting up async request/response with MassTransit is actually very simple once you understand how to wire everything up. So before diving into the code, let's review the terminology you'll need to know:
• Consumer: a class in your service that responds to requests (over a queue, in this case);
• IRequestClient<T>: the client interface we'll use to issue async requests via the queue;
• ReceiveEndpoint: the configuration that enables our Consumer to listen and respond to requests;
• AddRequestClient: the configuration that registers our async request client;
Keep these terms in mind as we'll use them in the following sections.

    Creating our Project

    Let's quickly scaffold two ASP.NET Core projects by using the dotnet CLI with:
    dotnet new mvc -o RequestSvc
    dotnet new mvc -o ResponseSvc
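Both services will also share the message contracts exchanged over the queue. Reconstructed from the snippets later in this post, they could look like this (the property types are assumptions):

public class ProductInfoRequest
{
    public string Slug { get; set; }   // identifies the product being requested
    public int Delay { get; set; }     // artificial delay for the demo
}

public class ProductInfoResponse
{
    public Product Product { get; set; }   // Product is the catalog entity (type assumed)
}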

    Adding the Dependencies

    The dependencies we'll need today are:

    Adding Configuration

The configuration we'll need is also straightforward. Paste this in your RequestSvc/appsettings.json:
    "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "requestsvc"
    }
    And this in your ResponseSvc/appsettings.json:
    "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "responsesvc"
    }
Next, bind the config classes to those settings. Since I covered in detail how configurations work in ASP.NET Core 3.1 projects in a previous article, I'll skip that here to keep this post short. But if you need, feel free to take a break and understand that part first before proceeding.
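If you'd like to see the binding inline, here's a minimal sketch consistent with the cfg object used in the startup code below. The class names are assumptions, not necessarily what the repo uses:

public class AppConfig
{
    public MassTransitOptions MassTransit { get; set; }
}

public class MassTransitOptions
{
    public string Host { get; set; }
    public string Queue { get; set; }
}

// In Startup, bind the whole configuration once and keep it around as cfg:
var cfg = Configuration.Get<AppConfig>();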

    Adding Startup Code

Wiring up MassTransit in ASP.NET Core's DI framework is also well documented. For our solution, it looks like this for the RequestSvc project:
    services.AddMassTransit(x =>
    {
        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(cfg.MassTransit.Host);
            c.ConfigureEndpoints(context);
        }));
       
        x.AddRequestClient<ProductInfoRequest>();
    });

    services.AddMassTransitHostedService();
And like this for the ResponseSvc project:
    services.AddMassTransit(x =>
    {
        x.AddConsumer<ProductInfoRequestConsumer>();

        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(cfg.MassTransit.Host);
            c.ReceiveEndpoint(cfg.MassTransit.Queue, e =>
            {
                e.PrefetchCount = 16;
                e.UseMessageRetry(r => r.Interval(2, 3000));
                e.ConfigureConsumer<ProductInfoRequestConsumer>(context);
            });
        }));
    });

    services.AddMassTransitHostedService();
Stop for a second and compare both initializations. Spot the differences? The RequestSvc registers a request client for ProductInfoRequest (AddRequestClient), while the ResponseSvc registers a consumer and a ReceiveEndpoint bound to its queue.

    Building our Consumer

    Before we can issue our requests, we have to build a consumer to handle these messages. In MassTransit's world, this is the same consumer you'd build for your regular pub/sub. For this demo, our ProductInfoRequestConsumer looks like this:
public class ProductInfoRequestConsumer : IConsumer<ProductInfoRequest>
{
    // _svc is the project's product lookup service; the exact type is assumed here
    private readonly ProductService _svc;

    public ProductInfoRequestConsumer(ProductService svc) => _svc = svc;

    public async Task Consume(ConsumeContext<ProductInfoRequest> context)
    {
        var msg = context.Message;
        var slug = msg.Slug;

        // a fake delay
        var delay = 1000 * (msg.Delay > 0 ? msg.Delay : 1);
        await Task.Delay(delay);

        // get the product from ProductService
        var p = _svc.GetProductBySlug(slug);

        // this responds via the queue to our client
        await context.RespondAsync(new ProductInfoResponse
        {
            Product = p
        });
    }
}

    Async requests

With the consumer, configuration and startup logic in place, it's time to write the request code. In essence, this is the piece of code that mediates the async communication between the caller and the responder using a queue (obviously abstracted away by MassTransit). A simple async request to a remote service using a backend queue looks like:
    using (var request = _client.Create(new ProductInfoRequest { Slug = slug, Delay = timeout }))
    {
        var response = await request.GetResponse<ProductInfoResponse>();
        p = response.Message.Product;
    }
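In case you're wondering where _client comes from: the AddRequestClient<ProductInfoRequest>() call in the startup code registers an IRequestClient<ProductInfoRequest> with the DI container, so it can be injected wherever the request is issued. A hypothetical controller (the name is an assumption) would receive it like this:

public class ProductController : Controller
{
    private readonly IRequestClient<ProductInfoRequest> _client;

    public ProductController(IRequestClient<ProductInfoRequest> client)
    {
        _client = client;   // registered by AddRequestClient in Startup
    }
}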

    Running the dependencies

    To run RabbitMQ, we'll use Docker Compose. Running RabbitMQ with Compose is as simple as running the below command from the src folder:
    docker-compose up
If everything initialized correctly, you should see RabbitMQ's logs emitted by Docker Compose on the terminal:
To shut down Compose and RabbitMQ, either press Ctrl-C or run:
    docker-compose down
    Finally, to remove everything, run:
    docker-compose down -v

    Testing the Application

Open the project in Visual Studio 2019 and run it as debug (F5); VS will open two windows - one for RequestSvc and another for ResponseSvc. RequestSvc looks like this:

Go ahead and run some queries. If you have your debugger attached, it will stop in both services, allowing you to validate the exchange between them. To reduce Razor boilerplate, the project uses Vue.js and Axios so we get responses in the UI without unnecessary roundtrips.

    RabbitMQ's Management Interface

    The last thing worth mentioning is how to get to RabbitMQ's management interface. This project also allows you to play with RabbitMQ at http://localhost:8012. By logging in with guest | guest and clicking on the Queues tab you should see something similar to:
    RabbitMQ is a powerful message-broker service. However, if you're running your applications on the cloud, I'd suggest using a fully-managed service such as Azure Service Bus since it increases the resilience of your services.

    Final Thoughts

In this article we reviewed how to implement asynchronous request/response using queues. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the requested service to throttle incoming requests via a queue and prevent its own exhaustion. In this example we also leveraged Docker and Docker Compose to simplify the setup and initialization of our backend services.

    I hope you liked the demo and will consider using this pattern in your applications.

    Source Code

    As always, the source code for this article is available on my GitHub.


    Wednesday, July 15, 2020

    Hosting NuGet packages on GitHub

In this post, let's review how to build, host and consume our own NuGet packages using GitHub Packages
    Photo by Leone Venter on Unsplash

Long gone are the days we had to pay to host our NuGet packages. Today, things have changed: we have many options to host our own NuGet packages for free (even privately, if we wish), including in our own GitHub repositories. In this tutorial, let's review how to build our own packages using .NET Core's CLI, push them to GitHub and, finally, consume them from our own projects.

    About NuGet

    NuGet is a free and open-source package manager designed by Microsoft and used extensively in the .NET /.NET Core ecosystem. NuGet is the name of the tool and of the package itself. The most common repository for NuGet packages is NuGet.org hosting more than 200k packages! But we can host our own packages on different repos (including private ones) such as GitHub Packages. NuGet is bundled with Visual Studio and with the .NET Core SDK so you probably have it already available on your machine.

    About GitHub Packages

GitHub Packages is GitHub's free offering for those wanting to host their own packages. GitHub Packages allows hosting both public and private packages. The benefits of using GitHub Packages are that it's free, you can share your packages privately or with the rest of the world, and you can integrate with GitHub APIs, GitHub Actions and webhooks to create complex end-to-end DevOps workflows. For more information about GitHub Packages, click here.

    Why build our own packages

But why build our own packages? Mainly because packages simplify using and distributing self-contained and reusable software (tools, libraries, etc) in a clean and organized way. Beyond that, other common reasons are:
    1. sharing packages with someone else (and possibly the world)
    2. sharing that package privately with your coworkers so they can be used in different projects.
    3. packaging software so it can be installed or deployed elsewhere.

    Building NuGet Packages

So let's get started and build our first NuGet package. The project we'll build is a simple library consisting of POCOs I frequently use as standard configuration bindings when developing microservices: Smtp, Redis, RabbitMQ, MassTransit and MongoDB. I chose this example because this is the type of code we frequently duplicate, so why not isolate it in a shareable package and keep our codebase DRY?

    Creating our project

To quickly create my project, let's use the .NET Core CLI (feel free to use Visual Studio if you prefer):
    dotnet new classlib -o HildenCo.Core
Then I'll add those config classes. For example, the SmtpOptions class looks like this:
    public class SmtpOptions
    {
        public string Host { get; set; }
        public int  Port { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
        public string FromName { get; set; }
        public string FromEmail { get; set; }
    }

    Creating our first NuGet package

    Let's then create our first package. The simplest way to do so is by configuring it via Visual Studio. For that, select the Project and Alt-Enter it (or right-click it with the mouse) to view Project Properties and check Generate NuGet package on build on the Package tab:
    Don't forget to add relevant information about your package such as Id, Name, Version, Authors, Description, Copyright, License and RepositoryUrl. All that information is required by GitHub:
    If you prefer, you can edit the above metadata directly in the csproj file.
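For reference, the metadata above maps to standard NuGet/MSBuild properties, similar to the sketch below (the values are placeholders; fill in your own):

<PropertyGroup>
  <TargetFramework>netstandard2.0</TargetFramework>
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  <PackageId>HildenCo.Core</PackageId>
  <Version>0.0.1</Version>
  <Authors>your-name</Authors>
  <Description>Standard configuration bindings for microservices.</Description>
  <Copyright>Copyright (c) your-name</Copyright>
  <PackageLicenseExpression>MIT</PackageLicenseExpression>
  <RepositoryUrl>https://github.com/your-github-username/your-repo</RepositoryUrl>
</PropertyGroup>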
    Now, build again to confirm our package was built by inspecting the Build Output in VS (Ctrl-W, O):
    1>------ Build started: Project: HildenCo.Core, Configuration: Debug Any CPU ------
    1>HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\netstandard2.0\HildenCo.Core.dll
    1>Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\HildenCo.Core.0.0.1.nupkg'.
    ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
    Congrats! You now have built your first package!
    Don't forget to add RepositoryUrl with your correct username/repo name. We'll need it to push to GitHub later.

    Creating our package using the CLI

As always, the CLI may be a better alternative. Why? In summary, because it allows automating package creation in continuous integration, integrating with APIs and webhooks, and even creating end-to-end DevOps workflows. So, go ahead and uncheck that box and build again with:
    dotnet pack --configuration Release
    This time, we should see this as output:
    Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.

      Determining projects to restore...
      All projects are up-to-date for restore.
      HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\netstandard2.0\HildenCo.Core.dll
      Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\HildenCo.Core.0.0.1.nupkg'.
TIP: You may have noticed that we now built our package as Release. This is another immediate benefit of decoupling our builds from VS. Only on rare occasions should we push packages built as Debug.

    Pushing packages to GitHub

With the basics behind us, let's review how to push your own packages to GitHub.

    Generating an API Key

    In order to authenticate to GitHub Packages the first thing we'll need is an access token. Open your GitHub account, go to Settings -> Developer Settings -> Personal access tokens, click Generate new Token, give it a name, select write:packages and save:

    Creating a nuget.config file

With the API key created, let's create our nuget.config file. This file should contain the authentication for the package to be pushed to the remote repo. A base config is listed below; replace the fields in angle brackets with your own values:
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
        <packageSources>
            <clear />
            <add key="github" value="https://nuget.pkg.github.com/<your-github-username>/index.json" />
        </packageSources>
        <packageSourceCredentials>
            <github>
                <add key="Username" value="<your-github-username>" />
                <add key="ClearTextPassword" value="<your-api-key>" />
            </github>
        </packageSourceCredentials>
    </configuration>

    Pushing a package to GitHub

    With the correct configuration in place, we can push our package to GitHub with:
    dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
    This is what happened when I pushed mine:
    dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
    Pushing HildenCo.Core.0.0.1.nupkg to 'https://nuget.pkg.github.com/hd9'...
      PUT https://nuget.pkg.github.com/hd9/
      OK https://nuget.pkg.github.com/hd9/ 1927ms
    Your package was pushed.
Didn't work? Check that you added RepositoryUrl to your project's metadata, as NuGet needs it to push to GitHub.

    Reviewing our Package on GitHub

    If you managed to push your first package (yay!), go ahead and review it in GitHub on the Package tab of your repository. For example, mine's available at: github.com/hd9/nuget-pkg-demo/packages and looks like this:

    Using our Package

    To complete the demo let's create an ASP.NET project to use our own package:
    dotnet new mvc -o TestNugetPkg
To add a reference to your package, we'll use our own nuget.config since it contains pointers to our own repo. If your project has a solution, copy the nuget.config to the solution folder; otherwise, leave it in the project's folder. Open your project with Visual Studio and open the Manage NuGet Packages window. You should see your newly created package there:
    Select it and install:
    Review the logs to make sure no errors happened:
    Restoring packages for C:\src\TestNugetPkg\TestNugetPkg.csproj...
      GET https://nuget.pkg.github.com/hd9/download/hildenco.core/index.json
      OK https://nuget.pkg.github.com/hd9/download/hildenco.core/index.json 864ms
      GET https://nuget.pkg.github.com/hd9/download/hildenco.core/0.0.1/hildenco.core.0.0.1.nupkg
      OK https://nuget.pkg.github.com/hd9/download/hildenco.core/0.0.1/hildenco.core.0.0.1.nupkg 517ms
    Installing HildenCo.Core 0.0.1.
    Installing NuGet package HildenCo.Core 0.0.1.
    Committing restore...
    Writing assets file to disk. Path: C:\src\TestNugetPkg\obj\project.assets.json
    Successfully installed 'HildenCo.Core 0.0.1' to TestNugetPkg
    Executing nuget actions took 960 ms
    Time Elapsed: 00:00:02.6332352
    ========== Finished ==========

    Time Elapsed: 00:00:00.0141177
    ========== Finished ==========
And finally we can use it from our second project and reap the benefits of clean code and code reuse:
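For instance, a hypothetical snippet in TestNugetPkg that binds the packaged SmtpOptions from appsettings.json (the namespace is assumed to match the package id):

using HildenCo.Core;   // namespace assumed to match the package id

// bind the Smtp section of appsettings.json to the packaged POCO
var smtp = Configuration.GetSection("Smtp").Get<SmtpOptions>();
Console.WriteLine($"SMTP host: {smtp.Host}:{smtp.Port}");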

    Final Thoughts

In this post we reviewed how to build our own NuGet packages using .NET Core's CLI, pushed them to GitHub and, finally, described how to consume them from our own .NET projects. Creating and hosting our own NuGet packages is important for multiple reasons, including sharing code between projects and creating deployable artifacts.

    Source Code

    As always, the source code for this post is available on GitHub.

    Wednesday, April 1, 2020

    Monitor ASP.NET applications using Application Insights and Azure Alerts

    Using Application Insights? Learn how to trigger custom alerts for your application.
    Photo by Hugo Jehanne on Unsplash

In this third article about Application Insights, we will review how to create custom alerts that trigger based on telemetry emitted by our applications. Creating alerts on Azure is simple, cheap, fast and adds a lot of value to our operations.

    Azure Alerts

    But what are Azure Alerts? According to Microsoft:
    Azure Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them.
    With Azure Alerts we can create custom alerts based on metrics or logs using as data sources:
    • Metric values
    • Log search queries
    • Activity log events
    • Health of the underlying Azure platform
    • Tests for website availability
    • and more...

    Customization

    The level of customization is fantastic. Alert rules can be customized with:
    • Target Resource: Defines the scope and signals available for alerting. A target can be any Azure resource. For example: a virtual machine, a storage account or an Application Insights resource
    • Signal: Emitted by the target resource, can be of the following types: metric, activity log, Application Insights, and log.
    • Criteria: A combination of signal and logic. For example, percentage CPU, server response time, result count of a query, etc.
    • Alert Name: A specific name for the alert rule configured by the user.
    • Alert Description: A description for the alert rule configured by the user.
    • Severity: The severity of the alert when the rule is met. Severity can range from 0 to 4 where 0=Critical, 4=Verbose.
    • Action: A specific action taken when the alert is fired. For more information, see Action Groups.

    Can I use Application Insights data and Azure Alerts?

Application Insights can be used as a source for your alerts. Once you have your app hooked up to Application Insights, creating an alert is simple. We will review how later in this post.

    Creating an Alert

    Okay so let's jump to the example. In order to understand how that works, we will:
    1. Create and configure an alert in Azure targeting our Application Insights instance;
    2. Change our application by creating a slow endpoint that will be used on this exercise.

    To start, go to the Azure Portal and find the Application Insights instance your app is hooked to and click on the Alerts icon under Monitoring. (Don't know how to do it? Check this article where it's covered in detail).
    And click on Manage alert rules and New alert rule to create a new one:

    Configuring our alert

You should now be in the Create rule page. Here we will specify a resource, a condition, who should be notified, notes and severity. I'll use the same aspnet-ai AppInsights instance as previously:

    Specifying a Condition

    For this demo, we will be tracking response time so for the Configure section, select Server response time:

    Setting Alert logic

    On the Alert logic step we specify an operator and a threshold value. I want to track requests that take more than 1s (1000ms), evaluating every minute and aggregating data up to 5 minutes:

    Action group

    On the Add action group tab we specify who should be notified. For now, I'll send it only to myself:

    Email, SMS and Voice

    If you want to be alerted by Email and/or SMS, enter them below. We'll need them to confirm that notifications are sent to our email and phone:

    Confirming and Reviewing our Alert

    After confirming, saving and deploying your alert, you should see a summary like:

    Testing the Alerts

With our alert deployed, let's write some code. If you want to follow along, download the source for this article and switch to the alerts branch by doing:
    git clone https://github.com/hd9/aspnet-ai.git
    cd aspnet-ai
git checkout alerts

    # insert your AppInsights instrumentation key on appSettings.Development.json
    dotnet run
That code contains the endpoint SlowPage to simulate slow pages (and exceptions). To test it, make sure you have correctly set your instrumentation key and send a request to the endpoint at https://localhost:5001/home/slowpage. It should throw an exception after 3s:
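The actual implementation lives in the repo's alerts branch; as a rough sketch of the idea (this exact body is an assumption):

public async Task<IActionResult> SlowPage()
{
    // simulate a slow request, then fail, so both the latency and the
    // exception show up in Application Insights
    await Task.Delay(3000);
    throw new Exception("SlowPage simulated failure");
}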

    Reviewing the Exception in AppInsights

    It may take up to 5 minutes to get the notifications. In the meantime, we can explore how AppInsights tracked our exception by going to the Exceptions tab. This is what was captured by default for us:
    Clicking on the SlowPage link, I can see details about the error:

So let's quickly discuss the above information. I added an exception to that endpoint because I also wanted to highlight that, without any extra line of code, the exception was automatically tracked for us. And look how much information we have there for free! Yet another reason to use these resources proactively as much as possible.

    Getting the Alert

Okay, but where's our alert? If you remember, we configured our alerts to track intervals of 5 minutes. That's good for sev3 alerts but probably too much for sev1. So, after those long 5 minutes, you should get an email describing the failure:
    If you registered your phone number, you should also get an SMS. I got both with the SMS arriving a few seconds before the email.

    Reviewing Alerts

After the first alert is sent, you should have a consolidated view under your AppInsights instance where you'll be able to view previously sent alerts grouped by severity. You can even click on them to get information and telemetry related to that event. Beautiful!

    Types of Signals

    Before we end, I'd like to show some of the signals that are available for alerts. You can use any of those (plus custom metrics) to create an alert for your cloud service.

    Conclusion

    Application Insights is an excellent tool to monitor, inspect, profile and alert on failures of your cloud resources. And given that it's extremely customizable, it's a must if you're running services on Azure.

    More about AppInsights

    Want to know more about Application Insights? Consider reading the following articles:
    1. Adding Application Insights telemetry to your ASP.NET Core website
    2. How to suppress Application Insights telemetry
    3. How to profile ASP.NET apps using Application Insights

    See Also

    For more posts about AppInsights, please click here.

    Monday, March 16, 2020

    How to suppress Application Insights telemetry

Application Insights is an excellent tool to capture telemetry for your application, but it may be too verbose. In this post we will review how to exclude unnecessary requests.
    Photo by Michael Dziedzic on Unsplash

In a previous post we learned how to add Application Insights telemetry to ASP.NET applications. We also saw that a lot of unnecessary telemetry was being sent. How do we filter some of that telemetry out?

In this post we will explain how to suppress telemetry by using AppInsights' DependencyTelemetry and ITelemetryProcessor.

    Understanding telemetry pre-processing

You can write and configure plug-ins for the Application Insights API to customize how telemetry is enriched and processed before it's sent to the Application Insights service, in the following ways:
    1. Sampling - reduces the volume of telemetry without affecting statistics. Keeps together related data points so we can navigate between points when diagnosing a problem.
    2. Filtering with Telemetry Processors - filters out telemetry before it is sent to the server. For example, you could reduce the volume of telemetry by excluding requests from robots. Filtering is a more basic approach to reducing traffic than sampling. It allows you more control over what is transmitted, but affects your statistics.
    3. Telemetry Initializers - lets you add or modify properties to any telemetry sent from your app. For example, you could add calculated values; or version numbers by which to filter the data in the portal.
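For example, in the ASP.NET Core SDK adaptive sampling is enabled by default and can be toggled through ApplicationInsightsServiceOptions; a minimal sketch:

services.AddApplicationInsightsTelemetry(options =>
{
    // adaptive sampling is on by default; turn it off to send all telemetry
    options.EnableAdaptiveSampling = false;
});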

    Suppressing Dependencies

    So let's create a custom filter that will suppress some of that telemetry. For example, below I show all the telemetry that was automatically sent from my application to Azure with just one line of code. Our code will suppress the data shown in yellow.

    Creating a Telemetry Processor

There are different types of telemetry filters, with RequestTelemetry and DependencyTelemetry being the most common. These classes contain properties that are useful for our logic. So let's create a filter that excludes requests to favicon.ico, bootstrap, jquery, site.css and site.js.

To get rid of some of these requests, we will implement a custom filter that I named SuppressStaticResourcesFilter to ignore requests for those static resources. Our telemetry processor should implement the interface ITelemetryProcessor. Note the logic to exclude requests in the Process method:
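A minimal sketch of such a filter could look like this; treat it as an approximation reconstructed from the description above, not the repo's exact code:

using System.Linq;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class SuppressStaticResourcesFilter : ITelemetryProcessor
{
    private static readonly string[] Suppressed =
        { "favicon.ico", "bootstrap", "jquery", "site.css", "site.js" };

    private readonly ITelemetryProcessor _next;

    public SuppressStaticResourcesFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        // drop request telemetry targeting known static resources
        if (item is RequestTelemetry request &&
            Suppressed.Any(s => request.Url?.AbsolutePath.Contains(s) == true))
            return;

        // everything else continues down the processor chain
        _next.Process(item);
    }
}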

    Registering a Telemetry Processor

    Now, we just need to wire it up on the initialization of our app. As stated on this document, the initialization is different for ASP.NET Core and ASP.NET MVC. Let's take a look at each of them.

The registration of a telemetry processor in ASP.NET Core is done in Startup.cs, while in classic ASP.NET MVC it's done in Global.asax.
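In essence, the registration boils down to these lines, taken from the full snippets in the next section:

// ASP.NET Core, in Startup.ConfigureServices:
services.AddApplicationInsightsTelemetryProcessor<SuppressStaticResourcesFilter>();

// Classic ASP.NET MVC, in Global.asax (Application_Start):
var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
builder.Use((next) => new SuppressStaticResourcesFilter(next));
builder.Build();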
    Wait a couple of minutes for the telemetry to show up online. You should no longer see those requests targeting static resources:

    How many filters?

Well, that's up to you. You could create as many filters as you want. The only change would be in the initialization. For example, for ASP.NET Core web apps, it would look like:
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddApplicationInsightsTelemetry();
        services.AddApplicationInsightsTelemetryProcessor<SuppressStaticResourcesFilter>();
        services.AddApplicationInsightsTelemetryProcessor<YourOtherFilter1>();
        services.AddApplicationInsightsTelemetryProcessor<YourOtherFilter2>();
        services.AddControllersWithViews();
    }
    For classic ASP.NET MVC Web, the initialization would be:
    var builder = TelemetryConfiguration.Active.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
    builder.Use((next) => new SuppressStaticResourcesFilter(next));
    builder.Use((next) => new YourOtherFilter1(next));
    builder.Use((next) => new YourOtherFilter2(next));
    builder.Build();

    Enabling based on LogLevel

To finish up, we could extend the solution above by enabling the telemetry processors based on your LogLevel. For example, the code below wouldn't register processors that suppress data if LogLevel == debug:
    public void ConfigureServices(IServiceCollection services)
    {
         services.AddApplicationInsightsTelemetry();
         if (logLevel != "debug")
         {
              services.AddApplicationInsightsTelemetryProcessor<SuppressStaticResourcesFilter>();
         }
         services.AddControllersWithViews();
    }

    Conclusion

In this post we extended the discussion on How to add Application Insights telemetry to ASP.NET websites and reviewed how to suppress some of that telemetry. I hope it's clear how to register your telemetry processor and how/when to use RequestTelemetry and DependencyTelemetry.

If you're using Azure, AppInsights may be a valuable asset for your organization as it provides a simple, fast, centralized and customizable service to access your logs. For more information about AppInsights, check the official documentation.

    More about AppInsights

    Want to know more about Application Insights? Consider reading previous articles on the same topic:
    1. Adding Application Insights telemetry to your ASP.NET Core website
    2. Monitor ASP.NET applications using Application Insights and Azure Alerts
    3. How to profile ASP.NET apps using Application Insights

    Source Code

The source code used in this article is available on GitHub on the telemetry branch.

    See Also

    For more posts about .NET Core, please click here.

    About the Author

    Bruno Hildenbrand      
    Principal Architect, HildenCo Solutions.