Showing posts with label Microsoft.

Monday, May 2, 2022

Managed Grafana now available on Azure

It's now possible to run Grafana natively on Azure. Read on to learn more.

Source: Azure

Grafana, the most popular open-source analytics visualization tool, is now available on Azure as a managed service. With it, customers can run Grafana natively within the Azure cloud platform without needing to provision or manage the backend services required to run it.

Why use Grafana?

With Grafana, users can bring together logs, traces, metrics, and other disparate data from across an organization, regardless of where they are stored. With Azure Managed Grafana, the Grafana dashboards our customers are familiar with are now integrated seamlessly with the services and security of Azure.

Features

Azure Managed Grafana is a fully managed service for analytics and monitoring solutions. It's supported by Grafana Enterprise, which provides extensible data visualizations. Quickly and easily deploy Grafana dashboards with built-in high availability and control access with Azure security.

Source: Azure

Azure Managed Grafana also provides a rich set of built-in dashboards for various Azure Monitor features to help customers easily build new visualizations. For example, some features with built-in dashboards include Azure Monitor application insights, Azure Monitor container insights, Azure Monitor virtual machines insights, and Azure Monitor alerts.

How to get started

Getting started with Grafana on Azure is easy, either from the Azure portal or from the command line, as sketched below. Here are some links you should check:
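For instance, a workspace can be created from the Azure CLI. This is only a minimal sketch: it assumes the Azure CLI is installed and logged in, that the Azure Managed Grafana extension is named amg, and the resource group and workspace names below are hypothetical.

# add the Azure Managed Grafana CLI extension (assumed name: amg)
az extension add --name amg

# create a resource group and the managed Grafana workspace (names are examples)
az group create --name rg-grafana --location eastus
az grafana create --name my-grafana --resource-group rg-grafana

Once provisioned, the workspace endpoint shown in the output (or in the portal) is where you log in and start building dashboards.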

References

Tuesday, March 2, 2021

Continuous Integration with Azure App Services and Docker Containers

Enabling continuous integration between your Azure App Services and Docker Containers is simple. Learn how.
Photo by Jason Leung on Unsplash

Following up on a previous post where we learned how to deploy our own Docker Images to Azure App Services, today we will learn how to enable continuous deployment between our App Service and our Azure Container Registry so that our ASP.NET website is automatically updated whenever a new image is pushed to our private repository.

In this post we will:

If you want to follow along, please check the previous tutorials discussing how to:

  • Build a simple ASP.NET Core image on your local Docker repository
  • Create and push a Docker Image to your own Azure Container Registry
  • Deploy Docker images to Azure App Services

Requirements

As requirements, please make sure you have:

Why Continuous Deployment?

Before getting to the code, let's understand a little more about continuous deployment. Wikipedia defines it as
a software engineering approach in which software functionalities are delivered frequently through automated deployments.
And why practice CD? Still according to Wikipedia, CD is especially important because in an environment in which data-centric microservices provide the functionality, and where the microservices can be multiply instantiated, CD consists of instantiating the new version of a microservice and retiring the old version once it has drained all the requests in flight.

Reviewing our App Service

So let's start by reviewing our application. We will resume from a previous post where we explained how to deploy our Docker images to App Services. Our app looked like this:

Our App Service Panel

Here's its Azure panel:

Container Services

And here's the configuration used on the previous deployment:

Image Setup

Notice that because we're switching to continuous deployment, we'll be constantly changing the version number, so sticking with v1 will no longer work. In this case, tagging our images as latest is preferred since we want automatic deployments whenever a new webapp:latest reaches the registry. As you'd expect, tagging an existing image is a simple process:
docker image tag webapp hildenco.azurecr.io/webapp:latest

Then we push that image again just so our repo contains a latest tag to configure our webhook:

docker image push hildenco.azurecr.io/webapp:latest

We should now see webapp:latest in our registry:
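If you prefer the command line, you can also confirm the tags with the Azure CLI. A quick sketch, assuming the registry and repository names used in this series (hildenco and webapp):

# list all tags of the webapp repository in the hildenco registry
az acr repository show-tags --name hildenco --repository webapp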

Enabling Continuous Deployment

With the requirements in place, let's configure the necessary settings to deploy whenever a new webapp:latest reaches the registry.

Enabling App Service Continuous Deployment

To enable continuous deployment for our App Service, open your App Service -> Container Settings, set Continuous Deployment to on and the tag to latest, then save:
This operation may take a little longer than you expect because it will create a webhook with the above configuration in our registry. See the next item for more information.
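The same setting can also be toggled from the Azure CLI, if you prefer scripting it. A sketch, assuming the hildenco-docker app name used in this series and a hypothetical resource group called hildenco-rg:

# turn on continuous deployment for the container-based web app
az webapp deployment container config --name hildenco-docker --resource-group hildenco-rg --enable-cd true

# print the webhook URL that the registry will call on new pushes
az webapp deployment container show-cd-url --name hildenco-docker --resource-group hildenco-rg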

Reviewing the Container Registry webhook

Now go to Container Registry -> Webhooks to confirm that the previous operation created a webhook for us. As seen from the history, it was never triggered so let's push a new image to test it.
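The webhook can also be inspected from the command line. A sketch with the Azure CLI, assuming the hildenco registry; the webhook name below is hypothetical, use the one shown in the portal:

# list the webhooks created in the registry
az acr webhook list --registry hildenco --output table

# show the delivery history of a specific webhook (the name is an example)
az acr webhook list-events --registry hildenco --name webapphildencodocker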

Preparing a new Version

So let's prepare another version to test if our CD works. In this step we will change the code, rebuild the image, tag it as latest and push it to our private repo.
Keep track of your versions. Images can carry multiple tags, and additional tags don't take up extra space. Treat your tags as releases: if you ever need to restore or redeploy an older version, it's easier to find it by tag than by image ID (see the sketch below).
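For example, a release flow that keeps a versioned tag alongside latest could look like the sketch below (v2 is just an illustrative version number):

# tag the same local image with a version tag and with latest
docker image tag webapp hildenco.azurecr.io/webapp:v2
docker image tag webapp hildenco.azurecr.io/webapp:latest

# push both tags; latest triggers the deployment, v2 remains as a restorable release
docker image push hildenco.azurecr.io/webapp:v2
docker image push hildenco.azurecr.io/webapp:latest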

Changing the Source Code

First, let's change our super-complex code and add a link to this site to our landing page:

Rebuilding the Image

Next, we rebuild our webapp with:
docker image build . -t webapp
Then we tag it with the registry's FQDN with:
docker image tag webapp hildenco.azurecr.io/webapp:latest

Testing our Continuous Deployment

With our local image ready and tagged, let's push it to our registry and verify if the webhook was triggered.

Pushing our Image

In order to push our image, log in to ACR with:
az acr login -n hildenco
Then we push it with:
docker image push hildenco.azurecr.io/webapp:latest

Reviewing the webhook

Refresh the webhook page and see that the hook executed successfully:

Reviewing the logs

And on the logs tab under Container settings, I also see that the webhook was triggered (UTC time):

Reviewing the App

Lastly, we can confirm that our awesome app was updated on the public URL:

Conclusion

In this post we reviewed how to continuously deploy Docker containers to our Azure App Services using our private Azure Container Registry. Docker containers are today the standard way to build, pack and ship our applications, and it's important to learn how tools such as private container registries can help us be more effective.

References

See Also 

Tuesday, February 2, 2021

Deploying Docker images to Azure App Services

Deploying Docker images to Azure App Services is simple. Learn how to deploy your Docker images to Azure App Services using Azure Container Registry (ACR)
Photo by Glenn Carstens-Peters on Unsplash

We've been discussing Docker, containers and microservices for some time on the blog. In previous posts we learned how to create our own ASP.NET Docker images and how to push them to Azure Container Registry. Today we'll learn how to deploy those same Docker images to Azure App Services.

In this post we will:

Requirements

As requirements, please make sure you have:
If you want to follow along, please check the previous tutorials discussing how to:

    About Azure App Services

    Azure developers are quite familiar with Azure App Services. But for those who don't know, App Services are:
    HTTP-based services for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

    Why use App Services

    And why use Azure App Services? Essentially because App Services:
    • support multiple languages and frameworks, such as ASP.NET, Java, Ruby, Python and Node.js
    • can be easily plugged into your CI/CD pipelines, for example to deploy from Docker Hub or Azure Container Registry
    • can be used as serverless services
    • run webjobs, allowing us to deploy backend services without any additional costs
    • have a very powerful and intuitive admin interface
    • are integrated with other Azure services

    Creating our App Service

    So let's get started and create our App Service. While this shouldn't be new to anyone, I'd like to review the workflow so readers understand the process step by step. To create your App Service, in Azure, click Create -> App Service:
    On this screen, make sure you select:
    • Publish: Docker Container
    • OS: Linux

    Select the free plan

    Click on Change Plan to choose the free one (by default you're set on a paid one). Click Dev/Test and select F1:

    Selecting Docker Container/Linux

    Review the info and don't forget to select Docker Container/Linux for Publish and Operating System:

    Specifying Container Information

    Next, we specify the container information. In this step we will choose:
    • Options: Single Container
    • Image Source: Azure Container Registry
    • Registry: Choose yours
    Change Image Source to Azure Container Registry:
    At this point, Azure should auto-populate your repository. However, if you do not have the admin user enabled (I didn't), you'll get this error:

    Enabling Admin in your Azure Container Registry

    To enable admin access to your registry, open it using the portal and on the Identity tab, change from Disable:
    To Enable and Azure will auto-generate the credentials for you:
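    The same change can be made from the Azure CLI, if you prefer. A quick sketch, assuming the registry used in this series is named hildenco:

    # enable the admin user on the registry
    az acr update --name hildenco --admin-enabled true

    # display the auto-generated credentials
    az acr credential show --name hildenco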

    Specify your Container

    Back to the creation screen, as soon as the admin access is enabled on your registry, Azure should auto-populate the required information with your registry, image and tag (if one exists):
    Startup Command allows you to optionally specify a command to be executed when the container starts (useful, for example, to override the image's default entrypoint).

    Review and Confirm

    Review and confirm. Validation should take less than a second, and confirming starts the deployment:
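    If you prefer scripting this instead of clicking through the portal, a roughly equivalent Azure CLI sketch is shown below. The resource group and plan names are hypothetical, and depending on your setup you may also need to supply the registry credentials enabled earlier:

    # create a Linux App Service plan on the free tier (names are examples)
    az appservice plan create --name hildenco-plan --resource-group hildenco-rg --is-linux --sku F1

    # create the web app pointing at the image in our private registry
    az webapp create --name hildenco-docker --resource-group hildenco-rg --plan hildenco-plan --deployment-container-image-name hildenco.azurecr.io/webapp:v1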

    Accessing our App Service in Azure

    As seen above, as soon as we confirm, the deployment starts. It shouldn't take more than a minute to complete.

    Accessing our Web Application

    Let's check if our image is running. From the above image you can see my image's URL highlighted in yellow. Open that on a browser to confirm the site is accessible:
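    A quick way to verify from the terminal is a plain HTTP request against the app's hostname. The sketch below uses the hostname that appears in the container log further down; yours will differ:

    # expect an HTTP 200 once the container has warmed up
    curl -I https://hildenco-docker.azurewebsites.net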

    Container Features

    To finish, let's summarize some features that Azure offers us to easily manage our containers. 

    Container Settings

    Azure also offers a Container Settings tab that allows us to inspect and change container settings for our web app.

    Container Logs

    We can inspect logs for our containers to easily troubleshoot them.
    As an example, here's an excerpt of what I got for my own container log:
    2020-04-10 04:32:51.913 INFO  -  Status: Downloaded newer image for hildenco.azurecr.io/webapp:v1
    2020-04-10 04:32:52.548 INFO  - Pull Image successful, Time taken: 0 Minutes and 47 Seconds
    2020-04-10 04:32:52.627 INFO  - Starting container for site
    2020-04-10 04:32:52.627 INFO  - docker run -d -p 5021:80 --name hildenco-docker_0_e1384f56 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=hildenco-docker -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=hildenco-docker.azurewebsites.net -e WEBSITE_INSTANCE_ID=[redacted] hildenco.azurecr.io/webapp:v1 
    2020-04-10 04:32:52.627 INFO  - Logging is not enabled for this container.
    Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
    2020-04-10 04:32:57.601 INFO  - Initiating warmup request to container hildenco-docker_0_e1384f56 for site hildenco-docker
    2020-04-10 04:33:02.177 INFO  - Container hildenco-docker_0_e1384f56 for site hildenco-docker initialized successfully and is ready to serve requests.
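    If you prefer the terminal, container logging can be enabled and the same logs streamed with the Azure CLI. A sketch, assuming the app name from the log above and a hypothetical resource group:

    # persist container stdout/stderr to the App Service filesystem
    az webapp log config --name hildenco-docker --resource-group hildenco-rg --docker-container-logging filesystem

    # stream the logs live
    az webapp log tail --name hildenco-docker --resource-group hildenco-rg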

    Continuous Deployment (CD)

    Another excellent feature that you should explore in the future is enabling continuous deployment for your web apps. Enabling continuous deployment is essential to help your team gain agility by releasing faster and more often. We'll try to cover this fantastic topic in the future, so stay tuned.

    Conclusion

    In this post we reviewed how to create an Azure App Service and learned how to deploy our own Docker images from our very own Azure Container Registry (ACR) to it. Using ACR greatly simplified the integration between our Docker images and our App Services. From here, I'd urge you to explore continuous deployment to automatically push your images to your App Services as code lands in your git repository.

    References

    See Also

    Wednesday, July 15, 2020

    Hosting NuGet packages on GitHub

    In this post, let's review how to build, host and consume our own NuGet packages using GitHub Packages.
    Photo by Leone Venter on Unsplash

    Long gone are the days when we had to pay to host our NuGet packages. Today, things have changed: we have many options to host our own NuGet packages for free (including privately if we wish), including in our own GitHub repositories. In this tutorial, let's review how to build our own packages using the .NET Core CLI, push them to GitHub and, finally, consume them from our own projects.

    About NuGet

    NuGet is a free and open-source package manager designed by Microsoft and used extensively in the .NET/.NET Core ecosystem. NuGet is the name of the tool and of the package itself. The most common repository for NuGet packages is NuGet.org, hosting more than 200k packages! But we can host our own packages on different repos (including private ones) such as GitHub Packages. NuGet is bundled with Visual Studio and with the .NET Core SDK, so you probably already have it available on your machine.

    About GitHub Packages

    GitHub Packages is GitHub's free offering for those wanting to host their own packages. GitHub Packages allows hosting public and private packages. The benefits of using GitHub Packages are that it's free, you can share your packages privately or with the rest of the world, integrate with GitHub APIs, GitHub Actions and webhooks, and even create complex end-to-end DevOps workflows. For more information about GitHub Packages, click here.

    Why build our own packages

    But why build our own packages? Mainly because packages simplify using and distributing self-contained and reusable software (tools, libraries, etc.) in a clean and organized way. Beyond that, other common reasons are:
    1. sharing packages with someone else (and possibly the world)
    2. sharing a package privately with your coworkers so it can be used in different projects.
    3. packaging software so it can be installed or deployed elsewhere.

    Building NuGet Packages

    So let's get started and build our first NuGet package. The project we'll build is a simple library consisting of POCOs I frequently use as standard configuration bindings when developing microservices: Smtp, Redis, RabbitMQ, MassTransit and MongoDB. I chose this example because this is the type of code we frequently duplicate, so why not isolate it in a shareable package and keep our codebase DRY?

    Creating our project

    To quickly create my project, let's use the .NET Core CLI (feel free to use Visual Studio if you prefer):
    dotnet new classlib -o HildenCo.Core
    Then I'll add those config classes. For example, SmtpOptions looks like:
    public class SmtpOptions
    {
        public string Host { get; set; }
        public int  Port { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
        public string FromName { get; set; }
        public string FromEmail { get; set; }
    }

    Creating our first NuGet package

    Let's then create our first package. The simplest way to do so is by configuring it via Visual Studio. For that, select the Project and Alt-Enter it (or right-click it with the mouse) to view Project Properties and check Generate NuGet package on build on the Package tab:
    Don't forget to add relevant information about your package such as Id, Name, Version, Authors, Description, Copyright, License and RepositoryUrl. All that information is required by GitHub:
    If you prefer, you can edit the above metadata directly in the csproj file.
    Now, build again to confirm our package was built by inspecting the Build Output in VS (Ctrl-W, O):
    1>------ Build started: Project: HildenCo.Core, Configuration: Debug Any CPU ------
    1>HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\netstandard2.0\HildenCo.Core.dll
    1>Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\HildenCo.Core.0.0.1.nupkg'.
    ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
    Congrats! You now have built your first package!
    Don't forget to add RepositoryUrl with your correct username/repo name. We'll need it to push to GitHub later.

    Creating our package using the CLI

    As always, the CLI may be a better alternative. Why? In summary, because it allows automating package creation in continuous integration, integrating with APIs and webhooks, and even creating end-to-end DevOps workflows. So go ahead, uncheck that box and build again with:
    dotnet pack --configuration Release
    This time, we should see this as output:
    Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.

      Determining projects to restore...
      All projects are up-to-date for restore.
      HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\netstandard2.0\HildenCo.Core.dll
      Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\HildenCo.Core.0.0.1.nupkg'.
    TIP: You may have noticed that we now built our package as Release. This is another immediate benefit of decoupling our builds from VS. Only on rare occasions should we push packages built as Debug.

    Pushing packages to GitHub

    With the basics behind us, let's review how to push your own packages to GitHub.

    Generating an API Key

    In order to authenticate to GitHub Packages the first thing we'll need is an access token. Open your GitHub account, go to Settings -> Developer Settings -> Personal access tokens, click Generate new Token, give it a name, select write:packages and save:

    Creating a nuget.config file

    With the API key created, let's create our nuget.config file. This file should contain the authentication for the package to be pushed to the remote repo. A base config is listed below, with the fields to be replaced shown in angle brackets:
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
        <packageSources>
            <clear />
            <add key="github" value="https://nuget.pkg.github.com/<your-github-username>/index.json" />
        </packageSources>
        <packageSourceCredentials>
            <github>
                <add key="Username" value="<your-github-username>" />
                <add key="ClearTextPassword" value="<your-api-key>" />
            </github>
        </packageSourceCredentials>
    </configuration>
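    Alternatively, instead of hand-writing the file, newer .NET SDKs can register the source and credentials for you. A sketch, with the same placeholders to be replaced as above:

    dotnet nuget add source "https://nuget.pkg.github.com/<your-github-username>/index.json" --name github --username <your-github-username> --password <your-api-key> --store-password-in-clear-text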

    Pushing a package to GitHub

    With the correct configuration in place, we can push our package to GitHub with:
    dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
    This is what happened when I pushed mine:
    dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
    Pushing HildenCo.Core.0.0.1.nupkg to 'https://nuget.pkg.github.com/hd9'...
      PUT https://nuget.pkg.github.com/hd9/
      OK https://nuget.pkg.github.com/hd9/ 1927ms
    Your package was pushed.
    Didn't work? Check if you added RepositoryUrl to your project's metadata, as NuGet needs it to push to GitHub.

    Reviewing our Package on GitHub

    If you managed to push your first package (yay!), go ahead and review it in GitHub on the Package tab of your repository. For example, mine's available at: github.com/hd9/nuget-pkg-demo/packages and looks like this:

    Using our Package

    To complete the demo let's create an ASP.NET project to use our own package:
    dotnet new mvc -o TestNugetPkg
    To add a reference to your package, we'll use our own nuget.config since it contains pointers to our own repo. If your project has a solution, copy the nuget.config to the solution folder; otherwise, leave it in the project's folder. Open your project with Visual Studio and open Manage NuGet Packages. You should see your newly created package there (a CLI alternative is sketched after the install logs below):
    Select it and install:
    Review the logs to make sure no errors happened:
    Restoring packages for C:\src\TestNugetPkg\TestNugetPkg.csproj...
      GET https://nuget.pkg.github.com/hd9/download/hildenco.core/index.json
      OK https://nuget.pkg.github.com/hd9/download/hildenco.core/index.json 864ms
      GET https://nuget.pkg.github.com/hd9/download/hildenco.core/0.0.1/hildenco.core.0.0.1.nupkg
      OK https://nuget.pkg.github.com/hd9/download/hildenco.core/0.0.1/hildenco.core.0.0.1.nupkg 517ms
    Installing HildenCo.Core 0.0.1.
    Installing NuGet package HildenCo.Core 0.0.1.
    Committing restore...
    Writing assets file to disk. Path: C:\src\TestNugetPkg\obj\project.assets.json
    Successfully installed 'HildenCo.Core 0.0.1' to TestNugetPkg
    Executing nuget actions took 960 ms
    Time Elapsed: 00:00:02.6332352
    ========== Finished ==========

    Time Elapsed: 00:00:00.0141177
    ========== Finished ==========
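    If you prefer the command line over Visual Studio, the package can also be added directly with the .NET CLI, as long as the nuget.config above is in place next to the project or solution. A sketch, assuming the package built earlier in this post:

    dotnet add package HildenCo.Core --version 0.0.1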
    And finally, we can use it from our second project and reap the benefits of clean code and code reuse:

    Final Thoughts

    In this post we reviewed how to build our own NuGet packages using the .NET Core CLI, push them to GitHub and, finally, consume them from our own .NET projects. Creating and hosting our own NuGet packages is important for multiple reasons, including sharing code between projects and creating deployable artifacts.

    Source Code

    As always, the source code for this post is available on GitHub.

    See Also

    Monday, May 4, 2020

    Configuration in .NET Core console applications

    If you search the official .NET documentation, you will probably not find much information on how to add config files to your .NET Core console applications. Let's learn how.
    Photo by Christopher Gower on Unsplash

    With the release of .NET Core 3.1, Microsoft changed a few things in how we access configuration in our apps. While the ASP.NET documentation is really solid and scaffolding an ASP.NET Core website includes all the dependencies to get that right, the same does not happen with console applications. In this quick tutorial, let's see how we can replicate the same setup for our console apps.

    Why replicate ASP.NET Configuration

    The maturity that the .NET Core framework has achieved extends to its configuration framework. And all of that, despite the lack of documentation, can be shared between web and console apps. That said, here are some reasons why you should be using some of the ASP.NET tooling in your console projects:
    • the configuration providers read configuration data from key-value pairs using a variety of configuration sources including appsettings.json, environment variables, and command-line arguments
    • it can be used with custom providers
    • it can be used with in-memory .NET objects
    • if you're developing with Azure, it integrates with Azure Key Vault and Azure App Configuration
    • if you're running Docker, you can override your settings via the command line or environment variables
    • you will find parsers for most formats (we'll see an example here)

    The Solution

    So let's take a quick look at how to integrate some of these tools in our console apps.

    Adding NuGet packages

    Once you create your .NET Core app, the first thing to do is to add the following packages:
    • Microsoft.Extensions.Configuration
    • Microsoft.Extensions.Configuration.Binder
    • Microsoft.Extensions.Configuration.EnvironmentVariables
    • Microsoft.Extensions.Configuration.FileExtensions
    • Microsoft.Extensions.Configuration.Json
    Next, add the following initialization code:
    var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
    var builder = new ConfigurationBuilder()
        .AddJsonFile($"appsettings.json", true, true)
        .AddJsonFile($"appsettings.{env}.json", true, true)
        .AddEnvironmentVariables();

    var config = builder.Build();
    If set, the ASPNETCORE_ENVIRONMENT environment variable (which comes preset in a new ASP.NET Core project) determines which environment-specific file is loaded. So for dev, appsettings.Development.json is loaded on top of appsettings.json; since both files are marked optional, missing ones are simply skipped.

    Creating a configuration file

    Now add an appsettings.json file in the root of your project and add your configuration. Remember that this is a JSON file, so your config should be a valid JSON document. For example, the config file for one of my microservices is:
    {
      "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "hildenco"
      },
      "ConnectionString": "Server=localhost;Database=hildenco;Uid=<username>;Pwd=<pwd>",
      "Smtp": {
        "Host": "<smtp-server>",
        "Port": "<smtp-port>",
        "Username": "<username>",
        "Password": "<password>",
        "From": "HildenCo Notification Service"
      }
    }

    Parsing the configuration

    There are two ways to access the configuration: by accessing each entry individually or by mapping the whole config file (or specific sections) to a class of our own. Let's see both.

    Accessing config entries

    With the config instance above, accessing our configurations is now simple. For example, accessing a root property is:
    var connectionString = config["ConnectionString"];
    While accessing a sub-property is:
    var rmqHost = config["MassTransit:Host"];

    Mapping the configuration

    Despite working well, the previous example is verbose and error-prone. So let's see a better alternative: mapping the configuration to a POCO class, an approach Microsoft calls the options pattern. Despite its fancy name, it's probably something that you'll recognize.

    We'll also see two examples: mapping the whole configuration and mapping one specific section. For both, the procedure will require these steps:
    • creating an options file
    • mapping to/from the settings
    • binding the configuration.

    Mapping the whole config

    Because our configuration contains 3 main sections - MassTransit, a MySQL ConnectionString and an SMTP config - we'll model our AppConfig class the same way:
    public class AppConfig
    {
        public SmtpOptions Smtp { get; set; }
        public MassTransitOptions MassTransit { get; set; }
        public string ConnectionString { get; set; }
    }
    SmtpOptions should also be straightforward (note the From property, matching the config file above):
    public class SmtpOptions
    {
        public string Host { get; set; }
        public int Port { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
        public string From { get; set; }
    }
    The same goes for MassTransitOptions:
    public class MassTransitOptions
    {
        public string Host { get; set; }
        public string Queue { get; set; }
    }
    The last step is binding the whole configuration to our AppConfig class:
    var cfg = config.Get<AppConfig>();

    Accessing Configuration Properties

    With the config loaded, accessing our configs becomes trivial:
    var cs = cfg.ConnectionString;
    var smtpFrom = cfg.Smtp.From;

    Mapping a Section

    To map a section we use .GetSection("<section-name>").Bind(), available in the Microsoft.Extensions.Configuration.Binder NuGet package that we added earlier. For example, to map just SmtpOptions we'd do:
    var smtpOptions = new SmtpOptions();
    config.GetSection("Smtp").Bind(smtpOptions);

    Making it Generic

    It turns out the previous procedure also quickly gets verbose, so let's shortcut it all with the following generic method (static if run from Program.cs):
    private static T InitOptions<T>(string section)
        where T : new()
    {
        var config = InitConfig();
        // GetSection selects the section; Get<T> (from the Binder package) does the actual mapping
        return config.GetSection(section).Get<T>();
    }
    And using it with:
    var smtpCfg = InitOptions<SmtpOptions>("Smtp");

    Reviewing the solution

    Everything should be good at this point. Remember to leverage your options classes along with your Dependency Injection framework instead of accessing IConfiguration directly, for performance reasons. To conclude, here's our final Program.cs file:
    static async Task Main(string[] args)
    {
        var cfg = InitOptions<AppConfig>();
        // ...
    }

    private static T InitOptions<T>()
        where T : new()
    {
        var config = InitConfig();
        return config.Get<T>();
    }

    private static IConfigurationRoot InitConfig()
    {
        var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
        var builder = new ConfigurationBuilder()
            .AddJsonFile($"appsettings.json", true, true)
            .AddJsonFile($"appsettings.{env}.json", true, true)
            .AddEnvironmentVariables();

        return builder.Build();
    }

    Conclusion

    In this post we reviewed how to use the ASP.NET tooling to bind and access configuration from our console applications. While .NET Core has matured a lot, the documentation for console applications is still not great. For more information on the topic, I suggest reading about Configuration in ASP.NET Core and understanding the .NET Generic Host.

    References

    See Also

    Wednesday, April 1, 2020

    Monitor ASP.NET applications using Application Insights and Azure Alerts

    Using Application Insights? Learn how to trigger custom alerts for your application.
    Photo by Hugo Jehanne on Unsplash

    In this third article about Application Insights, we will review how to create custom alerts that trigger based on telemetry emitted by our applications. Creating alerts on Azure is simple, cheap, fast and adds a lot of value to our operations.

    Azure Alerts

    But what are Azure Alerts? According to Microsoft:
    Azure Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them.
    With Azure Alerts we can create custom alerts based on metrics or logs using as data sources:
    • Metric values
    • Log search queries
    • Activity log events
    • Health of the underlying Azure platform
    • Tests for website availability
    • and more...

    Customization

    The level of customization is fantastic. Alert rules can be customized with:
    • Target Resource: Defines the scope and signals available for alerting. A target can be any Azure resource. For example: a virtual machine, a storage account or an Application Insights resource
    • Signal: Emitted by the target resource, can be of the following types: metric, activity log, Application Insights, and log.
    • Criteria: A combination of signal and logic. For example, percentage CPU, server response time, result count of a query, etc.
    • Alert Name: A specific name for the alert rule configured by the user.
    • Alert Description: A description for the alert rule configured by the user.
    • Severity: The severity of the alert when the rule is met. Severity can range from 0 to 4 where 0=Critical, 4=Verbose.
    • Action: A specific action taken when the alert is fired. For more information, see Action Groups.

    Can I use Application Insights data and Azure Alerts?

    Application Insights can be used as a source for your alerts. Once you have your app hooked up with Application Insights, creating an alert is simple. We will review how later in this post.

    Creating an Alert

    Okay so let's jump to the example. In order to understand how that works, we will:
    1. Create and configure an alert in Azure targeting our Application Insights instance;
    2. Change our application by creating a slow endpoint that will be used in this exercise.

    To start, go to the Azure Portal and find the Application Insights instance your app is hooked to and click on the Alerts icon under Monitoring. (Don't know how to do it? Check this article where it's covered in detail).
    And click on Manage alert rules and New alert rule to create a new one:

    Configuring our alert

    You should now be on the Create rule page. Here we will specify a resource, a condition, who should be notified, notes and severity. I'll use the same aspnet-ai AppInsights instance as previously:

    Specifying a Condition

    For this demo, we will be tracking response time, so in the Configure section, select Server response time:

    Setting Alert logic

    On the Alert logic step we specify an operator and a threshold value. I want to track requests that take more than 1s (1000ms), evaluating every minute and aggregating data up to 5 minutes:

    Action group

    On the Add action group tab we specify who should be notified. For now, I'll send it only to myself:

    Email, SMS and Voice

    If you want to be alerted by Email and/or SMS, enter them below. We'll need them to confirm that notifications are sent to our email and phone:

    Confirming and Reviewing our Alert

    After confirming, saving and deploying your alert, you should see a summary like:

    Testing the Alerts

    With our alert deployed, let's do some code. If you want to follow along, download the source for this article and switch to the alerts branch by doing:
    git clone https://github.com/hd9/aspnet-ai.git
    cd aspnet-ai
    git checkout alerts

    # insert your AppInsights instrumentation key on appSettings.Development.json
    dotnet run
    That code contains the endpoint SlowPage to simulate slow pages (and exceptions). To test it, make sure you have correctly set your instrumentation key and send a request to the endpoint at https://localhost:5001/home/slowpage. It should throw an exception after 3s:
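    For example, from another terminal you could hit the endpoint with curl (the -k flag just skips certificate validation for the local dev certificate):

    curl -k https://localhost:5001/home/slowpage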

    Reviewing the Exception in AppInsights

    It may take up to 5 minutes to get the notifications. In the meantime, we can explore how AppInsights tracked our exception by going to the Exceptions tab. This is what was captured by default for us:
    Clicking on the SlowPage link, I can see details about the error:

    So let's quickly discuss the above information. I added an exception on that endpoint because I also wanted to highlight that, without any extra line of code, the exception was automatically tracked for us. And look how much information we get there for free! Yet another reason we should use these resources proactively as much as possible.

    Getting the Alert

    Okay, but where's our alert? If you remember, we configured our alert to aggregate data over 5-minute intervals. That's good for sev3 alerts but probably too much for sev1. So, after those long 5 minutes, you should get an email alert describing the failure:
    If you registered your phone number, you should also get an SMS. I got both with the SMS arriving a few seconds before the email.

    Reviewing Alerts

    After the first alert is sent, you should have a consolidated view under your AppInsights instance where you can view previously sent alerts grouped by severity. You can even click on them to get information and telemetry related to that event. Beautiful!

    Types of Signals

    Before we end, I'd like to show some of the signals that are available for alerts. You can use any of those (plus custom metrics) to create an alert for your cloud service.

    Conclusion

    Application Insights is an excellent tool to monitor, inspect, profile and alert on failures of your cloud resources. And given that it's extremely customizable, it's a must if you're running services on Azure.

    More about AppInsights

    Want to know more about Application Insights? Consider reading the following articles:
    1. Adding Application Insights telemetry to your ASP.NET Core website
    2. How to suppress Application Insights telemetry
    3. How to profile ASP.NET apps using Application Insights

    References

    See Also

    For more posts about AppInsights, please click here.

    About the Author

    Bruno Hildenbrand