
Monday, July 27, 2020

Send emails from ASP.NET Core websites using SendGrid and Azure

Today we have multiple free options to send email from our apps. Let's review how to configure and use SendGrid and Azure to send emails from our ASP.NET Core apps and benefit from their extraordinary free plan.
Photo by Carol Jeng on Unsplash

Long gone are the days when we had to use Gmail App Passwords to send and test emails from our apps. Today we have a plethora of alternatives that cost nothing or close to nothing. In that category, SendGrid offers Azure subscribers 25,000 free emails per month! So let's review how to set up a free SendGrid account and build a simple ASP.NET website to send emails from it.

On this post we will:
  • create a SendGrid account directly in Azure
  • build a simple ASP.NET Core web app and review how to properly configure it
  • access and configure our SendGrid settings in SendGrid
  • send emails using SMTP (not the RESTful API) from our web application
For a quick start, download the code from GitHub at:

Creating a SendGrid account in Azure

The good news is that in case you don't have one already, you can create a SendGrid account directly from Azure. Let's get straight to it. Open your Azure portal, type sendgrid in the search bar and click on SendGrid Accounts:
Click Add to create our account from Azure:
Enter your information on the next screen:
Review and confirm your package:
Wait until your deployment completes (it should take no more than 10 seconds). Now go back to SendGrid Accounts and you should see your new account there:
Clicking on it will take you to the SendGrid pane, showing essential information about your new resource:
Did you notice that Manage button? Clicking it will take us directly to SendGrid, where we'll be able to configure our account, create API keys, monitor our usage and a lot more.

I won't expand much on what SendGrid offers (tl;dr: a lot!). For more on that, feel free to visit their website.

Configuring SendGrid

The first time you log in to SendGrid, you'll be asked to confirm your email address. After confirmation, you should see a screen similar to the one below, showing a general overview of your account:

Creating our SMTP API Key

To be able to send emails from SendGrid, we'll first have to generate a password. Click on Settings -> API Keys:
Choose Restricted Access:

Select Mail Send (for this demo we only need that one):

And click Create. You'll be presented with your password (API key). Copy it and store it safely:

SendGrid Configuration

With the password in hand, here's a summary about the configuration we'll need:
  • Host: smtp.sendgrid.net
  • Port: 587
  • Username: apikey
  • Password: SG.zGNcZ-**********************

Building our App

I guess that at this point, creating an ASP.NET web app is no surprise to anyone. But if you're new to .NET Core, please check this documentation on how to build and run ASP.NET Core on Linux. It's a different perspective from the Visual Studio-centric approach you'll see elsewhere. To quickly create one with VS, go to File -> Create a new project and select Web Application (Model-View-Controller).

Configuring our App

With the configuration in hand, let's now review how to use it. To simplify things, I've already built a simple web app that captures 2 fields: the name and email of a potential newsletter subscriber. It looks like this and is available on GitHub:
Apart from the visual, there are a couple of things on this app that are worth looking into. Let's start with the configuration. If you open appsettings.json on the root of the project you will see:
  "SmtpOptions": {
    "Host": "<smtp>",
    "Port": "587",
    "Username": "<account>",
    "Password": "<password>",
    "FromName": "<from-name>",
    "FromEmail": "<from-email>",
    "EmailOverride": "<email>"
  },
  "EmailTemplate": {
    "Subject": "[HildenCo WebStore] Welcome to our newsletter!",
    "Body": "Hello {0},\nThanks for signing up for our newsletter!\n\nBest Regards,\nHildenCo."
  }
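The {0} placeholder in Body is later filled with the subscriber's name via string.Format. A quick illustration (the name Alice is just an example of mine):

```csharp
using System;

public static class Program
{
    public static void Main()
    {
        // Template copied from the appsettings.json above
        const string body = "Hello {0},\nThanks for signing up for our newsletter!\n\nBest Regards,\nHildenCo.";
        var message = string.Format(body, "Alice");

        // Print just the greeting line of the rendered template
        Console.WriteLine(message.Split('\n')[0]);
    }
}
```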

Since I already explained how to bind that config to a class of our own, I'll not extend too much on the topic. Essentially we will:
  • map the SmtpOptions configuration into a SmtpOptions class
  • map the EmailTemplate config into the EmailConfig class
That mapping is done elegantly by the framework as this line from Startup.cs shows:
cfg = configuration.Get<AppConfig>();
Inspecting cfg during debug confirms the successful binding:
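For reference, here's a minimal sketch of the classes being bound (the names come from the config above; the exact shapes in the GitHub repo may differ slightly):

```csharp
using System;

// POCOs mirroring the appsettings.json sections (assumed layout)
public class SmtpOptions
{
    public string Host { get; set; }
    public int Port { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public string FromName { get; set; }
    public string FromEmail { get; set; }
    public string EmailOverride { get; set; }
}

public class EmailConfig
{
    public string Subject { get; set; }
    public string Body { get; set; }
}

// configuration.Get<AppConfig>() binds both sections in one call
public class AppConfig
{
    public SmtpOptions SmtpOptions { get; set; }
    public EmailConfig EmailTemplate { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Simulate what the binder produces so the shape is visible
        var cfg = new AppConfig
        {
            SmtpOptions = new SmtpOptions { Host = "smtp.sendgrid.net", Port = 587 }
        };
        Console.WriteLine(cfg.SmtpOptions.Port);
    }
}
```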

Dependency Injection

Next, it's time to setup dependency injection. For our objective here, ASP.NET's default DependencyInjection utility is good enough. Put the below in your ConfigureServices method to wire everything up:
services.AddTransient<IMailSender, MailSender>();
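To make that registration concrete, here's a self-contained sketch of what the interface could look like, plus a fake implementation that's handy for unit tests (the real SendMail and IMailSender live in the GitHub repo; the shapes below are assumptions of mine):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical shapes -- the real ones live in the GitHub repo
public class SendMail
{
    public string Name { get; set; }
    public string Email { get; set; }
}

public interface IMailSender
{
    Task Send(SendMail msg);
}

// A fake implementation, useful for testing controllers without SMTP
public class FakeMailSender : IMailSender
{
    public Task Send(SendMail msg)
    {
        Console.WriteLine($"Would send to {msg.Email}");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        IMailSender sender = new FakeMailSender();
        await sender.Send(new SendMail { Name = "Alice", Email = "alice@example.com" });
    }
}
```

This is the design benefit of registering the interface rather than the concrete class: swapping MailSender for FakeMailSender requires no controller changes.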
Next, inject the dependencies needed by our Controller and our MailSender classes:
readonly IMailSender _mailSender;
readonly ILogger<HomeController> _logger;

public HomeController(
    IMailSender mailSender,
    ILogger<HomeController> logger)
{
    _mailSender = mailSender;
    _logger = logger;
}

Invoking SendMail from our controller

To call MailSender from our controller, simply build a SendMail command with the submitted values and pass it to Send:
await _mailSender.Send(new SendMail
{
    Name = signup.Name,
    Email = signup.Email
});

Our MailSender class

To finish, here's an excerpt of our MailSender class (see the full source on GitHub):
// init our smtp client
var smtpClient = new SmtpClient
{
    Host = _smtpOptions.Host,
    Port = _smtpOptions.Port,
    EnableSsl = true,
    DeliveryMethod = SmtpDeliveryMethod.Network,
    UseDefaultCredentials = false,
    Credentials = new NetworkCredential(_smtpOptions.Username, _smtpOptions.Password)
};

// init our mail message
var mail = new MailMessage
{
    From = new MailAddress(_smtpOptions.FromEmail, _smtpOptions.FromName),
    Subject = _tpl.Subject,
    Body = string.Format(_tpl.Body, msg.Name)
};
mail.To.Add(msg.Email);

// send the message
await smtpClient.SendMailAsync(mail);

Testing the App

Run the app with Visual Studio 2019, enter a name and an email address. If all's configured correctly, you should soon get an email in your inbox:
As well as SendGrid reporting a successful delivery:

Final Thoughts

The only remaining question is: why SMTP? The advantage of using SMTP instead of the API is that SMTP is a standard protocol that works with .NET's primitives, works with any programming language or framework and, contrary to the RESTful API, does not require any specific packages. SMTP also works well with containers, but I'll leave that for a future post. 😉


On this tutorial we reviewed how to create a SendGrid account directly from Azure and demoed how to configure it so we can send emails from our applications. SendGrid is a great email service with a very powerful API that I recommend exploring further, including topics such as creating your own templates, using webhooks, etc. In the future we'll revisit this example to send email from our own ASP.NET containers in a microservice application. Stay tuned!

Source Code

As always, the source is available on GitHub.


See Also

Wednesday, July 15, 2020

Hosting NuGet packages on GitHub

On this post let's review how to build, host and consume our own NuGet packages using GitHub Packages
Photo by Leone Venter on Unsplash

Long gone are the days when we had to pay to host our NuGet packages. Today, things have changed: we have many options to host our own NuGet packages for free (including privately if we wish), including in our own GitHub repositories. On this tutorial let's review how to build our own packages using .NET Core's CLI, push them to GitHub and finally, how to consume them from our own projects.

About NuGet

NuGet is a free, open-source package manager designed by Microsoft and used extensively in the .NET/.NET Core ecosystem. NuGet is the name of both the tool and the package format. The most common repository for NuGet packages is nuget.org, hosting more than 200k packages! But we can also host our own packages on different repos (including private ones) such as GitHub Packages. NuGet is bundled with Visual Studio and with the .NET Core SDK, so you probably have it available on your machine already.

About GitHub Packages

GitHub Packages is GitHub's free offering for those wanting to host their own packages, both public and private. The benefits of GitHub Packages are that it's free, you can share your packages privately or with the rest of the world, integrate with GitHub APIs, GitHub Actions and webhooks, and even create complex end-to-end DevOps workflows. For more information about GitHub Packages, click here.

Why build our own packages

But why build our own packages? Mainly because packages simplify using and distributing self-contained, reusable software (tools, libraries, etc.) in a clean and organized way. Beyond that, other common reasons are:
  1. sharing packages with someone else (and possibly the world)
  2. sharing that package privately with your coworkers so they can be used in different projects.
  3. packaging software so it can be installed or deployed elsewhere.

Building NuGet Packages

So let's get started and build our first NuGet package. The project we'll build is a simple library consisting of POCOs I frequently use as standard configuration bindings when developing microservices: Smtp, Redis, RabbitMQ, MassTransit and MongoDB. I chose this example because this is the type of code we frequently duplicate, so why not isolate it in a shareable package and keep our codebase DRY?

Creating our project

To quickly create my project, let's use the .NET Core CLI (feel free to use Visual Studio if you prefer):
dotnet new classlib -o HildenCo.Core
Then I'll add those config classes. For example the SmtpOptions looks like:
public class SmtpOptions
{
    public string Host { get; set; }
    public int Port { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public string FromName { get; set; }
    public string FromEmail { get; set; }
}

Creating our first NuGet package

Let's then create our first package. The simplest way to do so is by configuring it via Visual Studio. For that, select the Project and Alt-Enter it (or right-click it with the mouse) to view Project Properties and check Generate NuGet package on build on the Package tab:
Don't forget to add relevant information about your package such as Id, Name, Version, Authors, Description, Copyright, License and RepositoryUrl. All that information is required by GitHub:
If you prefer, you can edit the above metadata directly in the csproj file.
Now, build again to confirm our package was built by inspecting the Build Output in VS (Ctrl-W, O):
1>------ Build started: Project: HildenCo.Core, Configuration: Debug Any CPU ------
1>HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\netstandard2.0\HildenCo.Core.dll
1>Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\HildenCo.Core.0.0.1.nupkg'.
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
Congrats! You now have built your first package!
Don't forget to add RepositoryUrl with your correct username/repo name. We'll need it to push to GitHub later.

Creating our package using the CLI

As always, the CLI may be a better alternative. Why? In summary because it allows automating package creation on continuous integration, integrating with APIs, webhooks and even creating end-to-end DevOps workflows. So, go ahead and uncheck that box and build it again with:
dotnet pack --configuration Release
This time, we should see this as output:
Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

  Determining projects to restore...
  All projects are up-to-date for restore.
  HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\netstandard2.0\HildenCo.Core.dll
  Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\HildenCo.Core.0.0.1.nupkg'.
TIP: You may have noticed that we now built our package as Release. This is another immediate benefit of decoupling our builds from VS. Only on rare occasions should we push packages built as Debug.

Pushing packages to GitHub

With the basics behind, let's review how to push your own packages to GitHub.

Generating an API Key

In order to authenticate to GitHub Packages the first thing we'll need is an access token. Open your GitHub account, go to Settings -> Developer Settings -> Personal access tokens, click Generate new Token, give it a name, select write:packages and save:

Creating a nuget.config file

With the API key created, let's create our nuget.config file. This file contains the package source and the credentials used to push our package to the remote repo. A base config is listed below, with the placeholders to be replaced:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
        <clear />
        <add key="github" value="https://nuget.pkg.github.com/<your-github-username>/index.json" />
    </packageSources>
    <packageSourceCredentials>
        <github>
            <add key="Username" value="<your-github-username>" />
            <add key="ClearTextPassword" value="<your-api-key>" />
        </github>
    </packageSourceCredentials>
</configuration>

Pushing a package to GitHub

With the correct configuration in place, we can push our package to GitHub with:
dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
This is what happened when I pushed mine:
dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
Pushing HildenCo.Core.0.0.1.nupkg to ''...
  OK 1927ms
Your package was pushed.
Didn't work? Check if you added RepositoryUrl to your project's metadata, as NuGet needs it to push to GitHub.

Reviewing our Package on GitHub

If you managed to push your first package (yay!), go ahead and review it on the Packages tab of your repository. Mine looks like this:

Using our Package

To complete the demo let's create an ASP.NET project to use our own package:
dotnet new mvc -o TestNugetPkg
To add a reference to your package, we'll use our own nuget.config since it contains the pointers to our own repo. If your project has a solution, copy the nuget.config to the solution folder; else, leave it in the project's folder. Open your project with Visual Studio and open the Manage NuGet Packages window. You should see your newly created package there:
Select it and install:
Review the logs to make sure no errors happened:
Restoring packages for C:\src\TestNugetPkg\TestNugetPkg.csproj...
  OK 864ms
  OK 517ms
Installing HildenCo.Core 0.0.1.
Installing NuGet package HildenCo.Core 0.0.1.
Committing restore...
Writing assets file to disk. Path: C:\src\TestNugetPkg\obj\project.assets.json
Successfully installed 'HildenCo.Core 0.0.1' to TestNugetPkg
Executing nuget actions took 960 ms
Time Elapsed: 00:00:02.6332352
========== Finished ==========

Time Elapsed: 00:00:00.0141177
========== Finished ==========
And finally we can use it from our second project and harvest the benefits of clean code and code reuse:

Final Thoughts

On this post we reviewed how to build our own NuGet packages using .NET Core's CLI, pushed them to GitHub and finally described how to consume them from our own .NET projects. Creating and hosting our own NuGet packages is important for multiple reasons including sharing code between projects and creating deployable artifacts.

Source Code

As always, the source code for this post is available on GitHub.

See Also

Monday, April 20, 2020

How to profile ASP.NET apps using Application Insights

Application Insights can monitor, log, alert and even help us understand performance problems with our apps.
Photo by Marc-Olivier Jodoin on Unsplash
We've been discussing AppInsights in depth on this blog and, to complete the series, I'd like to discuss the performance features it offers. On the previous posts, we learned how to collect, suppress and monitor our applications using AppInsights data.

On this post let's understand how to use the performance features to identify and fix performance problems with our app.

What's profiling?

Wikipedia defines profiling as:
a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization.
Profiles usually monitor:
  • Memory
  • CPU
  • Disk IO
  • Network IO
  • Latency
  • Speed of the application
  • Access to resources
  • Databases
  • etc

Profiling ASP.NET Applications

ASP.NET developers have multiple ways of profiling their web applications. Those are awesome tools that you should definitely use, but today we'll focus on what we can do to inspect a deployed application using Application Insights.

How can Application Insights help

Azure Application Insights collects telemetry from your application to help analyze its operation and performance. You can use this information to identify problems that may be occurring or improvements that would most impact users. This tutorial takes you through the process of analyzing the performance of both the server components of your application and the client's perspective, so you understand how to:
  • Identify the performance of server-side operations
  • Analyze server operations to determine the root cause of slow performance
  • Identify slowest client-side operations
  • Analyze details of page views using query language

Using performance instrumentation to identify slow resources

Let's illustrate how to detect performance bottlenecks in our app with some code. The code for this exercise is available on my GitHub. You can quickly get it by:
git clone
cd aspnet-ai
git branch performance
# insert your AppInsights instrumentation key on appSettings.Development.json
dotnet run
This project contains a few endpoints that we'll use to simulate slow operations:
  • SlowPage - async, 3s to load, throws exception
  • VerySlowPage - async, 8s to load
  • CpuHeavyPage - sync, loops over 1 million results with 25ms of interval
  • DiskHeavyPage - sync, writing 1000 lines to a file
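For reference, the artificial slowness in those endpoints boils down to a delay, along these lines (a sketch only; the delay is shortened here and the method name is taken from the list above):

```csharp
using System;
using System.Threading.Tasks;

public static class Program
{
    // Sketch of what an endpoint like VerySlowPage does internally
    static async Task<string> VerySlowPage()
    {
        await Task.Delay(80); // the real endpoint waits ~8 seconds
        return "done";
    }

    public static async Task Main()
    {
        Console.WriteLine(await VerySlowPage());
    }
}
```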
Run the tool and then get back to Azure. We should have some data there.

Performance Tools in AppInsights

Our AppInsights resource in Azure greets us with an overview page that already shows consolidated information about failed requests, server response time, server requests and availability:

Now, click on the Performance section. Out of the box, AppInsights has already captured previous requests and shows a consolidated view. Look below to see our endpoints already sorted by duration:

You should also have access to an Overall panel where you'd see requests per time:
There's also good stuff in the End-to-end transaction details widget:

For example, we could click on a given request and get additional information about it:


We now know which are the slowest pages on our site, so let's try to understand why. Essentially, we have two options:
  1. use AppInsights' telemetry API (as in this example)
  2. or integrate directly with your logging provider, in this case using System.Diagnostics.Trace.

Tracing with AppInsights SDK

Tracing with the AppInsights SDK is done via the TrackTrace method of the TelemetryClient class and is as simple as:
public IActionResult Index()
{
    // _telemetry is an injected TelemetryClient instance
    _telemetry.TrackTrace("Loading the Index page");
    return View();
}

Tracing with System.Diagnostics.Trace

Tracing with System.Diagnostics.Trace is also not complicated but requires the NuGet package Microsoft.ApplicationInsights.TraceListener. For more information regarding other logging providers, please check this page. Let's start by installing it with:
dotnet add package Microsoft.ApplicationInsights.TraceListener --version 2.13.0

C:\src\aspnet-ai\src>dotnet add package Microsoft.ApplicationInsights.TraceListener --version 2.13.0
  Writing C:\Users\bruno.hildenbrand\AppData\Local\Temp\tmpB909.tmp
info : Adding PackageReference for package 'Microsoft.ApplicationInsights.TraceListener' into project 'C:\src\aspnet-ai\src\aspnet-ai.csproj'.
info : Restoring packages for C:\src\aspnet-ai\src\aspnet-ai.csproj...
info : Installing Microsoft.ApplicationInsights 2.13.0.
info : Installing Microsoft.ApplicationInsights.TraceListener 2.13.0.
info : Package 'Microsoft.ApplicationInsights.TraceListener' is compatible with all the specified frameworks in project 'C:\src\aspnet-ai\src\aspnet-ai.csproj'.
info : PackageReference for package 'Microsoft.ApplicationInsights.TraceListener' version '2.13.0' added to file 'C:\src\aspnet-ai\src\aspnet-ai.csproj'.
info : Committing restore...
info : Writing assets file to disk. Path: C:\src\aspnet-ai\src\obj\project.assets.json
log  : Restore completed in 4.18 sec for C:\src\aspnet-ai\src\aspnet-ai.csproj.

Reviewing the results

Back in Azure we should now see more information about the performance of the pages:
And more importantly, we can verify that our traces (in green) were correctly logged:

Where from here

If you used the tools cited above, you should now have a lot of information to understand how your application performs in production. What next?

We did two important steps here: understood the slowest pages and added trace information to them. From here, it's up to you. Start by identifying the slowest endpoints and adding extra telemetry to them. The root cause could be a specific query in your app or even an external resource. The point is, each situation is peculiar and beyond the scope of this post. But you have the essentials: which pages, methods and even calls take longer. On that note, I'd recommend adding custom telemetry data so you have a real, reproducible scenario.


On this post, the last in our discussion about AppInsights, we reviewed how Application Insights can be used to understand, quantify and report on the performance of our apps. Once again, AppInsights demonstrates that it's an essential tool for developers using Azure.

More about AppInsights

For more information, consider reading my previous articles about App Insights:
  1. Adding Application Insights telemetry to your ASP.NET Core website
  2. Suppressing Application Insights telemetry on .NET applications
  3. Monitoring ASP.NET applications using Application Insights and Azure Alerts


See Also

Monday, February 17, 2020

Running NServiceBus on Azure WebJobs

On this post we will learn how to build and deploy NServiceBus on Azure WebJobs.
Photo by True Agency on Unsplash

Most of us aren't there yet with microservices. But that doesn't mean we shouldn't upgrade our infrastructure to reduce costs, increase performance, enhance security, and simplify deployment and scaling using simpler/newer technologies. On this article let's review how to deploy NServiceBus on Azure WebJobs and discuss:

  • why use WebJobs
  • how to upgrade your NSB endpoints to Azure WebJobs
  • the refactorings you'll need
  • how to build, debug, test and deploy our WebJob
  • production considerations
If you're starting a new project or have a light backend, I'd recommend you to consider going serverless on Azure Functions or AWS Lambda with SQS.


On Azure, NServiceBus is usually deployed on Cloud Services or Windows Services. The major problem with both is that they are difficult to update, secure and scale out, and have to be managed separately.

Depending on your usage, migrating your NSB backend to Azure WebJobs could be a good alternative to reduce costs, maintenance and increase security. Plus, since webjobs are scaled out automatically with your Azure App Services, you would literally get autoscaling on your NSB backend for free!

Azure WebJobs

So let's start by quickly recapping what WebJobs are. According to Microsoft, WebJobs are
a feature of Azure App Service that enables you to run a program or script in the same context as a web app, API app, or mobile app. There is no additional cost to use WebJobs.
In other words, a WebJob is nothing more than a script or an application run by Azure. Currently supported formats are:
  • .cmd, .bat, .exe (using Windows cmd)
  • .ps1 (using PowerShell)
  • .sh (using Bash)
  • .php (using PHP)
  • .py (using Python)
  • .js (using Node.js)
  • .jar (using Java)
As of the writing of this post, Azure still didn't support WebJobs on App Service on Linux.

Types of WebJobs

WebJobs can be either continuous or triggered. The differences are:
  • Continuous WebJobs: start immediately, can be run in parallel or be restricted to a single instance.
  • Triggered WebJobs: can be triggered manually or on a schedule and run on a single instance selected by Azure.
Since backend services like NServiceBus and MassTransit traditionally run continuously on the background, this post will focus on continuous WebJobs.

Benefits of running NServiceBus on WebJobs

So what are the benefits of transitioning our NServiceBus hosts to WebJobs? In summary, this approach will:
  • reduce your costs, as no VMs or Cloud Services are required
  • allow your backend to scale automatically with your Azure App Service
  • eliminate your concerns about maintenance, upgrades, patches and security
  • be way simpler to deploy
  • unlike NServiceBus Host, not be on a deprecation path
So let's review how it works.

Migrating NServiceBus backends to WebJobs

Migrating a NServiceBus backend to WebJobs couldn't be simpler. Since NSB's official documentation does not clearly describe a migration process, let's address it here. Essentially you'll have to:
  • transform your host project in a console application
  • add a startup class and refactor the endpoint initialization
  • add a reference to Microsoft.Azure.WebJobs so you can use the WebJob API (optional)

Transforming our Host in a Console Application

The first part of our exercise requires converting our NServiceBus endpoint to a WebJob. Since WebJobs are essentially executable files, we can start by simply transforming our endpoint project from a Class Library to a Console Application:

Referencing the WebJob package

As recommended, to leverage the Azure API we'll have to add the Microsoft.Azure.WebJobs NuGet package to our solution. After that package is added, we'll also have to refactor our Main method to correctly initialize and shut down our bus.

Adding a Startup class

Next, we have to add a startup class to our project. Essentially the compiler just needs a static Main() method in our solution so the project can be initialized. This is a simple example:
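Something along these lines should do (a sketch only; the commented lines stand in for the real NServiceBus and WebJobs SDK calls, which vary by version):

```csharp
using System;
using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // var endpoint = await Endpoint.Start(new EndpointConfiguration("MyApp.Backend"));
        Console.WriteLine("Endpoint started. Running as a continuous WebJob...");

        // A real continuous WebJob blocks here until Azure signals shutdown
        // (for example via the JobHost or a WebJobsShutdownWatcher).
        await Task.Delay(100); // placeholder for that blocking wait

        // await endpoint.Stop();
        Console.WriteLine("Endpoint stopped.");
    }
}
```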
From here, not much will change. In summary, you will have to:
  • Remove the NServiceBus.Host package from the solution
  • Remove IWantToRunWhenEndpointStartsAndStops as it's no longer necessary
  • Refactor some of your settings because your deployment will likely change.
  • Optionally, add some sort of centralized logging like Application Insights since your backend will run in multiple instances. And having a consolidated logging infrastructure wouldn't hurt.

Potential Problems

If you updated to the 3.x series of the Microsoft.Azure.WebJobs NuGet package, you probably realized that Microsoft aligned .NET Core and the .NET Framework on this release, already preparing for .NET 5. While that's excellent news, you may also hit conflicting dependencies and build errors like the one listed below.
error CS0012: The type 'Object' is defined in an assembly that is not referenced. You must add a reference to assembly 'netstandard, Version=, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'.
That error can be fixed by:
  1. Referencing the NETStandard.Library NuGet package on your WebJob;
  2. Adding <Reference Include="netstandard" /> just below the <ItemGroup> section on your WebJob's csproj file.
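For reference, option 2 ends up looking like this in the WebJob's csproj (a sketch):

```xml
<ItemGroup>
  <!-- Works around CS0012 by referencing netstandard explicitly -->
  <Reference Include="netstandard" />
</ItemGroup>
```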

Building, testing and debugging the solution

After upgrading packages, refactoring code and fixing dependency issues, we'll now have to fix potential build errors and assert that the unit tests are passing for the solution. Since I don't expect any major issues here, I guess we can move ahead and review the necessary changes for debugging.


Because we transformed our NServiceBus endpoint into a console app, we remain able to start it on debug as previously. However, there are some important details ahead. Make sure you set your WebJob project to start when debugging. To do so, configure the solution to start multiple projects at the same time by right-clicking your solution, clicking Set Startup Projects, selecting Multiple startup projects and setting the Action column to Start for the projects you want to start.
Now set your Azure Storage connection string so you can debug your project against Azure.

Running Locally

But if you want to run the WebJob using the development connection string ("UseDevelopmentStorage=true"), you will realize that the initialization fails. WebJobs can't run with the Azure Storage Emulator:
      System.AggregateException: One or more errors occurred. ---> System.InvalidOperationException:
         Failed to validate Microsoft Azure WebJobs SDK Storage account.
         The Microsoft Azure Storage Emulator is not supported, please use a Microsoft Azure Storage account 
      hosted in Microsoft Azure.
         at Microsoft.Azure.WebJobs.Host.Executors.StorageAccountParser.ParseAccount(String connectionString,
      String connectionStringName, IServiceProvider services)
         at Microsoft.Azure.WebJobs.Host.Executors.DefaultStorageAccountProvider.set_StorageConnectionString(String value)
         at Microsoft.Azure.WebJobs.JobHostConfiguration.set_StorageConnectionString(String value)
         at ATIS.Services.Program.d__1.MoveNext() in C:\src\ATIS\ATIS.Services\Program.cs:line 49
         --- End of stack trace from previous location where exception was thrown ---
         at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
         at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
         at System.Runtime.CompilerServices.TaskAwaiter.GetResult()
So if we have a console application ready to run, why do we even have to start the JobHost at all?

That's a common scenario: when we want to run different logic in debug and release modes, I usually resort to preprocessor directives, with different implementations for debug and release.
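A minimal sketch of that pattern (the console messages are mine; in the real project the branches would start the endpoint with or without the JobHost):

```csharp
using System;

public static class Program
{
    public static void Main()
    {
#if DEBUG
        // Local debugging: skip the JobHost, since the WebJobs SDK
        // refuses to run against the Azure Storage Emulator.
        Console.WriteLine("DEBUG build: running the endpoint without the JobHost");
#else
        // Release: let the JobHost manage the WebJob's lifetime on Azure.
        Console.WriteLine("RELEASE build: running the endpoint under the JobHost");
#endif
    }
}
```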

Going Async

To finish, let's make the code asynchronous. The two previous snippets could be refactored into something like:


Now let's discuss deployment. Three important things to note:
1. Variable collisions - you will probably have to change or rename some variables since they may overlap;
2. Changes in the deployment process - to run as a WebJob, your backend will have to be deployed with your web app;
3. Transformations - you will probably have to change some transformations so they're also available for the backend.

Deploying your WebJob

A continuous WebJob should be deployed with your Azure App Service under App_Data/jobs/continuous. Triggered jobs go into the App_Data/jobs/triggered folder. The screenshot below shows them running in my App Service:
Another way to confirm that is by using the Azure Serial Console and cd-ing into that folder:
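For reference, the folder layout inside the App Service ends up roughly like this (the MyApp.* names are placeholders of mine):

```
App_Data/
└── jobs/
    ├── continuous/
    │   └── MyApp.Backend/     <- always-running jobs (e.g. our NSB endpoint)
    └── triggered/
        └── MyApp.Nightly/     <- scheduled or manually-triggered jobs
```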

Changing the Deployment Process

So how do we get our WebJobs into App_Data/jobs/continuous? Well, that will obviously depend on how you're deploying your services. The most common deployment strategies are:
1. ClickOnce from Visual Studio
2. Custom PowerShell scripts
3. Using an automated deployment tool (ex. Azure DevOps, CircleCI, AppVeyor, Octopus Deploy, etc.)
4. By hand 😢
Let's discuss two common approaches: NuGet packages and PowerShell scripts.

        NuGet Packaging

A common way to package code is building NuGet packages. I won't extend much into that as it's outside the scope of this post, but I want to highlight that getting our project within our NuGet package is very simple. If you're already building NuGet packages, simply adding a reference to your project in the <files> section tells MSBuild to package the backend together with the web application:
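In case the snippet doesn't load, the <files> entry looks roughly like this in the .nuspec (paths and project names are illustrative):

```xml
<files>
  <!-- Package the web app as usual... -->
  <file src="MyApp.Web\bin\release\**" target="content" />
  <!-- ...and drop the backend into the continuous WebJobs folder. -->
  <file src="MyApp.Backend\bin\release\**"
        target="content\App_Data\jobs\continuous\MyApp.Backend" />
</files>
```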


If your CI/CD tool supports PowerShell, we could add the below snippet in a step just before the release:
Copy-Item ..\MyApp.Backend\bin\release App_Data\jobs\continuous\MyApp.Backend -Force -Recurse

        Post-build event

Another alternative is running a specific command in your post-build event. Just keep in mind that this will also slow down your local builds unless you add some conditional around it:
xcopy /Q /Y /E /I ..\MyApp.Backend\bin\release App_Data\jobs\continuous\MyApp.Backend

        Testing Considerations

With the deployment out of the way, let's review what should be considered when testing:
1. Performance - I didn't see any performance degradation, but that may not be your case. Test and compare the performance of this implementation.
2. Failures - A crashing WebJob won't crash your App Service but, have you tested edge cases?
3. Scale - the number of instances can be different from your current setup. Can you guarantee that no race conditions exist?
4. Logging - Do you need to change how your application logs its data? Are the logs centralized and easily accessible?
5. Remoting - Just because you departed from Windows VMs and Cloud Services doesn't mean you can't access the instance remotely. The Azure Serial Console is an excellent tool to manage and inspect some aspects of your job.

          Production Considerations

Still there? So let's finish this post with some considerations about running NServiceBus on WebJobs in production. I expect you've tested your application against the items highlighted in the previous section.

          I'd recommend that before going to production that you:
          • build some metrics - around the performance before deploying so you know what to expect;
          • use Azure Deployment Slots - to validate production before setting it live;
• double (or triple) check your configuration - because it's a new deployment to a new environment, and some configurations were changed, weren't they?
          • keep an eye on the logs - as we always do, right? 😊 
          • do a post-mortem after the deployment - so your team reflects on the pros/cons of this transition.

          Final Thoughts

          Migrating NServiceBus from VMs to WebJobs was a refreshing and cost-saving experience. Over time, we felt the heavy burden of managing VMs (security patches, firewalling, extra configuration, redundancies, backups, storage, vNets, etc) not to mention how difficult it is to scale them out. Because WebJobs scale out automatically with the App Service at virtually no extra cost, we definitely gained a lot with this change. Some of the positive impacts I saw were:
          • quicker deployments
          • easier to scale out
          • cheaper to run
          • more secure
• near-zero ops
          • simpler deployments
          • decent performance
          If you're starting a new project or have a light backend, I'd recommend you to consider going serverless on Azure Functions or AWS Lambda with SQS.

          More about NServiceBus?

Want to read other posts about NServiceBus? Please also consider:


          See Also

            Monday, January 6, 2020

            Countdown to .NET 5

            2020 is an excellent year for .NET. This is the year we'll finally see .NET 5 merging .NET Core, .NET Framework and Xamarin.
2020 brings great news for .NET developers. This is the year Microsoft expects to consolidate .NET Core and .NET Framework into a single platform called .NET 5, including .NET mobile (Xamarin), ASP.NET Core, Entity Framework Core, WinForms, WPF and ML.NET. The first preview is expected in the first half of the year, with the official release forecasted for November 2020. Excited? You should be!

            Update: The first beta release is available now!

            Great news for .NET developers

That's great news for folks working on .NET Core since there'll be an influx of projects to work on, contribute to and develop. But that's even better news for teams working on slow-moving projects (aka, most of us) that have been deferring an upgrade to the more modern, faster and container-friendly .NET Core.

I posted some time ago my insights regarding this update. TLDR, I'm super excited! Being a long-time Fedora Linux user and a big open source enthusiast, I have been using .NET Core for my personal projects on Linux (and running them successfully on Docker, AWS and Azure). While I'm transitioning more and more of my workflow to open source software (such as Vim, i3, etc) and have been working a decent portion of my time with Go and Python, .NET is still my default framework.

            So let's take another look at what's coming up next with .NET.

            Highlights of .NET 5

            Apart from the single codebase, my preferred highlights of .NET 5 are:
            • .NET will become a single platform including Xamarin, ASP.NET Core, Entity Framework Core, WinForms, WPF and ML.NET
            • Unified .NET SDK experience:
• Single BCL (Base Class Library) across all .NET 5 applications. Today Xamarin applications use the Mono BCL but will move to the .NET Core BCL, improving compatibility across our application models.
              • Mobile development (Xamarin) is integrated into .NET 5. This means the .NET SDK will support mobile. For example, you can use “dotnet new XamarinForms” to create a mobile application.
• Native Applications supporting multiple platforms: a single device project that supports an application that can work across multiple devices, for example Windows Desktop, Microsoft Duo (Android), and iOS, using the native controls supported on those platforms.
• Cloud Native Applications: high-performance, single-file (.exe) microservices under 50MB, and support for building multiple projects (API, web front ends, containers) both locally and in the cloud.
            • Open source and hosted on GitHub
            • Cross-platform and better performance
            • Decent command-line interface (CLI)
            • Java, Objective-C and Swift interoperability
            • Support of static compilation of .NET (ahead-of-time – AOT)
            • Smaller footprints

            A Unified Platform

            This is a more holistic view of what .NET 5 will be:

            The Schedule

            The proposed merge is expected to happen by November 2020. Here's the plan:
            You can also check the release status in real time on GitHub (Jan, 2020):

            What's Next

            So what's next? Well, the best thing to do is to keep an eye on .NET's official blog as they'll be updating the status of the project through there. Would you like to contribute? Jump into .NET Core and CoreFx repositories in GitHub. For more information on the topic, consider reading .NET Core and .NET merging as .NET 5.0.


            See Also

            For more posts on .NET Core, please click here.

            Monday, December 17, 2018

            Accessing Entity Framework context on the background on .NET Core

            You got the "Cannot access a disposed object using Entity Framework Core". What should you do?
            Photo by Caspar Camille Rubin on Unsplash

            You tried to access Entity Framework Core's db context on a background thread in .NET Core and got this error:
            Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
            This exception happens because EF Core disposes the connection just after the request to the controller is closed. Indeed that's the right approach and is the default behaviour as we don't want those resources hanging open for much longer. But before going forward, I'd like you to know that most times you don't need or want to access EF Core on a background context. A few good explanations are described here. And, if you need an introduction, I'd also recommend reading Scott Hanselman's introduction on the topic.

However, such an approach sometimes may be necessary. For example, I came across this issue writing an MVP, a proof of concept where a Vue.js chat room using EF Core communicated with a SignalR Core backend running on Linux. In my opinion, MVPs and proofs of concept are the only acceptable use cases for this solution. As always, the default approach should be accessing the service via the injected dependency.

            Enough talking. Let's take a look at how to address the problem.


The key to doing this is using IServiceScopeFactory. Available in the Microsoft.Extensions.DependencyInjection NuGet package, IServiceScopeFactory provides us a singleton from which we can resolve services through DI the same way the .NET Core framework does for us.

            Microsoft describes it as:
            A factory for creating instances of IServiceScope, which is used to create services within a scope. Create an IServiceScope which contains an IServiceProvider used to resolve dependencies from a newly created scope.

            The Implementation

The implementation is divided into three steps:
            1. Inject the IServiceScopeFactory singleton on your controller
            2. Pass the instance of IServiceScopeFactory to your background task or thread
            3. Resolve the service from the background task

            Step 1 - Inject IServiceScopeFactory in your controller

            First, you need to inject IServiceScopeFactory in your controller.
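In case the embedded snippet doesn't render, it's plain constructor injection; the framework registers IServiceScopeFactory for you (the controller name below is illustrative):

```csharp
public class ChatController : Controller
{
    private readonly IServiceScopeFactory _scopeFactory;

    // IServiceScopeFactory is registered by the framework itself,
    // so no extra ConfigureServices setup is needed.
    public ChatController(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }
}
```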

            Step 2 -  Pass it to your background thread

Then, you have some code that invokes the background thread/task. For example:
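A sketch of what that invocation could look like (the action and helper names are hypothetical; note that we hand the singleton factory, never the scoped DbContext, to the background task):

```csharp
[HttpPost]
public IActionResult Send(string message)
{
    // Fire-and-forget for demonstration purposes only:
    // the task outlives the request, so it must not capture
    // any request-scoped services.
    Task.Run(() => SaveMessageAsync(_scopeFactory, message));
    return Accepted();
}
```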

            Step 3 -  Resolve the service from the background task

            And finally, when your background thread is run, access the scope and have the framework initialize the EF context for you with:
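A minimal sketch, assuming a context registered with AddDbContext (AppDbContext and Message are hypothetical; GetRequiredService comes from Microsoft.Extensions.DependencyInjection):

```csharp
private static async Task SaveMessageAsync(
    IServiceScopeFactory scopeFactory, string message)
{
    // Create a fresh scope: the container builds a new DbContext
    // for it and disposes it when the scope is disposed.
    using (var scope = scopeFactory.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        db.Messages.Add(new Message { Text = message });
        await db.SaveChangesAsync();
    }
}
```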
And because IServiceScopeFactory is a singleton, accessing it from the background thread won't throw the disposed-context exception below:
            at Microsoft.EntityFrameworkCore.DbContext.CheckDisposed()
            at Microsoft.EntityFrameworkCore.DbContext.Add[TEntity](TEntity entity)
            at Microsoft.EntityFrameworkCore.Internal.InternalDbSet`1.Add(TEntity entity)


While you shouldn't use this as a pattern to process background tasks, there are situations where it's necessary. Since there isn't much documentation around IServiceScopeFactory, I thought it was good to document it. Hope it helps!


            See Also

            About the Author

            Bruno Hildenbrand      
            Principal Architect, HildenCo Solutions.