Showing posts with label Dependency Injection.

Monday, July 27, 2020

Send emails from ASP.NET Core websites using SendGrid and Azure

Today we have multiple free options for sending email from our apps. Let's review how to configure and use SendGrid and Azure to send emails from our ASP.NET Core apps and benefit from SendGrid's generous free plan.
Photo by Carol Jeng on Unsplash

Long gone are the days when we had to use Gmail App Passwords to send and test emails from our apps. Today we have a plethora of alternatives that cost nothing or close to nothing. In that category, SendGrid offers Azure subscribers 25,000 free emails per month! So let's review how to set up a free SendGrid account and build a simple ASP.NET website to send emails from it.

On this post we will:
  • create a SendGrid account directly in Azure
  • build a simple ASP.NET Core web app and review how to properly configure it
  • access and configure our SendGrid settings in SendGrid
  • send emails using SMTP (not the RESTful API) from our web application
For a quick start, download the code from GitHub at: github.com/hd9/aspnet-sendgrid

Creating a SendGrid account in Azure

The good news is that, in case you don't have one already, you can create a SendGrid account directly from Azure. Let's get straight to it. Open your Azure portal, type sendgrid in the search bar and click on SendGrid Accounts:
Click Add to create our account from Azure:
Enter your information on the next screen:
Review and confirm your package:
Wait until your deployment completes (it should take no more than 10 seconds). Now go back to SendGrid Accounts and you should see your new account there:
Clicking on it takes you to the SendGrid pane, showing essential information about your new resource:
Did you notice that Manage button? Clicking it will take us directly to SendGrid, where we'll be able to configure our account, create API keys, monitor our usage and a lot more.

I won't expand much on what SendGrid offers (tl;dr: a lot!). For more of that, feel free to visit their website.

Configuring SendGrid

The first time you log in to SendGrid, you'll be requested to confirm your email address. After confirmation, you should see a screen similar to the one below, showing a general overview of your account:

Creating our SMTP API Key

To be able to send emails from SendGrid, we first have to generate a password. Click on Settings -> API Keys:
Choose Restricted Access:

Select Mail Send (for this demo we only need that one):

And click Create. You'll be presented with your password (API key). Copy it somewhere safe:

SendGrid Configuration

With the password in hand, here's a summary of the configuration we'll need:
  • Host: smtp.sendgrid.net
  • Port: 587
  • Username: apikey
  • Password: SG.zGNcZ-**********************

Building our App

I guess that at this point, creating an ASP.NET web app is no surprise to anyone. But if you're new to .NET Core, please check this documentation on how to build and run ASP.NET Core on Linux; it's a different perspective from the Visual Studio-centric approach you'll see elsewhere. To quickly create the project with VS: File -> Create a new project, then select Web Application (Model-View-Controller).

Configuring our App

With the configuration in hand, let's now review how to use it. To simplify things, I already built a simple web app that captures two fields: the name and email of a potential newsletter subscriber. It looks like this and is available on GitHub:
Apart from the visuals, there are a couple of things in this app worth looking into. Let's start with the configuration. If you open appsettings.json at the root of the project, you will see:
  "SmtpOptions": {
    "Host": "<smtp>",
    "Port": "587",
    "Username": "<account>",
    "Password": "<password>",
    "FromName": "<from-name>",
    "FromEmail": "<from-email>",
    "EmailOverride": "<email>"
  },
  "EmailTemplate": {
    "Subject": "[HildenCo WebStore] Welcome to our newsletter!",
    "Body": "Hello {0},\nThanks for signing up for our newsletter!\n\nBest Regards,\nHildenCo."
  }

Since I already explained how to bind that config to a class of our own, I won't extend too much on the topic. Essentially we will:
  • map the SmtpOptions configuration into a SmtpOptions class
  • map the EmailTemplate config into the EmailConfig class
That mapping is done elegantly by the framework, as this line from Startup.cs shows:
cfg = configuration.Get<AppConfig>();
Inspecting cfg during debug confirms the successful binding:
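For reference, here's a minimal sketch of what those classes could look like (their shape is assumed from the JSON above; the real ones are in the GitHub repo):
public class AppConfig
{
    public SmtpOptions SmtpOptions { get; set; }
    public EmailConfig EmailTemplate { get; set; }
}

public class SmtpOptions
{
    public string Host { get; set; }
    public int Port { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
    public string FromName { get; set; }
    public string FromEmail { get; set; }
    public string EmailOverride { get; set; }
}

public class EmailConfig
{
    public string Subject { get; set; }
    public string Body { get; set; }
}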

Dependency Injection

Next, it's time to set up dependency injection. For our objective here, ASP.NET's default DependencyInjection container is good enough. Put the lines below in your ConfigureServices method to wire everything up:
services.AddSingleton(cfg.SmtpOptions);
services.AddSingleton(cfg.EmailTemplate);
services.AddTransient<IMailSender, MailSender>();
Next, inject the dependencies needed by our Controller and our MailSender classes:
readonly IMailSender _mailSender;
readonly ILogger<HomeController> _logger;

public HomeController(
    IMailSender mailSender,
    ILogger<HomeController> logger)
{
    _logger = logger;
    _mailSender = mailSender;
}

Invoking SendMail from our controller

To call MailSender from our controller, we simply build a SendMail command and pass it to the injected IMailSender:
await _mailSender.Send(new SendMail
{
    Name = signup.Name,
    Email = signup.Email
});
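SendMail itself is just a simple command carrying the subscriber's data; a minimal sketch:
public class SendMail
{
    public string Name { get; set; }
    public string Email { get; set; }
}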

Our MailSender class

To finish, here's an excerpt of our MailSender class (see the full source on GitHub):
// init our smtp client
var smtpClient = new SmtpClient
{
    Host = _smtpOptions.Host,
    Port = _smtpOptions.Port,
    EnableSsl = true,
    DeliveryMethod = SmtpDeliveryMethod.Network,
    UseDefaultCredentials = false,
    Credentials = new NetworkCredential(_smtpOptions.Username, _smtpOptions.Password)
};
// init our mail message, formatting the template with the subscriber's name
var mail = new MailMessage
{
    From = new MailAddress(_smtpOptions.FromEmail, _smtpOptions.FromName),
    Subject = _tpl.Subject,
    Body = string.Format(_tpl.Body, msg.Name)
};
// address it to the subscriber and send the message
mail.To.Add(msg.Email);
await smtpClient.SendMailAsync(mail);

Testing the App

Run the app with Visual Studio 2019 and enter a name and an email address. If everything's configured correctly, you should soon get an email in your inbox:
As well as SendGrid reporting a successful delivery:

Final Thoughts

The only remaining question is: why SMTP? The advantage of using SMTP over the API is that SMTP is a standard protocol: it works with .NET's primitives, works with any programming language or framework and, contrary to the RESTful API, does not require any specific packages. SMTP also works well with containers, but I'll leave that for a future post. 😉

Conclusion

On this tutorial we reviewed how to create a SendGrid account directly from Azure and demoed how to configure it so we can send emails from our applications. SendGrid is a great email service with a very powerful API, and I recommend exploring other topics such as creating your own templates, using webhooks, etc. In the future we'll revisit this example to send email from our own ASP.NET containers in a microservice application. Stay tuned!

Source Code

As always, the source is available on GitHub.


Monday, April 20, 2020

How to profile ASP.NET apps using Application Insights

Application Insights can monitor, log, alert and even help us understand performance problems with our apps.
Photo by Marc-Olivier Jodoin on Unsplash
We've been discussing AppInsights in depth on this blog and, to complete the series, I'd like to discuss the performance features it offers. In the previous posts, we learned how to collect, suppress and monitor our applications using AppInsights data.

On this post, let's understand how to use the performance features to identify and fix performance problems in our app.

What's profiling?

Wikipedia defines profiling as:
a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization.
Profilers usually monitor:
  • Memory
  • CPU
  • Disk IO
  • Network IO
  • Latency
  • Speed of the application
  • Access to resources
  • Databases
  • etc.

Profiling ASP.NET Applications

ASP.NET developers have multiple ways of profiling their web applications, and several popular tools exist for the job. Those are awesome tools that you should definitely use. But today we'll focus on what we can do to inspect our deployed application using Application Insights.

How can Application Insights help

Azure Application Insights collects telemetry from your application to help analyze its operation and performance. You can use this information to identify problems that may be occurring or to identify improvements that would most impact users. This tutorial takes you through the process of analyzing the performance of both the server components of your application and the client's perspective, so you understand how to:
  • Identify the performance of server-side operations
  • Analyze server operations to determine the root cause of slow performance
  • Identify slowest client-side operations
  • Analyze details of page views using query language

Using performance instrumentation to identify slow resources

Let's illustrate how to detect performance bottlenecks in our app with some code. The code for this exercise is available on my GitHub. You can quickly get it by:
git clone https://github.com/hd9/aspnet-ai.git
cd aspnet-ai
git checkout performance
# insert your AppInsights instrumentation key on appSettings.Development.json
dotnet run
This project contains a few endpoints that we'll use to simulate slow operations:
  • SlowPage - async, 3s to load, throws exception
  • VerySlowPage - async, 8s to load
  • CpuHeavyPage - sync, loops over 1 million results with 25ms of interval
  • DiskHeavyPage - sync, writing 1000 lines to a file
Run the app, hit those endpoints, and get back to Azure. We should have some data there.

Performance Tools in AppInsights

Our AppInsights resource in Azure greets us with an overview page that already shows consolidated information about failed requests, server response time, server requests and availability:

Now, click on the Performance section. Out of the box, AppInsights has already captured previous requests and shows a consolidated view. Below, you can already see our endpoints sorted by duration:

You should also have access to an Overall panel where you can see requests over time:
There's also good stuff in the End-to-end transaction details widget:

For example, we could click on a given request and get additional information about it:

Tracing

We now know which pages are the slowest on our site, so let's try to understand why. Essentially, we have two options:
  1. use AppInsights' telemetry API (as in this example)
  2. or integrate directly with your logging provider, in this case using System.Diagnostics.Trace.

Tracing with AppInsights SDK

Tracing with the AppInsights SDK is done via the TrackTrace and TrackPageView methods of the TelemetryClient class and is as simple as:
public IActionResult Index()
{
    _telemetry.TrackPageView("Index");
    return View();
}
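The example above tracks a page view; to emit a plain trace message with a severity level, the same TelemetryClient exposes TrackTrace. A sketch (the message is illustrative):
// SeverityLevel comes from Microsoft.ApplicationInsights.DataContracts
_telemetry.TrackTrace("Index: loading data from the db", SeverityLevel.Information);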

Tracing with System.Diagnostics.Trace

Tracing with System.Diagnostics.Trace is also not complicated, but requires the NuGet package Microsoft.ApplicationInsights.TraceListener. For more information regarding other logging providers, please check this page. Let's start by installing it with:
dotnet add package Microsoft.ApplicationInsights.TraceListener --version 2.13.0

  Writing C:\Users\bruno.hildenbrand\AppData\Local\Temp\tmpB909.tmp
info : Adding PackageReference for package 'Microsoft.ApplicationInsights.TraceListener' into project 'C:\src\aspnet-ai\src\aspnet-ai.csproj'.
info : Restoring packages for C:\src\aspnet-ai\src\aspnet-ai.csproj...
(...)
info : Installing Microsoft.ApplicationInsights 2.13.0.
info : Installing Microsoft.ApplicationInsights.TraceListener 2.13.0.
info : Package 'Microsoft.ApplicationInsights.TraceListener' is compatible with all the specified frameworks in project 'C:\src\aspnet-ai\src\aspnet-ai.csproj'.
info : PackageReference for package 'Microsoft.ApplicationInsights.TraceListener' version '2.13.0' added to file 'C:\src\aspnet-ai\src\aspnet-ai.csproj'.
info : Committing restore...
info : Writing assets file to disk. Path: C:\src\aspnet-ai\src\obj\project.assets.json
log  : Restore completed in 4.18 sec for C:\src\aspnet-ai\src\aspnet-ai.csproj.
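With the listener installed, anything written via System.Diagnostics.Trace flows to AppInsights as trace telemetry. A minimal sketch (the messages are illustrative):
using System.Diagnostics;

// these show up in AppInsights as traces, no extra wiring needed
Trace.TraceInformation("SlowPage: starting the slow operation");
Trace.TraceWarning("SlowPage: the operation took longer than expected");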

Reviewing the results

Back in Azure we should now see more information about the performance of the pages:
And more importantly, we can verify that our traces (in green) were correctly logged:

Where to go from here

If you used the tools cited above, you should now have a lot of information to understand how your application performs in production. What's next?

We did two important things here: we identified the slowest pages and added trace information to them. From here, it's up to you. Start by identifying the slowest endpoints and adding extra telemetry to them. The root cause could be a specific query in your app or even an external resource. The point is, each situation is unique and beyond the scope of this post. But you have the essentials: which pages, methods and even calls take longest. On that note, I'd recommend adding custom telemetry data so you have a real, reproducible scenario.

Conclusion

On this post, the last in our discussion of AppInsights, we reviewed how Application Insights can be used to understand, quantify and report on the performance of our apps. Once again, AppInsights proves to be an essential tool for developers using Azure.

More about AppInsights

For more information, consider reading my previous articles about App Insights:
  1. Adding Application Insights telemetry to your ASP.NET Core website
  2. Suppressing Application Insights telemetry on .NET applications
  3. Monitoring ASP.NET applications using Application Insights and Azure Alerts


Monday, December 17, 2018

Accessing the Entity Framework context on a background thread in .NET Core

You got the "Cannot access a disposed object" error using Entity Framework Core. What should you do?
Photo by Caspar Camille Rubin on Unsplash

You tried to access Entity Framework Core's db context on a background thread in .NET Core and got this error:
Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
This exception happens because EF Core disposes the context just after the request to the controller is closed. Indeed, that's the right approach and the default behaviour, as we don't want those resources hanging around longer than needed. But before going forward, I'd like you to know that most of the time you don't need (or want) to access EF Core on a background thread. A few good explanations are described here. And if you need an introduction, I'd also recommend reading Scott Hanselman's introduction to the topic.

However, such an approach may sometimes be necessary. For example, I came across this issue writing an MVP, a proof of concept where a Vue.js chat room using EF Core communicated with a SignalR Core backend running on Linux. In my opinion, MVPs and proofs of concept are the only acceptable use cases for this solution. As always, the default approach should be accessing the service via the injected dependency.

Enough talking. Let's take a look at how to address the problem.

IServiceScopeFactory

The key to doing this is IServiceScopeFactory. Available in the Microsoft.Extensions.DependencyInjection NuGet package, IServiceScopeFactory provides us a singleton from which we can resolve services through DI the same way the .NET Core framework does for us.

Microsoft describes it as:
A factory for creating instances of IServiceScope, which is used to create services within a scope. Create an IServiceScope which contains an IServiceProvider used to resolve dependencies from a newly created scope.

The Implementation

The implementation is divided into three steps:
  1. Inject the IServiceScopeFactory singleton on your controller
  2. Pass the instance of IServiceScopeFactory to your background task or thread
  3. Resolve the service from the background task

Step 1 - Inject IServiceScopeFactory in your controller

First, you need to inject IServiceScopeFactory in your controller.
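A minimal sketch of that injection (the controller name is illustrative):
public class ChatController : Controller
{
    private readonly IServiceScopeFactory _scopeFactory;

    public ChatController(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }
}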

Step 2 - Pass it to your background thread

Then, you have some code that invokes the background thread/task. For example:
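A sketch of what that hand-off could look like; note we pass the scope factory, never the DbContext itself:
[HttpPost]
public IActionResult Send(string message)
{
    // fire-and-forget, for demo purposes only
    Task.Run(() => SaveMessageAsync(_scopeFactory, message));
    return Ok();
}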

Step 3 - Resolve the service from the background task

And finally, when your background thread runs, access the scope and have the framework initialize the EF context for you:
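A sketch, assuming a ChatContext registered via AddDbContext and a Message entity (both names are illustrative):
private static async Task SaveMessageAsync(IServiceScopeFactory scopeFactory, string message)
{
    // create a fresh scope; resolving the context here gives us a new,
    // undisposed instance owned by this scope
    // (GetRequiredService comes from Microsoft.Extensions.DependencyInjection)
    using (var scope = scopeFactory.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<ChatContext>();
        db.Messages.Add(new Message { Text = message });
        await db.SaveChangesAsync();
    } // the scope (and the context with it) is disposed here
}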
And because IServiceScopeFactory is a singleton, it isn't disposed with the request, so you'll no longer hit the exception below:
at Microsoft.EntityFrameworkCore.DbContext.CheckDisposed()
at Microsoft.EntityFrameworkCore.DbContext.Add[TEntity](TEntity entity)
at Microsoft.EntityFrameworkCore.Internal.InternalDbSet`1.Add(TEntity entity)

Conclusion

While you shouldn't use this as a pattern for processing background tasks, there are situations where it's necessary. Since there isn't much documentation around IServiceScopeFactory, I thought it was worth documenting. Hope it helps!


Monday, August 13, 2018

Creating a MassTransit client/server application using RabbitMQ, .NET Core and Linux

Let's test the versatile MassTransit framework using RabbitMQ, .NET Core and Linux and see if it can serve as a reliable messaging system.
On a previous post we introduced MassTransit on this blog and presented some reasons why it may be a really nice alternative for your system. Today we will review a simple use case: I will simulate a client/server architecture by building two console applications using MassTransit as the service bus, RabbitMQ (running on Docker) as the transport, and .NET Core.

Sounds complicated? Let's take a look.

Installing RabbitMQ

The quickest way to install RabbitMQ locally is by using RabbitMQ's official Docker image. Assuming you have Docker installed on your machine, pulling and running it is as simple as:
$ docker run --hostname rmqhost --name rmqcontainer -p 15672:15672 -p 5672:5672 rabbitmq:3.7.5-management
Before running that command, let's examine what each part means:
  • --hostname rmqhost : sets the host name
  • --name rmqcontainer : sets the name of the container
  • -p 15672:15672 : maps container port 15672 to host port 15672 so you can access the management UI on localhost
  • -p 5672:5672 : maps container port 5672 to host port 5672 so you can access the broker on localhost
  • rabbitmq:3.7.5-management : the name of the image to download and run. I chose that one because it includes a nice UI to manage RabbitMQ in case you want to play with it.
If you don't have that image yet, Docker will pull it for you and initialize a container based on the parameters above.

Once the download is complete, Docker will init RabbitMQ for us. On my Fedora box, I get:

With your image loaded, you can then access the Management UI at http://localhost:15672. Log in with guest / guest:
Cool. Now that we have RabbitMQ running, let's take a look at MassTransit.

Building a shared .NET POCO Contract

Since we're building a client and a server, we also need to build a shared contract they can access. Because the client and the server may (and should) run on different servers, the common solution is to build a class library and reference it on both projects.

In .NET Core we simply type in the terminal:
dotnet new classlib -n MassTransit.RabbitMQ.Contracts
Then open that project and add this simple message:
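A sketch of that contract; MassTransit contracts are usually plain interfaces, and the CreateAccount name matches the handler we'll write on the server (the properties are illustrative):
namespace MassTransit.RabbitMQ.Contracts
{
    public interface CreateAccount
    {
        string Username { get; }
        string Email { get; }
    }
}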
That's it. Build the project to validate everything is ok and let's move on to the client.

Building a MassTransit RabbitMQ Client

As previously said, the client will need to do two important things:
  • connect to the bus so it can start sending messages
  • send messages that the server recognizes (that's why we created the contract in the step above)
You can initialize your client project with a command similar to the one executed above. Once you have your project created, the important bits are: referencing the contract project previously created, initializing the bus and publishing a message to RabbitMQ.

Initializing the bus

To initialize the bus, write code similar to:
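A sketch using the MassTransit 5.x API that was current at the time, assuming the local RabbitMQ instance we just started:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });
});

await bus.StartAsync();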

Publishing a message

Now, we should be able to publish a message with code like the sketch below (assuming the bus from the previous step; the values are illustrative). Build this project with $ dotnet build to make sure everything is ok, but don't run it yet. We still need to build the server.
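// publish the shared contract to the bus; MassTransit accepts an
// anonymous object matching the interface's properties
await bus.Publish<CreateAccount>(new
{
    Username = "bruno",
    Email = "bruno@example.com"
});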

Building a MassTransit RabbitMQ Server

The simple server is pretty similar to the client, with the difference that:
  • the server does not publish messages
  • the server contains handlers (consumers) for the published messages
Build a new console app and reference the contract project created above. The snippet below shows how to init a service that runs in a console terminal and handles the CreateAccount message:
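A sketch of what that could look like (the queue name and handler body are illustrative; same MassTransit 5.x API as before):
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.ReceiveEndpoint("create-account", e =>
    {
        // invoked for every CreateAccount message published by the client
        e.Handler<CreateAccount>(ctx =>
        {
            Console.WriteLine($"Creating account for: {ctx.Message.Username}");
            return Task.CompletedTask;
        });
    });
});

await bus.StartAsync();
Console.WriteLine("Listening for messages. Press ENTER to exit.");
Console.ReadLine();
await bus.StopAsync();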

Testing

Now, let's test both the client and the server to see if they're working ok. Don't forget that you'll need your RabbitMQ instance running. I suggest running the apps side by side. Let's see how that goes:

My Client App:
My Server App:
And my RabbitMQ console:
So, from the above, you can see that the messages sent from the client are reaching the server. Remember, those applications do not know about each other and the only coupling between them is the standalone contract project that both reference.

Conclusion

With all that, we reach the end of this post. I hope this demo showed how simple it is to create a MassTransit client/server application using RabbitMQ and .NET Core. I've been using MassTransit for a couple of years now and would definitely recommend it as a base framework for your distributed systems. Want to learn more about MassTransit? Please consider reading my other posts on the topic.

Source Code

The source code used on this post is available on GitHub.

In case you're interested, I recently pushed a more updated MassTransit/Docker/.NET Core 3.1 implementation to GitHub at: github.com/hd9/masstransit-rabbitmq


Friday, September 1, 2017

Custom security on ASP.NET applications

Most of the time, the default security of ASP.NET web applications is insufficient. On this post, let's see how we can extend it with a custom implementation.
Photo by Shahadat Rahman on Unsplash

Due to how security in the ASP.NET framework is set up, ASP.NET developers frequently have to write custom security controls for their applications. A classic example is roles/permissions. How do we extend what the framework offers to respond to business needs while keeping the existing features functioning?

Reviewing the current implementation

Your current implementation is probably based on Role Providers, which is what everyone else is doing. That said, you're probably using a standard solution like this one, where you specify a role in the AuthorizeAttribute:
[Authorize(Roles = "MyRole")]
Assuming you implemented your RoleProvider correctly, authorization should work out of the box.

A custom requirement

The problem is that custom requirements demand a more granular solution than what roles alone can provide: accomplishing them with roles would require writing stuff we don't need and still wouldn't satisfy the business requirements. That's what we get with an out-of-the-box solution. So most developers would probably end up writing custom code.

Permission requirements

The required permissioning model is basic. We have to support the concepts of Users, Groups, Roles and Permissions (or Claims), and the traditional relationships between them. Users can view everything but should only be able to act in certain contexts.

In terms of development, the following use cases derive from it:
  • The web app should show/hide controls accordingly (and we don't want to do this with JavaScript);
  • Each endpoint should validate that the user is authorized and has permission to call it;
  • There should be a mapping users -> groups, groups -> roles, roles -> permissions;
  • When users log in, we need to load their permissions;
  • Every message sent to the backend should have a SubmittedBy that is validated to guarantee the user has access to that feature;
  • Requests should be rejected if the user doesn't have the permission (either in the frontend or in the backend);
  • SecurityExceptions (or whatever is relevant for you) should be logged.

Introducing Claims

Claims are granular permissions linked to resources in your application. You can use them rather than relying on role-based permissions, which allows you to control access to each resource/action in detail rather than having a multitude of roles in the system. I personally like to use a mixture of both, so I can use the standard role-based authorization features available in ASP.NET MVC and read the granular claims/permissions for the more complicated rules.

The Solution

So let's review the solution. I broke it into four parts:
  • The basic infrastructure - the code necessary to provide access to the permissioning system, validation, constants, etc.;
  • ASP.NET - securing the web app so that all actions are authorized;
  • Backend - validating that requests reaching the backend also respect the concerns already enforced in the previous layers;
  • Refactorings - you will probably have to refactor your framework to apply authorization in certain parts of your application, or even create a generic application filter to auto-process every command sent to it. Topic for a future blog post.
Before writing code though, let's review some good design practices:
  • Code reuse - centralize your roles in one single class in an assembly so everyone talks the same dialect;
  • Use consts / readonly strings - please don't hardcode your permissions everywhere in your code;
  • Write some extension methods / helper classes so you respect the single responsibility principle;
  • Performance - I used Dependency Injection to inject the permissioning configuration from the db into my PermissionController class so it would be loaded only once and kept in cache for the lifetime of the application. In case you forgot, just set that up in your Application_Start, as it's your MVC app's composition root;
  • Review security principles;
  • Write tests.

The Basic Infrastructure

I like writing code bottom-up so I can unit test it as I write it. So, the first thing I created on this project was a SecurityController, a facade to control all my custom authorization:
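A minimal sketch of what that facade could look like (the member names are illustrative; the idea is what matters):
public static class SecurityController
{
    // userName -> set of permission names, loaded once at startup
    static IDictionary<string, HashSet<string>> _permissions;

    // called from the composition root; the caller decides where the data
    // comes from (db, memory, etc), keeping this class storage-agnostic
    public static void Init(IDictionary<string, HashSet<string>> permissions)
    {
        _permissions = permissions;
    }

    public static bool HasPermission(string userName, string permission) =>
        _permissions.TryGetValue(userName, out var perms) && perms.Contains(permission);
}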


The static Init method is used to initialize our framework. It gets roles and groups from the db, memory or whatever you're using. That way, the logic is db-independent, meaning it can be reused everywhere, is lightweight, can be easily cached and is performant by default.

The other methods are pretty self-descriptive. Interestingly, because this class is so lightweight, it can be reused through all your assemblies, tests and even different projects without overhead. And it's also easily extensible while still closed for modification, respecting the open/closed (SOLID) principle.

Initializing the Framework

In order to properly initialize the framework via Dependency Injection, we need to understand what our composition roots are. In our case we would have:
  • ASP.NET - global.asax;
  • A console Application - your static void main;
  • Backend - varies according to the technology.
The composition root for my ASP.NET web app is the Application_Start method. So, the initialization happens like this:
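A sketch of that initialization (PermissionRepository is an illustrative helper that loads the mappings from your store):
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    // load the permission map once and cache it for the app's lifetime
    SecurityController.Init(PermissionRepository.LoadPermissions());
}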
Let's now take a look at how to secure our app. Essentially, we want to be covered on:
  • Views - render/hide content in Razor according to the user's permissions
  • Controllers - authorizing/revoking access to controllers and actions
  • ActionFilters - to auto-validate the content being submitted.

Securing our Views

The first obvious step is to hide content the user isn't supposed to see. Remember, we should not do this via JavaScript, as client-side security doesn't work. In order to hide content, we need to tell Razor not to render it if the user doesn't have access to the resource. That could be done as simply as this:
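A minimal sketch (HasPermission here is an illustrative extension method wrapping the SecurityController; Permissions is the constants class mentioned below):
@if (Html.HasPermission(Permissions.Edit))
{
    <button class="btn">Edit</button>
}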


Here, I have a few interesting points:
  • using an extension method in my Razor to reduce code duplication;
  • moving all my permissions to the Permissions class, to avoid hardcoding permission names throughout the application.

While it's true that the above code works, it would be more elegant to extend HtmlHelper with custom methods such as SecureLink and SecureButton and use those controls in Razor:
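A sketch of what such an extension could look like (the signature matches the usage below; the markup is simplified):
public static class SecurityHtmlHelpers
{
    public static IHtmlString SecureButton(this HtmlHelper html, string text, string cssClass, string permission)
    {
        var user = html.ViewContext.HttpContext.User.Identity.Name;

        // render nothing if the user lacks the permission
        if (!SecurityController.HasPermission(user, permission))
            return MvcHtmlString.Empty;

        return MvcHtmlString.Create($"<button class=\"{cssClass}\">{html.Encode(text)}</button>");
    }
}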

The advantage of this approach is that it simplifies your Razor markup and reuses the validation logic, like this:
@Html.SecureButton("Edit", "btn", Permissions.Edit)

The RequirePermission ActionFilter

In order to secure our controllers, we'll need two things:
  • an attribute to auto-validate requests to a given action/controller;
  • to secure our actions/controllers through application filters.

I created the RequirePermission attribute to validate, as an ActionFilter, all the requests issued to my app. It looks more or less like this:
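A sketch (returning a 403 is one possible choice; you could also redirect, or throw and log a SecurityException):
public class RequirePermissionAttribute : ActionFilterAttribute
{
    readonly string _permission;

    public RequirePermissionAttribute(string permission)
    {
        _permission = permission;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var user = filterContext.HttpContext.User.Identity.Name;

        // block the request before the action runs
        if (!SecurityController.HasPermission(user, _permission))
            filterContext.Result = new HttpStatusCodeResult(403);
    }
}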

Adding Authorization to our Actions

With all of that out of the way we can now decorate our actions like this:
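For example (a sketch; the action is illustrative):
[RequirePermission(Permissions.Edit)]
public ActionResult Edit(int id)
{
    // ...
}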
Take a look at the RequirePermission ActionFilter above: with one line in the controller, we get the validation working in our ASP.NET application using our custom security implementation. Now it's just a matter of manual work to decorate our actions with the validations we want. Remember that we did all this work because we require granular permission validation.

Securing the Backend

Because we separated the concerns in our application, delegating to the SecurityController all the logic to validate a user's permissions, our backend, whatever it is, can reuse it.

Conclusion

On this post we analyzed a custom implementation for securing ASP.NET applications. There's a lot more to say about this topic, which will be reviewed in future blog posts. Stay tuned!


About the Author

Bruno Hildenbrand      
Principal Architect, HildenCo Solutions.