
Monday, November 2, 2020

Async Request/Response with MassTransit, RabbitMQ, Docker and .NET core

Let's review how to implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit.
Photo by Pavan Trikutam on Unsplash

Undoubtedly the most popular design pattern when writing distributed applications is Pub/Sub. But there's another important design pattern used in distributed applications, not as frequently mentioned, that can also be implemented with queues: async request/response. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the requested service to throttle incoming requests via a queue and prevent its own exhaustion.

In this tutorial, we'll implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit. We'll also wire everything up using Docker and Docker Compose.

In this post we will:
  • Scaffold two ASP.NET Core websites
  • Configure each website to use MassTransit to communicate via a local RabbitMQ queue
  • Explain how to write the async request/response logic
  • Run a RabbitMQ container using Docker
  • Test and validate the results

Understanding MassTransit Async Requests

Once you understand how to wire everything up, setting up async request/response with MassTransit is actually very simple. So before getting our hands into the code, let's review the terminology you'll need to know:
  • Consumer: a class in your service that responds to requests (over a queue, in this case);
  • IRequestClient<T>: the interface we'll use to issue async requests via the queue;
  • ReceiveEndpoint: the configuration we'll have to set up to enable our Consumer to listen and respond to requests;
  • AddRequestClient: the configuration we'll have to set up to register our own async request client.
Keep these terms in mind, as we'll use them in the following sections.

Creating our Project

Let's quickly scaffold two ASP.NET Core projects by using the dotnet CLI with:
dotnet new mvc -o RequestSvc
dotnet new mvc -o ResponseSvc

Adding the Dependencies

The dependencies we'll need today are:
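For a setup like this, that typically means the MassTransit ASP.NET Core integration and the RabbitMQ transport. The package names below are assumptions based on the APIs used later in this post, so check the MassTransit docs for the exact packages for your version:
dotnet add RequestSvc package MassTransit.AspNetCore
dotnet add RequestSvc package MassTransit.RabbitMQ
dotnet add ResponseSvc package MassTransit.AspNetCore
dotnet add ResponseSvc package MassTransit.RabbitMQ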

Adding Configuration

The configuration we'll need is also straightforward. Paste this in your RequestSvc/appsettings.json:
"MassTransit": {
    "Host": "rabbitmq://localhost",
    "Queue": "requestsvc"
}
And this in your ResponseSvc/appsettings.json:
"MassTransit": {
    "Host": "rabbitmq://localhost",
    "Queue": "responsesvc"
}
Next, bind the config classes to those settings. Since I covered in detail how configuration works in ASP.NET Core 3.1 projects in a previous article, I'll skip that here to keep this post short. But if you need, feel free to take a break and understand that part first before you proceed.
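As a reference, a minimal sketch of that binding (class and property names are assumptions) could look like this:
public class MassTransitOptions
{
    public string Host { get; set; }
    public string Queue { get; set; }
}

public class AppConfig
{
    public MassTransitOptions MassTransit { get; set; }
}

// in Startup.ConfigureServices; 'cfg' is the object used by the MassTransit setup below
var cfg = new AppConfig();
Configuration.Bind(cfg);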

Adding Startup Code

Wiring up MassTransit in ASP.NET Core's DI framework is also well documented. For our solution, it looks like this for the RequestSvc project:
services.AddMassTransit(x =>
{
    x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
    {
        c.Host(cfg.MassTransit.Host);
        c.ConfigureEndpoints(context);
    }));
   
    x.AddRequestClient<ProductInfoRequest>();
});

services.AddMassTransitHostedService();
And like this for the ResponseSvc project:
services.AddMassTransit(x =>
{
    x.AddConsumer<ProductInfoRequestConsumer>();

    x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
    {
        c.Host(cfg.MassTransit.Host);
        c.ReceiveEndpoint(cfg.MassTransit.Queue, e =>
        {
            e.PrefetchCount = 16;
            e.UseMessageRetry(r => r.Interval(2, 3000));
            e.ConfigureConsumer<ProductInfoRequestConsumer>(context);
        });
    }));
});

services.AddMassTransitHostedService();
Stop for a second and compare both initializations. Can you spot the differences? The RequestSvc registers a request client for ProductInfoRequest, while the ResponseSvc registers a consumer and a receive endpoint so it can answer those requests.

Building our Consumer

Before we can issue our requests, we have to build a consumer to handle those messages. In MassTransit's world, this is the same kind of consumer you'd build for regular pub/sub. For this demo, our ProductInfoRequestConsumer looks like this:
// class wrapper shown for completeness; ProductService is this demo's own
// product lookup service, injected via DI
public class ProductInfoRequestConsumer : IConsumer<ProductInfoRequest>
{
    private readonly ProductService _svc;

    public ProductInfoRequestConsumer(ProductService svc)
    {
        _svc = svc;
    }

    public async Task Consume(ConsumeContext<ProductInfoRequest> context)
    {
        var msg = context.Message;
        var slug = msg.Slug;

        // a fake delay to simulate a slow operation
        var delay = 1000 * (msg.Delay > 0 ? msg.Delay : 1);
        await Task.Delay(delay);

        // get the product from ProductService
        var p = _svc.GetProductBySlug(slug);

        // this responds via the queue to our client
        await context.RespondAsync(new ProductInfoResponse
        {
            Product = p
        });
    }
}

Async requests

With the consumer, configuration and startup logic in place, it's time to write the request code. In essence, this is the piece of code that mediates the async communication between the caller and the responder using a queue (abstracted, obviously, by MassTransit). A simple async request to a remote service over a backend queue looks like this:
using (var request = _client.Create(new ProductInfoRequest { Slug = slug, Delay = timeout }))
{
    var response = await request.GetResponse<ProductInfoResponse>();
    p = response.Message.Product;
}
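Here, _client is the IRequestClient<ProductInfoRequest> registered by AddRequestClient in Startup. A typical way to get hold of it (the controller name is just an example) is constructor injection:
public class ProductsController : Controller
{
    private readonly IRequestClient<ProductInfoRequest> _client;

    // MassTransit registers IRequestClient<ProductInfoRequest> in the container
    // when AddRequestClient<ProductInfoRequest>() is called in Startup
    public ProductsController(IRequestClient<ProductInfoRequest> client)
    {
        _client = client;
    }
}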

Running the dependencies

To run RabbitMQ, we'll use Docker Compose. Running RabbitMQ with Compose is as simple as running the below command from the src folder:
docker-compose up
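In case you want to write the compose file yourself, a minimal sketch could look like the one below (image tag and port mappings are assumptions; the 8012 mapping matches the management UI URL used later in this post):
# docker-compose.yml (sketch)
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP
      - "8012:15672"    # management UI -> http://localhost:8012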
If everything initialized correctly, you should see RabbitMQ's logs emitted by Docker Compose in the terminal.
To shut down Compose and RabbitMQ, either press Ctrl-C or run:
docker-compose down
Finally, to remove everything, run:
docker-compose down -v

Testing the Application

Open the project in Visual Studio 2019 and run it in debug mode (F5); VS will open two windows, one for RequestSvc and another for ResponseSvc. RequestSvc looks like this:

Go ahead and run some queries. If you have the debugger attached, it will stop in both services, allowing you to validate the exchange between them. To reduce Razor boilerplate, the project uses Vue.js and Axios so we get responses in the UI without unnecessary round trips.

RabbitMQ's Management Interface

The last thing worth mentioning is how to reach RabbitMQ's management interface. This project also lets you play with RabbitMQ at http://localhost:8012. Log in with guest | guest and click the Queues tab, and you should see something similar to this:
RabbitMQ is a powerful message-broker service. However, if you're running your applications on the cloud, I'd suggest using a fully-managed service such as Azure Service Bus since it increases the resilience of your services.

Final Thoughts

In this article we reviewed how to implement asynchronous request/response using queues. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the requested service to throttle incoming requests via a queue and prevent its own exhaustion. In this example we also leveraged Docker and Docker Compose to simplify the setup and initialization of our backend services.

I hope you liked the demo and will consider using this pattern in your applications.

Source Code

As always, the source code for this article is available on my GitHub.

Monday, August 20, 2018

Exploring MassTransit InMemory Scheduled Messaging using RabbitMQ and .NET Core

In this post, let's explore MassTransit's message scheduling system using RabbitMQ.

In a previous post, I demoed how to create a MassTransit client/server application using RabbitMQ, .NET Core and Linux. Today we'll explore another very interesting feature: the ability to schedule messages to be sent in the future. My experience with MassTransit so far has been fantastic, but there are a few use cases I still wanted to test. In this post, we cover the scheduled-message use case, testing the in-memory implementation.

Persistence Requirements

In order to keep our data persisted for the scheduler to use we'll need to configure MassTransit's storage with one of the following services:
  • Quartz.Net in a hosted server
  • Azure Service Bus
  • RabbitMQ by installing a plugin
  • a test-driven In-Memory implementation
In this post, we'll spike out the in-memory solution due to its simpler requirements, but the behaviour should be equivalent with the other options.

Referencing Packages

MassTransit's scheduling API utilizes Quartz.Net. So, for it to work, we'll need to add the MassTransit.Quartz package to the project with:
$ dotnet add <project-name> package MassTransit.Quartz --version 5.1.3
Once the package is added, run dotnet restore to pull in the extension methods needed for the initialization.

Initialization

The initialization code for the in-memory implementation is as simple as adding a call to UseInMemoryScheduler() to your bus configuration.
Keep in mind that the in-memory scheduler uses non-durable storage: if the process terminates, any scheduled messages are lost. For production systems, a standalone scheduler service with persistent storage is recommended.

Sample Code

The code below shows a simple implementation of MassTransit and its scheduling system:
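A minimal sketch, based on the MassTransit 5.x API of the time (the message type, queue address and exact scheduling extension method are assumptions; newer versions expose the same feature under slightly different names):
using System;
using MassTransit;

// inside an async Main (or equivalent)
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    // enables the Quartz.Net-backed in-memory scheduler
    cfg.UseInMemoryScheduler();
});

await bus.StartAsync();

// schedules a (hypothetical) SendReminder message to be delivered 3 seconds from now
await bus.ScheduleMessage(
    new Uri("rabbitmq://localhost/scheduling_demo"),
    DateTime.UtcNow.AddSeconds(3),
    new SendReminder { Text = "Hello from the future!" });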

Running the Demo app

So, I run the app and the code schedules a message to be sent 3 seconds after the user submits it. This is my output:

Conclusion

Hope this serves as an introduction to the scheduling feature within MassTransit. I've been using MassTransit for a couple of years now and would definitely recommend it as a framework for your distributed systems. Want to learn more about MassTransit? Check out the other MassTransit posts on this blog.

    Source Code

    The source for this post is located on my GitHub page.
    In case you're interested, I recently pushed a more updated MassTransit/Docker/.NET Core 3.1 implementation to GitHub here: https://github.com/hd9/masstransit-rabbitmq

    Monday, August 13, 2018

    Creating a MassTransit client/server application using RabbitMQ, .NET Core and Linux

    Let's test the versatile MassTransit framework using RabbitMQ, .NET Core and Linux and see if it can serve as a reliable messaging system.
    In a previous post we introduced MassTransit on this blog and presented some reasons why it may be a really nice alternative for your system. Today we'll review a simple use case: I'll simulate a client/server architecture by building two console applications using MassTransit as the service bus, RabbitMQ (running on Docker) as the transport, and .NET Core.

    Sounds complicated? Let's take a look.

    Installing RabbitMQ

    The quickest way to install RabbitMQ locally is by using RabbitMQ's official Docker image. Assuming you have docker installed on your machine, pulling and running it is as simple as:
    $ docker run --hostname rmqhost --name rmqcontainer -p 15672:15672 -p 5672:5672 rabbitmq:3.7.5-management
    Before running that command, let's examine what each part means:
    • --hostname rmqhost : sets the host name
    • --name rmqcontainer : sets the name of the container
    • -p 15672:15672 : maps container port 15672 to host port 15672 so you can reach the management UI on localhost
    • -p 5672:5672 : maps container port 5672 to host port 5672 so you can reach the broker on localhost
    • rabbitmq:3.7.5-management : the name of the image to download and run. I chose this one because it ships a nice UI to manage RabbitMQ in case you want to play with it.
    If you don't have that image yet, Docker will pull it for you and initialize a container based on the parameters above.

    Once the download is complete, Docker will initialize RabbitMQ for us. On my Fedora box, I get:

    With the image loaded, you can then access the management UI at http://localhost:15672. Log in with guest | guest:
    Cool. Now that we have RabbitMQ running, let's take a look at MassTransit.

    Building a shared .NET POCO Contract

    Since we're building a client and a server, we also need to build a shared contract they can access. Because the client and the server may (and should) run on different servers, the common solution is to build a class library and reference it on both projects.

    In .NET Core we simply type in the terminal:
    dotnet new classlib -n MassTransit.RabbitMQ.Contracts
    Then open that project and add this simple message:
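    A minimal sketch of such a contract (the property names are assumptions):
    using System;

    namespace MassTransit.RabbitMQ.Contracts
    {
        // the message exchanged between the client and the server
        public class CreateAccount
        {
            public Guid Id { get; set; }
            public string Name { get; set; }
            public DateTime CreatedAt { get; set; }
        }
    }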
    That's it. Build that project to validate everything is ok and let's move to the client.

    Building a MassTransit RabbitMQ Client

    As previously said, the client will need to do two important things:
    • connect to the bus so it can start sending messages
    • send messages that the server recognizes; that's why we created the contract in the step above.
    You can initialize your client with a command similar to the one executed above. Once you have your project created, the important bits are: adding a reference to the contract project previously created, initializing the bus and publishing a message to RabbitMQ.

    Initializing the bus

    To initialize the bus, write code similar to the following:
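    A minimal sketch, assuming RabbitMQ is running locally with the default guest account (this uses the MassTransit 5.x API of the time):
    using System;
    using MassTransit;

    // creates a bus instance connected to the local RabbitMQ broker
    var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host(new Uri("rabbitmq://localhost"), h =>
        {
            h.Username("guest");
            h.Password("guest");
        });
    });

    bus.Start();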

    Publishing a message

    Now, we should be able to publish a message with code like the sketch shown after this paragraph. Build this project with $ dotnet build to make sure everything is ok, but don't run it yet; we still need to build the server.
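    A minimal publish call, assuming the CreateAccount contract sketched earlier (the property values are just examples):
    // inside an async Main (or equivalent); publishes a CreateAccount message to the bus
    await bus.Publish(new CreateAccount
    {
        Id = Guid.NewGuid(),
        Name = "John Doe",
        CreatedAt = DateTime.UtcNow
    });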

    Building a MassTransit RabbitMQ Server

    The simple server is pretty similar to the client with the difference that:
    • the server does not publish messages
    • the server contains handlers (consumers) to messages published
    Build a new console app and reference the contract project created above. The example below shows how to initialize a service that runs in a console terminal and handles the CreateAccount message:
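    This is a minimal sketch (class and queue names are assumptions), again based on the MassTransit 5.x API:
    using System;
    using System.Threading.Tasks;
    using MassTransit;
    using MassTransit.RabbitMQ.Contracts;

    public class CreateAccountConsumer : IConsumer<CreateAccount>
    {
        public Task Consume(ConsumeContext<CreateAccount> context)
        {
            Console.WriteLine($"Creating account for {context.Message.Name}");
            return Task.CompletedTask;
        }
    }

    // in Program.Main
    var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri("rabbitmq://localhost"), h =>
        {
            h.Username("guest");
            h.Password("guest");
        });

        // binds the consumer to a queue so published CreateAccount messages are handled here
        cfg.ReceiveEndpoint(host, "create_account_queue", e =>
        {
            e.Consumer<CreateAccountConsumer>();
        });
    });

    bus.Start();
    Console.WriteLine("Server listening for messages. Press any key to exit.");
    Console.ReadKey();
    bus.Stop();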

    Testing

    Now, let's test both the client and the server to see if they're working. Don't forget that you'll need your RabbitMQ instance running. I suggest running the apps side by side; let's see how that goes:

    My Client App:
     My Server App:
    And my RabbitMQ console:
    So, from the above, you can see that the messages sent from the client are reaching the server. Remember, these applications do not know about each other; the only coupling between them is the standalone contract project that both reference.

    Conclusion

    With all that, we reach the end of this post. I hope this demo showed how simple it is to create a MassTransit client/server application using RabbitMQ and .NET Core. I've been using MassTransit for a couple of years now and would definitely recommend it as a base framework for your distributed systems. Want to learn more about MassTransit? Check out the other MassTransit posts on this blog.

    Source Code

    The source code used on this post is available on GitHub.

    In case you're interested, I recently pushed a more updated MassTransit/Docker/.NET Core 3.1 implementation to GitHub at: github.com/hd9/masstransit-rabbitmq

    Monday, August 6, 2018

    MassTransit, a real alternative to NServiceBus?

    Understand how MassTransit could be a real alternative when building distributed systems on the .NET Platform.
    Photo by Markus Spiske on Unsplash

    Looking for a free and open-source alternative to NServiceBus? Maybe MassTransit is what you're looking for. Let's take a look at the platform and how it could be used in your next project.

    What is MassTransit?

    MassTransit is a lightweight service bus for building distributed .NET applications. Its main goal is to provide a consistent, .NET-friendly abstraction over the message transport (RabbitMQ, Azure Service Bus, etc.). MassTransit is not a new project: it has been around since 2007 and was created as an alternative to NServiceBus. In early 2014, the framework was rewritten to embrace asynchronous programming and better leverage the underlying messaging platforms, resulting in an entirely new, completely asynchronous, and highly optimized framework for message processing.

    Why MassTransit

    Like NServiceBus, MassTransit helps decouple your backend from your frontend (and, in general, decouple services), leveraging enterprise design patterns like CQRS and eventual consistency. Some of the features you will find in MassTransit are:
    • support for messages and sagas
    • support for different transports
    • automated or custom retries on failures
    • asynchronous requests/responses
    • poison message handling
    • exception management
    • custom serialization
    • message correlation
    • routing
    • scheduling
    • support for modern technologies like Azure Service Bus, Apache Kafka, Azure Event Hub and Amazon SQS

    Customizations

    MassTransit is also extremely customizable and, as mentioned previously, can run on different transports (RabbitMQ, Azure Service Bus, etc.), which brings enormous benefits since both are strong and stable platforms with different characteristics. It also supports .NET Standard on .NET Core and runs on multiple platforms.


    Transports

    MassTransit includes full support for several transports, most of which are traditional message brokers. RabbitMQ, ActiveMQ, Azure Service Bus, Apache Kafka, Azure Event Hub and Amazon SQS are supported.

    Sample Code

    With that introduction out of the way, let's review some code. If you've worked with NSB before, you're probably looking for three things:
    1. How to initialize the bus
    2. How to initialize the host (your backend, where your handlers run)
    3. How to initialize a client (where you send messages from).
    Please take a look at examples below to understand how it all works.

    Initializing the Bus

    The below code shows how to initialize a simple MassTransit Bus:

    Initializing a Server (Host)

    The below code shows how to initialize a simple MassTransit server:

    Initializing a Client

    The below code shows how to initialize a simple MassTransit client that talks to the above service:

    Other aspects to consider

    Before we end, I'd like to bring to your attention other things you should evaluate before committing to such an important framework for your organization. I recommend researching:
    • Support: NServiceBus, being backed by a commercial organization, deserves a point on this item. Having a commercial organization backing the product may be a more compelling argument to your employer. NSB offers multiple support options, including consulting and on-site training.
    • Online documentation: both frameworks have good documentation, but NSB is definitely ahead on this criterion. You will find far more documentation about NSB.
    • Community: both MT and NSB have decent online communities, with the latter being bigger and more active.
    • Access to current technologies: both MT and NSB give you access to modern technologies like cloud and serverless, but NSB seems to be ahead in that regard.
    • Active development: both NSB and MT are under very active development.
    • Open source: I prefer to have access to the source code of the products I use, in case there are issues.
    • Ecosystem: Particular, NSB's parent company, offers lots of other products that are worth considering. These products integrate well with NServiceBus and, depending on how you use the platform, may add value to your stack.

    Final Thoughts

    This ends our quick introduction to MassTransit. I've been using NServiceBus for quite some time and, while I recognize its value, I wanted to explore viable (and cheaper) alternatives for my open-source projects. I've been using MT for a couple of years and can confirm that it's a solid alternative to NServiceBus. MassTransit today is also a bigger and more solid project than it was 5 years ago and definitely deserves your attention.

    However, choosing a critical framework like this is not only about code. I recommend also evaluating other criteria such as support, documentation, training and ecosystem. On pretty much every criterion (except price), NServiceBus is ahead, and maybe those things will count for your organization.

    More about MassTransit

    Want to learn more about MassTransit? Check out the other MassTransit posts on this blog.

    Source Code

    As always, the source code is available on my GitHub.

      Friday, September 1, 2017

      Custom security on ASP.NET applications

      Most of the time, the default security model of ASP.NET web applications is insufficient. In this post, let's see how we can extend it with a custom implementation.
      Photo by Shahadat Rahman on Unsplash

      Due to how security is set up in the ASP.NET framework, ASP.NET developers frequently have to write custom security controls for their applications. A classic example is roles/permissions. How do we extend what the framework offers to respond to business needs while keeping the existing features functioning?

      Reviewing the current implementation

      Your current implementation is probably based on Role Providers, which is what most applications use. That said, you're probably using a standard solution like this one, where you specify a role in the AuthorizeAttribute:
      [Authorize(Roles = "MyRole")]
      Assuming you implemented your RoleProvider correctly, authorization should work out of the box.

      A custom requirement

      The problem is that custom requirements often need a more granular solution than roles can provide: using roles alone would force us to create roles we don't need and still wouldn't satisfy the business requirements. That's the limitation of an out-of-the-box solution, so most developers end up writing custom code.

      Permission requirements

      The required permissioning model is basic. We have to support the concepts of Users, Groups, Roles and Permissions (or Claims), and the traditional relationships between them. Users can view everything but should only be able to act in certain contexts.

      In terms of development, the following use cases derive from it:
      • The web app should render controls accordingly (and we don't want to do this with JavaScript);
      • Each endpoint should validate that the user is authorized and has permission to call it;
      • There should be a mapping of users -> groups, groups -> roles, and roles -> permissions;
      • When users log in, we need to load their permissions;
      • Every message sent to the backend should have a SubmittedBy, and it should be validated to guarantee that the user has access to that feature;
      • The request should be rejected if the user doesn't have the permission (either in the frontend or in the backend);
      • Log SecurityExceptions or whatever is relevant for you.

      Introducing Claims

      Claims are granular permissions linked to resources in your application. You could use them rather than relying on role-based permissions. That allows you to control access to each resource/action in detail rather than having a multitude of roles in the system. I personally like to use a mixture of both, so I can use the standard role-based authorization features available in ASP.NET MVC and read the granular claims/permissions for the more complicated rules.

      The Solution

      So let's review the solution. I broke it into four parts:
      • The basic infrastructure: the code necessary to provide access to the permissioning system, validation, constants, etc.;
      • ASP.NET: securing the web app so that all actions are authorized;
      • Backend: validating that the requests reaching the backend also respect the same concerns already enforced in the previous layers;
      • Refactorings: you will probably have to refactor your framework to apply authorization in certain parts of your application, or even create a generic application filter to auto-process every command sent to it. Topic for a future blog post.
      Before writing code, though, let's review some good design practices:
      • Code reuse: centralize your roles in one single class in a shared assembly so everyone speaks the same dialect;
      • Use consts / readonly strings: please don't hardcode and propagate your permissions all over your code;
      • Write some extension methods / helper classes so you respect the single responsibility principle;
      • Performance: I used dependency injection to load the permissioning configuration from the DB into my SecurityController class so that it's loaded only once and kept in cache for the lifetime of the application. If you forgot, your Application_Start is your MVC app's composition root, so that code goes there;
      • Review security principles;
      • Write tests.

      The Basic Infrastructure

      I like writing code bottom-up so I can unit test it as I write it. So, the first thing I created in this project was a SecurityController, a facade to control all my custom authorization:
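      A minimal sketch of what such a facade could look like (type and method names are assumptions, based on how it's used in the rest of this post):
      using System.Collections.Generic;
      using System.Linq;

      public class UserPermissions
      {
          public string UserName { get; set; }
          public List<string> Permissions { get; set; }
      }

      public static class SecurityController
      {
          private static Dictionary<string, HashSet<string>> _permissionsByUser;

          // initializes the framework from whatever store you use (db, memory, etc.)
          public static void Init(IEnumerable<UserPermissions> permissions)
          {
              _permissionsByUser = permissions.ToDictionary(
                  p => p.UserName,
                  p => new HashSet<string>(p.Permissions));
          }

          // returns true if the user was granted the specified permission
          public static bool HasPermission(string userName, string permission)
          {
              return _permissionsByUser != null
                  && _permissionsByUser.TryGetValue(userName, out var perms)
                  && perms.Contains(permission);
          }
      }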


      The static Init method initializes our framework. It gets roles and groups from the DB, memory or whatever you're using. That way, the logic is DB-independent, meaning it can be reused everywhere, is lightweight, can be easily cached and is performant by default.

      The other methods are pretty self-descriptive. What's interesting is that, because this class is so lightweight, it can be reused across all your assemblies, tests and even different projects without overhead. It's also easily extensible while remaining closed for modification, respecting the open/closed principle.

      Initializing the Framework

      In order to properly initialize the framework via dependency injection, we need to understand what our composition roots are. In our case we have:
      • ASP.NET: Global.asax;
      • A console application: your static void Main;
      • Backend: varies according to the technology.
      The composition root for my ASP.NET web app is the Application_Start method, so the initialization happens like this:
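      A minimal sketch of that initialization (the repository call is an assumption; load the data from your own store):
      protected void Application_Start()
      {
          AreaRegistration.RegisterAllAreas();
          RouteConfig.RegisterRoutes(RouteTable.Routes);

          // load the permission data once; SecurityController keeps it cached
          // for the lifetime of the application
          SecurityController.Init(PermissionRepository.GetAllUserPermissions());
      }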
      Let's now take a look at how to secure our app. Essentially, we want to be covered on:
      • Views: render/hide content in Razor according to the user's permissions;
      • Controllers: authorizing/revoking access on controllers and actions;
      • ActionFilters: to auto-validate content that's being submitted.

      Securing our Views

      The first obvious step is to hide content the user isn't supposed to see. Remember, we should not do this via JavaScript, as client-side-only security doesn't work. In order to hide content, we need to tell Razor not to render it if the user doesn't have access to a given resource. That could be done simply like this:
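      A minimal sketch, assuming a HasPermission extension method and a Permissions constants class (both discussed below):
      @* renders the edit link only when the logged-in user has the Edit permission *@
      @if (User.HasPermission(Permissions.Edit))
      {
          <a href="@Url.Action("Edit")" class="btn">Edit</a>
      }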


      Here, I have a few interesting points:
      • using an extension method in my Razor to reduce code duplication;
      • moving all my permissions to the Permissions class, to avoid hardcoding permission names throughout the application.

      While it's true that the above code would work, it's more interesting to implement it elegantly by extending HtmlHelper to create custom methods such as SecureLink and SecureButton and using those in Razor:
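      A sketch of what such an extension could look like (the signature matches the usage shown right after; the implementation details are assumptions):
      using System.Web;
      using System.Web.Mvc;

      public static class SecureHtmlHelpers
      {
          // renders a button only when the current user has the required permission
          public static IHtmlString SecureButton(this HtmlHelper html, string text, string cssClass, string permission)
          {
              var user = html.ViewContext.HttpContext.User;

              if (!SecurityController.HasPermission(user.Identity.Name, permission))
                  return MvcHtmlString.Empty;

              return new MvcHtmlString($"<button class=\"{cssClass}\">{html.Encode(text)}</button>");
          }
      }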

      The advantage of this approach is that it simplifies your Razor markup and reuses the validation logic, like this:
      @Html.SecureButton("Edit", "btn", Permissions.Edit)

      The RequirePermission ActionFilter

      In order to secure our controllers, we'll need two things:
      • create an attribute to auto-validate requests to a given action/controller;
      • secure our actions/controllers through application filters.

      I created the RequirePermission attribute to validate, as an ActionFilter, all the requests issued to my app. It looks more or less like this:
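      A minimal sketch (names and the exact rejection behaviour are assumptions):
      using System.Web.Mvc;

      public class RequirePermissionAttribute : ActionFilterAttribute
      {
          private readonly string _permission;

          public RequirePermissionAttribute(string permission)
          {
              _permission = permission;
          }

          public override void OnActionExecuting(ActionExecutingContext filterContext)
          {
              var user = filterContext.HttpContext.User;

              // reject the request when the user lacks the required permission;
              // log a SecurityException here if that's relevant for you
              if (!SecurityController.HasPermission(user.Identity.Name, _permission))
              {
                  filterContext.Result = new HttpStatusCodeResult(403);
              }

              base.OnActionExecuting(filterContext);
          }
      }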

      Adding Authorization to the Views

      With all of that out of the way, we can now decorate our actions like this:
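      For example (the action and permission names are just illustrative):
      [RequirePermission(Permissions.Edit)]
      public ActionResult Edit(int id)
      {
          // only runs when the current user has the Edit permission
          return View();
      }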
      Take a look at the RequirePermission ActionFilter above. With one line in the controller, we get validation working in our ASP.NET application using our custom security implementation. Now it's just a matter of manual work to decorate our actions with the validations we want. Remember that we did all this work because we need granular permission validation.

      Securing the Backend

      Because we separated the concerns in our application, delegating all the permission-validation logic to the SecurityController, our backend, whatever it is, can reuse it.

      Conclusion

      In this post we analyzed a custom implementation for securing ASP.NET applications. There's a lot more to say about this topic, which will be covered in future blog posts. Stay tuned!

      About the Author

      Bruno Hildenbrand      
      Principal Architect, HildenCo Solutions.