Showing posts with label Design Patterns.

Monday, June 12, 2023

Different Types of IT Architects

There is a lot of ambiguity and even misinformation regarding the role of an IT Architect in a software project. Let's review the differences between the main architecture roles.

Photo by Medienstürmer on Unsplash

There is a lot of ambiguity and even misinformation regarding the role of an IT Architect and their participation in software projects.

Since the responsibilities of an IT Architect differ between organizations, I prepared this article to explain the particularities of different architecture roles, and bring clarity to those looking to work as professional IT Architects.

How Architecture varies per organization 

Due to differences in scale, complexity and organizational structure, Software Architecture varies significantly between small and large organizations.

Large enterprises usually have a more structured model for architecture, with a clear distinction between the different roles. In those organizations, it's common to have different individuals performing different (and more specific) roles, with their titles aligning to their duties. You will see most of them in this article.

In smaller organizations, though, architects very commonly end up performing a combination of these roles (if not all of them at once on a given project). The names of the roles vary per organization, the most popular being Solution, Cloud, Technical or Software Architect.

However, regardless of the size of the organization, architects are expected to do much more than just perform the tasks described in their job descriptions.

That is why architects should broaden their skills as much as possible across different domains. It's a tough task, but it will prepare you for anything your employer (and clients) may ask of you. Make this a marathon, not a sprint.

With that said, let’s review the differences between the main types of architects as it relates to projects in technology.

Different types of Architects

Let's jump directly to the most common roles in architecture. Currently in the marketplace, they are:

  • Enterprise architect
  • Solution architect
  • Software/Application architect
  • Data/Information architect
  • Infrastructure architect
  • Security architect
  • Cloud architect
  • Technical architect
  • Principal architect

So let’s review them.

Enterprise Architect

Enterprise architects are responsible for the technical solutions and strategic direction of an organization. They must work with a variety of stakeholders to understand an organization's market, customers, products, business domain, requirements, and technology.

From a broad perspective, enterprise architects are the most business-oriented and, consequently, the least technical in nature.

Solution Architect

A solution architect converts business/technical requirements into a solution architecture. They work closely with business analysts, product owners and technical people to understand the requirements in depth, so that they can design a solution that satisfies those requirements.

The Solution Architect is the most hybrid role on this list: they are required to have strong technical and business skills, which makes them a fit for almost any technology project.

Software (Application) Architect

Software/Application Architects focus mainly on the software architecture. They ensure that the requirements for their application are satisfied by the design of that application and serve as a liaison between the technical and non-technical staff working on an application.

Application Architects are involved in all the steps in the software development process.

Software/Application Architects design and recommend solutions for new, existing and legacy technologies, as well as evaluate alternative approaches to existing problems.

Data (Information) Architect

Data Architects are responsible for designing, deploying, and managing an organization's data architecture.

Data Architects usually focus on data management systems, and their goal is to ensure that the appropriate consumers of an organization's data have access to the data in the right place at the right time.

Lastly, Data Architects are responsible for all of an organization's data sources, both internal and external. They ensure that an organization's strategic data requirements are met.

Infrastructure Architect

Infrastructure Architects focus on the design and implementation of an organization's enterprise infrastructure. This type of Architect is responsible for ensuring the infrastructure environment meets the organization's business goals, providing hardware, networking, operating system, and software solutions to satisfy them.

Security Architect

A Security Architect is responsible for an organization's computer and network security. They build, oversee, and maintain an organization's security implementations.

Security Architects must have a full understanding of an organization's systems and infrastructure so that they can design secure systems.

Cloud Architect

A Cloud Architect is someone who is responsible for an organization's cloud computing strategy and initiatives, including the cloud architecture used for the deployment of software systems. Having someone focused on cloud architecture increases an organization's chances of success with cloud adoption.

The responsibilities of cloud architects include selecting a cloud provider and selecting the model (for example, SaaS, PaaS, or IaaS) that is most appropriate for the organization's needs.

Cloud Architects create cloud migration plans for existing applications not already in the cloud, including the coordination of the adoption process. They may also be involved in designing new cloud-native applications that are built from the ground up for the cloud.

Technical Architect

Technical Architects are another very technical role in Architecture. They develop the technical strategy for a project, making sure that the technical solutions meet the requirements of the customer and the business.

Technical Architects are also responsible for the long-term technical vision of a software solution. They evaluate new technologies and architectures to ensure the system meets the customer’s needs and is cost effective. Additionally, they are responsible for the development and implementation of standards and procedures to ensure the quality of the solutions.

Technical architects are very similar to Application/Software architects; in practice, many people can't differentiate between the two.

Principal Architect

Finally, the Principal (or Lead) Architect. This role usually relates more to the leadership performed by the person than to specific tasks. That said, the Principal Architect is expected to possess the widest knowledge within the team, spanning Solution, Enterprise, Software and Cloud architecture.

Principal Architects frequently oversee and lead the design and development of projects from both a technical and managerial standpoint. Common tasks attributed to them include project leadership, design, planning, governance, team and client engagement.

Principal Architects are also expected to have exceptional technical, interpersonal, management, executive and sales skills. After all, they usually interact with a high-level audience, including C-level executives.

Conclusion

In this article we presented a quick overview of the different types of IT Architects. Due to differences in scale, complexity, organizational structure and even how roles are defined, architecture in the context of technology can (and will) vary significantly between organizations.

However, regardless of the role you perform in your organization, keep your skills sharp by broadening your knowledge as much as you can across different domains. It's a tough task, but it will prepare you for anything your employer (and/or clients) may ask of you.


Tuesday, June 1, 2021

Microservices in ASP.NET

Microservices are the latest significant shift in modern development. Let's learn some tools and related design patterns by building a simplified e-commerce website using modern tools and techniques such as ASP.NET Core and Docker.
Photo by Adi Goldstein on Unsplash

For some time we've been discussing tools and technologies adjacent to microservices on this blog. Not randomly, though: most of these posts derived from my open-source project aspnet-microservices, a simple (yet complicated 😉) distributed application built primarily with .NET Core and Docker. While still a work in progress, the project demos important concepts in distributed architectures.

What's included in the project

This project uses popular tools from the .NET and Docker ecosystems (detailed under Technologies Used below). On the administrative side, the project also includes management interfaces for those tools (see Admin Interfaces below).

Disclaimer

When you create a sample microservice-based application, you need to deal with complexity and make tough choices. For the aspnet-microservices application, I deliberately chose to balance complexity and architecture by reducing the emphasis on design patterns and focusing on the development of the services themselves. The project was built to serve as an introduction and a starting point for those looking to work with Docker, Compose and microservices.

This project is not production-ready! Check Areas for Improvement for more information.

Microservices included in this project

So far, the project consists of the following services:

  • Web: the frontend for our e-commerce application;
  • Catalog: provides catalog information for the web store;
  • Newsletter: accepts user emails and stores them in the newsletter database for future use;
  • Order: provides order features for the web store;
  • Account: provides account services (login, account creation, etc) for the web store;
  • Recommendation: provides simple recommendations based on previous purchases;
  • Notification: sends email notifications upon certain events in the system;
  • Payment: simulates a fake payment service;
  • Shipping: simulates a fake shipping service;

Technologies Used

The technologies used were cherry-picked from the most commonly used by the community. I chose to favour open-source alternatives over proprietary (or commercially-oriented) ones. You'll find in this bundle:
  • ASP.NET Core: as the base of our microservices;
  • Docker and Docker Compose: to build and run containers;
  • MySQL: serving as a relational database for some microservices;
  • MongoDB: serving as the catalog database for the Catalog microservice;
  • Redis: serving as distributed caching store for the Web microservice;
  • RabbitMQ: serving as the queue/communication layer over which our services will communicate;
  • MassTransit: the interface between our apps and RabbitMQ supporting asynchronous communications between them;
  • Dapper: lightweight ORM used to simplify interaction with the MySQL database;
  • SendGrid: used to send emails from our Notification service as described on a previous post;
  • Vue.js and Axios: to build the frontend of the Web microservice on a simple and powerful JavaScript framework.

Conventions and Design Considerations

Among others, you'll find in this project that:
  • The Web microservice serves as the frontend for our e-commerce application and implements the API Gateway / BFF design patterns routing the requests from the user to other services on an internal Docker network;
  • Web caches catalog data in a Redis data store. Feel free to use Redis Commander to delete cached entries if you wish or need to.
  • Each microservice has its own database isolating its state from external services. MongoDB and MySQL were chosen as the main databases due to their popularity.
  • All services were implemented as ASP.NET Core web apps exposing the /help and /ping endpoints so they can be inspected and automatically observed by the running engine (see the sketch after this list).
  • No special logging infrastructure was added. Logs can be easily accessed via docker logs or indexed by a different application if you so desire.
  • Microservices communicate between themselves via Pub/Sub and asynchronous request/response using MassTransit and RabbitMQ.
  • The Notification microservice will eventually send emails. This project was tested with SendGrid, but other SMTP servers should work from both inside and outside the containers.
  • Monitoring is experimental and includes Grafana sourcing its data from a Prometheus backend.
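
To illustrate the /ping convention mentioned above, here's a minimal sketch of what such an endpoint could look like in an ASP.NET Core controller (hypothetical code, not necessarily the project's exact implementation):

using Microsoft.AspNetCore.Mvc;

namespace CatalogSvc.Controllers
{
    [Route("ping")]
    public class PingController : Controller
    {
        // returns a trivial payload so callers and monitoring tools can verify the service is up
        [HttpGet]
        public IActionResult Get() => Ok("pong");
    }
}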

Technical Requirements

To run this project on your machine, please make sure you have Docker and Docker Compose installed.

If you want to develop/extend/modify it, I'd suggest you also have Visual Studio 2019 and the .NET Core SDK installed.

Running the microservices

So let's quickly learn how to load, build and run our own microservices.

Initializing the project

Get your copy by cloning the project:
git clone https://github.com/hd9/aspnet-microservices

Next open the solution src/AspNetContainers.sln with Visual Studio 2019. Since code is always the best documentation, the easiest way to understand the containers and their configurations is by reading the src/docker-compose.yml file.

Debugging with Visual Studio

Building and debugging with Visual Studio 2019 is straightforward. Simply open the AspNetMicroservices.sln solution from the src folder, build and run the project as debug (F5). Next, run the dependencies (Redis, MongoDB, RabbitMQ and MySQL) by issuing the below command from the src folder:

docker-compose -f docker-compose.debug.yml up

Running the services with Docker Compose

In order to run the services you'll need Docker and Docker Compose installed on your machine. Type the command below from the src folder on a terminal to start all services:
docker-compose up
Then to stop them:
docker-compose down
To remove everything, run:
docker-compose down -v
To run a specific service, do:
docker-compose up <service-name>
As soon as you run your services, Compose should start emitting the logs for each service on the console:
The output of our docker-compose command

You can also query individual logs for services as usual with docker logs <svc-name>. For example:

~> docker logs src_catalog_1
info: CatalogSvc.Startup[0]
      DB Settings: ConnStr: mongodb://catalog-db:27017, Db: catalog, Collection: products
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

Database Initialization

Database initialization is automatically handled by Compose. Check the docker-compose.yml file to understand how that happens. You'll find examples on how to initialize both MySQL and MongoDB.
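
For illustration, the official MySQL and MongoDB images execute any scripts mounted under /docker-entrypoint-initdb.d when the container first starts, so a hypothetical Compose excerpt (service name and paths are examples, not the project's exact file) could look like:

order-db:
  image: mysql:5.7
  environment:
    MYSQL_ROOT_PASSWORD: todo
  volumes:
    - ./order/init.sql:/docker-entrypoint-initdb.d/init.sql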

Dockerfiles

Each microservice contains a Dockerfile in its respective root, and understanding them should be straightforward. If you've never written a Dockerfile before, consider reading the official documentation.

Docker Compose

There are two docker-compose files in the solution. Their use is described below:
  • docker-compose.yml: this is the main Compose file. Running this file means you won't be able to access some of the services as they'll not be exposed.
  • docker-compose.debug.yml: this is the file you should run if you want to debug the microservices from Visual Studio. This file only contains the dependencies (Redis, MySQL, RabbitMQ, Mongo + admin interfaces) you'll need to use when debugging.

Accessing our App

If the application booted up correctly, go to http://localhost:8000 to access it. You should see a simple catalog and some other widgets. Go ahead and try to create an account. Just make sure that you have the settings correctly configured on your docker-compose.yml file:
Our simple e-commerce website. As most things, its beauty is in the details 😊.

    Admin Interfaces

    Admin interfaces are also available for our services. I won't go over the details of each of these apps; feel free to explore them on your own.

    Monitoring

    Experimental monitoring is available with Grafana, Prometheus and cadvisor. Open Grafana at http://localhost:3000/ and login with admin | admin, select the Docker dashboard and you should see metrics for the services similar to:

    Grafana capturing and emitting telemetry about our microservices.

    Quick Reference

    As a summary, the microservices are configured to run at:

    The management tools are available on:

    And you can access the databases at:
    • MySQL databases: use Adminer at http://localhost:8010/, enter the server name (e.g. order-db for the Order microservice) and use root | todo as username/password.
    • MongoDB: use MongoExpress at: http://localhost:8011/. No username/password is required.

    Final Thoughts

    In this post I introduced my open-source project aspnet-microservices. This application was built as a way to present the foundations of Docker, Compose and microservices to the whole .NET community and hopefully serves as an intuitive guide for those starting in this area.

    Microservices are the latest significant shift in modern development and require learning lots (really, lots!) of new technologies and design patterns. This project is far from complete and should not be used in production, as it lacks basic cross-cutting concerns that any production-ready project would need; I deliberately omitted them for simplicity. For more information, check the project's README on GitHub.

    Feel free to play with it and above all, learn and have fun!

    Source Code

    As always, the source code is available on GitHub at: github.com/hd9/aspnet-microservices.

    Monday, November 2, 2020

    Async Request/Response with MassTransit, RabbitMQ, Docker and .NET core

    Let's review how to implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit.
    Photo by Pavan Trikutam on Unsplash

    Undoubtedly the most popular design pattern when writing distributed applications is Pub/Sub. It turns out that there's another important design pattern used in distributed applications, not as frequently mentioned, that can also be implemented with queues: async request/response. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the requested service to throttle incoming requests via a queue, preventing its own exhaustion.

    In this tutorial, we'll implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit. We'll also wire everything up using Docker and Docker Compose.

    In this post we will:
    • Scaffold two ASP.NET Core websites
    • Configure each website to use MassTransit to communicate via a local RabbitMQ queue
    • Explain how to write the async request/response logic
    • Run a RabbitMQ container using Docker
    • Test and validate the results

    Understanding MassTransit Async Requests

    Once you understand how to wire everything up, setting up async request/response with MassTransit is actually very simple. So before getting our hands on the code, let's review the terminology you'll need to know:
    • Consumer: a class in your service that responds to requests (over a queue, in this case);
    • IRequestClient<T>: the interface we'll use to implement the client and invoke async requests via the queue;
    • ReceiveEndpoint: a configuration we'll have to set up to enable our Consumer to listen and respond to requests;
    • AddRequestClient: a configuration we'll have to set up to register our own async request client;
    Keep this info in mind as we'll use these terms in the following sections.

    Creating our Project

    Let's quickly scaffold two ASP.NET Core projects by using the dotnet CLI with:
    dotnet new mvc -o RequestSvc
    dotnet new mvc -o ResponseSvc

    Adding the Dependencies

    Judging from the startup code below, the dependencies we'll need today are essentially the MassTransit.RabbitMQ and MassTransit.AspNetCore NuGet packages.
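
    You can add them to each project from the terminal with commands like:
    dotnet add RequestSvc package MassTransit.RabbitMQ
    dotnet add RequestSvc package MassTransit.AspNetCore
    dotnet add ResponseSvc package MassTransit.RabbitMQ
    dotnet add ResponseSvc package MassTransit.AspNetCore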

    Adding Configuration

    The configuration we'll need is also straightforward. Paste this in your RequestSvc/appsettings.json:
    "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "requestsvc"
    }
    And this in your ResponseSvc/appsettings.json:
    "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "responsesvc"
    }
    Next, bind the config classes to those settings. Since I covered in detail how configurations work in ASP.NET Core 3.1 projects in a previous article, I'll skip that to keep this post short. But if you need to, feel free to take a break and understand that part first before you proceed.

    Adding Startup Code

    Wiring up MassTransit in ASP.NET's DI framework is also well documented. For our solution, it looks like this for the RequestSvc project:
    services.AddMassTransit(x =>
    {
        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(cfg.MassTransit.Host);
            c.ConfigureEndpoints(context);
        }));
       
        x.AddRequestClient<ProductInfoRequest>();
    });

    services.AddMassTransitHostedService();
    And like this for the ResponseSvc project:
    services.AddMassTransit(x =>
    {
        x.AddConsumer<ProductInfoRequestConsumer>();

        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(cfg.MassTransit.Host);
            c.ReceiveEndpoint(cfg.MassTransit.Queue, e =>
            {
                e.PrefetchCount = 16;
                e.UseMessageRetry(r => r.Interval(2, 3000));
                e.ConfigureConsumer<ProductInfoRequestConsumer>(context);
            });
        }));
    });

    services.AddMassTransitHostedService();
    Stop for a second and compare the two initializations. Spot the differences? The request side registers a request client for ProductInfoRequest, while the response side registers a consumer and a receive endpoint so it can respond to those requests.

    Building our Consumer

    Before we can issue our requests, we have to build a consumer to handle these messages. In MassTransit's world, this is the same consumer you'd build for your regular pub/sub. For this demo, our ProductInfoRequestConsumer looks like this:
    public async Task Consume(ConsumeContext<ProductInfoRequest> context)
    {
        var msg = context.Message;
        var slug = msg.Slug;

        // a fake delay
        var delay = 1000 * (msg.Delay > 0 ? msg.Delay : 1);
        await Task.Delay(delay);

        // get the product from ProductService
        var p = _svc.GetProductBySlug(slug);

        // this responds via the queue to our client
        await context.RespondAsync(new ProductInfoResponse
        {
            Product = p
        });
    }

    Async requests

    With the consumer, the configuration and the startup logic in place, it's time to write the request code. In essence, this is the piece of code that mediates the async communication between the caller and the responder using a queue (abstracted, obviously, by MassTransit). A simple async request to a remote service using a backend queue looks like:
    using (var request = _client.Create(new ProductInfoRequest { Slug = slug, Delay = timeout }))
    {
        var response = await request.GetResponse<ProductInfoResponse>();
        p = response.Message.Product;
    }
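
    In case you're wondering where _client comes from: AddRequestClient<ProductInfoRequest>() registers an IRequestClient<ProductInfoRequest> with the DI container, so a minimal sketch of the controller receiving it (the class name is hypothetical) could be:
    using MassTransit;
    using Microsoft.AspNetCore.Mvc;

    public class ProductController : Controller
    {
        readonly IRequestClient<ProductInfoRequest> _client;

        // resolved from DI thanks to the AddRequestClient call in Startup
        public ProductController(IRequestClient<ProductInfoRequest> client)
        {
            _client = client;
        }
    }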

    Running the dependencies

    To run RabbitMQ, we'll use Docker Compose. Running RabbitMQ with Compose is as simple as running the below command from the src folder:
    docker-compose up
    If everything initialized correctly, you should see RabbitMQ's logs emitted by Docker Compose on the terminal:
    To shut down Compose and RabbitMQ, either press Ctrl-C or run:
    docker-compose down
    Finally, to remove everything, run:
    docker-compose down -v

    Testing the Application

    Open the project with Visual Studio 2019 and run it as debug (F5); VS will open two windows - one for RequestSvc and another for ResponseSvc. RequestSvc looks like this:

    Go ahead and run some queries. If you have your debugger running, it will stop in both services, allowing you to validate the exchange between them. To reduce Razor boilerplate, the project uses Vue.js and Axios so we get responses in the UI without unnecessary roundtrips.

    RabbitMQ's Management Interface

    The last thing worth mentioning is how to get to RabbitMQ's management interface. This project also allows you to play with RabbitMQ at http://localhost:8012. By logging in with guest | guest and clicking on the Queues tab you should see something similar to:
    RabbitMQ is a powerful message-broker service. However, if you're running your applications on the cloud, I'd suggest using a fully-managed service such as Azure Service Bus since it increases the resilience of your services.

    Final Thoughts

    In this article we reviewed how to implement asynchronous request/response using queues. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the requested service to throttle incoming requests via a queue, preventing its own exhaustion. In this example we also leveraged Docker and Docker Compose to simplify the setup and initialization of our backend services.

    I hope you liked the demo and will consider using this pattern in your applications.

    Source Code

    As always, the source code for this article is available on my GitHub.


    Monday, July 27, 2020

    Send emails from ASP.NET Core websites using SendGrid and Azure

    Today we have multiple free options to send email from our apps. Let's review how to configure and use SendGrid and Azure to send emails from our ASP.NET Core apps and benefit from their extraordinary free plan.
    Photo by Carol Jeng on Unsplash

    Long gone are the days when we had to use Gmail App Passwords to send and test emails from our apps. Today we have a plethora of alternatives that cost nothing or close to nothing. In that category, SendGrid offers Azure subscribers 25,000 free emails per month! So let's review how to set up a free SendGrid account and build a simple ASP.NET website to send emails from it.

    In this post we will:
    • create a SendGrid account directly in Azure
    • build a simple ASP.NET Core web app and review how to properly configure it
    • access and configure our SendGrid settings in SendGrid
    • send emails using SMTP (not the RESTful API) from our web application
    For a quick start, download the code from GitHub at: github.com/hd9/aspnet-sendgrid

    Creating a SendGrid account in Azure

    The good news is that, in case you don't have one already, you can create a SendGrid account directly from Azure. Let's get straight to it: open your Azure portal, type sendgrid in the search bar and click on SendGrid Accounts:
    Click Add to create our account from Azure:
    Enter your information on the next screen:
    Review and confirm your package:
    Wait until your deployment completes (it should take no more than 10 seconds). Now go back to SendGrid Accounts and you should see your new account there:
    Clicking on it would take you to the SendGrid pane showing you essential information about your new resource:
    Did you notice that Manage button? Clicking it will take us directly to SendGrid, where we'll be able to configure our account, create API keys, monitor our usage and a lot more.

    I won't expand much on what SendGrid offers (tl;dr: a lot!). For more on that, feel free to visit their website.

    Configuring SendGrid

    The first time you log in to SendGrid, you'll be requested to confirm your email address. After confirmation, you should see a screen similar to the one below, showing a general overview of your account:

    Creating our SMTP API Key

    To be able to send emails from SendGrid, we'll first have to generate a password. Click on Settings -> API Keys:
    Choose Restricted Access:

    Select Mail Send (for this demo we only need that one):

    And click Create. You'll be presented with your password (API key). Copy it and store it safely:

    SendGrid Configuration

    With the password in hand, here's a summary about the configuration we'll need:
    • Host: smtp.sendgrid.net
    • Port: 587
    • Username: apikey
    • Password: SG.zGNcZ-**********************

    Building our App

    I guess that at this point, creating an ASP.NET web app is no surprise to anyone. But if you're new to .NET Core, please check this documentation on how to build and run ASP.NET Core on Linux; it's a different perspective from the Visual Studio-centric approach you'll see elsewhere. To quickly create one with VS, go to File -> Create a new project and select Web Application (Model-View-Controller).

    Configuring our App

    With the configuration in hand, let's now review how to use it. To simplify things, I already built a simple web app that captures two fields: the name and email of a potential newsletter subscriber. It looks like this and is available on GitHub:
    Apart from the visuals, there are a couple of things in this app worth looking into. Let's start with the configuration. If you open appsettings.json at the root of the project, you will see:
      "SmtpOptions": {
        "Host": "<smtp>",
        "Port": "587",
        "Username": "<account>",
        "Password": "<password>",
        "FromName": "<from-name>",
        "FromEmail": "<from-email>",
        "EmailOverride": "<email>"
      },
      "EmailTemplate": {
        "Subject": "[HildenCo WebStore] Welcome to our newsletter!",
        "Body": "Hello {0},\nThanks for signing up for our newsletter!\n\nBest Regards,\nHildenCo."
      }

    Since I already explained how to bind that config to a class of our own, I won't expand too much on the topic. Essentially we will:
    • map the SmtpOptions configuration into a SmtpOptions class
    • map the EmailTemplate config into the EmailConfig class
    That mapping is done elegantly by the framework as this line from Startup.cs shows:
    cfg = configuration.Get<AppConfig>();
    Inspecting cfg during debug confirms the successful binding:
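
    For reference, a minimal sketch of what those classes could look like (property names mirror the JSON above; see the full source on GitHub for the real ones):
    public class AppConfig
    {
        public SmtpOptions SmtpOptions { get; set; }
        public EmailConfig EmailTemplate { get; set; }
    }

    public class SmtpOptions
    {
        public string Host { get; set; }
        public int Port { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
        public string FromName { get; set; }
        public string FromEmail { get; set; }
        public string EmailOverride { get; set; }
    }

    public class EmailConfig
    {
        public string Subject { get; set; }
        public string Body { get; set; }
    }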

    Dependency Injection

    Next, it's time to set up dependency injection. For our objective here, ASP.NET's default dependency injection utility is good enough. Put the code below in your ConfigureServices method to wire everything up:
    services.AddSingleton(cfg.SmtpOptions);
    services.AddSingleton(cfg.EmailTemplate);
    services.AddTransient<IMailSender, MailSender>();
    Next, inject the dependencies needed by our Controller and our MailSender classes:
    readonly IMailSender _mailSender;
    readonly ILogger<HomeController> _logger;

    public HomeController(
        IMailSender mailSender,
        ILogger<HomeController> logger)
    {
        _logger = logger;
        _mailSender = mailSender;
    }

    Invoking SendMail from our controller

    To call MailSender from our controller, simply build a SendMail command and pass it to the injected IMailSender:
    await _mailSender.Send(new SendMail
    {
        Name = signup.Name,
        Email = signup.Email
    });

    Our MailSender class

    To finish, here's an excerpt of our MailSender class (see the full source on GitHub):
    // init our smtp client
    var smtpClient = new SmtpClient
    {
        Host = _smtpOptions.Host,
        Port = _smtpOptions.Port,
        EnableSsl = true,
        DeliveryMethod = SmtpDeliveryMethod.Network,
        UseDefaultCredentials = false,
        Credentials = new NetworkCredential(_smtpOptions.Username, _smtpOptions.Password)
    };
    // init our mail message
    var mail = new MailMessage
    {
        From = new MailAddress(_smtpOptions.FromEmail, _smtpOptions.FromName),
        Subject = _tpl.Subject,
        Body = string.Format(_tpl.Body, msg)
    };
    // send the message
    await smtpClient.SendMailAsync(mail);

    Testing the App

    Run the app with Visual Studio 2019, enter a name and an email address. If all's configured correctly, you should soon get an email in your inbox:
    As well as SendGrid reporting a successful delivery:

    Final Thoughts

    The only remaining question is: why SMTP? The advantages of using SMTP instead of the API are that SMTP is a standard protocol, works with .NET's primitives, works with any programming language or framework and, contrary to the RESTful API, does not require any specific packages. SMTP also works well with containers, but I'll leave that for a future post. 😉

    Conclusion

    In this tutorial we reviewed how to create a SendGrid account directly from Azure and demoed how to configure it so we can send emails from our applications. SendGrid is a great email service with a very powerful API, and I recommend exploring other topics such as creating your own templates, using webhooks, etc. In the future we'll revisit this example to send email from our own ASP.NET containers in a microservice application. Stay tuned!

    Source Code

    As always, the source is available on GitHub.


    Monday, August 20, 2018

    Exploring MassTransit InMemory Scheduled Messaging using RabbitMQ and .NET Core

    In this post, let's explore MassTransit's message scheduling system using RabbitMQ.

    On a previous post, I demoed how to create a MassTransit client/server application using RabbitMQ, .NET Core and Linux. Today we will explore another very interesting feature: the ability to schedule messages to be sent in the future. My experiences with MassTransit so far have been fantastic; however, there are a few use cases I still would like to test. In this post, we cover the scheduled message use case, testing the in-memory solution.

    Persistence Requirements

    In order to keep our data persisted for the scheduler to use we'll need to configure MassTransit's storage with one of the following services:
    • Quartz.Net in a hosted server
    • Azure Service Bus
    • RabbitMQ by installing a plugin
    • a test-driven In-Memory implementation
    In this post, we'll spike out the in-memory solution due to its simpler requirements, but the behaviour should be equivalent across transports.

    Referencing Packages

    MassTransit's scheduling API utilizes Quartz.NET. So, for it to work, we will need to add a reference to the MassTransit.Quartz package to your project with:
    $ dotnet add <project-name> package MassTransit.Quartz --version 5.1.3
    Once the reference is added, run dotnet restore to load the necessary extension methods to do the initialization.

    Initialization

    The initialization code for the in-memory implementation is as simple as adding a call to UseInMemoryScheduler() to your bus configuration.
    Note that the in-memory scheduler uses non-durable storage: if the process terminates, any scheduled messages will be lost. For any production system, a standalone service with persistent storage is recommended.

    Sample Code

    The code below is a minimal sketch of MassTransit and its scheduling system (assuming MassTransit 5, a local RabbitMQ and a hypothetical CreateAccount contract; see the full source on my GitHub page):
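    // inside an async Main method
    var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host(new Uri("rabbitmq://localhost"), h =>
        {
            h.Username("guest");
            h.Password("guest");
        });

        // enables the Quartz-backed, non-durable in-memory scheduler
        cfg.UseInMemoryScheduler();
    });

    await bus.StartAsync();

    // schedule a CreateAccount message to be delivered 3 seconds from now
    await bus.ScheduleMessage(
        new Uri("rabbitmq://localhost/create-account"),
        DateTime.UtcNow.AddSeconds(3),
        new CreateAccount { Email = "someone@example.com" });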

    Running the Demo app

    So, I run my app and my code schedules a message to be delivered 3 seconds after it's sent by the user. This is my output:

    Conclusion

    Hope this serves as an introduction to the scheduling feature within MassTransit. I've been using MassTransit for a couple of years now and definitely would recommend it as a framework for your distributed systems. Want to learn more about MassTransit? Please consider reading the following posts:

      Source Code

      The source for this post is located on my GitHub page.
      In case you're interested, I recently pushed a more updated MassTransit/Docker/.NET Core 3.1 implementation to GitHub here: https://github.com/hd9/masstransit-rabbitmq


      Monday, August 13, 2018

      Creating a MassTransit client/server application using RabbitMQ, .NET Core and Linux

      Let's test the versatile MassTransit framework using RabbitMQ, .NET Core and Linux and see if it can serve as a reliable messaging system.
      On a previous post we introduced MassTransit on this blog and presented some reasons why it may be a really nice alternative for your system. Today we will review a simple use case: I will simulate a client/server architecture by building two console applications using MassTransit as the service bus, RabbitMQ (running on Docker) as the transport, and .NET Core.

      Sounds complicated? Let's take a look.

      Installing RabbitMQ

      The quickest way to install RabbitMQ locally is by using RabbitMQ's official Docker image. Assuming you have Docker installed on your machine, pulling and running it is as simple as:
      $ docker run --hostname rmqhost --name rmqcontainer -p 15672:15672 -p 5672:5672 rabbitmq:3.7.5-management
      Before running that command, let's examine what each part means:
      • --hostname rmqhost : sets the host name
      • --name rmqcontainer : sets the name of the container
      • -p 15672:15672 : maps container port 15672 to host port 15672 so you can access it on your localhost
      • -p 5672:5672 : maps container port 5672 to host port 5672 so you can access it on your localhost
      • rabbitmq:3.7.5-management : the name of the image to download and run. I chose this one because it has a nice UI to manage RabbitMQ in case you want to play with it.
      If you don't have that image yet, Docker will pull it for you and initialize a container based on the parameters above.

      Once the download is complete, Docker will initialize RabbitMQ for us. On my Fedora box, I get:

      With your image loaded, you can then access the Management UI at http://localhost:15672. Log in with guest | guest:
      Cool. Now that we have RabbitMQ running, let's take a look at MassTransit.

      Building a shared .NET POCO Contract

      Since we're building a client and a server, we also need to build a shared contract they can both access. Because the client and the server may (and should) run on different machines, the common solution is to build a class library and reference it in both projects.

      In .NET Core, we simply type in the terminal:
      dotnet new classlib -n MassTransit.RabbitMQ.Contracts
      Then open that project and add a simple message contract. A minimal sketch of the CreateAccount message used in this demo (the properties are illustrative) could look like:
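      namespace MassTransit.RabbitMQ.Contracts
      {
          // the shared contract both the client and the server will reference
          public class CreateAccount
          {
              public string Name { get; set; }
              public string Email { get; set; }
          }
      }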
      That's it. Build that project to validate everything is ok and let's move to the client.

      Building a MassTransit RabbitMQ Client

      As previously said, the client will need to do two important things:
      • connect to the bus so it can start sending messages
      • send messages that the server recognizes. That's why we created the contract on the step above.
      You can initialize your client project with a command similar to the one executed above. Once you have your project created, the important bits are: referencing the contract project previously created, initializing the bus, and publishing a message to RabbitMQ.

      Initializing the bus

      To initialize the bus, write code similar to this sketch (assuming a local RabbitMQ with the default guest credentials):
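      // inside an async Main method
      var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
      {
          cfg.Host(new Uri("rabbitmq://localhost"), h =>
          {
              h.Username("guest");
              h.Password("guest");
          });
      });

      await bus.StartAsync();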

      Publishing a message

      Now we should be able to publish a message with code like the sketch below (reusing the bus built above; the values are hypothetical):
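      await bus.Publish(new CreateAccount
      {
          Name = "Test User",
          Email = "someone@example.com"
      });

      Build this project with $ dotnet build to make sure everything is ok, but don't run it yet. We still need to build the server.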

      Building a MassTransit RabbitMQ Server

      The simple server is pretty similar to the client with the difference that:
      • the server does not publish messages
      • the server contains handlers (consumers) to messages published
      Build a new console app and reference the contract project created above. The sketch below (assuming MassTransit 5's API) shows how to init a service that runs on a console terminal and handles the CreateAccount message:
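      class CreateAccountConsumer : IConsumer<CreateAccount>
      {
          // handles CreateAccount messages published by the client
          public Task Consume(ConsumeContext<CreateAccount> context)
          {
              Console.WriteLine($"Account requested for {context.Message.Email}");
              return Task.CompletedTask;
          }
      }

      // in an async Main method:
      var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
      {
          var host = cfg.Host(new Uri("rabbitmq://localhost"), h =>
          {
              h.Username("guest");
              h.Password("guest");
          });

          cfg.ReceiveEndpoint(host, "create-account", e =>
          {
              e.Consumer<CreateAccountConsumer>();
          });
      });

      await bus.StartAsync();
      Console.WriteLine("Listening for CreateAccount messages. Press Enter to exit.");
      Console.ReadLine();
      await bus.StopAsync();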

      Testing

      Now, let's test both the client and the server to see if they're working correctly. Don't forget that you'll need your RabbitMQ instance running. I suggest running the apps side by side. Let's see how that goes:

      My Client App:
       My Server App:
      And my RabbitMQ console:
      So, from the above you can see that the messages sent from the client are reaching the server. Remember, these applications do not know about each other and the only coupling between them is the standalone contract project that both reference.

      Conclusion

      With that, we reach the end of this post. I hope this demo showed how simple it is to create a MassTransit client/server application using RabbitMQ and .NET Core. I've been using MassTransit for a couple of years now and would definitely recommend it as a base framework for your distributed systems. Want to learn more about MassTransit? Please consider reading the following posts:

      Source Code

      The source code used on this post is available on GitHub.

      In case you're interested, I recently pushed a more updated MassTransit/Docker/.NET Core 3.1 implementation to GitHub at: github.com/hd9/masstransit-rabbitmq


      About the Author

      Bruno Hildenbrand