Monday, September 25, 2017

Custom Security on NServiceBus Endpoints

See how to enhance the security on NServiceBus endpoints by using generic and elegant code.

In a previous post, we discussed why, as developers, it's important to consider security. Then we discussed how to secure our ASP.NET MVC application using a mix of custom role/claims providers. Today we'll see how we can protect our NServiceBus endpoints.


NServiceBus (NSB) is a messaging and workflow framework for .NET and .NET Core that:
supports a variety of messaging patterns and workflows on multiple transports like MSMQ, RabbitMQ, Azure, and Amazon SQS; lets developers focus on core logic, fully abstracted from the underlying infrastructure; and runs on .NET or .NET Core on Windows, Linux, or in Docker containers.
In other words, it helps us separate concerns in our application, create simpler services using distributed messaging, simplify our workflows, make testing easier, and implement interesting design patterns like CQRS.

Securing NServiceBus Endpoints

But how could we integrate a robust, generic authorization mechanism in a backend?

In a previous post we discussed how to implement SecurityController, a db-independent security mechanism for our ASP.NET website. In order to use it in our business layer, we need to inject its dependencies (roles and groups) via its Init factory method described below:
public class SecurityController
{
    public static void Init(List<Role> roles, List<Group> groups)
    {
        allRoles = roles ?? new List<Role>();
        allGroups = groups ?? new List<Group>();
        allPerms = allRoles.SelectMany(r => r.Permissions).Distinct().ToList();
    }
}
How would we integrate our custom groups and roles in the NSB backend?


Within NSB, we can use the INeedInitialization interface to hook our code into the NSB initialization pipeline. Using INeedInitialization means implementing this method:
public void Customize(BusConfiguration busConfiguration)

Our custom SecurityRegistry

When we initialize our NSB endpoint, NSB will invoke every class that implements that interface, including our own SecurityRegistry:
public class SecurityRegistry : INeedInitialization
{
    public void Customize(BusConfiguration busConfiguration)
    {
        // todo :: load roles, groups from your repository...

        // init the security controller...
        SecurityController.Init(roles, groups);
    }
}
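Filling in that todo could look like this small sketch - SecurityRepository, GetRoles and GetGroups are hypothetical names for whatever data access you use:

```csharp
// SecurityRepository is a hypothetical abstraction over your data store;
// load roles/groups once, at endpoint startup
var repo = new SecurityRepository();
SecurityController.Init(repo.GetRoles(), repo.GetGroups());
```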
Now, let's review how we plug our validator into the NSB pipeline so it gets executed whenever a new message is processed in our service layer.


The elegant way of doing that in NSB is through message mutators. Yes, a fancy name for a plugin, filter or whatever else you want to call it. Message mutators allow us to inject our own business logic before messages reach our NSB message handler. With mutators, you can mutate incoming and outgoing messages. Since we're talking about security here, you probably don't want to mutate outgoing messages, right?

Creating a MessageMutator

So, let's go back to the code: we need to create a class implementing IMutateIncomingMessages, in which we'll validate the commands submitted to our service layer. Note that, in order for this to work automatically, we will need:
  • a base generic Command so we know beforehand who submitted what, and when;
  • a common user;
  • the permission associated with a particular message;
  • access to our SecurityController that will validate whether that user has access to submit that command or not.
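As a minimal sketch, that base command could look like this (SubmittedOn is an assumed property name; SubmittedBy is the property our validator reads):

```csharp
// base class for all commands submitted to the service layer
public abstract class CommandBase
{
    // who submitted the command; validated against our security model
    public string SubmittedBy { get; set; }

    // when it was submitted (assumed property name)
    public DateTime SubmittedOn { get; set; }
}
```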
The code below shows a sample validator implementing IMutateIncomingMessages:
public class PermissionValidator : IMutateIncomingMessages
{
    private CommandBase cmd;
    private User user;

    public object MutateIncoming(object message)
    {
        ValidatePermission(message);
        return message;
    }

    private void ValidatePermission(object message)
    {
        cmd = message as CommandBase;
        // not a command: nothing to validate
        if (cmd == null)
            return;

        // try to load the custom security permission from the command
        var pa = cmd.GetType().GetCustomAttribute(typeof(RequirePermissionAttribute), true) as RequirePermissionAttribute;
        // if the class isn't decorated with RequirePermission, nothing to validate
        if (pa == null)
            return;

        user = LoadUser();
        if (user == null)
            throw new SecurityException("Is the User null?");

        ValidatePermission(pa.Permission);
    }

    private User LoadUser()
    {
        if (cmd.SubmittedBy == null)
            return null;

        // todo :: load our user from repo...
        return user;
    }

    private void ValidatePermission(string permission)
    {
        // todo :: add your custom permission validation...
    }
}
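One detail worth remembering: for NSB to run the mutator, it has to be registered in the container. With the NSB v5-style API, that registration could be sketched like this:

```csharp
// in SecurityRegistry.Customize: register the mutator so NSB
// runs it for every incoming message (NSB v5-style API)
busConfiguration.RegisterComponents(c =>
    c.ConfigureComponent<PermissionValidator>(DependencyLifecycle.InstancePerCall));
```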

The RequirePermissionAttribute

The last piece of the puzzle is how to automatically map permissions to commands. This can easily be done by creating a custom attribute like:
public class RequirePermissionAttribute : Attribute
{
    public string Permission { get; set; }

    /// <summary>
    /// Validates only if the SubmittedBy is an existing User
    /// </summary>
    public RequirePermissionAttribute() { }

    /// <summary>
    /// Validates if the SubmittedBy is an existing User AND if that user has the specified permission
    /// </summary>
    /// <param name="p"></param>
    public RequirePermissionAttribute(string p)
    {
        Permission = p;
    }
}
That attribute can now be used to decorate the commands in our service layer, for example (the permission name is illustrative):
[RequirePermission("UpdateUser")]
public class UpdateUser : ContentCommandBase
{
    public string Name { get; set; }
    // etc...
}

Associating Permissions and Commands

The last step is building the association between permissions and commands. The line below, from our validator, does exactly that: it reads the permission attribute off the incoming command:
// load custom security permissions from command
var pa = cmd.GetType().GetCustomAttribute(typeof(RequirePermissionAttribute), true) as RequirePermissionAttribute;
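From there, the actual check could be sketched as below - SecurityController.HasPermission is a hypothetical method that walks our user -> groups -> roles -> permissions chain:

```csharp
private void ValidatePermission(string permission)
{
    // attribute used without a permission name: only the user is validated
    if (string.IsNullOrEmpty(permission))
        return;

    if (!SecurityController.HasPermission(user, permission))
        throw new SecurityException(
            string.Format("User '{0}' lacks permission '{1}'", cmd.SubmittedBy, permission));
}
```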


In this post we reviewed how to inject our custom security logic into a messaging framework like NServiceBus. Since most of these frameworks have extension points in their pipelines, it shouldn't be complicated to take a similar approach with MassTransit, for example. If you want to read other posts about NServiceBus, please also consider:


    See Also

      Monday, September 18, 2017

      Security is only as strong as your weakest node

      If someone said to you that security is only as strong as the weakest link, would you know what that means?
      When we talk security, there are certain patterns that as developers, we should know about. The "Security is only as strong as your weakest node" is one of them. Bruce Schneier explained that on his excellent Secrets and Lies book as:
      Security is also like a chain. It is composed of many links, and each one of them is essential to the strength of the chain. And like a chain, security is only as strong as the weakest link. In this part, we look at the different security technologies that make up a chain, looking from the inside of the onion to the outside
      Let's review with a concrete example what that means.

      A simple multi-layered Architecture

      Suppose we have the following simple architecture, common in most IT shops:
      As seen, the above architecture is composed of:
      • a firewall;
      • one web server;
      • one database server;
      • one backend server;
      • storage accounts;
      • a messaging system used to communicate between the servers;
      • probably a build server;
      • probably a deploying mechanism;
      • probably multiple workstations with developers working on them;
      • probably multiple 3rd-party packages and dependencies used in development projects;
      • probably making use of multiple SaaS, PaaS, IaaS.

      And what's necessary to secure this architecture? Let's review a security checklist.

      A security checklist

      In my company, for example, this is the minimum I would expect:
      • on the firewall: inbound and outbound rules are safe, the firewall software is kept up to date, the accounts to access the service have strong passwords, the server is kept up to date, etc;
      • on the web server: ensuring only necessary ports are open, services are not leaking any information, the server is patched, the webserver software itself is patched, the services utilized are secure (such as ssh, sftp, etc) and up to date, accounts to access the service have strong, rotating passwords, the code is reasonably resilient against attacks, has anti-virus software running, etc;
      • on the database server: ensuring not open to sql injection attacks, server is patched, database software is patched, no horizontal/vertical escalation can happen, accounts to access the service have strong, rotating passwords, has anti-virus software running, etc;
      • in the backend: code is somehow safe against attacks, OS is kept up to date, all 3rd-party libraries are kept up to date, accounts to access the service have strong, rotating passwords, has anti-virus software running, etc;
      • on the storage accounts: have strong passwords, have some sort of rotation mechanisms
      • the build server: the build is safe against external code injection, the build service is kept up to date, the OS is kept up to date, has anti-virus software running, etc.
      • the deployment process: is done automatically with little option to interference, has the source OS patched, guaranteed that the build is intact, communicates to secure channels, etc;
      • users have a clear understanding of how spam, phishing, smishing, vishing, malicious links can affect the internal network
      • developers are aware of the vulnerabilities in 3rd-party packages utilized in their projects and are constantly upgrading their dependencies to the latest available stable version
      So just on the infrastructure side we have at least 40 very important policies for our development architecture of 6 nodes! How complicated is it to maintain all of that? Well, considering that the CVE database is dynamic, and considering the complexity of the above systems, I'd say it's very, very complicated.

      What about Development?

      And what about on the user/development side? If we factor the # of accounts, # of developers, # of 3rd-party software used and multiple attack types, that list escalates very quickly!

      Let it be clear that, of course, some items on the list above are more critical than others, but you get the point. Why would an attacker spend time trying to crack your super-secure session encryption algorithm if your unsecured instance of MongoDB is running openly on the internet, or if you're using admin/admin as username/password, or if you're not aware of the vulnerabilities in your tools, or... provide-your-example-here. That's why security is only as strong as the weakest link.


      It won't matter if it's in jQuery, in poorly implemented code, in a SQL injection or on a printer. The attacker will always seek the easiest route to get the job done. And it's our job to try to be as safe as we can.


      See Also

      Friday, September 8, 2017

      Security and Ethics

      Understand how reputation, security, ethics are important today and learn how they affect your security online.
      Security is important. We also know that there is no such thing as absolute security. But can we do it better?

      Of course. We can and should do our best to secure our applications, infrastructure, code, policies, etc. - but in the end, security is just another technical requirement; a very important one that supports the reputation and the perception of a company. In this post, let's discuss reputation, security and ethics, why they matter, and how they affect online security in information technology.


      Reputation is a very important asset that companies (and people) should pursue and work hard to keep because, once lost, it's hard to regain. For example, a recent survey by The Identity Theft Resource Center (ITRC) found that 41 percent of surveyed people said they wouldn't do business with a breached company again.
      With that said, let's discuss LastPass and LogMeIn before jumping to insights on the latest Equifax hack.

      The LastPass case

      I was looking for information on password managers and found this interesting post on Troy Hunt's website describing why he decided to stop using LastPass after it was acquired by LogMeIn:
      In that blog post he said:
      Companies like LastPass live and die by reputation and incidents like their breach in July that exposed master password hashes are hugely significant due to the impact it has on the perception of the company.

      The Equifax case

      Now, let's jump to Equifax. According to Wikipedia,
      Equifax collects information on over 800 million individual consumers and more than 88 million businesses worldwide. Equifax has US$ 3.1 billion in annual revenue and 9,000 employees in 14 countries.
      Not a small company, right? But what about the hack?
      In September 2017, Equifax announced a cyber-security breach, which it claims to have occurred between mid-May and July 2017, where cybercriminals accessed approximately 143 million U.S. Equifax consumers' personal data, including their full names, Social Security numbers, birth dates, addresses, and, in some cases, driver license numbers.

      Equifax also confirmed at least 209,000 consumers' credit card credentials were taken in the attack. The company claims to have discovered the evidence of cybercrime event on July 29, 2017. Residents in the United Kingdom and Canada were also impacted.
      According to Bloomberg and ARS Technica (just to name a few) it's probably "one of the biggest hacks in history". Personally, I couldn't agree more with them. ARS Technica describes:
      the Equifax data breach is "very possibly the worst leak of personal info ever." The breach, via a security flaw on the Equifax website, included full names, Social Security numbers, birth dates, addresses, and driver license numbers in some cases. Many of the affected consumers have never even directly done business with the giant consumer credit reporting agency.

      A highly problematic solution

      If all of that wasn't enough, ARS Technica also reports that the site created to alert users was "highly problematic for a variety of reasons". For example, it was found on 9/8/2017 at 9AM PT that the site was leaking data:

      Source: ARS Technica

      Yes, an open endpoint leaking data on a website created to reassure users that everything was supposed to be OK. It was removed a little after, but you get it. The company that had already had its reputation and perception damaged (because of insecure systems) was trying to calm everyone by creating a website full of naive technical defects.

      A highly problematic solution - Part Deux

      Update: On Sep 17, 2017, Brian Krebs reported that researchers found an Equifax portal in Argentina, with access to extremely confidential information, configured with admin/admin:
      It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”
      How critical is that? I think we're seeing a sad pattern here. I hope we don't see any other chapters in this story, because the leak is already pretty critical.

      A highly problematic solution - Part Trois

      Update: On Sep 21, 2017 - yes, it can get worse! According to The Verge:
      In a tweet to a potential victim, the credit bureau linked to, instead of It was an easy mistake to make, but the result sent the user to a site with no connection to Equifax itself. Equifax deleted the tweet shortly after this article was published, but it remained live for nearly 24 hours.
      Gizmodo captured the tweets:

      Pretty sure there are way more out there exploiting the situation. The level of incompetence is astonishing!


      And then, if all of that wasn't enough, we get to ethics. Bloomberg News reports that three Equifax managers sold stock before the cyber hack was revealed. In fact, Wolf Richter summarized this well for us:
      Turns out, Equifax got hacked – um, no, not today. Today it disclosed that it had discovered on July 29 – six weeks ago – that it had been hacked sometime between “mid-May through July,” and that key data on 143 million US consumers was stolen. There was no need to notify consumers right away. They’re screwed anyway. But it gave executives enough time to sell 2 million shares between the discovery of the hack and today, when they crashed 13% in late trading.
      The interval between the supposed hack and its public announcement was enough to allow insiders to sell 2 million shares. How can the company's perception improve after that? It's probably not going to happen in the near future, especially after more and more bad news about misconfigured websites, data leaks and links to a fake site.


      So, that's how security, reputation and ethics converge. Perception derives from them and is highly influenced by them. Security is hard. Ethics, on the other hand, can and should be easy - but only if we want to do it. It's about time companies do their best to protect their biggest assets: their customers, their data, their privacy.

      And it all can start with us developers by writing safer, better code.

      See Also

      Friday, September 1, 2017

      Custom security on ASP.NET applications

      Most of the time, the default security of ASP.NET web applications is insufficient. In this post, let's see how we can extend it with a custom implementation.
      Photo by Shahadat Rahman on Unsplash

      Due to how security in the ASP.NET framework is set up, ASP.NET developers frequently have to write custom security controls for their applications. A classic example is roles/permissions. How do we extend what the framework offers to meet business needs while keeping the existing features functioning?

      Reviewing the current implementation

      Your current implementation is probably based on Role Providers, which is what everyone else is doing. That said, you're probably using a standard solution like this one, where you specify a role in the AuthorizeAttribute:
      [Authorize(Roles = "MyRole")]
      Assuming you implemented your RoleProvider correctly, authorization should work out of the box.
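For context, a bare-bones custom RoleProvider only really needs to answer who is in which role. A sketch (UserRepository is a hypothetical data-access class; the remaining abstract members would still need stubs):

```csharp
public class CustomRoleProvider : RoleProvider
{
    public override string[] GetRolesForUser(string username)
    {
        // UserRepository is a hypothetical data-access class
        return UserRepository.GetRoles(username);
    }

    public override bool IsUserInRole(string username, string roleName)
    {
        return GetRolesForUser(username).Contains(roleName);
    }

    // the remaining RoleProvider members (CreateRole, DeleteRole, etc.)
    // would be implemented or stubbed as needed...
}
```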

      A custom requirement

      The problem is that custom requirements demand a more granular solution than what roles alone can provide: bending roles to fit would mean writing stuff we don't need while still not satisfying the business requirements. That's the trade-off of out-of-the-box solutions, so most developers end up writing custom code.

      Permission requirements

      The required permissioning model is basic. We have to support the concepts of User, Groups, Roles and Permissions (or Claims), plus the traditional relationships between them. Users can view everything but only act on certain contexts.

      In terms of development, the following use cases derive from it:
      • The web app should show/hide controls accordingly (and we don't want to do this with JavaScript);
      • Each endpoint should validate that the user is authorized and has permission to call it;
      • There should be a mapping users -> groups; groups -> roles; roles -> permissions;
      • When users log in, we need to load their permissions;
      • Every message sent to the backend should have a SubmittedBy, which should be validated to guarantee that the user had access to that feature;
      • Reject the request if the user doesn't have the permission (either in the front end or in the backend);
      • Log SecurityExceptions or whatever is relevant for you.

      Introducing Claims

      Claims are granular permissions linked to resources in your application. You can use them rather than relying on role-based permissions, which lets you control access to each resource/action in detail instead of having a multitude of roles in the system. I personally like a mixture of both: the standard role-based authorization features available in ASP.NET MVC, plus the granular claims/permissions for the more complicated rules.

      The Solution

      So let's review the solution. I broke it into four parts:
      • The basic Infrastructure - the code necessary to provide access to the permissioning system, validation, constants, etc.
      • ASP.NET - secure the web app so that all actions are authorized;
      • Backend - validating if the requests reaching the backend also respect the same concerns already printed in the previous layers;
      • Refactorings - you will probably need to refactor your framework to apply authorization in certain parts of your application, or even create a generic application filter to auto-process every command sent to it. Topic for a future blog post.
      Before writing code, though, let's review some good design practices:
      • Code Reuse - centralize your roles in a single class in an assembly so everyone speaks the same dialect
      • Use consts / readonly strings - please don't hardcode your permissions everywhere in your code
      • Write some extension methods / helper classes so you respect the single responsibility principle
      • Performance - I used dependency injection to inject the permissioning configuration from the DB into my PermissionController class so that it is loaded only once and kept in cache for the lifetime of the application. In case you forgot, Application_Start is your MVC app's composition root
      • Review security principles
      • Write tests

      The Basic Infrastructure

      I like writing code bottom-up so I can unit test it as I write it. So the first thing I created in this project was a SecurityController - a facade to control all my custom authorization:

      The static Init method is used to initialize our framework. It gets roles and groups from the db, memory or whatever you're using. That way, the logic is db-independent, meaning it can be reused everywhere, is lightweight, can be easily cached and is performant by default.

      The other methods are pretty self-descriptive. Interestingly, because this class is so lightweight, it can be reused throughout all your assemblies, tests and even different projects without overhead. And it's also easily extensible while remaining closed for modification, respecting the open/closed (SOLID) principle.

      Initializing the Framework

      In order to properly initialize the framework via Dependency Injection, we need to understand what our composition roots are. In our case we would have:
      • ASP.NET - global.asax;
      • A console Application - your static void main;
      • Backend - varies according to the technology.
      The composition root for my ASP.NET web app is the Application_Start method. So, the initialization happens like this:
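A sketch of what that initialization could look like in Global.asax (SecurityRepository is a hypothetical data-access class):

```csharp
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    // composition root: load roles/groups once; SecurityController caches them
    var repo = new SecurityRepository();
    SecurityController.Init(repo.GetRoles(), repo.GetGroups());
}
```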
      Let's now take a look at how to secure our app. Essentially we want to be covered on:
      • Views - render/hide stuff in razor according to user's permissions
      • Controllers - authorizing/revoking access on controllers and actions
      • ActionFilters - to auto-validate content that's being injected.

      Securing our Views

      The first obvious step is to hide content the user isn't supposed to see. Remember, we should not do this via JavaScript, as client-side security doesn't work. To hide content, we need to tell Razor not to render it when the user doesn't have access to the resource. That can be done simply like this:
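As a sketch of that idea in Razor (HasPermission is a hypothetical helper on our SecurityController):

```cshtml
@if (SecurityController.HasPermission(User.Identity.Name, Permissions.Edit))
{
    <a href="@Url.Action("Edit")" class="btn">Edit</a>
}
```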

      Here, I have a few interesting points:
      • using an extension method in my Razor to reduce code duplication;
      • moving all my permissions to the Permissions class - to avoid hardcoding permission names throughout the application;

      While it's true that the above code would work, a more interesting option is to implement it elegantly by extending HtmlHelper to create custom methods such as SecureLink and SecureButton and then using those controls in Razor:

      The advantage of this approach is that it simplifies your Razor markup and reuses the validation logic, like this:
      @Html.SecureButton("Edit", "btn", Permissions.Edit)
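Such an extension method could be sketched like this (again, SecurityController.HasPermission is an assumed helper):

```csharp
public static class SecureHtmlHelpers
{
    public static MvcHtmlString SecureButton(this HtmlHelper html,
        string text, string cssClass, string permission)
    {
        var userName = html.ViewContext.HttpContext.User.Identity.Name;

        // render nothing when the user lacks the permission
        if (!SecurityController.HasPermission(userName, permission))
            return MvcHtmlString.Empty;

        var button = new TagBuilder("button");
        button.AddCssClass(cssClass);
        button.SetInnerText(text);
        return MvcHtmlString.Create(button.ToString());
    }
}
```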

      The RequirePermission ActionFilter

      In order to secure our controllers, we'll need two things:
      • create an attribute to auto-validate requests to a given action/controller;
      • secure our actions/controllers through application filters;

      I created the RequirePermission attribute to validate, as an ActionFilter, all the requests issued to my app. It looks more or less like this:
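A sketch of that attribute (SecurityController.HasPermission is assumed; on failure we return a 403):

```csharp
public class RequirePermissionAttribute : ActionFilterAttribute
{
    private readonly string permission;

    public RequirePermissionAttribute(string permission)
    {
        this.permission = permission;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var userName = filterContext.HttpContext.User.Identity.Name;

        // block the request before the action runs
        if (!SecurityController.HasPermission(userName, permission))
            filterContext.Result = new HttpStatusCodeResult(403, "Forbidden");
    }
}
```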

      Adding Authorization to the Actions

      With all of that out of the way we can now decorate our actions like this:
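For example (the action, service call and permission name are illustrative):

```csharp
[RequirePermission(Permissions.Edit)]
public ActionResult Edit(int id)
{
    var model = userService.Get(id);  // hypothetical service call
    return View(model);
}
```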
      Take a look at the RequirePermission ActionFilter above. With a one-liner in the controller, we get validation working in our ASP.NET application using our custom security implementation. Now it's just a matter of manual work to decorate our actions with the validations you want. Remember that we did all this work because we require granular permission validation.

      Securing the Backend

      Because we separated the concerns in our application, delegating to the SecurityController all the logic to validate permissions against users, our backend, whatever it is, can reuse it.


      In this post we analyzed a custom implementation for securing ASP.NET applications. There's a lot more to say about this topic, which will be covered in future blog posts. Stay tuned!

      See Also

      About the Author

      Bruno Hildenbrand      
      Principal Architect, HildenCo Solutions.