
Tuesday, June 1, 2021

Microservices in ASP.NET

Microservices are the latest significant shift in modern development. Let's learn some tools and related design patterns by building a simplified e-commerce website using modern tools and techniques such as ASP.NET Core and Docker.
Photo by Adi Goldstein on Unsplash

For some time we've been discussing tools and technologies adjacent to microservices on this blog. Not randomly, though. Most of these posts derived from my open-source project aspnet-microservices, a simple (yet complicated 😉) distributed application built primarily with .NET Core and Docker. While still a work in progress, the project demos important concepts in distributed architectures.

What's included in the project

This project uses popular tools such as:
On the administrative side, the project also includes:

Disclaimer

When you create a sample microservice-based application, you need to deal with complexity and make tough choices. For the aspnet-microservices application, I deliberately chose to balance complexity and architecture by reducing the emphasis on design patterns and focusing on the development of the services themselves. The project was built to serve as an introduction and a starting point for those looking to work with Docker, Compose and microservices.

This project is not production-ready! Check Areas for Improvement for more information.

Microservices included in this project

So far, the project consists of the following services:

  • Web: the frontend for our e-commerce application;
  • Catalog: provides catalog information for the web store;
  • Newsletter: accepts user emails and stores them in the newsletter database for future use;
  • Order: provides order features for the web store;
  • Account: provides account services (login, account creation, etc) for the web store;
  • Recommendation: provides simple recommendations based on previous purchases;
  • Notification: sends email notifications upon certain events in the system;
  • Payment: simulates a payment provider;
  • Shipping: simulates a shipping provider;

Technologies Used

The technologies used were cherry-picked from the most commonly used by the community. I chose to favour open-source alternatives over proprietary (or commercially-oriented) ones. You'll find in this bundle:
  • ASP.NET Core: as the base of our microservices;
  • Docker and Docker Compose: to build and run containers;
  • MySQL: serving as a relational database for some microservices;
  • MongoDB: serving as the catalog database for the Catalog microservice;
  • Redis: serving as distributed caching store for the Web microservice;
  • RabbitMQ: serving as the queue/communication layer over which our services will communicate;
  • MassTransit: the interface between our apps and RabbitMQ supporting asynchronous communications between them;
  • Dapper: lightweight ORM used to simplify interaction with the MySQL database (see the sketch after this list);
  • SendGrid: used to send emails from our Notification service as described on a previous post;
  • Vue.js and Axios: used to build the frontend of the Web microservice on a simple and powerful JavaScript framework.
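To give a concrete flavour of the Dapper bullet above, here is a small illustrative sketch (not code from the project; the table, column and class names are made up) of how one of the MySQL-backed services could run a query with it:
// Illustrative only: a hypothetical repository querying MySQL with Dapper.
// Assumes the Dapper and MySqlConnector NuGet packages are referenced.
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using MySqlConnector;

public class Order
{
    public int Id { get; set; }
    public string AccountId { get; set; }
    public decimal Total { get; set; }
}

public class OrderRepository
{
    private readonly string _connStr;

    public OrderRepository(string connStr) => _connStr = connStr;

    public async Task<IEnumerable<Order>> GetOrdersByAccount(string accountId)
    {
        using var conn = new MySqlConnection(_connStr);
        // Dapper maps the selected columns onto the Order POCO by name
        return await conn.QueryAsync<Order>(
            "SELECT id, account_id AS AccountId, total FROM orders WHERE account_id = @accountId",
            new { accountId });
    }
}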

Conventions and Design Considerations

Among others, you'll find in this project that:
  • The Web microservice serves as the frontend for our e-commerce application and implements the API Gateway / BFF design patterns routing the requests from the user to other services on an internal Docker network;
  • Web caches catalog data in a Redis data store. Feel free to use Redis Commander to delete cached entries if you wish or need to;
  • Each microservice has its own database isolating its state from external services. MongoDB and MySQL were chosen as the main databases due to their popularity.
  • All services were implemented as ASP.NET Core web apps exposing the endpoints /help and /ping so they can be inspected and monitored automatically by the running engine (see the sketch after this list);
  • No special logging infrastructure was added. Logs can be easily accessed via docker logs or indexed by a different application if you so desire.
  • Microservices communicate between themselves via Pub/Sub and asynchronous request/response using MassTransit and RabbitMQ.
  • The Notification microservice will eventually send emails. This project was tested with SendGrid, but other SMTP servers should work from inside or outside the containers;
  • Monitoring is experimental and includes Grafana sourcing its data from a Prometheus backend.
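To make the /help and /ping convention above concrete, here is a minimal sketch (assumed code, not the project's exact implementation) of how a service could expose those endpoints with ASP.NET Core endpoint routing:
// A minimal sketch (assumed code): exposing /ping and /help from an ASP.NET Core service.
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();
            // simple liveness probe other tools can call
            endpoints.MapGet("/ping", async ctx => await ctx.Response.WriteAsync("pong"));
            // basic self-description endpoint
            endpoints.MapGet("/help", async ctx => await ctx.Response.WriteAsync("endpoints: /ping, /help"));
        });
    }
}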

Technical Requirements

To run this project on your machine, please make sure you have installed:

If you want to develop/extend/modify it, then I'd suggest you to also have:

Running the microservices

So let's quickly learn how to load and build our own microservices.

Initializing the project

Get your copy by cloning the project:
git clone https://github.com/hd9/aspnet-microservices

Next open the solution src/AspNetContainers.sln with Visual Studio 2019. Since code is always the best documentation, the easiest way to understand the containers and their configurations is by reading the src/docker-compose.yml file.

Debugging with Visual Studio

Building and debugging with Visual Studio 2019 is straightforward. Simply open the AspNetMicroservices.sln solution from the src folder, build and run the project as debug (F5). Next, run the dependencies (Redis, MongoDB, RabbitMQ and MySQL) by issuing the below command from the src folder:

docker-compose -f docker-compose.debug.yml up

Running the services with Docker Compose

In order to run the services you'll need Docker and Docker Compose installed on your machine. Type the command below from the src folder on a terminal to start all services:
docker-compose up
Then to stop them:
docker-compose down
To remove everything, run:
docker-compose down -v
To run a specific service, do:
docker-compose up <service-name>
As soon as you run your services, Compose should start emitting logs for each service on the console:
The output of our docker-compose command

You can also query individual logs for services as usual with docker logs <svc-name>. For example:

~> docker logs src_catalog_1
info: CatalogSvc.Startup[0]
      DB Settings: ConnStr: mongodb://catalog-db:27017, Db: catalog, Collection: products
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

Database Initialization

Database initialization is automatically handled by Compose. Check the docker-compose.yml file to understand how that happens. You'll find examples on how to initialize both MySQL and MongoDB.

Dockerfiles

Each microservice contains a Dockerfile in their respective roots and understanding them should be straightforward. If you never wrote a Dockerfile before, consider reading the official documentation.

Docker Compose

There are two docker-compose files in the solution. Their use is described below:
  • docker-compose.yml: this is the main Compose file. Running this file means you won't be able to access some of the services as they'll not be exposed.
  • docker-compose.debug.yml: this is the file you should run if you want to debug the microservices from Visual Studio. This file only contains the dependencies (Redis, MySQL, RabbitMQ, Mongo + admin interfaces) you'll need to use when debugging.

Accessing our App

If the application booted up correctly, go to http://localhost:8000 to access it. You should see a simple catalog and some other widgets. Go ahead and try to create an account. Just make sure that you have the settings correctly configured on your docker-compose.yml file:
Our simple e-commerce website. As most things, its beauty is in the details 😊.

    Admin Interfaces

    You'll also have admin interfaces available for our services at:
    I won't go over the details about each of these apps. Feel free to explore on your own.

    Monitoring

    Experimental monitoring is available with Grafana, Prometheus and cAdvisor. Open Grafana at http://localhost:3000/, log in with admin | admin, select the Docker dashboard and you should see metrics for the services similar to:

    Grafana capturing and emitting telemetry about our microservices.

    Quick Reference

    As a summary, the microservices are configured to run at:

    The management tools are available on:

    And you can access the databases at:
    • MySQL databases: use Adminer at http://localhost:8010/, enter the server name (ex. order-db for the order microservice) and use root | todo as username/password.
    • MongoDB: use MongoExpress at: http://localhost:8011/. No username/password is required.

    Final Thoughts

    In this post I introduced my open-source project aspnet-microservices. This application was built as a way to present the foundations of Docker, Compose and microservices to the whole .NET community and hopefully serves as an intuitive guide for those starting in this area.

    Microservices are the latest significant shift in modern development and require learning lots (really, lots!) of new technologies and new design patterns. This project is far from complete and should not be used in production as it lacks basic cross-cutting concerns any production-ready project would need; I deliberately omitted them for simplicity. For more information, check the project's README on GitHub.

    Feel free to play with it and above all, learn and have fun!

    Source Code

    As always, the source code is available on GitHub at: github.com/hd9/aspnet-microservices.

    Tuesday, December 1, 2020

    Distributed caching in ASP.NET Core using Redis, MongoDB and Docker

    Redis is the world's most popular caching database. Let's review how to implement distributed caching in ASP.NET Core using Redis, MongoDB and Docker Compose.
    Photo by Christian Nielsen on Unsplash

    One of the things that every modern website needs is caching. After all, we don't want to be paged at 2 AM because our services are down after a spike in usage that our databases couldn't handle.

    One common solution to reducing the stress on our applications is placing a fast caching service between our website and our database. Modern caching implementations include requirements around decreasing response time, distributed caching (sharing the same cache between multiple web instances) and cost reduction. Most implementations today use Redis (a super-fast, in-memory key-value database) as a cache service sitting in front of a database of choice.

    In this post we will implement a fictional ASP.NET Core e-commerce website using MongoDB as the database and Redis as a cache service, both running on Docker with Docker Compose, so that we can understand how it all works together.

    In this post we will:
    • Scaffold an ASP.NET Core website
    • Implement a catalog service using MongoDB
    • Implement distributed caching using Redis
    • Run our dependencies using Docker Compose
    • Set up Redis Commander and Mongo Express to view/manage our services

    Setting up an ASP.NET Core website

    Let's quickly scaffold an ASP.NET Core website using the command line with:
    dotnet new mvc -n AspNetDistributedCaching
    Then, add the below configuration to your appsettings.json file:
      "Mongo": {
        "ConnectionString": "mongodb://localhost:27017",
        "Db": "catalog",
        "Collection": "products"
      },
      "Redis": {
        "Configuration": "localhost",
        "InstanceName": "web"
      }
    Next, add the config classes and bind these configs. In case you missed it, feel free to review how configurations work in ASP.NET Core 3.1 projects in a previous article.
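    As a reference, here is a minimal sketch of what those config classes and the binding could look like (the class names below are my own assumptions; adapt them to your project):
    // Assumed option classes mirroring the "Mongo" and "Redis" sections above.
    public class MongoOptions
    {
        public string ConnectionString { get; set; }
        public string Db { get; set; }
        public string Collection { get; set; }
    }

    public class RedisOptions
    {
        public string Configuration { get; set; }
        public string InstanceName { get; set; }
    }

    // In Startup.ConfigureServices (Configuration being the injected IConfiguration):
    // var mongo = Configuration.GetSection("Mongo").Get<MongoOptions>();
    // var redis = Configuration.GetSection("Redis").Get<RedisOptions>();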

    Setting up dependencies

    Let's now set up our dependencies: Redis, MongoDB and the management interfaces Redis Commander and Mongo Express. Despite sounding complicated, it's actually very simple if we use the right tools: Docker and Docker Compose.

    Docker Compose 101

    Without going into much detail, let me briefly re-introduce Docker Compose:
    Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. See the list of features.
    Setting up Compose in your project is actually very simple. For starters, paste this into a docker-compose.yml file at the root of the project. We'll add the services and their respective configurations next:
    version: '3.7'
    services:
      # we'll add our services on the next steps

    Configuring our MongoDB instance

    Next, let's configure MongoDB. Paste the snippet below at the bottom of your docker-compose.yml file. It will instruct Compose to init a Mongo instance called catalog-db and initialize the catalog database from the ./db.js file:
      catalog-db:
        image: mongo
        environment:
          # MONGO_INITDB_ROOT_USERNAME: root
          # MONGO_INITDB_ROOT_PASSWORD: todo
          MONGO_INITDB_DATABASE: catalog
        volumes:
        - ./db.js:/docker-entrypoint-initdb.d/db.js:ro
        expose:
          - "27017"
        ports:
          - "3301:27017"

    Configuring our Redis instance

    As with MongoDB, let's now setup our Redis cache. Paste this at the bottom of your docker-compose.yml file:
      redis:
        image: redis:6-alpine
        expose:
          - "6379"
        ports:
          - "6379:6379"

    Configuring the Management interfaces

    Let's now set up the management interfaces, Redis Commander for Redis and Mongo Express for Mongo, to access our resources (I'll show how to use them later). Again, paste the below into your docker-compose.yml file:
      # Mongo Express: tool to manage our Mongo database
      mongo-express:
        image: mongo-express
        restart: always
        ports:
          - "8011:8081"
        environment:
          - ME_CONFIG_MONGODB_SERVER=catalog-db
          # MONGO_INITDB_ROOT_USERNAME: root
          # MONGO_INITDB_ROOT_PASSWORD: todo
        depends_on:
          - catalog-db

      # Redis Commander: tool to manage our Redis container from localhost
      redis-commander:
        image: rediscommander/redis-commander:latest
        environment:
          - REDIS_HOSTS=redis
        ports:
          - "8013:8081"
        depends_on:
          - redis

    Querying Catalog Data

    Obviously, in order to cache the data, we need to have it first. So let's implement a simple MongoDB wrapper using the Repository Pattern, which I'll call CatalogRepository. Its interface looks like:
    public interface ICatalogRepository
    {
        Task<IList<Category>> GetCategories();
        Task<Category> GetCategory(string slug);
        Task<Product> GetProduct(string slug);
        Task<IList<Product>> GetProductsByCategory(string slug);
    }
    For the concrete implementation, don't forget to add the MongoDB.Driver NuGet package. I show a simple query below. To view it in full, check this demo's repo:
    public async Task<IList<Category>> GetCategories()
    {
        var c = _db.GetCollection<Category>("categories");
        return (await c.FindAsync(new BsonDocument())).ToList();
    }
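    For context, here is a minimal, self-contained sketch of how the concrete repository could wire up MongoDB.Driver using the "Mongo" settings from appsettings.json (the constructor signature and the Category fields below are my own assumptions; the real class is in the repo):
    // Assumed sketch of the concrete repository; check the demo's repo for the real code.
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using MongoDB.Bson;
    using MongoDB.Driver;

    // Simplified POCO; the real model likely has more fields.
    public class Category
    {
        public string Name { get; set; }
        public string Slug { get; set; }
    }

    public class CatalogRepository
    {
        private readonly IMongoDatabase _db;

        // connectionString and database come from the "Mongo" config section
        public CatalogRepository(string connectionString, string database)
        {
            _db = new MongoClient(connectionString).GetDatabase(database);
        }

        // Same query shown above, reprised here in context
        public async Task<IList<Category>> GetCategories()
        {
            var c = _db.GetCollection<Category>("categories");
            return (await c.FindAsync(new BsonDocument())).ToList();
        }
    }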
    Next, let's set up DI for this guy using ASP.NET's DI framework:
    services.AddTransient<ICatalogRepository, CatalogRepository>();
    To view the code in full, check it on this demo's repo at github.com/hd9/aspnet-distributed-caching.

    Caching Catalog Data

    With the repository working, let's implement the caching. I'll divide this task into:
    1. setting up Redis with distributed caching
    2. implementing a Service class and
    3. adding the caching logic to the service class.

    Setting up the Redis Initialization

    First, add NuGet references to Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Caching.StackExchangeRedis. Then add a call to services.AddStackExchangeRedisCache in ConfigureServices:
    services.AddStackExchangeRedisCache(o =>
    {
        o.Configuration = cfg.Redis.Configuration;
        o.InstanceName = cfg.Redis.InstanceName;
    });

    Implementing a Service Class

    The next part consists of implementing the caching logic. That logic will live in CatalogSvc, which implements the service design pattern and abstracts both the repository and the caching implementations from the controller. Its first important part is the constructor, which looks like this:
    public CatalogSvc(
        ICatalogRepository repo,
        IDistributedCache cache)
    {
        _repo = repo;
        _cache = cache;
    }

    Adding the caching logic

    Then, every method of the above interface is implemented similarly. For example, the GetCategories method that feeds the landing page looks like this:
    public async Task<IList<Category>> GetCategories()
    {
        return await GetFromCache<IList<Category>>(
            "categories",
            "*",
            async () => await _repo.GetCategories());
    }
    And our GetFromCache private method that generalizes caching logic for our catalog service looks like:
    private async Task<TResult> GetFromCache<TResult>(
        string key,
        string val,
        Func<Task<object>> func)
    {
        var cacheKey = string.Format(_keyFmt, key, val);
        var data = await _cache.GetStringAsync(cacheKey);

        if (string.IsNullOrEmpty(data))
        {
            data = JsonConvert.SerializeObject(await func());

            await _cache.SetStringAsync(
                cacheKey,
                data);
        }

        return JsonConvert.
            DeserializeObject<TResult>(data);
    }
    The interesting bit of the above piece of code is that before querying the database (which is abstracted behind a Func<> parameter), it searches the cache with GetStringAsync. If it finds an entry, it returns the cached data cast to the provided type (TResult) by deserializing it from its string value. If the cache key is not present, it invokes the target function and caches its result as a JSON string in the Redis cache.
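    One thing you could layer on top of this (the demo caches entries without an expiration) is an expiration policy so stale entries eventually get refreshed. A possible variation, assuming the same _cache field and using the DistributedCacheEntryOptions overload of SetStringAsync:
    // Optional variation (not in the demo's code): cache with a sliding expiration.
    // Requires: using System; using Microsoft.Extensions.Caching.Distributed;
    await _cache.SetStringAsync(
        cacheKey,
        data,
        new DistributedCacheEntryOptions
        {
            SlidingExpiration = TimeSpan.FromMinutes(10)
        });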

    To use it, first we wire it up to the DI container:
    services.AddTransient<ICatalogSvc, CatalogSvc>();
    So we can properly inject it and use it in our controllers:
    public HomeController(ICatalogSvc svc)
    {
        _svc = svc;
    }
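    Putting it together, here is a minimal sketch (the action name, view and model shape are assumptions, not the demo's exact code) of how the controller could use the service to render the landing page:
    // Assumed sketch: HomeController rendering the landing page from the cached catalog.
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    public class HomeController : Controller
    {
        private readonly ICatalogSvc _svc;

        public HomeController(ICatalogSvc svc)
        {
            _svc = svc;
        }

        // Categories come from Redis when cached, or from MongoDB on a cache miss
        public async Task<IActionResult> Index()
        {
            var categories = await _svc.GetCategories();
            return View(categories);
        }
    }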

    Management Interfaces

    To finish up, let's quickly review how to access the management interfaces, Mongo Express and Redis Commander.

    Accessing Mongo Express

    To view/modify your catalog, you can access Mongo Express at http://localhost:8011/. I changed the original port 8081 to 8011 since many services run on that port, in which case Compose would fail. But feel free to change that configuration in your docker-compose.yml file. As previously mentioned, this database is auto-initialized from the db.js file by Docker Compose. Here's a quick glance of Mongo Express displaying our catalog data:

    Accessing Redis Commander

    Redis Commander is a frontend for viewing and managing Redis. In this demo I run it at http://localhost:8013/ (as configured in docker-compose.yml). As before, feel free to change the port in your docker-compose.yml file. Here's Redis Commander showing our cached data:

    Running the Services

    The last part is describing how to run the services. As a .NET developer, you're already used to debugging and running your solutions with Visual Studio, and the same applies here. The only question that remains is how to run the dependencies. As mentioned, it's as simple as running the below command from the project's root with Docker Compose:
    docker-compose up
    You should see the services starting in the background, similar to this:
    To shutdown, run:
    docker-compose down
    Finally, to remove everything, run:
    docker-compose down -v

    Final Thoughts

    In this article we reviewed how to use Redis, a super-fast key-value database, as a distributed cache in front of a MongoDB database. In this example we also leveraged Docker and Docker Compose to simplify the setup and initialization of our project so we could get our application running and test it as quickly as possible.

    Redis is one of the world's most used and loved databases and a very common option for caching. I hope you also realized how Docker and Docker Compose help developers by simplifying the rebuilding of complex environments like this one.

    Source Code

    As always, the source code for this article is available on my GitHub.

    References

    Wednesday, July 15, 2020

    Hosting NuGet packages on GitHub

    In this post, let's review how to build, host and consume our own NuGet packages using GitHub Packages.
    Photo by Leone Venter on Unsplash

    Long gone are the days when we had to pay to host our NuGet packages. Today, things have changed. We have many options to host our own NuGet packages for free (including privately if we wish), including in our own GitHub repositories. In this tutorial, let's review how to build our own packages using .NET Core's CLI, push them to GitHub and, finally, consume them from our own projects.

    About NuGet

    NuGet is a free and open-source package manager designed by Microsoft and used extensively in the .NET / .NET Core ecosystem. NuGet is the name of both the tool and the package format itself. The most common repository for NuGet packages is NuGet.org, hosting more than 200k packages! But we can host our own packages on different repos (including private ones) such as GitHub Packages. NuGet is bundled with Visual Studio and with the .NET Core SDK, so you probably already have it available on your machine.

    About GitHub Packages

    GitHub Packages is GitHub's free offering for those wanting to host their own packages. GitHub Packages allows hosting public and private packages. The benefits of using GitHub Packages are that it's free, you can share your packages privately or with the rest of the world, integrate with GitHub APIs, GitHub Actions and webhooks, and even create complex end-to-end DevOps workflows. For more information about GitHub Packages, click here.

    Why build our own packages

    But why build our own packages? Mainly because packages simplify using and distributing self-contained, reusable software (tools, libraries, etc.) in a clean and organized way. Beyond that, other common reasons are:
    1. sharing packages with someone else (and possibly the world)
    2. sharing a package privately with your coworkers so it can be used in different projects.
    3. packaging software so it can be installed or deployed elsewhere.

    Building NuGet Packages

    So let's get started and build our first NuGet package. The project we'll build is a simple library consisting of POCOs I frequently use as standard configuration bindings when developing microservices: Smtp, Redis, RabbitMQ, MassTransit and MongoDB. I chose this example because this is the type of code we frequently duplicate, so why not isolate it in a shareable package and keep our codebase DRY?

    Creating our project

    To quickly create the project, let's use the .NET Core CLI (feel free to use Visual Studio if you prefer):
    dotnet new classlib -o HildenCo.Core
    Then I'll add those config classes. For example the SmtpOptions looks like:
    public class SmtpOptions
    {
        public string Host { get; set; }
        public int Port { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
        public string FromName { get; set; }
        public string FromEmail { get; set; }
    }

    Creating our first NuGet package

    Let's then create our first package. The simplest way to do so is by configuring it via Visual Studio. For that, select the project and press Alt-Enter (or right-click it with the mouse) to view Project Properties, then check Generate NuGet package on build on the Package tab:
    Don't forget to add relevant information about your package such as Id, Name, Version, Authors, Description, Copyright, License and RepositoryUrl. All that information is required by GitHub:
    If you prefer, you can edit the above metadata directly in the csproj file.
    Now, build again to confirm our package was built by inspecting the Build Output in VS (Ctrl-W, O):
    1>------ Build started: Project: HildenCo.Core, Configuration: Debug Any CPU ------
    1>HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\netstandard2.0\HildenCo.Core.dll
    1>Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Debug\HildenCo.Core.0.0.1.nupkg'.
    ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
    Congrats! You now have built your first package!
    Don't forget to add RepositoryUrl with your correct username/repo name. We'll need it to push to GitHub later.

    Creating our package using the CLI

    As always, the CLI may be a better alternative. Why? In summary, because it allows automating package creation in continuous integration, integrating with APIs and webhooks, and even creating end-to-end DevOps workflows. So, go ahead and uncheck that box and build it again with:
    dotnet pack --configuration Release
    This time, we should see this as output:
    Microsoft (R) Build Engine version 16.6.0+5ff7b0c9e for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.

      Determining projects to restore...
      All projects are up-to-date for restore.
      HildenCo.Core -> C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\netstandard2.0\HildenCo.Core.dll
      Successfully created package 'C:\src\nuget-pkg-demo\src\HildenCo.Core\bin\Release\HildenCo.Core.0.0.1.nupkg'.
    TIP: You may have realized that we now built our package as Release. This is another immediate benefit of decoupling our builds from VS. Only on rare occasions should we push packages built as Debug.

    Pushing packages to GitHub

    With the basics behind, let's review how to push your own packages to GitHub.

    Generating an API Key

    In order to authenticate to GitHub Packages the first thing we'll need is an access token. Open your GitHub account, go to Settings -> Developer Settings -> Personal access tokens, click Generate new Token, give it a name, select write:packages and save:

    Creating a nuget.config file

    With the API key created, let's create our nuget.config file. This file should contain the authentication for the package to be pushed to the remote repo. A base config is listed below; replace the placeholder fields with your own values:
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
        <packageSources>
            <clear />
            <add key="github" value="https://nuget.pkg.github.com/<your-github-username>/index.json" />
        </packageSources>
        <packageSourceCredentials>
            <github>
                <add key="Username" value="<your-github-username>" />
                <add key="ClearTextPassword" value="<your-api-key>" />
            </github>
        </packageSourceCredentials>
    </configuration>

    Pushing a package to GitHub

    With the correct configuration in place, we can push our package to GitHub with:
    dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
    This is what happened when I pushed mine:
    dotnet nuget push ./bin/Release/HildenCo.Core.0.0.1.nupkg --source "github"
    Pushing HildenCo.Core.0.0.1.nupkg to 'https://nuget.pkg.github.com/hd9'...
      PUT https://nuget.pkg.github.com/hd9/
      OK https://nuget.pkg.github.com/hd9/ 1927ms
    Your package was pushed.
    Didn't work? Check if you added RepositoryUrl to your project's metadata, as NuGet needs it to push to GitHub.

    Reviewing our Package on GitHub

    If you managed to push your first package (yay!), go ahead and review it in GitHub on the Package tab of your repository. For example, mine's available at: github.com/hd9/nuget-pkg-demo/packages and looks like this:

    Using our Package

    To complete the demo let's create an ASP.NET project to use our own package:
    dotnet new mvc -o TestNugetPkg
    To add a reference to your package, we'll use our own nuget.config since it contains pointers to our own repo. If your project has a solution, copy the nuget.config to the solution folder; otherwise, leave it in the project's folder. Open your project with Visual Studio and open Manage NuGet Packages. You should see your newly created package there:
    Select it and install:
    Review the logs to make sure no errors happened:
    Restoring packages for C:\src\TestNugetPkg\TestNugetPkg.csproj...
      GET https://nuget.pkg.github.com/hd9/download/hildenco.core/index.json
      OK https://nuget.pkg.github.com/hd9/download/hildenco.core/index.json 864ms
      GET https://nuget.pkg.github.com/hd9/download/hildenco.core/0.0.1/hildenco.core.0.0.1.nupkg
      OK https://nuget.pkg.github.com/hd9/download/hildenco.core/0.0.1/hildenco.core.0.0.1.nupkg 517ms
    Installing HildenCo.Core 0.0.1.
    Installing NuGet package HildenCo.Core 0.0.1.
    Committing restore...
    Writing assets file to disk. Path: C:\src\TestNugetPkg\obj\project.assets.json
    Successfully installed 'HildenCo.Core 0.0.1' to TestNugetPkg
    Executing nuget actions took 960 ms
    Time Elapsed: 00:00:02.6332352
    ========== Finished ==========

    Time Elapsed: 00:00:00.0141177
    ========== Finished ==========
    And finally we can use it from our second project and harvest the benefits of clean code and code reuse:
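    For example, here is a small sketch (the namespace and the "Smtp" configuration section name are assumptions) of how the consuming project could bind the package's SmtpOptions from configuration:
    // Assumed usage in TestNugetPkg's Startup: binding the package's SmtpOptions
    // from an "Smtp" section in appsettings.json.
    using HildenCo.Core;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public Startup(IConfiguration configuration) => Configuration = configuration;

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            // Exposes IOptions<SmtpOptions> to anything that needs SMTP settings
            services.Configure<SmtpOptions>(Configuration.GetSection("Smtp"));
            services.AddControllersWithViews();
        }
    }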

    Final Thoughts

    In this post we reviewed how to build our own NuGet packages using .NET Core's CLI, push them to GitHub and, finally, consume them from our own .NET projects. Creating and hosting our own NuGet packages is important for multiple reasons, including sharing code between projects and creating deployable artifacts.

    Source Code

    As always, the source code for this post is available on GitHub.

    See Also

    Monday, May 21, 2018

    Seven Databases in Seven Weeks, 2nd Edition

    The book is an interesting overview of the databases being used in different fields throughout the world.
    You may or may not have heard about polyglot persistence. The fact is that, more and more, software projects are making use of different technologies. And when it comes to the database world, the number of options is immense: relational, document and columnar databases, key-value stores, graph databases, not to mention other cloud infrastructure options like service buses, blobs, storage accounts and queues.

    What to use? Where? And how does that affect developers?

    Choosing a database is perhaps one of the most important architectural decisions a developer can make. In that regard, I'd like to recommend a very interesting book that addresses some of that discussion: Seven Databases in Seven Weeks, 2nd Edition.

    Why you should read this book

    Because the book:
    • provides practical and conceptual introductions to Redis, Neo4j, CouchDB, MongoDB, HBase, PostgreSQL and DynamoDB
    • introduces you to different technologies, encouraging you to run your own experiments
    • revisits important topics like the CAP theorem
    • gives you an overview of what's out there so you can choose the best tool for the job
    • explores some cutting-edge databases, from a traditional relational database to newer NoSQL approaches
    • helps you make informed decisions about challenging data storage problems
    • tackles a real-world problem that highlights the concepts and features that make each database shine

    Conclusion

    Whether you're a programmer building the next big thing, a data scientist seeking solutions to thorny problems, or a technology enthusiast venturing into new territory, you will find something to inspire you in this book.

    Reference

    See Also

    About the Author

    Bruno Hildenbrand