
Tuesday, February 2, 2021

Deploying Docker images to Azure App Services

Deploying Docker images to Azure App Services is simple. Learn how to deploy your Docker images to Azure App Services using Azure Container Registry (ACR).
Photo by Glenn Carstens-Peters on Unsplash

We've been discussing Docker, containers and microservices for some time on the blog. In previous posts we learned how to create our own ASP.NET Docker images and how to push them to Azure Container Registry. Today we'll learn how to deploy those same Docker images to Azure App Services.

In this post we will:
• create an App Service configured to run Docker containers
• enable admin access on our Azure Container Registry
• deploy our Docker image from ACR to the new App Service
• review the container features App Services offer

Requirements

As requirements, please make sure you have:
If you want to follow along, please check the previous tutorials discussing how to:

    About Azure App Services

    Azure developers are quite familiar with Azure App Services. But for those who don't know them, App Services are:
    HTTP-based services for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

    Why use App Services

    And why use Azure App Services? Essentially because App Services:
    • support multiple languages and frameworks, such as ASP.NET, Java, Ruby, Python and Node.js
    • can be easily plugged into your CI/CD pipelines, for example to deploy from Docker Hub or Azure Container Registry
    • can be used as serverless services
    • run WebJobs, allowing us to deploy background services at no additional cost
    • have a very powerful and intuitive admin interface
    • are integrated with other Azure services

    Creating our App Service

    So let's get started and create our App Service. While this shouldn't be new to anyone, I'd like to review the workflow so readers can follow along step by step. To create your App Service, in the Azure portal, click Create -> App Service:
    On this screen, make sure you select:
    • Publish: Docker Container
    • OS: Linux

    Select the free plan

    Click on Change Plan to choose the free one (by default you're placed on a paid one). Click Dev/Test and select F1:

    Selecting Docker Container/Linux

    Review the info and don't forget to select Docker Container/Linux for Publish and Operating System:
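    If you prefer the command line, a roughly equivalent setup with the Azure CLI would be the sketch below. The resource group and plan names here are hypothetical; the web app and image names follow the ones used later in this post:
    az group create -n hildenco-rg -l eastus
    az appservice plan create -n hildenco-plan -g hildenco-rg --is-linux --sku F1
    az webapp create -n hildenco-docker -g hildenco-rg -p hildenco-plan --deployment-container-image-name hildenco.azurecr.io/webapp:v1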

    Specifying Container Information

    Next, we specify the container information. In this step we'll choose:
    • Options: Single Container
    • Image Source: Azure Container Registry
    • Registry: Choose yours
    Change Image Source to Azure Container Registry:
    In this step, Azure should auto-populate your repository. However, if you don't have the admin user enabled (I didn't), you'll get this error:

    Enabling Admin in your Azure Container Registry

    To enable admin access to your registry, open it in the portal and, on the Identity tab, change the Admin user setting from Disable:
    to Enable, and Azure will auto-generate the credentials for you:
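    The same can also be done from the Azure CLI (assuming the hildenco registry used throughout this post):
    az acr update -n hildenco --admin-enabled true
    az acr credential show -n hildenco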

    Specify your Container

    Back to the creation screen: as soon as admin access is enabled on your registry, Azure should auto-populate the required information with your registry, image and tag (if one exists):
    Startup Command lets you override the command the container executes at startup, similar to the command you'd append to a docker run call.

    Review and Confirm

    Review and confirm. Submitting the deployment takes only a few seconds:

    Accessing our App Service in Azure

    As seen above, as soon as we confirm, the deployment starts. It shouldn't take more than a minute to complete.

    Accessing our Web Application

    Let's check if our image is running. In the image above you can see my app's URL highlighted in yellow. Open it in a browser to confirm the site is accessible:

    Container Features

    To finish, let's summarize some of the features that Azure offers to help us manage our containers.

    Container Settings

    Azure also offers a Container Settings tab that allows us to inspect and change container settings for our web app.

    Container Logs

    We can inspect logs for our containers to easily troubleshoot them.
    As an example, here's an excerpt of what I got for my own container log:
    2020-04-10 04:32:51.913 INFO  -  Status: Downloaded newer image for hildenco.azurecr.io/webapp:v1
    2020-04-10 04:32:52.548 INFO  - Pull Image successful, Time taken: 0 Minutes and 47 Seconds
    2020-04-10 04:32:52.627 INFO  - Starting container for site
    2020-04-10 04:32:52.627 INFO  - docker run -d -p 5021:80 --name hildenco-docker_0_e1384f56 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=hildenco-docker -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=hildenco-docker.azurewebsites.net -e WEBSITE_INSTANCE_ID=[redacted] hildenco.azurecr.io/webapp:v1 
    2020-04-10 04:32:52.627 INFO  - Logging is not enabled for this container.
    Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
    2020-04-10 04:32:57.601 INFO  - Initiating warmup request to container hildenco-docker_0_e1384f56 for site hildenco-docker
    2020-04-10 04:33:02.177 INFO  - Container hildenco-docker_0_e1384f56 for site hildenco-docker initialized successfully and is ready to serve requests.

    Continuous Deployment (CD)

    Another excellent feature that you should explore is continuous deployment. Enabling continuous deployment helps your team gain agility by releasing faster and more often. We'll try to cover this fantastic topic in the future, so stay tuned.

    Conclusion

    In this post we reviewed how to create an Azure App Service and learned how to deploy our own Docker images from our very own Azure Container Registry (ACR) to it. Using ACR greatly simplified the integration between our Docker images and our App Services. From here, I'd urge you to explore continuous deployment to automatically push your images to your App Services as code lands in your git repository.


    Monday, November 2, 2020

    Async Request/Response with MassTransit, RabbitMQ, Docker and .NET core

    Let's review how to implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit.
    Photo by Pavan Trikutam on Unsplash

    Undoubtedly the most popular design pattern for writing distributed applications is Pub/Sub. But it turns out there's another important design pattern used in distributed applications, not as frequently mentioned, that can also be implemented with queues: async request/response. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the responding service to throttle incoming requests via a queue, preventing its own exhaustion.

    In this tutorial, we'll implement an async request/response exchange between two ASP.NET Core websites via RabbitMQ queues using MassTransit. We'll also wire everything up using Docker and Docker Compose.

    In this post we will:
    • Scaffold two ASP.NET Core websites
    • Configure each website to use MassTransit to communicate via a local RabbitMQ queue
    • Explain how to write the async request/response logic
    • Run a RabbitMQ container using Docker
    • Test and validate the results

    Understanding MassTransit Async Requests

    Setting up async request/response with MassTransit is actually very simple once you understand how to wire everything up. So before getting our hands on the code, let's review the terminology you'll need to know:
    • Consumer: a class in your service that responds to requests (over a queue in this case);
    • IRequestClient<T>: the interface we'll use to implement the client and invoke async requests via the queue;
    • ReceiveEndpoint: the configuration we'll have to set up to enable our Consumer to listen and respond to requests;
    • AddRequestClient: the configuration we'll have to set up to register our own async request client;
    Keep these terms in mind as we'll use them in the following sections.

    Creating our Project

    Let's quickly scaffold two ASP.NET Core projects by using the dotnet CLI with:
    dotnet new mvc -o RequestSvc
    dotnet new mvc -o ResponseSvc

    Adding the Dependencies

    The dependencies we'll need today are essentially the MassTransit packages for ASP.NET Core and RabbitMQ (package names as of the MassTransit version used at the time; newer versions may differ):
    • MassTransit.AspNetCore
    • MassTransit.RabbitMQ
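    For example, to add them via the dotnet CLI (run in both the RequestSvc and ResponseSvc folders):
    dotnet add package MassTransit.AspNetCore
    dotnet add package MassTransit.RabbitMQ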

    Adding Configuration

    The configuration we'll need is also straightforward. Paste this in your RequestSvc/appsettings.json:
    "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "requestsvc"
    }
    And this in your ResponseSvc/appsettings.json:
    "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "responsesvc"
    }
    Next, bind the config classes to those settings. Since I covered in detail how configuration works in ASP.NET Core 3.1 projects in a previous article, I'll skip that part to keep this post short. But if you need to, feel free to take a break and understand that part first before you proceed.
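    For reference, a minimal sketch of what that binding could look like (the class and property names below simply mirror the settings above and the cfg variable used in the next section; the full wiring is in the linked article):
    public class MassTransitOptions
    {
        public string Host { get; set; }
        public string Queue { get; set; }
    }

    public class AppConfig
    {
        public MassTransitOptions MassTransit { get; set; }
    }

    // in Startup, bind the whole configuration to our class:
    var cfg = Configuration.Get<AppConfig>();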

    Adding Startup Code

    Wiring up MassTransit in the ASP.NET Core DI framework is also well documented. For our solution, it looks like this for the RequestSvc project:
    services.AddMassTransit(x =>
    {
        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(cfg.MassTransit.Host);
            c.ConfigureEndpoints(context);
        }));
       
        x.AddRequestClient<ProductInfoRequest>();
    });

    services.AddMassTransitHostedService();
    And like this for the ResponseSvc project:
    services.AddMassTransit(x =>
    {
        x.AddConsumer<ProductInfoRequestConsumer>();

        x.AddBus(context => Bus.Factory.CreateUsingRabbitMq(c =>
        {
            c.Host(cfg.MassTransit.Host);
            c.ReceiveEndpoint(cfg.MassTransit.Queue, e =>
            {
                e.PrefetchCount = 16;
                e.UseMessageRetry(r => r.Interval(2, 3000));
                e.ConfigureConsumer<ProductInfoRequestConsumer>(context);
            });
        }));
    });

    services.AddMassTransitHostedService();
    Stop for a second and compare both initializations. Spot the differences? The request side registers a request client for ProductInfoRequest (AddRequestClient), while the response side registers the consumer and a ReceiveEndpoint bound to its queue so it can listen and reply.

    Building our Consumer

    Before we can issue our requests, we have to build a consumer to handle those messages. In MassTransit's world, this is the same consumer you'd build for regular pub/sub. For this demo, our ProductInfoRequestConsumer looks like this:
    public async Task Consume(ConsumeContext<ProductInfoRequest> context)
    {
        var msg = context.Message;
        var slug = msg.Slug;

        // a fake delay
        var delay = 1000 * (msg.Delay > 0 ? msg.Delay : 1);
        await Task.Delay(delay);

        // get the product from ProductService
        var p = _svc.GetProductBySlug(slug);

        // this responds via the queue to our client
        await context.RespondAsync(new ProductInfoResponse
        {
            Product = p
        });
    }

    Async requests

    With the consumer, configuration and startup logic in place, it's time to write the request code. In essence, this is the piece of code that mediates the async communication between the caller and the responder using a queue (obviously abstracted by MassTransit). A simple async request to a remote service using a backend queue looks like:
    using (var request = _client.Create(new ProductInfoRequest { Slug = slug, Delay = timeout }))
    {
        var response = await request.GetResponse<ProductInfoResponse>();
        p = response.Message.Product;
    }
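    For completeness, the message contracts exchanged above are plain classes shared by both projects. A sketch inferred from the properties used in the snippets (Product being the project's own model):
    public class ProductInfoRequest
    {
        public string Slug { get; set; }
        public int Delay { get; set; }
    }

    public class ProductInfoResponse
    {
        public Product Product { get; set; }
    }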

    Running the dependencies

    To run RabbitMQ, we'll use Docker Compose. Running RabbitMQ with Compose is as simple as running the command below from the src folder:
    docker-compose up
    If everything correctly initialized, you should expect to see RabbitMQ's logs emitted by Docker Compose on the terminal:
    To shut down Compose and RabbitMQ, either press Ctrl-C or run:
    docker-compose down
    Finally, to remove everything, run:
    docker-compose down -v
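    For reference, the compose file for this setup boils down to a single RabbitMQ service. A rough sketch (the actual file lives in the project's repo; note the management UI mapped to port 8012, which we'll use below):
    version: "3"
    services:
      rabbitmq:
        image: rabbitmq:3-management
        ports:
          - "5672:5672"    # AMQP port used by MassTransit
          - "8012:15672"   # management UI (http://localhost:8012)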

    Testing the Application

    Open the project in Visual Studio 2019 and run it in debug mode (F5); VS will open two windows, one for RequestSvc and another for ResponseSvc. RequestSvc looks like this:

    Go ahead and run some queries. If you have your debugger attached, it will stop in both services, allowing you to validate the exchange between them. To reduce Razor boilerplate, the project uses Vue.js and Axios so we get responses in the UI without unnecessary roundtrips.

    RabbitMQ's Management Interface

    The last thing worth mentioning is how to get to RabbitMQ's management interface. This project also allows you to play with RabbitMQ at http://localhost:8012. By logging in with guest/guest and clicking on the Queues tab, you should see something similar to:
    RabbitMQ is a powerful message-broker service. However, if you're running your applications on the cloud, I'd suggest using a fully-managed service such as Azure Service Bus since it increases the resilience of your services.

    Final Thoughts

    In this article we reviewed how to implement an asynchronous request/response exchange using queues. Async requests/responses are very useful and widely used to exchange data between microservices in non-blocking calls, allowing the responding service to throttle incoming requests via a queue, preventing its own exhaustion. In this example we also leveraged Docker and Docker Compose to simplify the setup and initialization of our backend services.

    I hope you liked the demo and will consider using this pattern in your applications.

    Source Code

    As always, the source code for this article is available on my GitHub.


    Thursday, October 1, 2020

    Building and Hosting Docker images on GitHub with GitHub Actions

    Building Docker images for our ASP.NET Core websites is easy and fun. Let's see how.
    Photo by Steve Johnson on Unsplash

    In a previous post we discussed how to build our own Docker images from ASP.NET Core websites, and how to push and host them on GitHub Packages. We also saw how to build and host our own NuGet packages on GitHub. Those approaches are certainly recommended if you already have CI/CD implemented for your project. However, for new projects running on GitHub, GitHub Actions deserves your attention.

    What we will build

    In this post we'll review how to build Docker images from a simple ASP.NET Core website and set up continuous integration using GitHub Actions to automatically build, test and deploy them as GitHub Packages.

    Requirements

    To run this project on your machine, please make sure you have installed:

    If you want to develop/extend/modify it, then I'd suggest you also have:

    About GitHub Packages

    GitHub Packages is GitHub's offering for those wanting to host their own packages or Docker images. The benefits of using GitHub Packages are that it's free, you can share your images privately or publicly, and you can integrate it with other GitHub tooling such as APIs, Actions and webhooks, and even create complex end-to-end DevOps workflows.

    About GitHub Actions

    GitHub Actions allows automating all your workflows such as builds, tests and deployments right from GitHub. We can also use Actions for code reviews, branch management and issue triaging. GitHub Actions is very powerful, easy to customize and extend, and it ships with lots of pre-configured templates to build and deploy pretty much everything.

    Building our Docker Image

    So let's quickly build our Docker image. For this demo, I'll use my own aspnet-github-actions repo. If you want to follow along, open a terminal and clone the project with:
    git clone https://github.com/hd9/aspnet-github-actions

    Building our local image

    Next, cd into that folder and build a local Docker image with:
    docker build . -t aspnet-github-actions
    Now confirm your image was successfully built with:
    docker images

    Testing our image

    With the image built, let's quickly test it by running:
    docker run --rm -d -p 8080:80 --name webapp aspnet-github-actions
    Browse to http://localhost:8080 to confirm it's running:

    Stop the container with the command below as we'll take a look at the setup on GitHub:
    docker stop webapp
    For more information on how to setup and run the application, check the project's README file.

    Setting up Actions

    With the container building and running locally, it's time to setup GitHub Actions. Open your repo and click on Actions:
    From here, you can either add a blank workflow or use a build template for your project. For our simple project, we can use the Publish Docker Container template:
    By clicking Set up this workflow, GitHub will add a copy of that template to our repo under .github/workflows/ and load an editor so we can edit the newly created file. Go ahead and modify it to your needs. Since our Dockerfile is pretty standard, you'll only need to change IMAGE_NAME to something appropriate for your image:
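    For reference, the relevant bits of that generated workflow look roughly like this (a simplified sketch of the template, not the full file; the image name below matches this demo):
    name: Publish Docker Container
    on:
      push:
        branches: [ master ]
    env:
      IMAGE_NAME: aspnet-github-actions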

    Running the Workflow

    As soon as you add that file, GitHub will run your first action. If you haven't pushed your code yet it'll probably fail:
    To fix the error above, go ahead and push some code (or reuse mine if you wish). Assuming you have a working Dockerfile in the root of your project (where the script expects it to be), you should see your next build being queued and run. The UI is pretty cool and allows you to inspect the process in real time:
    If the workflow finishes successfully, we'll get a confirmation like:
    Failed again? Did you update IMAGE_NAME as explained on the previous step?

    Accessing the Packages

    To view your Docker images, go to the project's page and click on the Packages link:
    By clicking on your package, you'll see other details about it, including how to pull it and run it locally:

    Running our Packages

    From there, the only thing remaining is running our recently created packages. Since we already discussed in detail how to host and use our Docker images from GitHub Packages, feel free to jump to that post to learn how.

    Final Thoughts

    In this post we reviewed how to automatically build Docker images using GitHub Actions. GitHub Actions makes it easy to automate all your workflows, including CI/CD, builds, tests and deployments. Hosting our Docker images on GitHub is valuable as you can share your images privately or with the rest of the world, integrate with GitHub tools and even create complex DevOps workflows. Other common scenarios would be building our images on GitHub and pushing them to Docker Hub, or even auto-deploying them to the cloud. We'll evaluate those in the future, so stay tuned!

    Source Code

    As always, the source code is available on GitHub.


    Monday, May 4, 2020

    Configuration in .NET Core console applications

    If you search the official .NET documentation, you will probably not find much information on how to add config files to your .NET Core console applications. Let's learn how.
    Photo by Christopher Gower on Unsplash

    With the release of .NET Core 3.1, Microsoft changed a few things in how we access configuration in our apps. While the ASP.NET documentation is really solid and scaffolding an ASP.NET Core website includes all the dependencies needed to get that right, the same does not happen with console applications. In this quick tutorial, let's see how we can replicate the same setup in our console apps.

    Why replicate ASP.NET Configuration

    The maturity the .NET Core framework has achieved includes the configuration framework. And all of that, despite the lack of documentation, can be shared between web and console apps. That said, here are some reasons why you should be using some of the ASP.NET tooling in your console projects:
    • the configuration providers read configuration data from key-value pairs using a variety of configuration sources including appsettings.json, environment variables, and command-line arguments
    • it can be used with custom providers
    • it can be used with in-memory .NET objects
    • if you're developing with Azure, it integrates with Azure Key Vault and Azure App Configuration
    • if you're running Docker, you can override your settings via the command line or environment variables
    • you will find parsers for most formats (we'll see an example here)

    The Solution

    So let's take a quick look at how to integrate some of these tools in our console apps.

    Adding NuGet packages

    Once you create your .NET Core app, the first thing to do is to add the following packages:
    • Microsoft.Extensions.Configuration
    • Microsoft.Extensions.Configuration.Binder
    • Microsoft.Extensions.Configuration.EnvironmentVariables
    • Microsoft.Extensions.Configuration.FileExtensions
    • Microsoft.Extensions.Configuration.Json
    Next, add the following initialization code:
    var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
    var builder = new ConfigurationBuilder()
        .AddJsonFile($"appsettings.json", true, true)
        .AddJsonFile($"appsettings.{env}.json", true, true)
        .AddEnvironmentVariables();

    var config = builder.Build();
    If set, the ASPNETCORE_ENVIRONMENT environment variable above (which comes preset on a new ASP.NET Core project) will make the builder load the environment-specific configuration. So for dev, it will try to use appsettings.Development.json, falling back to appsettings.json if the former doesn't exist.

    Creating a configuration file

    Now add an empty appsettings.json file in the root of your project and add your configuration. Remember that this is a JSON file, so your config should be a valid JSON document. For example, the config file for one of my microservices is:
    {
      "MassTransit": {
        "Host": "rabbitmq://localhost",
        "Queue": "hildenco"
      },
      "ConnectionString": "Server=localhost;Database=hildenco;Uid=<username>;Pwd=<pwd>",
      "Smtp": {
        "Host": "<smtp-server>",
        "Port": "<smtp-port>",
        "Username": "<username>",
        "Password": "<password>",
        "From": "HildenCo Notification Service"
      }
    }

    Parsing the configuration

    There are two ways to access the configuration: by accessing each entry individually or by mapping the whole config file (or specific sections) to a class of our own. Let's see both.

    Accessing config entries

    With the config instance above, accessing our configuration is now simple. For example, accessing a root property is:
    var connString = config["ConnectionString"];
    While accessing a sub-property is:
    var rmqHost = config["MassTransit:Host"];

    Mapping the configuration

    Despite working well, the previous example is verbose and error-prone. So let's see a better alternative: mapping the configuration to a POCO class, something Microsoft calls the options pattern. Despite its fancy name, it's probably something you'll recognize.

    We'll also see two examples: mapping the whole configuration and mapping one specific section. For both, the procedure will require these steps:
    • creating an options file
    • mapping to/from the settings
    • binding the configuration.

    Mapping the whole config

    Because our configuration contains three main sections (MassTransit, a MySQL connection string and an SMTP config), we'll model our AppConfig class the same way:
    public class AppConfig
    {
        public SmtpOptions Smtp { get; set; }
        public MassTransitOptions MassTransit { get; set; }
        public string ConnectionString { get; set; }
    }
    SmtpOptions should also be straightforward (note the From property, which the config file defines and we'll use later):
    public class SmtpOptions
    {
        public string Host { get; set; }
        public int Port { get; set; }
        public string Username { get; set; }
        public string Password { get; set; }
        public string From { get; set; }
    }
    As is MassTransitOptions:
    public class MassTransitOptions
    {
        public string Host { get; set; }
        public string Queue { get; set; }
    }
    The last step is binding the whole configuration to our class:
    var cfg = config.Get<AppConfig>();

    Accessing Configuration Properties

    With the config loaded, accessing our configs becomes trivial:
    var cs = cfg.ConnectionString;
    var smtpFrom = cfg.Smtp.From;

    Mapping a Section

    To map a section we use the .GetSection("<section-name>").Bind() methods present in the Microsoft.Extensions.Configuration.Binder NuGet package that we added earlier. For example, to map just SmtpOptions we'd do:
    var mailOptions = new SmtpOptions();
    config.GetSection("Smtp").Bind(mailOptions);

    Making it Generic

    Turns out that the previous procedure also quickly gets verbose. So let's shortcut it all with the following generic method (static, if run from Program.cs):
    private static T InitOptions<T>(string section)
        where T : new()
    {
        var config = InitConfig();
        return config.GetSection(section).Get<T>();
    }
    And using it with:
    var smtpCfg = InitOptions<SmtpOptions>("Smtp");

    Reviewing the solution

    Everything should be good at this point. Remember to leverage your options classes along with your dependency injection framework instead of accessing IConfiguration directly, for performance reasons. To conclude, here's our final Program.cs file:
    static async Task Main(string[] args)
    {
        var cfg = InitOptions<AppConfig>();
        // ...
    }

    private static T InitOptions<T>()
        where T : new()
    {
        var config = InitConfig();
        return config.Get<T>();
    }

    private static IConfigurationRoot InitConfig()
    {
        var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
        var builder = new ConfigurationBuilder()
            .AddJsonFile($"appsettings.json", true, true)
            .AddJsonFile($"appsettings.{env}.json", true, true)
            .AddEnvironmentVariables();

        return builder.Build();
    }
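    And as a hint of what that DI wiring could look like in a console app, here's a minimal sketch using Microsoft.Extensions.DependencyInjection (this also assumes the Microsoft.Extensions.Options.ConfigurationExtensions package, which is not in the list above):
    var services = new ServiceCollection();
    var config = InitConfig();

    // binds the Smtp section and exposes it as IOptions<SmtpOptions>
    services.Configure<SmtpOptions>(config.GetSection("Smtp"));

    // consumers then take IOptions<SmtpOptions> via constructor injection
    var provider = services.BuildServiceProvider();
    var smtp = provider.GetRequiredService<IOptions<SmtpOptions>>().Value;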

    Conclusion

    In this post we reviewed how to use ASP.NET tooling to bind and access configuration from our console applications. While .NET Core has matured a lot, the documentation for console applications is not that great. For more information on the topic, I suggest reading about Configuration in ASP.NET Core and understanding the .NET Generic Host.


    Monday, November 12, 2018

    Windows Subsystem for Linux, the best way to learn Linux on Windows

    Want to learn Linux but don't know how/where to start? WSL may be a good option.
    In 2018, Microsoft released the Windows Subsystem for Linux (WSL). WSL lets developers run the GNU/Linux shell on a Windows 10 PC, a very convenient way to access the beloved tools, utilities and services Linux offers without the overhead of a VM.
    WSL is also the best way to learn Linux on Windows!

    About WSL

    Currently WSL supports the Ubuntu, Debian, SUSE and Kali distributions and can:
    • run bash shell scripts
    • run GNU/Linux command-line applications, including vim, emacs and tmux
    • run programming languages like JavaScript, Node.js, Ruby, Python, Golang, Rust, C/C++, C# & F#, etc.
    • run background services like ssh, MySQL, Apache and lighttpd
    • install additional software using your own GNU/Linux distribution's package manager
    • invoke Windows applications
    • access your Windows filesystem

    Installing WSL on Windows 10

    Installing WSL is covered by Microsoft in this article and is as easy as two steps.

    Step 1 - Run a Powershell Command

    On your Windows PC, you will need to run this PowerShell command as Administrator (shift + right-click):
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
    After the installation ends, restart your PC.

    Step 2 - Install WSL from the Windows Store

    After the reboot, WSL can be installed through the Windows Store. To open the Windows Store on your Windows 10, click:
    Start -> Type Store -> Click on the Windows Store:
    Then type "Linux" in the search box and you should get results similar to this:

    Click on the icon, accept the terms and Windows will download and install WSL for you.

    Running WSL

    After the installation completes, you'll be prompted to enter a username and password. Once that's done, you'll get a cool Linux terminal to start playing with. You can even have multiple distros installed on your Windows 10 machine. On mine, I installed Debian and Ubuntu.

    Using the Terminal

    Okay, so now that we have access to our Linux shell, what to do next? Let's go through these use cases:
    • accessing my Windows files
    • accessing internet resources
    • installing software

    Accessing Windows Files

    WSL mounts your Windows files on the /mnt/c mount point. To verify on yours, type mount at the command prompt and look for C: in the output. Your Windows files should be there.
    In case you don't know Linux, listing files is done with ls. This is the content of my C drive as seen from WSL:
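    For example (assuming your system drive is C:):
    mount | grep 'C:'
    ls /mnt/c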

    Accessing the Internet

    Your WSL instance should have access to the internet. Testing connectivity is as simple as pinging Google:
    You can also verify your network info with ifconfig:
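    For example:
    ping -c 4 google.com    # test connectivity
    ifconfig                # show network info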

    Installing Software

    Installing software on Ubuntu/Debian is done with the apt command. For example, this is how we search for packages:
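    A sketch of that search (ruby here is just an example term):
    apt search ruby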
    To install packages, use apt-get install. For example, to install Ruby on the Ubuntu WSL, run the command below:
    sudo apt-get install ruby-full

    Using git

    We can leverage apt and install git with:
    sudo apt-get install git
    ... # apt installs git
    git --help # to get help
    And I'd recommend learning to use it from the terminal. Atlassian has an excellent tutorial to learn git.

    Getting Help

    Need help? The man tool is there to help you. For example, we could run the command below to get help on git:
    man git

    Additional tip: try the new Windows Terminal

    And if you want to invest more time in your WSL, I'd suggest installing the new Windows Terminal. Download the latest release from GitHub and install it on your box. It's very customizable and contains profiles for WSL, PowerShell, Azure CLI and the traditional Windows command prompt.

    What's next?

    Now that you know how to locate your files, have access to the internet and installed some software, I'd recommend that you:

    Conclusion

    Congratulations! You now have WSL installed on your machine and a Linux terminal to start playing with. Now what? The first thing I'd recommend is to get comfortable with basic system commands, understand the filesystem, learn to add/remove software and run administrative tasks from the terminal. WSL is perfect for users who want to learn Linux and for those who spend a lot of time on Windows but need access to a Linux terminal.

    If you want to know more about my setup, here's why I use Fedora Linux with the fantastic i3 window manager on the desktop and CentOS on servers. Happy hacking!


      About the Author

      Bruno Hildenbrand