
Thursday, March 17, 2022

Why use Vim

Depending on your preconceptions, Vim may look exotic or sexy. Let's review those assumptions and provide rational reasons to use this fantastic text editor.
Photo by Alex Knight on Unsplash

You may have heard about Vim. You may not have. Depending on your background, you may even have preconceptions about it. In this post, let's review those assumptions and provide concrete reasons to use this fantastic text editor.

This article is an adaptation of a post I originally published on Vim4us. I'm republishing it here, with a few tweaks, for a wider audience.

Vim is ubiquitous

Vim has been around for almost thirty years. Due to its simplicity, ubiquity and low resource requirements, it's the preferred editor of sysadmins worldwide.

Easy to install

Vim is also easy to install on Windows and Macs, and it's packaged in most Linux distros, meaning that even if it isn't installed on your system, Vim is one line from the terminal or two clicks from your software manager.
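For example, here's what installation looks like on common platforms (the package identifiers below are the usual ones, but may vary slightly per platform):

sudo apt install vim        # Debian/Ubuntu
sudo dnf install vim        # Fedora
brew install vim            # macOS, via Homebrew
winget install vim.vim      # Windows, via winget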

Vim is lightweight

Unlike most editors, Vim is very lightweight. The installation package is only about 10 MB and, depending on your setup, memory consumption stays around 20 MB. Compare that with most modern text editors, especially Electron-based ones like Visual Studio Code: install sizes start around 200 MB, memory consumption quickly reaches 1 GB (50 times more!), and a typical setup takes 1.5 GB of storage, making them slow even on modern hardware.

If you're running a Mac, a low-end computer, a phone, or even a Raspberry Pi, Vim is definitely a good option for you.

Vim is stable

As previously said, Vim has been around for almost 30 years, and it will probably be around for at least two more decades. Learning Vim is an excellent investment, as you'll be able to use that knowledge for decades to come.

Compare that to the editor you use today (Eclipse, Visual Studio, Sublime Text, Visual Studio Code): can you really guarantee you'll still be using it ten years from now?

Vim is language-independent

Vim works well with anything you want, as long as it's text. It works by default with most file formats, can be localized, supports right-to-left scripts such as Arabic and Hebrew, and comes with built-in support (including syntax highlighting) for most programming languages.

Vim respects your freedom

Vim does not contain any built-in telemetry. It's (unfortunately) common these days for companies to harvest usage statistics in the name of improving their products. Sysadmins trust that Vim will not be reaching out to the network to run ad-hoc requests.

Vim is efficient

Vim is brilliant in how it optimizes your use of the keyboard. We'll talk about that later but for now, understand that its combination of multiple modes, motions, macros and other brilliant features puts it light-years ahead of other text editors.

Thriving Ecosystem

Stop for a second and think about which feature you couldn't live without in your current text editor. The answer will probably be that Python or Go extension, meaning that what you'd actually miss is not the editor itself but its ecosystem.

Vim has a brilliant ecosystem. You'll find thousands of extensions covering anything you need. You can also host your extensions anywhere (on GitHub, for example) without being locked in by any vendor. You could even host them in private/corporate repos just for your team, or share them in public directories like Vim Awesome.

Vim is ultra-customizable

Even if by default Vim has most of what you need, it's important to understand that Vim lets you change pretty much everything. For example, you can make temporary/local customizations (using Ex mode), permanent customizations (by changing your .vimrc), or even customizations based on file type.
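As a minimal sketch, here's what those flavors look like (the specific options are just examples):

" ~/.vimrc: permanent customizations
set number                      " always show line numbers
set expandtab shiftwidth=4      " insert 4 spaces instead of a tab

" customization based on file type
autocmd FileType yaml setlocal shiftwidth=2

The same options can be set temporarily from inside a session via Ex mode, for example :setlocal nonumber.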

Vim is always getting better

Vim is actively developed, meaning that it keeps getting better. Vim users get security patches and new features all the time. Vim is also updated to accommodate the latest changes in modern operating systems while still supporting older systems too!

Huge Community

Vim's community is huge and you can get help easily. These days, the most active discussions happen on Vim's mailing lists, Stack Exchange, IRC, YouTube and, of course, Reddit.

Extensive documentation

Learning how to learn Vim is the key to continuously improving your understanding of the tool without getting frustrated. There are many ways to get help on Vim: using its built-in help system, using the man pages and, obviously, asking the communities listed above.
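For example, all of these ship with Vim out of the box:

" open the built-in help system
:help
" jump straight to the user manual's table of contents
:help usr_toc
" search all help files for a keyword
:helpgrep registers

From the shell, vimtutor starts a 30-minute interactive tutorial, and man vim shows the man page.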

Vim is free

These days it may sound odd to say that Vim is free. But Vim's freedom goes beyond price: you also have the freedom to modify it to your needs and deploy it wherever you want. Vim's developers also have a strong commitment to helping people in need around the world.

GUI-less

Vim also runs GUI-less, meaning it runs in your terminal. So you get a full-featured text editor on any system you're working on, whether it's a local desktop or a remote supercomputer. This feature is essential for sysadmins and developers who often need to modify text files on remote machines through an SSH connection.

Rich out-of-the-box toolset

Vim comes with fantastic tooling by default: powerful search, regular expression support, syntax highlighting, text sort, integrated terminal, integrated file manager, cryptography, color schemes, plugin management and much more. All without a single plugin installed!
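To give you a taste, all of the following work on a vanilla install, with no plugins:

" search, with regular expression support
/err\w\+
" sort the buffer, removing duplicate lines
:sort u
" open the built-in file manager (netrw)
:Explore
" open the integrated terminal (Vim 8.1+)
:terminal
" prompt for a key and encrypt the current file
:X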

Vim integrates into your workflow

Unlike other text editors, which force you into their own workflow, Vim adjusts seamlessly to yours via powerful customization, extension support, integrated shell support and the ability to pipe data in and out of it.
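For example, Vim happily sits in the middle of a shell pipeline:

# pipe a command's output straight into a Vim buffer
ls -la | vim -

And from inside Vim you can go the other way:

" read a command's output into the current buffer
:r !date
" filter the whole buffer through an external program
:%!grep -v DEBUG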

Vim can be programmed

Want to go the extra mile? Vim also has its own scripting language, called VimL. With it you can create your own plugins and tailor the editor even further to your needs.
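As a tiny sketch of what VimL looks like (a hypothetical helper, not something that ships with Vim):

" trim trailing whitespace from the whole buffer
function! TrimWhitespace() abort
  let l:view = winsaveview()
  keeppatterns %s/\s\+$//e
  call winrestview(l:view)
endfunction

" expose it as an Ex command, callable as :TrimWhitespace
command! TrimWhitespace call TrimWhitespace()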

Vim will boost your productivity

There are multiple ways Vim will boost your productivity. First, Vim's extensive use of the home row of the keyboard saves you from having to reach the arrow keys (or even worse, the mouse) to do your work. Second, with Vim you can quickly record macros to replay repetitive operations. Third, the combination of motions, plugins, custom shortcuts and shell integration will boost your productivity way more than you could imagine.
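To illustrate the macro workflow (register a is an arbitrary choice):

qa        " start recording into register a
A;        " enter insert mode at the end of the line and append a semicolon
<Esc>j    " return to normal mode and move down one line
q         " stop recording
10@a      " replay the recording on the next 10 lines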

Vim will make you type better and faster

Being keyboard-based, Vim's home-row-centered workflow will definitely help force you to type better. With Vim you'll realize that you probably move your hands way more than you should, and your typing speed will increase significantly.

Vim will make you learn more

Most editors these days do too much. Yes, part of that is imposed on us by languages that require a lot of metadata (Java and C#, for example). One problem with that is that you end up relying on the text editor much more than you need to, and without access to Eclipse or Visual Studio you may well feel the impostor syndrome.

With Vim, despite being able to lean on plugins just as heavily, you'll feel closer to your work, resulting in a better understanding of what you're doing. You'll also realize that you learn more and memorize the contents of what you're working on better.

Conclusion

In this post we presented many reasons to learn Vim. Vim is stable, ubiquitous and supported by an engaged, growing community. Given all its features, Vim is definitely a good tool to learn now so you can harvest the benefits for decades to come.


Wednesday, December 1, 2021

Vimium, the hacker's browser

Vimium is an essential tool for those looking to increase their productivity, whether you're on Windows, Mac or Linux. Read on to understand why.
Photo by James Pond on Unsplash

If you read this blog before, you probably know my perfect setup: Fedora Linux, the i3 window manager (or Sway), the terminal, Ranger, Vim and lots, lots of automation. I got to this setup after meticulously searching for tools that could improve my workflow so I could be more productive while doing less. However, during that journey I realized that the browsing experience - which takes a lot of our productive time - wasn't as optimal as it could be, so I started looking for ways to optimize it as well.

Turns out that Vimium is the key ingredient in that setup. In this post, let's learn what Vimium is, what it offers, how to use it, and how you too can be more productive, regardless of what your perfect setup might be.

About Vimium

So what's Vimium? Vimium is a browser extension, inspired by the Vim text editor, that provides keyboard shortcuts for navigating and controlling your browser.

But why use Vimium?

So why should you care for yet another browser extension? Because Vimium:
  • will increase your productivity: by allowing you to navigate the web without using the mouse.
  • makes you work faster: once you get used to Vimium, you'll accomplish your work faster.
  • is highly customizable: allowing you to set your own keyboard shortcuts.
  • is simple to use: once you understand how it works, it'll be very intuitive.
  • has Vim-like keybindings: this is what makes Vimium feel so natural to Vim users.
  • helps reduce your fatigue: during the day we make thousands of movements between the keyboard and the mouse. Keeping your hands centered on the keyboard will save you a lot of energy.
  • is an active open-source project: mature and healthy open-source projects are important as they guarantee you'll receive updates, fixes and improvements. You can find its source code here.

Supported browsers

Currently Vimium runs on most browsers including Google Chrome, Firefox, Edge and Brave.

Why based on Vim?

Contrary to what you may have heard, Vim is a fantastic text editor. Vim emphasizes good typing practices by leveraging the keys located around the home row. The home row (the row holding F and J) is the most efficient place to rest your fingers, causing less muscular stress and reduced arm movement. Vimium brings these concepts to the browser, transforming the traditional point-and-click browsing experience into keyboard-driven productivity.

Installing Vimium

Installing Vimium is very simple. Just open the app store for your browser and click Add extension (or equivalent) button on the extension page. Google Chrome users check this page, Firefox users can find Vimium here.
Installing Vimium on your browser should be as simple as navigating to the links above, clicking the Add extension button and confirming. No restart is necessary.

Using Vimium

With Vimium installed, let's start with the basics. The most essential shortcuts are:
  • f: pressing f makes Vimium highlight all hyperlinks. Typing the highlighted letters opens the link in the same tab
  • F: same as f but opens the link in a new tab
  • x: close the current tab
  • j: scroll down
  • k: scroll up
  • d: scroll down half a page
  • u: scroll up half a page
  • gg: scroll to top of the page
  • G: scroll to bottom of the page
  • H: go to the previous page
  • L: go to the next page
  • b: open a bookmark
  • /: search
Vimium does not run on all pages. If the V icon on your toolbar is grey, it's turned off. Vimium also does not run by default in Private Mode, but you can configure it to on the extension settings page.

Managing Tabs

Vimium can also manage your tabs. The most used commands are:
  • x: close the current tab
  • F + link: opens link in another tab
  • J: previous tab
  • K: next tab
  • g<num>: goes to tab <num>
  • t: create new tab
  • yt: duplicate current tab
  • X: undo close tab

Getting Help

With Vimium installed, press ? to view the default shortcuts. You should see a screen like this:

A simple example

So let's walk through a simple example using just the keyboard. With Vimium installed, open its GitHub page and press f. You should see:
As you can see, all the yellow boxes contain letters. Typing those letters tells the browser to click the corresponding link. For example, if I pressed S, I'd be taken to the link that here points to, on the same tab. Need to continue working? Just open a new tab and go from there. Change tabs with J or K (uppercase), close with x, rinse and repeat.
Had I used F instead of f, typing S next would open here in another tab.

Advanced Features

As previously said, Vimium is also highly configurable. Since that's out of the scope of this post, I'll simply point you to the official configuration documentation. There's a lot more there, and once you get used to the tool you'll probably want to explore and customize it to your needs.
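As a quick, hypothetical example, the custom key mappings on Vimium's options page use a simple map/unmap syntax (check the official documentation for the full list of commands):

# move tab navigation onto h and l
map h previousTab
map l nextTab
# disable the close-tab shortcut if you keep hitting it by accident
unmap x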

Conclusion

In this post we explained how Vimium can increase your productivity and reduce your fatigue. I hope you are excited to try it out. If you want to learn other productivity hacks, check out the Ranger file manager and the Vim text editor. Together with Vimium, these tools will make your workflow way more productive.


Tuesday, March 2, 2021

Continuous Integration with Azure App Services and Docker Containers

Enabling continuous integration between your Azure App Services and Docker Containers is simple. Learn how.
Photo by Jason Leung on Unsplash

Following up on a previous post where we learned how to deploy our own Docker Images to Azure App Services, today we will learn how to enable continuous deployment between our App Service and our Azure Container Registry so that our ASP.NET website is automatically updated whenever a new image is pushed to our private repository.


If you want to follow along, please check the previous tutorials discussing how to:

  • Build a simple ASP.NET Core image on your local Docker repository
  • Create and push a Docker Image to your own Azure Container Registry
  • Deploy Docker images to Azure App Services

Requirements

As requirements, please make sure you have:

Why Continuous Deployment?

Before getting to the code, let's understand a little more about continuous deployment. Wikipedia defines it as
a software engineering approach in which software functionalities are delivered frequently through automated deployments.
And why practice CD? Still according to Wikipedia, CD is especially important because in an environment in which data-centric microservices provide the functionality, and where the microservices can be multiply instantiated, CD consists of instantiating the new version of a microservice and retiring the old version as it has drained all the requests in flight.

Reviewing our App Service

So let's start by reviewing our application. We will resume from a previous post where we explained how to deploy our Docker images to App Services. Our app looked like this:

Our App Service Panel

Here's its Azure panel:

Container Services

And here's the configuration used on the previous deployment:

Image Setup

Notice that because we're switching to continuous deployment, we'll be constantly changing the version number, so sticking with v1 will no longer work. In this case, tagging our images as latest is preferred since we want automatic deployments whenever a new webapp:latest reaches the registry. As you'd expect, tagging an existing image is a simple process:
docker image tag webapp hildenco.azurecr.io/webapp:latest

Then we push that image again just so our repo contains a latest tag to configure our webhook:

docker image push hildenco.azurecr.io/webapp:latest

We should now see webapp:latest in our registry:

Enabling Continuous Deployment

With the requirements in place, let's configure the necessary settings to deploy whenever a new webapp:latest reaches the registry.

Enabling App Service Continuous Deployment

To enable continuous deployment for our App Service, open your App Service -> Container Settings, set Continuous Deployment to on and the tag to latest, then save:
This operation may take a little longer than you expect because it will create a webhook with the above configuration in our registry. See the next section for more information.
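If you prefer scripting it, the same toggle is also exposed through the Azure CLI (the resource group name below is assumed for illustration; replace the names with yours):

az webapp deployment container config --name hildenco-docker --resource-group hildenco-rg --enable-cd true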

Reviewing the Container Registry webhook

Now go to Container Registry -> Webhooks to confirm that the previous operation created a webhook for us. As seen in the history, it was never triggered, so let's push a new image to test it.
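You can also inspect the webhook and its delivery history from the CLI, for example:

az acr webhook list --registry hildenco -o table
az acr webhook list-events --registry hildenco --name <webhook-name>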

Preparing a new Version

So let's prepare another version to test that our CD works. In this step we will change the code, rebuild the image, tag it as latest and push it to our private repo.
Keep track of your versions. An image can carry multiple tags and they don't occupy any extra space. Treat your tags as releases: if you ever want to restore or redeploy an older version, it's much easier to find it by tag than by image ID.

Changing the Source Code

First, let's change our super-complex code and add a link to this site on the landing page:

Rebuilding the Image

Next, we rebuild our webapp with:
docker image build . -t webapp
Then we tag it with the registry's FQDN with:
docker image tag webapp hildenco.azurecr.io/webapp:latest

Testing our Continuous Deployment

With our local image ready and tagged, let's push it to our registry and verify if the webhook was triggered.

Pushing our Image

In order to push our image, log in to ACR with:
az acr login -n hildenco
Then we push it with:
docker image push hildenco.azurecr.io/webapp:latest

Reviewing the webhook

Refresh the webhook page and see that the hook executed successfully:

Reviewing the logs

And on the logs tab under Container settings, I also see that the webhook was triggered (UTC time):

Reviewing the App

Lastly, we can confirm that our awesome app was updated on the public URL:

Conclusion

In this post we reviewed how to set up continuous deployment from our private Azure Container Registry to our Azure App Services. Docker containers today are the standard way to build, pack and ship our applications, and it's important to learn how tools such as private container registries can help us be more effective.


Tuesday, February 2, 2021

Deploying Docker images to Azure App Services

Deploying Docker images to Azure App Services is simple. Learn how to deploy your Docker images to Azure App Services using Azure Container Registry (ACR).
Photo by Glenn Carstens-Peters on Unsplash

We've been discussing Docker, containers and microservices for some time on the blog. In previous posts we learned how to create our own ASP.NET Docker images and how to push them to Azure Container Registry. Today we'll learn how to deploy these same Docker images to Azure App Services.


Requirements

As requirements, please make sure you have:
If you want to follow along, please check the previous tutorials discussing how to:

    About Azure App Services

    Azure developers are quite familiar with Azure App Services. But for those who don't know, App services are:
    HTTP-based services for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.

    Why use App Services

    And why use Azure App Services? Essentially because App Services:
    • support multiple languages and frameworks: such as ASP.NET, Java, Ruby, Python and Node.js
    • can be easily plugged into your CI/CD pipelines, for example to deploy from Docker Hub or Azure Container Registries
    • can be used as serverless services
    • run webjobs, allowing us to deploy backend services without any additional cost
    • have a very powerful and intuitive admin interface 
    • are integrated with other Azure services

    Creating our App Service

    So let's get started and create our App Service. While this shouldn't be new to anyone, I'd like to review the workflow so readers understand each step. To create your App Service, in Azure, click Create -> App Service:
    On this screen, make sure you select:
    • Publish: Docker Container
    • OS: Linux
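    If you prefer the command line, the whole creation can be sketched with the Azure CLI (the plan and resource group names below are made up for illustration):

    az appservice plan create --name hildenco-plan --resource-group hildenco-rg --sku F1 --is-linux
    az webapp create --name hildenco-docker --resource-group hildenco-rg --plan hildenco-plan --deployment-container-image-name hildenco.azurecr.io/webapp:v1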

    Select the free plan

    Click on Change Plan to choose the free one (by default you're set on a paid one). Click Dev/Test and select F1:

    Selecting Docker Container/Linux

    Review the info and don't forget to select Docker Container/Linux for Publish and Operating System:

    Specifying Container Information

    Next, we specify the container information. In this step we will choose:
    • Options: Single Container
    • Image Source: Azure Container Registry
    • Registry: Choose yours
    Change Image Source to Azure Container Registry:
    At this point, Azure should auto-populate your repository. However, if you do not have the admin user enabled (I didn't), you'll get this error:

    Enabling Admin in your Azure Container Registry

    To enable admin access to your registry, open it using the portal and on the Identity tab, change from Disable:
    To Enable and Azure will auto-generate the credentials for you:
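    The same change can be made from the Azure CLI with:

    az acr update --name hildenco --admin-enabled true
    az acr credential show --name hildenco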

    Specify your Container

    Back to the creation screen, as soon as the admin access is enabled on your registry, Azure should auto-populate the required information with your registry, image and tag (if one exists):
    Startup Command allows you to specify an extra command to run when the container starts (for example, to override the image's default entrypoint).

    Review and Confirm

    Review and confirm. Submitting the deployment takes just a few seconds:

    Accessing our App Service in Azure

    As seen above, as soon as we confirm, the deployment starts. It shouldn't take more than a minute to complete.

    Accessing our Web Application

    Let's check if our image is running. In the above image you can see the app's URL highlighted in yellow. Open it in a browser to confirm the site is accessible:

    Container Features

    To finish, let's summarize some features that Azure offers us to easily manage our containers. 

    Container Settings

    Azure also offers a Container Settings tab that allows us to inspect and change container settings for our web app.

    Container Logs

    We can inspect logs for our containers to easily troubleshoot them.
    As an example, here's an excerpt of what I got for my own container log:
    2020-04-10 04:32:51.913 INFO  -  Status: Downloaded newer image for hildenco.azurecr.io/webapp:v1
    2020-04-10 04:32:52.548 INFO  - Pull Image successful, Time taken: 0 Minutes and 47 Seconds
    2020-04-10 04:32:52.627 INFO  - Starting container for site
    2020-04-10 04:32:52.627 INFO  - docker run -d -p 5021:80 --name hildenco-docker_0_e1384f56 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=hildenco-docker -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=hildenco-docker.azurewebsites.net -e WEBSITE_INSTANCE_ID=[redacted] hildenco.azurecr.io/webapp:v1 
    2020-04-10 04:32:52.627 INFO  - Logging is not enabled for this container.
    Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
    2020-04-10 04:32:57.601 INFO  - Initiating warmup request to container hildenco-docker_0_e1384f56 for site hildenco-docker
    2020-04-10 04:33:02.177 INFO  - Container hildenco-docker_0_e1384f56 for site hildenco-docker initialized successfully and is ready to serve requests.

    Continuous Deployment (CD)

    Another excellent feature that you should explore in the future is enabling continuous deployment on your web apps. Enabling continuous deployment is essential to help your team gain agility by releasing faster and more often. We'll cover this fantastic topic in the future, so stay tuned.

    Conclusion

    In this post we reviewed how to create an Azure App Service and learned how to deploy our own Docker images from our very own Azure Container Registry (ACR) to it. Using ACR greatly simplified the integration between our own Docker images and our App Services. From here, I'd urge you to explore continuous deployment to automatically push your images to your App Services as code lands in your git repository.


    Tuesday, January 5, 2021

    Pushing Docker images to ACR - Azure Container Registry

    Building ASP.NET Core websites with Docker? Learn how to use Azure Container Registry.
    Photo by Hal Gatewood on Unsplash

    Since you now understand how to create your own ASP.NET Core websites with Docker, the next step in your container journey is learning how to push your images to a container registry so they can be shared and deployed somewhere else.

    Today, we will push our own ASP.NET Core Docker images to Azure's Container Registry (ACR) and deploy them to our own custom CentOS server. Sounds complicated? Let's see.

    In this post we will:
    • (Quickly review how to) Create our own Docker images.
    • Create our Azure Container Registry to host and share our images.
    • Push our own Docker images to our new container registry.
    • Pull and run our images from a CentOS server.

    Requirements

    For this post you'll need:

    Managed Container Registries

    Today, Docker Hub is still the world's most popular container registry. But do you know what container registries are? I like this definition from GCP:
    A Container Registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. Existing CI/CD integrations let you set up fully automated Docker pipelines to get fast feedback.

    Why create our own container registry?

    Okay but if we already have Docker Hub, why bother creating our own container registry?

    Because, as we will see, pushing our images to a managed container registry allows you not only to share your images privately with other developers in your organization but also to utilize them in your CI/CD process - in an integrated fashion - and deploy them to your cloud resources such as virtual machines, Azure App Services and even Azure Kubernetes Services.

    In summary, the main advantages of creating our own container registry are:
    • have our own private repository
    • use it as a source of deployment for our artifacts, especially if on the same cloud
    • integrate with your CI/CD process to produce automated pipelines
    • use a cloud-hosted service, alleviating our ops team
    • have automated vulnerability scans for our own images
    • geo-distribute your images so teams around the globe can access them quickly

    Alternatives to ACR (Azure Container Registry)

    But what if you use a different cloud? No problem! Apart from ACR and Docker Hub, today we also have: Amazon ECR, Red Hat's Quay, Google Container Registry (GCR) and Digital Ocean's Container Registry.

    And which registry should you choose? I'd recommend using the one offered by your cloud provider since it will integrate better with the rest of your cloud resources. In other words: on AWS, use ECR; on Google Cloud, use GCR; on Azure, go with ACR. Else, stick with Docker Hub.

    Creating our Azure Container Registry

    So let's get to work. There are essentially two ways to create our own registry on Azure: via the CLI and via the portal. Since the Azure CLI alternative is already very well documented, let's review how to do it from the portal.

    Click New Resource, type Container Registry and you should see:

    Click Create, enter the required information:
    Review and confirm, the deployment should take a few seconds:
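    For reference, the CLI equivalent is essentially a one-liner (the resource group name is assumed for illustration):

    az acr create --name hildenco --resource-group hildenco-rg --sku Basic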

    Reviewing our Container Registry

    Once the deployment finishes, accessing your own container registry should get you to a panel similar to the below. In yellow I highlighted my registry's URL. We'll need it to push our images.

    I like pinning resources by project on a custom Dashboard. For more information, check this link.

    Preparing our Image

    Let's now prepare an image to push to the remote registry. I'll use my own image, available on GitHub, but feel free to use yours if you already have one available.

    Pulling a sample .NET image

    If you followed my previous post on how to create ASP.NET Core websites using Docker, you probably have our webapp ASP.NET Core 3.1 image in your local Docker repository. If not, please run:
    git clone https://github.com/hd9/aspnet-docker
    cd aspnet-docker
    docker build . -t webapp
    Then type the below command to confirm your image is available locally:
    docker image ls
    docker images or docker image ls? Both are the same but since all commands to manage images use the docker image prefix I prefer to follow that pattern. Feel free to use what suits you better.

    Tagging our image

    Before pushing our image we still need to tag it with the full repo name and add a reasonable version to it:
    docker tag webapp <acrname>.azurecr.io/webapp:v1
    You have to tag the image with the fully qualified domain name (ex. yourrepo.azurecr.io), else Docker will try to push it to Docker Hub.

    Run docker image ls once more to make sure your image is correctly tagged:

    Pushing our Image

    With the image ready, let's push it to our registry.

    Logging in to our Registry

    The first thing we'll need is to authenticate using the Azure CLI. On a Windows terminal, type:
    az acr login --name <your-acr-name>
    If you used the az tool before, you're probably logged in already and won't be required to enter any username/password. Else, you'll need to sign in with the az login command and then sign in again with az acr login. Check this page for more details.

    Now the big moment. Push the image to the remote repo with:
    docker push <acrname>.azurecr.io/webapp:v1
    If all went well, you should see our webapp image listed as a repository under the Services/Repositories tab:
    And clicking on our repo, we see v1 as Tag and additional info about our image on the right:

    Deploying our image locally

    Pulling and running your remote image locally should be really simple since tools/credentials are in place. Run:

    docker pull hildenco.azurecr.io/webapp:v1
    docker run -d -p 8080:80 hildenco.azurecr.io/webapp:v1

    Deploying our image to a remote Linux VM

    But we're here to learn so let's complicate this a little and deploy this image to a remote Linux VM. For this part of the tutorial, make sure your VM is ready by installing the following requirements:
    To avoid sudoing all the time with Docker, add your user to the docker group with sudo usermod -aG docker <username>

    Authenticating with the Azure CLI

    If you recall, we had to log in to our repo from our development box. Turns out that our CentOS VM is another host, so we'll have to log in again, starting with the main az login:
    az login
    Then login on ACR:
    az acr login --name <your-acr-name>

    Pulling our Image from CentOS

    With the authentication in place, pulling the image from our CentOS should be familiar:
    docker pull hildenco.azurecr.io/webapp:v1

    Running our Container

    The last step is to run our image and access it from outside. To run our image we do:
    docker run --rm -d -p 8080:80 hildenco.azurecr.io/webapp:v1
    Where:
    • --rm: remove container when stopped
    • -d: run in detached (background) mode
    • -p 8080:80: map the container's port 80 to port 8080 on the host

    Accessing our Container

    Then, accessing the container from outside requires knowing the VM's IP address and hitting it from the development box (host). To find your CentOS VM's IP, type:
    ip a show eth0
     Then, with the IP, access it from your favourite browser:

    Conclusion

    In this article we described how to create our own Azure Container Registry and how to push, pull and deploy images from it to a CentOS virtual machine. With the growth of microservices, knowing Docker has become an essential skill. And since containers are the way developers build, pack and ship applications these days, knowing how to use a managed container registry will be an essential asset for your team.

    Source Code

    As always, the source code for this post is available on my GitHub.


    Monday, August 10, 2020

    Creating ASP.NET Core websites with Docker

    Creating and running an ASP.NET Core website on Docker using the latest .NET Core framework is fun. Let's learn how.
    Photo by Guillaume Bolduc on Unsplash

    Docker is one of the most used and loved technologies on the market today. We already discussed its benefits, how to install it and even listed technical details every developer should know. In this post, we will review how to create an ASP.NET Core website with Docker Desktop using the latest .NET Core 3.1. After reading this post you should understand how to:
    • Create and run an ASP.NET Core 3.1 website
    • Build your first container
    • Run your website as a local container
    • Understand the basic commands
    • Troubleshoot common issues

    Requirements

    For this post, I'll ask you to make sure that you have the following requirements installed:
    Linux users should be able to follow along, assuming they have .NET Core and Docker installed. Podman, a very competent alternative to Docker, should work too.

    Containers in .NET world

    So what's the state of containers in the ASP.NET world? Microsoft started late in the game but since .NET Core 2.2 we've seen steady increases in container adoption. The ecosystem has also matured: if you look at the official ASP.NET sample app on GitHub, you can now run your images on Debian (default), Alpine, Ubuntu and Windows Nano Server.

    Describing our Project

    The version we'll use (3.1) is the latest LTS before .NET Framework and .NET Core merge as .NET 5. That's excellent news for slower teams as they'll be able to catch up. However, don't sit and wait, it's worth understanding how containers, microservices and orchestration technologies work so you're able to help your team in the future.

    For our project we'll use two images: the official .NET Core 3.1 SDK to build our project and the official ASP.NET Core 3.1 to run it. As always, our project will be a simple ASP.NET MVC Core web app scaffolded from the dotnet CLI.

    Downloading the .NET Core Docker SDK

    This step is optional but if you're super excited and want to get your hands on the code already, consider running the command below. Docker will pull dotnet's Docker SDK and store it in your local repository.
    docker pull mcr.microsoft.com/dotnet/core/sdk:3.1

    C:\src\>docker pull mcr.microsoft.com/dotnet/core/sdk:3.1
    3.1: Pulling from dotnet/core/sdk
    c499e6d256d6: Pull complete
    251bcd0af921: Pull complete
    852994ba072a: Pull complete
    f64c6405f94b: Pull complete
    9347e53e1c3a: Pull complete
    Digest: sha256:31355469835e6df7538dbf5a4100c095338b51cbe52154aa23ae79d87585d404
    Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/sdk:3.1
    mcr.microsoft.com/dotnet/core/sdk:3.1
    This is a good test to see if your Docker Desktop is correctly installed. As we'll see when we build our image, Docker skips re-pulling the image from the remote host if it exists locally, so we aren't losing anything by doing that now.

    To confirm our image sits in our local repo, run:
    docker image ls
    You should see the image in your local repo as:
    Why do I have 3 dotnet images? Because I used them before. At the end of this post you should have two of them. Guess which?

    Creating our App

    Let's now create our app. As always, we'll use the dotnet CLI; let's leave the Visual Studio tutorials to Microsoft, shall we? Open a terminal, navigate to your projects folder (for example c:\src) and type:
    C:\src>dotnet new mvc -o webapp

    The template "ASP.NET Core Web App (Model-View-Controller)" was created successfully.
    This template contains technologies from parties other than Microsoft, see https://aka.ms/aspnetcore/3.1-third-party-notices for details.

    Processing post-creation actions...
    Running 'dotnet restore' on webapp.csproj...
      Restore completed in 123.55 ms for C:\src\webapp\webapp.csproj.

    Restore succeeded.
    Now let's test our project to see if it runs okay by running:
    cd webapp
    dotnet run

    C:\src\webapp>dotnet run
    info: Microsoft.Hosting.Lifetime[0]
          Now listening on: https://localhost:5001
    info: Microsoft.Hosting.Lifetime[0]
          Now listening on: http://localhost:5000
    info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Development
    info: Microsoft.Hosting.Lifetime[0]
          Content root path: C:\src\webapp

    Open https://localhost:5001/, and confirm your webapp is similar to:

    Containerizing our web application

    Let's now containerize our application. Learning this is a required step for those looking to get into microservices. Since containers are the new deployment unit, it's also important to know that we can encapsulate our builds inside Docker images and wrap everything in a Dockerfile.

    Creating our first Dockerfile

    A Dockerfile is the standard used by Docker (and OCI containers) to describe the steps needed to build an image. Think of it as a script containing a series of operations (and configurations) Docker will run. Since our super-simple web app does not require much, our Dockerfile can be as simple as:
    # builds our image using dotnet's sdk
    FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
    WORKDIR /source
    COPY . ./webapp/
    WORKDIR /source/webapp
    RUN dotnet restore
    RUN dotnet publish -c release -o /app --no-restore

    # runs it using aspnet runtime
    FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
    WORKDIR /app
    COPY --from=build /app ./
    ENTRYPOINT ["dotnet", "webapp.dll"]
    Why combine instructions?

    Remember that Docker images are built using common layers and each instruction runs on top of the previous one. The way we script our Dockerfiles affects how our images are built, as each instruction produces a new layer. In order to optimize our images, we should combine our instructions whenever possible.
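    For example, the two RUN instructions from our Dockerfile could be merged (a sketch; both forms build the same app, but the second produces one layer instead of two):

    # two instructions, two layers:
    RUN dotnet restore
    RUN dotnet publish -c release -o /app --no-restore

    # combined into a single instruction, one layer:
    RUN dotnet restore && dotnet publish -c release -o /app --no-restore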

    Building our first image

    Save the contents above as a file named Dockerfile in the root folder of your project (the same folder where your csproj sits; in my case c:\src\webapp\Dockerfile) and run:
    C:\src\webapp>docker build . -t webapp

    Sending build context to Docker daemon  4.391MB
    Step 1/10 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
     ---> fc3ec13a2fac
    Step 2/10 : WORKDIR /source
     ---> Using cache
     ---> 18ca54a5c786
    Step 3/10 : COPY . ./webapp/
     ---> 847771670d86
    Step 4/10 : WORKDIR /source/webapp
     ---> Running in 2b0a1800223e
    Removing intermediate container 2b0a1800223e
     ---> fb80acdfe165
    Step 5/10 : RUN dotnet restore
     ---> Running in cc08422b2031
      Restore completed in 145.41 ms for /source/webapp/webapp.csproj.
    Removing intermediate container cc08422b2031
     ---> a9be4b61c2e6
    Step 6/10 : RUN dotnet publish -c release -o /app --no-restore
     ---> Running in 8c2e6f280cb9
    Microsoft (R) Build Engine version 16.5.0+d4cbfca49 for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.

      webapp -> /source/webapp/bin/release/netcoreapp3.1/webapp.dll
      webapp -> /source/webapp/bin/release/netcoreapp3.1/webapp.Views.dll
      webapp -> /app/
    Removing intermediate container 8c2e6f280cb9
     ---> ceda76392fe7
    Step 7/10 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
     ---> c819eb4381e7
    Step 8/10 : WORKDIR /app
     ---> Using cache
     ---> 4f0b0bc1c33b
    Step 9/10 : COPY --from=build /app ./
     ---> 26e01e88847d
    Step 10/10 : ENTRYPOINT ["dotnet", "webapp.dll"]
     ---> Running in 785f438df24c
    Removing intermediate container 785f438df24c
     ---> 5e374df44a83
    Successfully built 5e374df44a83
    Successfully tagged webapp:latest
    If your build worked, run docker image ls and you should see your image webapp listed as an image by Docker:
    Remember the -t flag on the docker build command above? It told Docker to tag our image as webapp. That way, we can use the image by its friendly name instead of relying on its ID, as we'll see next.

    Running our image

    Okay, now the grand moment! Run the command below; we'll explain the details later:
    C:\src\webapp>docker run --rm -it -p 8000:80 webapp
    warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
          Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
    warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
          No XML encryptor configured. Key {a0030860-1697-4e01-9c32-8d553862041a} may be persisted to storage in unencrypted form.
    info: Microsoft.Hosting.Lifetime[0]
          Now listening on: http://[::]:80
    info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Production
    info: Microsoft.Hosting.Lifetime[0]
          Content root path: /app
    Point your browser to http://localhost:8000, you should be able to view our containerized website similar to:

    Reviewing what we did

    Let's now recap and understand what we did so far. I intentionally left this to the end as most people would like to see the thing running and play a little with the commands before they get to the theory.

    Our website

    Hopefully there isn't anything extraordinary there. We essentially scaffolded an ASP.NET Core MVC project using the CLI. The only point of confusion could be the location of the project versus the location of the Dockerfile. Assuming all your projects sit in c:\src on your workstation, you should have:
    • c:\src\webapp: the root for our project and our ASP.NET Core MVC website
    • c:\src\webapp\Dockerfile: the location for our Dockerfile

    Dockerfile

    When we built our Dockerfile you may have realized that we utilized two images (the SDK and the ASP.NET image). Despite this being a little less intuitive for beginners, it's actually a good practice as our images will be smaller in size and have less deployed code making them more secure as we're reducing the attack surface. The commands we used today were:
    • FROM <src>: tells Docker to pull the base image our image builds on. The SDK image contains the tools necessary to build our project; the ASP.NET image, to run it.
    • COPY <src> <dest>: copies the contents of the current directory to the specified location inside the container.
    • RUN <cmd>: runs a command inside the container.
    • WORKDIR <path>: sets the working directory for subsequent instructions and also the startup location for the container itself.
    • ENTRYPOINT [arg1, arg2, argN]: specifies, in array format, the command to execute when the container starts. In our case, we're running dotnet webapp.dll inside the container, just as we would to run our published website outside of it.

    Docker Build

    Next let's review what the command docker build . -t webapp means:
    • docker build .: tells Docker to build our image based on the contents of the current folder (.). This command also accepts an optional Dockerfile path, which we didn't provide in this case; when omitted, Docker expects a file named Dockerfile in the current folder - the one you saved before running this command.
    • -t webapp: tags the image as webapp so we can run commands using this friendly name

    Docker Run

    To finish, let's understand what docker run --rm -it -p 8000:80 webapp means:
    • docker run: runs an instance of an image (a container);
    • --rm: removes the container just after it exits. We did this so you don't pollute your local environment, as you'll probably run this command multiple times and each run produces a new container. Note that you should only use this in development, as Docker won't preserve the container's logs after it's deleted;
    • -it: keeps the container attached to the terminal so we can see the logs and cancel it with Ctrl-C;
    • -p 8000:80: exposes the container's port 80 on the localhost at port 8000. Is that port in use? Feel free to change the number before the : to something that makes sense to you, just remember to point your browser to the new port.
    • webapp: the tag of our image

    Troubleshooting

    So, it may be possible that you couldn't complete the tutorial. Here are some tips that may help.

    Check your Dockerfile

    The syntax of a Dockerfile is very specific. Make sure you copied the file correctly and that your names/folders match mine. Also make sure you saved your Dockerfile in the root of your project (that is, in the same folder as your csproj) and that you ran docker build from that same folder.

    Run interactively

    In the beginning, always run the container interactively by using the -it flags as below. After you get comfortable with the Docker CLI, you'll probably want to run containers in detached mode (-d).
    docker run --rm -it --name w1 webapp

    List your images

    Did you make sure your image was correctly built? Do you see webapp when you run:
    docker image ls
    You can also list your images with docker images; however, I prefer the above format as all other commands follow that pattern, regardless of the resource you're managing.

    List the containers

    If you ran your container, did it actually start? Do you see it listed when you run:
    docker container ls

    Use HTTP and not HTTPS

    Make sure that you point your browser to http://localhost:8000 (and not https). Some browsers are picky with plain HTTP these days, but it's what works in this example.

    Check for the correct port

    Are you pointing to the correct URL? The -p 8000:80 param specified previously tells Docker to expose the container's port 80 on our host at port 8000.

    Search your container

    It's possible that your container failed. To list all containers that ran previously, type:
    docker container ls -a

    Inspect container information

    To inspect the metadata for your container type the command below adding your container id/name. This command is worth exploring as it will teach you a lot about the internals of the image.
    docker container inspect <containerid>

    Removing containers

    If you want to get rid of the containers, run:
    docker container prune -f

    Removing Images

    If you want to get rid of an image, run:
    docker image rm <imageid>

    Check the logs

    If you managed to create the image and run it, you could check the logs with:
    docker container logs <container-id>

    Log into your container

    You can even log into your container and, if you know some Linux, validate that the image contains what you expect. The command to connect to a running container is:
    docker exec -it <containerid> bash

    Install tools on your container

    You can even install some tools in your container. For Debian (the default base image), the most essential tools (and their packages) I needed were:
      • ps: apt install procps
      • netstat: apt install net-tools
      • ping: apt install iputils-ping
      • ip: apt install iproute2
      Don't forget to run apt update first to refresh your local package cache, else the commands above won't work.

      Conclusion

      In this post we reviewed how to create an ASP.NET Core website with Docker. Docker is a very mature technology and essential for those looking to transition their platforms to microservices.

      Source Code

      As always, the source code for this article is available on GitHub.


      About the Author

      Bruno Hildenbrand