
Tuesday, January 5, 2021

Pushing Docker images to ACR - Azure Container Registry

Building ASP.NET Core websites with Docker? Learn how to use Azure Container Registry.
Photo by Hal Gatewood on Unsplash

Since you now understand how to create your own ASP.NET Core websites with Docker, the next step in your container journey is learning how to push your images to a container registry so they can be shared and deployed somewhere else.

Today, we will push our own ASP.NET Core Docker images to Azure's Container Registry (ACR) and deploy them to our own custom CentOS server. Sounds complicated? Let's see.

On this post we will:
  • (Quickly review how to) Create our own Docker images.
  • Create our Azure Container Registry to host and share our images.
  • Push our own Docker images to our new container registry.
  • Pull and run our images from a CentOS server.

Requirements

For this post you'll need:

Managed Container Registries

Today, Docker Hub is still the world's most popular container registry. But do you know what container registries are? I like this definition from GCP:
A Container Registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. Existing CI/CD integrations let you set up fully automated Docker pipelines to get fast feedback.

Why create our own container registry?

Okay but if we already have Docker Hub, why bother creating our own container registry?

Because, as we will see, pushing our images to a managed container registry allows you not only to share them privately with other developers in your organization, but also to use them in your CI/CD process in an integrated fashion and deploy them to cloud resources such as Virtual Machines, Azure App Services and even Azure Kubernetes Service.

In summary, the main advantages of creating our own container registry are:
  • have our own private repository
  • use it as a source of deployment for our artifacts, especially if on the same cloud
  • integrate with your CI/CD process to produce automated pipelines
  • use a cloud-hosted service, alleviating our ops team
  • have automated vulnerability scans for our own images
  • geo-distribute your images so teams around the globe can access them quickly

Alternatives to ACR (Azure Container Registry)

But what if you use a different cloud? No problem! Apart from ACR and Docker Hub, today we also have: Amazon's Elastic Container Registry (ECR), Red Hat's Quay, Google Container Registry (GCR) and DigitalOcean's Container Registry.

And which registry should I choose? I'd recommend using the one offered by your cloud provider, since it will integrate better with the other resources on your cloud. In other words: on AWS? Use ECR. On Google Cloud? Use GCR. On Azure, go with ACR. Else, stick with Docker Hub.

Creating our Azure Container Registry

So let's get to work. There are essentially two ways to create our own registry on Azure: via the CLI and via the portal. Since the Azure CLI alternative is already very well documented, let's review how to do it from the portal.

Click New Resource, type Container Registry and you should see:

Click Create, enter the required information:
Review and confirm, the deployment should take a few seconds:

Reviewing our Container Registry

Once the deployment finishes, accessing your own container registry should get you to a panel similar to the below. In yellow I highlighted my registry's URL. We'll need it to push our images.

I like pinning resources by project on a custom Dashboard. For more information, check this link.

Preparing our Image

Let's now prepare an image to push to the remote registry. I'll use my own available on GitHub but feel free to use yours if you already have one available.

Pulling a sample .NET image

If you followed my previous post on how to create ASP.NET Core websites using Docker, you probably have our webapp ASP.NET Core 3.1 image on your Docker repository. If not, please run:
git clone https://github.com/hd9/aspnet-docker
cd aspnet-docker
docker build . -t webapp
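If you're wondering what that build runs, a minimal multi-stage Dockerfile for an ASP.NET Core 3.1 app would look roughly like this (a sketch only; the actual Dockerfile in the repo above may differ, and webapp.dll is a placeholder for your project's published assembly name):

```dockerfile
# Build stage: restore, compile and publish using the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: copy only the published output into the smaller ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "webapp.dll"]
```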
Then type the below command to confirm your image is available locally:
docker image ls
docker images or docker image ls? Both do the same thing, but since all commands that manage images use the docker image prefix, I prefer to follow that pattern. Feel free to use whichever suits you better.

Tagging our image

Before pushing our image we still need to tag it with the full repo name and add a reasonable version to it:
docker tag webapp <acrname>.azurecr.io/webapp:v1
You have to tag the image with the fully qualified domain name (ex. yourrepo.azurecr.io) in the name; otherwise, Docker will try to push it to Docker Hub.

Run docker image ls once more to make sure your image is correctly tagged:
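If you script your builds, that fully qualified name is easy to assemble from variables. A small sketch (myregistry, webapp and v1 are placeholders; the docker commands are left commented since they require the image to exist locally):

```shell
# Placeholder values -- replace with your own ACR name, image and version
ACR_NAME="myregistry"
IMAGE="webapp"
VERSION="v1"

# The fully qualified name Docker needs to route the push to ACR
FULL_TAG="${ACR_NAME}.azurecr.io/${IMAGE}:${VERSION}"
echo "$FULL_TAG"

# Tagging and pushing would then be:
#   docker tag "$IMAGE" "$FULL_TAG"
#   docker push "$FULL_TAG"
```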

Pushing our Image

With the image ready, let's push it to our registry.

Logging in to our Registry

The first thing we'll need is to authenticate using the Azure CLI. On a Windows terminal, type:
az acr login --name <your-acr-name>
If you used the az tool before, you're probably logged in already and won't be required to enter a username/password. Otherwise, sign in first with the az login command and then again with az acr login. Check this page for more details.

Now the big moment. Push the image to the remote repo with:
docker push <acrname>.azurecr.io/webapp:v1
If all went well, you should see on the Services/Repositories tab, our image webapp as a repository:
And clicking on our repo, we see v1 as Tag and additional info about our image on the right:

Deploying our image locally

Pulling and running your remote image locally should be really simple since tools/credentials are in place. Run:

docker pull hildenco.azurecr.io/webapp:v1
docker run -d -p 8080:80 hildenco.azurecr.io/webapp:v1

Deploying our image to a remote Linux VM

But we're here to learn so let's complicate this a little and deploy this image to a remote Linux VM. For this part of the tutorial, make sure your VM is ready by installing the following requirements:
To avoid sudoing all the time with Docker, add your user to the docker group with sudo usermod -aG docker <username>

Authenticating with the Azure CLI

If you recall, we had to log in to our repo from our development box. Our CentOS VM is a different host, so we'll have to log in again, first with the main az login:
az login
Then login on ACR:
az acr login --name <your-acr-name>

Pulling our Image from CentOS

With the authentication in place, pulling the image from our CentOS should be familiar:
docker pull hildenco.azurecr.io/webapp:v1

Running our Container

The last step is to run our image and access it from outside. To run our image we do:
docker run --rm -d -p 8080:80 hildenco.azurecr.io/webapp:v1
Where:
  • --rm: remove container when stopped
  • -d: run in detached (background) mode
  • -p 8080:80: map port 80 on container to external 8080 on host
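A detail worth memorizing about -p is the order of the ports: it's always host:container. A quick shell illustration of which side is which:

```shell
# The -p argument format is HOST_PORT:CONTAINER_PORT
mapping="8080:80"

host_port=${mapping%%:*}        # everything before the colon -> the host side
container_port=${mapping##*:}   # everything after the colon  -> the container side

echo "host=$host_port container=$container_port"
```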

Accessing our Container

Then, accessing it from outside requires knowing the VM's IP address and hitting it from the development box (host). To find your CentOS VM's IP, type:
ip a show eth0
 Then, with the IP, access it from your favourite browser:

Conclusion

On this article we described how to create our own Azure Container Registry and how to push, pull and deploy images from it to a CentOS virtual machine. With the growth of microservices, Docker has become an essential skill. And since containers are how developers build, pack and ship applications these days, knowing how to use a managed container registry will be an essential asset for your team.

Source Code

As always, the source code for this post is available on my GitHub.


Monday, July 6, 2020

20 tips to manage Linux VMs on Azure

Azure offers many tools to manage our Linux VMs. On this post, let's review the most interesting ones.
Photo by Szabo Viktor on Unsplash

On previous posts we learned how to create a CentOS VM on Hyper-V and how to deploy and run it on Azure. Today we will review some of the tools available on Azure to manage our virtual machines, including alerts, automatic backups and security, all of which make managing our VMs an easy task.
On this post I'll be using my CentOS VM, but these tips should apply to any distro of your choice. If you want to know what's necessary to run a custom CentOS VM on Azure, please check the previous posts, where I detailed how to create a CentOS VM locally and how to get it running on Azure.

Getting ready

So let's get started. Let's access our VM in Azure by clicking on Virtual Machines and selecting it:
With the VM opened, we're provided with the following overview. From here, we can Start/Stop/Delete and Connect to it and also view important information such as our internal and external IPs:

Tip 1 - Deallocating to cut costs

In Azure, stopping and deallocating a VM are different things. If all you want is to cut costs, then you should be deallocating your VMs. Just mind that by deallocating the VM, some of your resources will be released, including your public IP. Want to keep it? You can, as shown below, but you will be charged for it.

Tip 2 - Accessing logs with Serial Log

Accessing the logs of your VMs is also simple: go to Boot Diagnostics -> Serial Log. For example, this is what I got from my boot:

Tip 3 - VM Screenshot

Can't access your VM? The screenshot tool may be useful, as it can show you the state of your virtual machine. To access it, go to Boot Diagnostics -> Screenshot:

Tip 4 - SSHing into the VM

Most of us already know how to SSH into our VMs: we get our public IP address from the overview page and SSH straight into it from the terminal. However, Azure also provides a nice Connect tab showing the different ways we can connect to our servers:
Assuming you have external access to your VM, connecting to it should be as simple as:
ssh user@ip

Tip 5 - Connection Troubleshoot

In case the above doesn't work, we also have access to a Connection Troubleshoot pane where we can test the connection to the VM. This screen is useful because it runs from inside Azure, bypassing the external firewall rules and isolating potential routing or network problems between our machine and our server:

Tip 6 - Configure the Virtual Network

The Networking tab allows us to configure the virtual network. From here we can allow/restrict access by source, protocol and port, both inbound and outbound, and set up load balancing. The basic configuration is:

Tip 7 - Allow your own IPs using NSG Firewall Rules

We can leverage custom firewall rules to block malicious IPs directly in Azure, in the Network Security Group (NSG) associated with your VM. The recommendation is to block everything and only whitelist (allow access to) the IPs you trust. Whitelisting is better than blacklisting because attackers use cloud servers and rotate their IPs pretty frequently. To do that, choose Source = IP Addresses and specify your own IPs:

Tip 8 - Block specific ports using NSG Firewall Rules

You could also restrict access to whatever you need. For example, to deny requests on port 22 we could add the following rule:
It's recommended not to leave your machine exposed. During my tests, my server logged more than 1000 attempts to brute-force a password from more than 40 different IPs. Make sure to keep your box secured!

In case you need, here's how to monitor who's trying to break into your box since the last boot:
journalctl -b | grep "Failed password for" | sed 's/.* from //' | sed 's/ port.*$//' | sort -u | wc -l
1136
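To see what that pipeline is doing, here is the same grep/sed/sort -u logic run over a couple of fabricated sshd log lines (the IPs below come from documentation ranges and are entirely made up):

```shell
# Fabricated auth log lines in the format journalctl prints for sshd
log_lines='Jan 01 10:00:00 host sshd[100]: Failed password for root from 203.0.113.5 port 41234 ssh2
Jan 01 10:00:01 host sshd[101]: Failed password for admin from 203.0.113.5 port 41235 ssh2
Jan 01 10:00:02 host sshd[102]: Failed password for root from 198.51.100.7 port 50111 ssh2'

# Strip everything up to " from ", then everything from " port" on,
# leaving only the source IPs; sort -u keeps one entry per attacker
unique_ips=$(printf '%s\n' "$log_lines" \
  | grep "Failed password for" \
  | sed 's/.* from //' \
  | sed 's/ port.*$//' \
  | sort -u \
  | wc -l)

echo "$unique_ips"   # 3 attempts, but only 2 distinct IPs
```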

Tip 9 - Accessing the Serial Console

Azure Serial Console is one of my favourites. It allows us to run commands on the VM from Azure. It's an essential tool to debug, fix and inspect our VM. From it you can login to your machine (even if you cannot from your own PC) and manage it remotely via Azure:

Tip 10 - Configure the Public IP

You can also configure how Azure assigns your VM a public IP. Dynamic means it will rotate (change) after each deallocation. Static (more expensive) will reserve that IP for you but will incur costs even if the VM is deallocated:

Tip 11 - Create a custom DNS Name

We could also create a custom DNS name for our VM so that we can access it using that name instead of its IP address. That can be done on Overview -> DNS Name -> Configure, and if your domain is managed by Azure (like mine), you should also be able to configure a subdomain pointing to that VM directly from here. For example, to create a subdomain like ssh.mydomain.com pointing to your CentOS server you would do:

Tip 12 - Reset Password

Forgot your password? You can reset it directly from Azure. Plus, the configuration and SSH public key can also be reset:

Tip 13 - Configure Backup Policies

Auto-backup of our VM is another nice feature. We can specify the location, interval and time, and have Azure do the job:

Tip 14 - View VM Telemetry

Azure also provides interesting telemetry about our VMs. And you can even configure your own if you wish:

Tip 15 - Create Custom Alerts

Azure uses the insights captured for your VM to generate alerts based on your own criteria, and lets us create custom alerts that notify us by email/text. Simply go to Create Rule:
Then create one by using one of the available metrics:

Tip 16 - Specify Auto-shutdown

Auto-shutdown is also very useful. It allows us to schedule shutdowns, helping save on costs:

Tip 17 - Specify a Disaster Recovery Strategy

We can also leverage Azure's Site Recovery functionality to replicate our VMs. That's very useful for meeting SLAs and recovering from disaster situations:

Tip 18 - Automated Security Recommendations

If you're willing to pay a little more, Azure also offers security recommendations via its Security Center, where you can increase your protection:

Tip 19 - Export Template Scripts

This is also nice, as we can export a reproducible script to automate the creation of an equivalent VM. Save it in your company's repository so you can quickly spin up new clones of this VM in the future.

Tip 20 - Run adhoc commands

This is also a very useful feature where you can, from the portal, run a command on your machine:

Conclusion

On this post we presented various tools to manage your Linux VM on Azure. While we can always manage our VMs using SSH, it's also important to remember that when running in the cloud there are other awesome tools we can leverage to monitor, inspect, alert, manage, secure and troubleshoot our servers.

And remember that it's very, very important to secure your VMs in the cloud.


Monday, June 29, 2020

How to create a custom CentOS Stream VM on Azure

There isn't a one-click experience for creating CentOS VMs in Azure. Learn how to create yours.
Photo by Robert Eklund on Unsplash

Running CentOS on Azure is great. However, getting there requires some work, because none of the CentOS images available on Azure at the moment are free. On this post we will continue illustrating why one should use CentOS by deploying our own CentOS Stream VM to Azure.

With the news that Red Hat is shutting down the CentOS project, I definitely cannot recommend CentOS for your server anymore. 😌 However, it still has its value if you're developing for RHEL.

Azure Requirements

Before getting our hands dirty let's review requirements to run a custom VM on Azure:
  • Disk format: at the moment, only fixed VHD is supported;
  • Gen: Azure supports Gen1 (BIOS boot) & Gen2 (UEFI boot) Virtual machines. Gen1 worked better for me;
  • Disk Space: Minimum 7Gb of disk space;
  • Partitioning: Use default partitions instead of LVM or raid;
  • Swap: Swap should be disabled as Azure does not support a swap partition on the OS disk;
  • Virtual Size: All VHDs on Azure must have a virtual size aligned to 1 MB;
  • Supported formats: XFS is now the default file system but ext4 is still supported.
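The 1 MB alignment requirement above is simple shell arithmetic to verify. A sketch using the 6GB size we'll pick later (note this checks the virtual size; a fixed VHD file also carries an extra 512-byte footer on top of it):

```shell
# 1 MB in bytes
MB=1048576

# Example: a 6 GB virtual disk, as created later in this post
size_bytes=$((6 * 1024 * 1024 * 1024))

# The remainder must be zero for Azure to accept the disk
remainder=$((size_bytes % MB))
if [ "$remainder" -eq 0 ]; then
  echo "aligned to 1 MB"
else
  echo "NOT aligned -- resize the disk"
fi
```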

Installing CentOS on Hyper-V

The first thing that we have to do is to produce a virtual hard disk (VHD) with a bootable CentOS installed using Hyper-V as explained in detail on a previous post. Today we'll extend that setup adding what's necessary to run it on Azure. On this tutorial we will:
  1. Download the CentOS 8 Stream ISO
  2. Create a virtual hard disk (VHD) in Hyper-V
  3. Create and configure the VM in Hyper-V
  4. Install CentOS on the VM by:
    1. Specifying a software selection
    2. Configuring networking (we'll need to install software after the first boot)
    3. Configuring partitions on disk
    4. Creating accounts
    5. Modify the system to add settings required by Azure

Downloading the CentOS 8 ISO

This should be obvious to most people: in order to get our custom image installed on a VHD with Hyper-V, go ahead and download the latest ISO to your computer. We'll need that ISO to load the installer and install the system to our VHD. As previously, we'll use CentOS Stream.

Creating a Virtual Hard Disk (VHD)

With the ISO downloaded, let's create a virtual hard disk (VHD) on Hyper-V. To do so, open Hyper-V Manager, click New -> Hard Disk and choose VHD on the Choose Disk Format screen:
Next, on Choose Disk Type, choose Fixed size:
In Configure Disk, set the disk size. In my tests, 6GB was a reasonable size for a simple server, with enough space left for the home partition:

Creating the VM

The process to create the Hyper-V VM remains the same. Make sure to review the previous post in detail, as here I'll only describe the essential bits required by the Azure customization.

Configuring Networking

Make sure that you choose the Default Switch in Configure Networking:

Connecting the Virtual Hard Disk

On Connect Virtual Hard Disk, we'll choose Use an existing virtual hard disk and point it to the one you just created. This is necessary because Hyper-V auto-creates VHDXs by default while Azure requires VHDs:
To finish up, validate on Summary that all looks good and confirm:

Specifying the ISO

The last thing before starting up the VM is to specify the ISO as a DVD drive. That's done on Hyper-V manager by selecting DVD Drive -> Media, choosing Image file and locating yours on disk:
I also like to disable checkpoints and unset automatic start/stop actions.

Installing CentOS Stream

After starting the VM in Hyper-V, you should be prompted with the screen below. Choose Install CentOS Stream 8-stream:

The installer

After the boot ends, you should be running the installer called Anaconda. Choose your language and click Continue:

Installation Summary

On Installation Summary, we'll essentially configure software selection and networking. We'll also need to set up partitions on the Installation Destination screen.

Software selection

For the software selection, we'll go with Minimal Install as I don't want to delay the process by installing what we don't need. The only extra package we'll need is a text editor (I'll be using Vim, my favourite text editor, but feel free to use yours) so we can make the necessary changes.

During installation, click on Software Selection and choose Minimal Install:

Disk Partitioning

Because Azure requires some special settings (see the requirements above), we'll need to do manual partitioning. But don't be scared, it shouldn't be complicated. We'll divide our disk into three main partitions:
  • /boot, 300Mb - where the boot files will be placed (including the kernel)
  • /, 4Gb - where all the files of the system will be placed (including software, logs, services and libraries)
  • /home, 1Gb - to store user files
  • no swap - we don't need a swap partition as Azure will provision one for us.
We'll also use XFS whenever applicable since it's the default in Azure now.

Choose your disk and click on Custom:
 On the Manual Partitioning screen, click on Standard Partition:
Add the manual partitions by clicking on the + sign below.
 The first to add is /boot. Enter 300m on the popup so you see:
Add 1GB for /home:
And the remainder (4.7G) for /:
Confirm to complete:

Networking

Enable networking as we'll need to install our text editor (and if you wish, update the instance before uploading to Azure):

Start Installation

After all the settings were entered, click continue to proceed. During install, you will be prompted with a screen similar to:
It's recommended to set a root password and to create an account. I also recommend checking the Make this user administrator option as we should be using root as little as possible:

Before the first boot

Once the installation finishes, eject the virtual ISO by going to Hyper-V manager, choosing your VM -> Settings -> DVD Drive, and setting it to None -> Apply:

First Boot

After ejecting the DVD and starting the VM you should see the following boot loader:
Then the following login screen after the boot finishes:

Azure Configuration

We'll now proceed with our Azure configuration. For CentOS 8, the documentation is specified here (although in less detail than on this blog post). Login as root and follow the next steps.

Testing the network

If you recall, we chose Minimal Install during the installation. That means we don't have a text editor yet, so we'll install one shortly, as we'll need it to modify our configuration. First, to confirm our network can access the internet, type:
ping github.com

No network?

If no network is available, check the status of your connections with:
nmcli device status
If eth0 is down, we should enable eth0 to auto-get an ip from our Hyper-V Virtual switch with:
nmcli con up eth0
Try pinging again and it should work fine now.
ping github.com

Installing Vim

For editing files I'll install Vim, my favourite text editor. That can be done with:
dnf install vim

Whenever possible, I'll be using DNF, as it's what I'm used to as a Fedora user. Feel free to use Yum if you prefer it.

Configuring the network

To configure the network, the first step is to create or edit the file /etc/sysconfig/network and add the following:
NETWORKING=yes
HOSTNAME=localhost.localdomain
You can run this as a oneliner with:
printf "NETWORKING=yes\nHOSTNAME=localhost.localdomain\n" >> /etc/sysconfig/network

Create or edit the file /etc/sysconfig/network-scripts/ifcfg-eth0 and add the following text:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
USERCTL=no
PEERDNS=yes
IPV6INIT=no
NM_CONTROLLED=no
Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules

Modifying GRUB

Next, we'll modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open /etc/default/grub and set GRUB_CMDLINE_LINUX to:
GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
Rebuild the grub configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg

Installing the Azure Linux Client

We'll now install the Azure Linux Client. This package is required by Azure to perform essential tasks on our VM including provisioning, networking, user management, ssh, swap, diagnostics, etc. Installing it on CentOS is super simple as the package is available in the repo:
dnf install WALinuxAgent
Then modify /etc/waagent.conf making sure you have:
ResourceDisk.Format=y
ResourceDisk.Filesystem=ext4
ResourceDisk.MountPoint=/mnt/resource
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=4096    ## setting swap to 4Gb
To finish off, enable it on boot with:
systemctl enable waagent

Deprovisioning and powering off

Almost there. Just run the following commands to deprovision the virtual machine and prepare it for Azure with:
waagent -force -deprovision
export HISTSIZE=0
systemctl poweroff
The machine will shut down. Let's move to the Azure part now.

Uploading virtual hard disk to Azure

Now that our setup is complete, we'll upload our VHD to Azure so we can create new virtual machines from it. There are two ways to do this:
  1. using AzCopy (only for the brave)
  2. use Azure Storage Explorer (recommended)
Unfortunately I can't recommend using AzCopy at the moment as the tool is full of bugs. It could be that Microsoft is still learning Go 😉.

Uploading using AzCopy (only for the brave)

To upload our VHD this way, install AzCopy and the Azure CLI. After the installations finish, close and reopen all PowerShell/terminal windows so all the environment variables are reloaded.

Login in Azure using the CLI

Let's login to the Azure CLI by typing on a PowerShell window:
az login

Create the disk

In order to create our managed disk, first we need to determine its actual size. To get your disk size, type the command below and copy the output, as it will be needed for the upload:
wc -c <file.vhd>
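As a quick sanity check of what wc -c returns, here is the same measurement against a small throwaway file standing in for the VHD:

```shell
# Create a 1 KB stand-in file and measure it the same way you would the VHD
tmpfile=$(mktemp)
head -c 1024 /dev/zero > "$tmpfile"

# wc -c with input redirection prints only the byte count
vhd_size=$(wc -c < "$tmpfile")
echo "$vhd_size"   # 1024

rm -f "$tmpfile"
```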
Now, run a command similar to the below replacing items in <> with your data:
az disk create -n <disk-name> -g <resourcegroup> -l <region> --for-upload --upload-size-bytes <your-vhd-size> --sku standard_lrs
To upload, first we'll need to generate a SAS token with:
az disk grant-access -n <disk-name> -g <resourcegroup> --access-level Write --duration-in-seconds 86400
If you got a JSON response with an "accessSas" token in it, copy that URL. We'll use it to upload our VHD file to Azure using the azcopy tool:
azcopy copy "<path-to-vhd>" "<sas-token>" --blob-type PageBlob
After the upload is complete and you no longer need to write any more data to the disk, revoke the SAS. Revoking the SAS changes the state of the managed disk and allows you to attach the disk to a VM.
az disk revoke-access -n <disk-name> -g <resourcegroup>

Option 2 (Recommended): Using Azure Storage Explorer

I usually don't recommend GUIs, but AzCopy is unusable at the moment. Also, uploading via Azure Storage Explorer was way faster and didn't time out on me 😒. So install Azure Storage Explorer, open a blob container, find or create a folder and click Upload File. Select your VHD and don't forget to set it to Page Blob:
After it completes, you should see your VHD in your remote blob storage folder:
Right-click your disk -> Properties and copy the Uri:
Next, run the following command to create a VHD from that image (it should be quick):
az disk create -n <disk-name> -g <resourcegroup> -l <region> --source <your-vhd-url>
At this point our disk should be recognized by Azure:

Creating the VM

With the disk available, we're ready to create our VM with:
az vm create -g <resourcegroup> -l <region> --name <vmname> --os-type linux --attach-os-disk <disk-name>

You should now see your VM on Azure as expected:

Testing the VM

Now the fun part: let's see if this works. If you look carefully at the image above, you'll see our IP listed there. We can try to SSH into it with:
ssh bruno@<ip>

Yay! Our custom CentOS VM is available on Azure and we can access it remotely. From here, it's up to you to install the services you need. Or just play a little with it, tear it down, recreate it, and so on.

Security Considerations

I dream of the day we no longer have to discuss hardening our VMs on public cloud providers. Unfortunately we're not there yet. There's a lot of bots scanning for open ports and vulnerabilities on public IPs so make sure you take the necessary precautions to secure your SSH service such as preventing root login, changing the SSH port number and even banning failed login attempts.

There's also some other measures that we can take on Azure to block that traffic but that's beyond the scope of this post.

Troubleshooting

Before wrapping up, I'd like to leave a few tips. It's important to remember that a lot can go wrong here. So far we created a virtual machine on Hyper-V locally, modified the configuration, uploaded our VHD to blob storage and ran some commands using the Azure CLI to create our resources on Azure. What else could go wrong? Let's see next.

What's my public ip?

You can get that information on the Overview page or, from the shell, with:
curl ipinfo.io/ip

Can't access the VM via SSH

I had this error too. There are two possible problems: (1) your VM is not accessing the internet or (2) either a firewall or Azure's Networking is not properly set. Both should be fine if it's your first time accessing your VM.

The VM isn't starting up

Recheck the steps above. A good approach is to run the VM locally on Hyper-V and make sure you didn't break it while applying the configuration.

Can't access the internet

This error may happen if your VM is incorrectly configured or if it couldn't get an IP from Azure's DHCP server. Try accessing it from the Serial Console to get insights about the eth0 ethernet adapter, its IP and the connection status with:
ip a                      # list the adapters; expect eth0
nmcli device show eth0    # show the status of the ethernet connection
nmcli con up eth0         # start the connection

The VM won't get an IP

This is probably on Azure's side, as the IP should be auto-assigned by their DHCP servers. Anyhow, we can retry being assigned an IP with:
sudo dhclient

waagent is not working

Using the Serial Console, check that the agent is working and its status is active (running) with the commands below:
sudo systemctl status waagent
sudo systemctl start waagent

Can't connect to my VM via SSH

This could happen if your instance can't access the internet or if the SSH service is not running. Try connecting via the Azure Serial Console and check the previous steps to make sure the VM can ping an external site. Also confirm that your public IP is correct; if you did not make it static, Azure releases your previous IP and it is not guaranteed that the new one will be the same.

Virtual routing problems

If you think that the problem is related to virtual network routing in Azure, please check these links:

Conclusion

On this post we reviewed in detail how to create a custom CentOS Stream image and run it on Azure. For this demo we used CentOS, my favourite distro for the server, but most of the information described here should also be useful for other distributions. We also demoed how to use the Azure CLI and showed some of the features Azure provides to us.


About the Author

Bruno Hildenbrand      
Principal Architect, HildenCo Solutions.