Grafana, the most popular open-source analytics visualization tool, is now
available on Azure as a managed service. With it, customers can run Grafana natively within the Azure cloud platform without needing to provision or manage the backend services required to run it.
Why use Grafana?
With Grafana, users can bring together logs, traces, metrics, and other
disparate data from across an organization, regardless of where they are
stored. With Azure Managed Grafana, the Grafana dashboards our customers are familiar with are now integrated seamlessly with the services and security of Azure.
Features
Azure Managed Grafana is a fully managed service for analytics and monitoring solutions. It's supported by Grafana Enterprise, which provides extensible data visualizations. Quickly and easily deploy Grafana dashboards with built-in high availability and control access with Azure security.
Azure Managed Grafana also provides a rich set of built-in dashboards for various Azure Monitor features to help customers easily build new visualizations. For example, some features with built-in dashboards include Azure Monitor application insights, Azure Monitor container insights, Azure Monitor virtual machines insights, and Azure Monitor alerts.
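For example, here is a minimal sketch of creating a Managed Grafana workspace from the command line. It assumes the Azure CLI with the Managed Grafana (amg) extension; the workspace and resource-group names are placeholders:
az extension add --name amg                              # add the Azure Managed Grafana extension
az grafana create --name my-grafana --resource-group my-rg   # create a workspace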
How to get started
Getting started with Grafana on Azure is easy. Here are some links you should check:
Depending on your preconceptions, Vim may look exotic or sexy. Let's review
those assumptions and provide rational reasons to use this fantastic text
editor.
Maybe you have heard about Vim; maybe you haven't. Depending on your background, you may
even have preconceptions about it. In this post, let's review
those assumptions and provide concrete reasons to use this fantastic
text editor.
This article is an adaptation of a piece I originally published on
Vim4us. I'm
republishing it here for a wider audience with a few tweaks.
Vim is ubiquitous
Vim has been around for almost thirty years. Due to its simplicity, ubiquity and low resource requirements, it's the
preferred editor of sysadmins worldwide.
Easy to install
Vim is also easy to install on Windows and macOS and is packaged in most Linux
distros, meaning that even if it isn't installed on your system, Vim is one
line away in the terminal or two clicks away in your software manager.
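For example (a quick sketch; package names assume the distros' default repositories):
sudo apt install vim    # Debian/Ubuntu
sudo dnf install vim    # Fedora/RHEL
brew install vim        # macOS, via Homebrew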
Vim is lightweight
Unlike most editors, Vim is very lightweight. The installation
package is only 10 MB and, depending on your setup, memory consumption
stays around 20 MB. Compare that with most modern text editors, especially Electron-based ones like Visual Studio Code: the install size is rarely less than 200 MB, memory consumption
quickly reaches 1 GB (50 times more!), and a full setup can require 1.5 GB of storage, making them slow even on modern hardware.
If you're running a Mac, a low-end computer, a phone, or even a Raspberry Pi, Vim is definitely a good option for you.
Vim is stable
As previously said, Vim has been around for almost 30 years, and it will probably
be around for at least two more decades. Learning Vim is an excellent investment: you
will be able to use that knowledge for the next twenty years at least.
Vim works well with anything you want, as long as it's text. Vim handles
most file formats by default, has locales, can be localized, supports
right-to-left scripts such as Arabic and Hebrew, and comes with built-in support
(including highlighting) for most languages.
Vim respects your freedom
Vim does not contain any built-in telemetry. These days it's (unfortunately) common for companies to abuse usage
statistics under the banner of improving their products.
Sysadmins trust that Vim will not be reaching the network to run ad-hoc
requests.
Vim is efficient
Vim is brilliant in how it optimizes your use of the keyboard. We'll talk
about that later, but for now, understand that its combination of multiple
modes, motions, macros and other clever features puts it
light-years ahead of other text editors.
Thriving Ecosystem
Stop for a second and think: which feature couldn't you live without
in your current text editor? The answer is probably that Python or Go extension, meaning that what you'd miss isn't really the
editor but its ecosystem.
Vim has a brilliant ecosystem. You'll find thousands of extensions covering
anything you need. You can also host your extensions anywhere (on GitHub, for example) without being locked in by any vendor. You could also host them
in private/corporate repos just for your team or share them on public directories
like Vim Awesome.
Vim is ultra-customizable
Even if by default Vim has most of what you need, it's important to understand
that Vim lets you change pretty much everything. For example, you can make
temporary/local customizations (by using the Ex mode), permanent
customizations (by changing your .vimrc) or even customizations based
on file type.
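As a small sketch of a permanent customization (the options are standard Vim settings; the file path assumes a Unix-like system), you could append something like this to your .vimrc:
cat >> ~/.vimrc <<'EOF'
set number          " show line numbers
set expandtab       " insert spaces instead of tabs
set shiftwidth=4    " indent by 4 spaces
" file-type based customization: 2-space indent for YAML files
autocmd FileType yaml setlocal shiftwidth=2 softtabstop=2
EOF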
Vim is always getting better
Vim is actively developed, meaning that it keeps getting better. Vim users get security patches
and new features all the time. Vim is also updated to accommodate the latest
changes in modern operating systems while still supporting older systems.
Learning how to learn Vim is the key to a continuous understanding of the tool
and not getting frustrated. There are many ways to get help on Vim: using its
built-in help system, using the man pages and obviously, accessing the
communities listed above.
Vim is free
These days it may sound odd to point out that Vim is free. But Vim's freedom goes beyond
price: it includes your freedom to modify it to your needs and deploy it wherever
you want. Vim's developers also have a strong commitment to helping people in need around the world.
GUI-less
Vim also runs GUI-less, meaning it runs in your terminal. So you get a
full-featured text editor on any system you're working on, regardless of whether it's a
local desktop or a remote supercomputer. This feature is essential for sysadmins
and developers who often need to modify text files on remote machines through
an SSH connection.
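For instance (the hostnames and paths below are placeholders), editing a remote configuration file is as simple as:
ssh admin@build-server                 # open a shell on the remote machine
vim /etc/nginx/nginx.conf              # edit the file there with Vim

# or do it in one step; -t allocates a terminal so Vim runs interactively
ssh -t admin@build-server vim /etc/nginx/nginx.conf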
Rich out-of-the-box toolset
Vim comes with fantastic tooling by default: powerful search, regular
expression support, syntax highlighting, text sort, integrated terminal,
integrated file manager, cryptography, color schemes, plugin management
and much more. All without a single plugin installed!
Vim integrates into your workflow
Unlike other text editors, which force you into their way of doing things, Vim
adjusts seamlessly to your workflow via powerful customization, extension
support, integrated shell support and the ability to pipe data
in and out of it.
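A couple of quick examples of that piping (a sketch using standard shell features):
ps aux | vim -        # open the output of any command directly in a Vim buffer
vim <(git diff)       # edit the output of a command via process substitution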
Vim can be programmed
Want to go the extra mile? Vim also has its own language, called VimL. With it
you can create your own plugins and optimize the system even further for your
needs.
Vim will boost your productivity
There are multiple ways Vim will boost your productivity. First, Vim's
extensive use of the home row saves you from having to reach for the arrow keys (or, even worse, the
mouse) to do your work. Second, with Vim you can quickly create macros to
reproduce repetitive operations. Third, the combination of motions, plugins,
custom shortcuts and shell integration will boost your productivity
way more than you might imagine.
Vim will make you type better and faster
Being keyboard-based, Vim's home-row-centred workflow will definitely help force you to type
better. With Vim you'll realize that you probably move your hands far more
than you should, and you will significantly increase your typing speed.
Vim will make you learn more
Most editors these days do too much. Yes, part of that is imposed on us by
languages that require a lot of metadata (Java and C#, for example). One problem with that is that you end up relying on the
text editor much more than you need to. Without access to Eclipse or Visual Studio, you may well feel impostor syndrome.
With Vim, even though you could set up similar conveniences, you'll feel closer to your
work, resulting in a better understanding of what you're doing. You'll also
realize that you learn more and memorize the contents of what
you're working on better.
Conclusion
In this post we went through many reasons why one should learn Vim. Vim is
stable, ubiquitous and supported by an engaged, growing community. Given all its
features, Vim is definitely a good tool to learn now so you can harvest the benefits
for decades to come.
Among the many benefits of using .NET in Google Cloud is the ability to
build and run .NET apps on a serverless platform like Google Cloud Functions.
Since it's now possible to run .NET apps on Cloud
Functions, let's understand how all of that works.
What is Cloud Functions?
Cloud Functions is Google Cloud’s Function-as-a-Service platform that allows
developers to build serverless apps. Since they don't require you to provision or
manage servers, Cloud Functions are a great fit for event-driven applications,
mobile or IoT backends, real-time data processing systems, video, image and
sentiment analysis, and even things like chatbots or virtual assistants.
FaaS
To develop .NET apps compatible with Cloud Functions, Google
has made the Functions Framework available on GitHub. The Functions Framework lets you write lightweight functions that run
in many different environments, including Cloud Functions itself, your local development machine, and Knative-based environments such as Cloud Run.
Assuming you're using .NET Core, the first thing you'll need is to build
and run a deployable container on your local machine. For that, make sure that
you have both Docker and the pack tool installed.
Next, build a container from your function using the Functions buildpacks:
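A sketch of what that build might look like with the pack CLI (the image name is a placeholder, and the builder image and environment variable are assumptions on my part - check the Functions Framework documentation for the exact invocation):
pack build my-first-function \
  --builder gcr.io/buildpacks/builder:v1 \
  --env GOOGLE_FUNCTION_TARGET=HelloFunctions.Function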
Then run the resulting image:
docker run --rm -p 8080:8080 my-first-function
# Output: Serving function...
Send a request to this function by navigating to localhost:8080. You should
see Hello, Functions Framework.
Cloud Event Functions
After installing the same template package described above, use the gcf-event template:
mkdir HelloEvents
cd HelloEvents
dotnet new gcf-event
VB and F# support
The templates package also supports VB and F# projects. Just use -lang vb or
-lang f# in the dotnet new command. For example, the HTTP function example above
can be created in VB like this:
mkdir HelloFunctions
cd HelloFunctions
dotnet new gcf-http -lang vb
Running your function on serverless platforms
After you finish your project, you can use the Google Cloud SDK to deploy to Google Cloud Functions from the command line with the
gcloud tool.
Once you have created and configured a Google Cloud project (as described in
the Google Cloud Functions Quickstarts) and installed the Google Cloud SDK, open a command line and navigate to the function directory. Use the gcloud
functions deploy command to deploy the function.
For the quickstart HTTP function described above, you could run:
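A sketch of what that might look like for an HTTP-triggered function (the function name, runtime identifier and entry point are assumptions - adjust them to your project):
gcloud functions deploy hello-functions \
  --runtime dotnet3 \
  --trigger-http \
  --allow-unauthenticated \
  --entry-point HelloFunctions.Function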
Note that other function types require different command line options. See
the deployment documentation for more details.
Trying Cloud Functions for .NET
To get started with Cloud Functions for .NET, read the quickstart guide and learn how to write your first functions. You can even try it out
with a Google Cloud Platform free trial.
The command-line (aka terminal) is a scary thing for most users. But
understanding it can be a huge step in your learning journey and add a significant boost to your career in tech.
Depending on your technical skills, the command-line interface (also known as CLI or terminal) may look scary. But it shouldn't! The
CLI is a powerful and versatile tool that everyone aspiring to greater tech
skills should learn and be comfortable with. In this article, let's review
many reasons why you should learn and use the command line, commonly (and
often incorrectly) referred to as the terminal, shell, bash or CLI.
This article is an adaptation of one I originally published on
Linux4us. I'm
republishing it here for a wider audience with a few tweaks.
Ubiquitous
The command-line interface (CLI) is available in every operating system, not
only in Linux. Very frequently, this is where developers and system
administrators spend a lot of time. But, if you want to work with Linux,
development, the cloud or with technology in general, better start learning
it.
Terminals are available in every operating system including Linux,
Windows and Macs
Powerful
CLI-based apps are much more powerful than their GUI-based equivalents. That
happens because GUIs are usually wrappers around libraries that power both the GUIs and the terminal apps. Very frequently, these
libraries contain far more functionality than what's exposed in the
graphical interface because, as you might expect, software development
takes time and costs money, so developers only add the
most popular features to the GUI apps.
For example, take a look at the plethora of options that the GNU find tool provides us:
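Here is just a small sample of what it can do (a quick sketch; see man find for the full list):
find /var/log -type f -size +100M -mtime -7     # files over 100MB changed in the last week
find /tmp -name '*.tmp' -mtime +30 -delete      # delete temp files older than 30 days
find . -name '*.sh' -exec chmod +x {} \;        # run a command on every match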
Does your GUI-based find tool have all those options?
Quicker
Common and repetitive tasks are also faster in the terminal, with the advantage
that you will be able to repeat and even schedule these tasks so they run
automatically, freeing you to do actual work and leaving the repetitive tasks
to the computer.
For example, consider this standard development workflow: clone a GitHub repo, edit a file, commit the change and push it back upstream.
If you were doing the above using a GUI-based git client (for example, Tortoise Git), the workflow would be similar to the below, taking you approximately 20
minutes to complete:
Right-click a folder in Windows Explorer (or Nautilus, or Finder) ->
Select clone -> Paste the URL -> Click OK
Wait for the download to Complete -> Click OK
Back to Windows Explorer -> Find File -> Open it
Make your changes (by probably using GEdit, KEdit or Visual Studio Code)
-> Save
Back to Windows Explorer
Right Click -> Commit
Right Click -> Push
Take a deep breath
In the terminal (for example, in Ubuntu), the workflow would be equivalent to the below and could be completed in
less than 2 minutes:
sudo apt update && sudo apt install git -y   # install git
git clone <url>                              # clone the GitHub repo locally
vim/nano file -> save                        # edit the file using a text-based editor
git commit -m <msg>                          # commit the file locally
git push                                     # push the changes back to our GitHub repo
Automation
Terminal/CLI-based tasks can be scripted (automated) and easily repeated,
meaning that you will be able to optimize a big part of your workflow. Another
benefit is that these scripts can be easily shared, exactly as business and professional developers do!
So let's continue the above example. Our developer realized she is wasting too
much time in the GUI and would like to speed up her workflow even more. She
learned some bash scripting and wrote the function below:
gcp ()
{
    msg="More updates";
    if [ -n "$1" ]; then
        msg=$1;
    fi;
    git add ./ && git commit -m "$msg" && git push
}
She's happy because now, as soon as she finishes her changes, she can run this
command from the terminal:
gcp <commit-msg>
What previously took 5 minutes is now done in 2 seconds (1.8 seconds to
write the commit message and 0.2 to push the code upstream). A significant improvement in her workflow. Imagine how much more productive she will be over the
course of her career!
It's important to always think about how you can optimize your workflow. These small
optimizations add up significantly over time.
Lightweight
Not only is the CLI faster and more lightweight than equivalent GUI-based
applications, the same tasks also run quicker in it. For example, consider
a Git client like Tortoise Git. It
is supposed to be lightweight (which most GUI apps aren't), yet it takes 3s to
load completely and uses 10Mb of memory:
Our GUI-based git client TortoiseGit
Now take a look at its CLI equivalent: git status runs in 0.3s and consumes
less than 1Mb. In other words, it is 20 times more memory-efficient and 10
times faster.
A simple CLI command is 20x more efficient and 10x faster than its GUI
equivalent
Disk Space Efficient
Another advantage of terminal apps over their GUI-equivalents is reduced disk
space. For example, contrast these two popular apps. Can you spot the
differences?
Application           Installation Size     Total Size            Memory Usage
Visual Studio Code    80 Mb                 300 Mb                500 Mb (on sunny days)
Nano                  0.2 Mb                0.8 Mb                3 Mb
Nano's advantage      400x more efficient   375x more efficient   160x more efficient
Extensible
Another important aspect is that the CLI is extensible. From it, skilled users
can easily extend its basic functionality using built-in features
like pipes and redirection, combining inputs and outputs from different tools.
For example, sysadmins could list the first two users in the system who use
Bash as a shell, ordered alphabetically with:
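One way to do that (a sketch combining five standard tools) is:
cat /etc/passwd | grep '/bin/bash' | cut -d: -f1 | sort | head -n 2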
What's interesting about the above command is how we combined 5 different tools
to get the results we need. Once you master the Linux terminal, you too
will be able to use these tools effectively to get work done significantly
faster!
This is a more advanced topic; we'll cover it in more detail in future posts.
Customizable
As you might expect, the terminal is extremely customizable. Everything from
the prompt to functions (as seen above) and even custom keybindings can be
customized. For example, in Linux, binding the shortcut
Ctrl+V to open the Vim text editor in the terminal is simple. Add this to your .bashrc file:
bind '"\C-V":"vim\n"'
Extensive range of Apps
Contrary to what most newcomers think, the terminal has apps too! You will
find apps for pretty much any use case: text editors, file managers, system monitors, email clients and even music players.
Want to work with Linux, as a developer or with the cloud? Another important
aspect of using the terminal is that it will make you more ready for the job
market. Since servers usually run Linux and don't have GUIs, you will end up
having to use some of the above tools in your day-to-day work. Developers
frequently use it to run repetitive tasks, becoming way more productive. So
why not start now?
Learn more about your System
Hopefully at this point you realize that you will learn way more about your
system and computers in general when you use the terminal. And I'm not talking
solely to Linux users. Windows and Mac users will learn a lot too! This is the
secret sauce that the most productive developers want you to know!
It's also a huge win for testing new tools, maintaining your system,
installing software, fixing issues and tweaking as you wish.
Getting Started
Ready to get started on your terminal/CLI journey? Here's a video that may
serve as a good intro:
Conclusion
Every modern computer has a terminal. Learning it will save you time, allow
you to automate common actions, make you learn more about your system, grow
professionally and be more productive. Well worth the effort, isn't it?
Docker and Containers - Everything you should know
Much has been discussed about Docker, containers, virtualization, microservices and distributed applications. In this post, let's recap the essential concepts and review the related technologies.
Much has been discussed about Docker, microservices, virtualization and containerized applications - so much that most people probably haven't caught up. As the ecosystem matures and new technologies and standards come and go, the container landscape can be confusing at times. In this post we will recap the essential concepts and provide a solid reference for the future.
Virtualization
So let's start with a bit of history. More or less 20 years ago the industry saw big growth in processing power, memory and
storage, along with a significant decrease in hardware prices. Engineers realized that their applications weren't using the available resources effectively, so they developed virtual machines (VMs) and hypervisors to run multiple operating systems in parallel on the same server.
A hypervisor is computer software,
firmware or hardware that creates and runs virtual machines. The computer where the hypervisor runs is called the host, and the VM is called a guest.
The first container technologies
As virtualization grew, engineers realized that VMs were difficult to
scale, hard to secure, used a lot of redundant resources and maxed
out at about a dozen per
server. Those limitations led to the first containerization tools, listed below.
FreeBSD Jails: FreeBSD jails appeared in 2000, allowing the partitioning of a FreeBSD system into multiple subsystems. Jails was developed so that the same server could be shared with multiple
users securely.
Google's lmctfy: Google also had its own container implementation called lmctfy (Let Me Contain That For You). According to the project page, lmctfy used to be Google's container stack; that work now seems to have moved to runc.
Podman/Buildah: Podman and Buildah are also tools to create and manage containers.
Podman provides an equivalent Docker CLI and improves on Docker by requiring neither a daemon (service) nor root privileges. Podman is available by default on RH-based distros (RHEL, CentOS and Fedora).
LXD: LXD is another system container manager. Developed by Canonical, Ubuntu's parent company, it offers pre-made images for multiple Linux distributions and is built around a REST API. Clients, such as the command line tool provided with LXD itself then do everything through that REST API.
Docker
Docker first appeared in 2008 (as dotCloud) and became open source in 2013. Docker
is by far the most used container implementation.
According to Docker Inc., more than 3.5 million Docker applications have
been deployed and over 37 billion containerized
applications have been downloaded.
Docker grew so fast because it allowed developers to easily pull, run and share
containers remotely on Docker Hub as simple as:
docker run -it nginx /bin/bash
Differences between containers and VMs
So what's the difference between containers and VMs? While each VM has
to have its own kernel, applications, libraries and services, containers don't, as they
share some of the host's resources. VMs
are also slower to build, provision, deploy and restore. Since containers provide a way to run isolated
services, are lightweight (some are
only a few MBs), start fast and are easier to deploy and scale, containers became the standard today.
The image below shows a visual comparison between VMs and Containers:
Here are guidelines that could help you decide if you should be using containers instead of VMs:
containers share the operating system's kernel with other containers
containers are designed to run one main process, VMs manage multiple sets of processes
containers maximize the host's resource utilization
containers are faster to run, download and start
containers are easier to scale
containers are more portable than VMs
containers are usually more secure due to the reduced attack surface
containers are easier to deploy
containers can be very lightweight (some are just a few MBs)
Containers are not all advantages, though. They also bring many technical challenges and will require you to
not only rethink how your system is designed but also to use different tools. Look at the
Ecosystem section below to understand what I mean.
Usage of Containers
And how much are containers being used? According to a Cloud Native Computing Foundation survey, 84% of companies today use containers in production, a 15% increase from last year. Another good metric is provided by the Docker Index:
Now let's dive into the technologies used by Docker (and OCI containers in general). The image below shows a detailed overview of the internals of a container. For clarity, we'll break the discussion into user space and kernel space.
User space technologies
In userland, Docker and other OCI containers essentially rely on these technologies:
runc: runc is a CLI tool for spawning and running containers. runc is a fork of libcontainer, a library developed by Docker that was donated to the OCI and includes all modifications needed to make it run independently of Docker.
containerd: containerd is a project developed by Docker and donated to the CNCF that builds on top of runc adding features, such as
image transfer, storage, execution, network and more.
gRPC: gRPC is an open source remote
procedure call system developed by Google. It uses
HTTP/2 for transport, Protocol Buffers as the interface description
language, and provides features such as authentication, bidirectional
streaming and flow control, blocking or nonblocking bindings, and
cancellation and timeouts.
Kernel space technologies
In order to provide isolation,
security and resource management, Docker relies on the following features of the Linux kernel:
Union Filesystem (or UnionFS, UFS): UnionFS is a filesystem that allows files and directories of separate
file systems to be transparently overlaid, forming a single file system. Docker supports several of them, including btrfs and zfs.
Namespaces: Namespaces are a feature of the Linux kernel that
partitions kernel resources so that one set of processes sees one set
of resources while another set of processes sees a different set of
resources. Specifically for Docker, the PID, net, IPC, mnt and UTS namespaces are required (see the sketch after this list).
Cgroups: Cgroups allow you to allocate resources — such as CPU time, system memory, network bandwidth, or combinations of these resources — among groups of processes running on a system.
chroot: chroot changes the apparent root directory
for the current running process and its children. A program that is run
in such a modified environment cannot name files outside the designated
directory tree.
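You can see namespaces in action even without Docker. A minimal sketch using the standard unshare tool (requires root) looks like this:
sudo unshare --fork --pid --mount-proc bash   # start a shell in new PID and mount namespaces
ps aux                                        # inside it, the process tree is isolated: bash is PID 1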
Docker Overview
You have probably installed Docker on your machine, pulled images and executed them. Three distinct tools participate in that operation: two local Docker tools and a remote container registry. On your local machine, the two tools are:
Docker client: this is the CLI tool you use to run your commands. The CLI is essentially a wrapper to interact with the daemon (service) via a REST API.
Docker daemon (service): the daemon is a backend service that runs on your machine. The Docker daemon is the tool that performs most of the jobs such as downloading, running and creating resources on your machine.
The image below shows how the client and the daemon interact with each other:
And what happens when you push your images to a container registry such as Docker Hub? The next image shows the relationship between the client, the daemon and the remote registry.
Images are built in layers, using the union file system.
Images are read-only. Modifications made by the user are stored in a
thin writable layer managed by the Docker daemon, which is removed as
soon as you remove the container.
Images are managed using docker image <operation> <imageid>
An instance of an image is called a container.
Containers are managed with the docker container <operation> <containerid>
You can inspect details about your image with docker image inspect <imageid>
Images can be created with docker commit, docker build or Dockerfiles
Every image has to have a base image. scratch is the base empty image.
Dockerfiles are templates to script images. Developed by Docker, they became the standard for the industry (see the sketch after this list).
The docker tool allows you to not only create and run images but also to create volumes, networks and much more.
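To make the Dockerfile point above concrete, here is a minimal sketch (the image name is a placeholder):
cat > Dockerfile <<'EOF'
FROM alpine:3.19
CMD ["echo", "hello from a container"]
EOF

docker build -t my-hello .     # build an image from the Dockerfile in the current directory
docker run --rm my-hello       # run a container from it, removing it on exit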
Due to the new practices introduced by containers, new security measures had to be applied. By default, containers rely heavily on the security features of the host operating system's kernel. Docker applies the principle of least privilege to provide isolation and reduce the attack surface. In essence, the best practices around container security are:
signing containers
only use images from trusted registries
harden the host operating system
enforce the principle of least privilege and do not run containers with elevated access to devices
offer centralized logging and monitoring
run automated vulnerability scanning (see the example below)
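As an example of that last point, automated scanning can be as simple as running an open-source scanner against your images; a sketch using Trivy (one of several available tools) would be:
trivy image my-hello:latest    # scan a local image for known CVEs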
The Ecosystem
Since this post is primarily about containers, I'll defer discussion of parts of the ecosystem to the future. However, it's important to list the main areas that people working with containers, microservices and distributed applications should learn:
Container Registries: remote registries that allow you to push and share your own images.
Orchestration: orchestration tools deploy, manage and monitor your microservices.
DNS and Service Discovery: with containers and microservices, you'll probably need DNS and service discovery so that your services can see and talk to each other.
Key-Value Stores: provide a reliable way to store data
that needs to be accessed by a distributed system or cluster.
Routing: routes the communication between microservices.
Load Balancing: load balancing in a distributed system is a complex problem. Consider specific tooling for your app.
Logging: microservices and distributed applications will require you to rethink your logging strategy so they're available on a central location.
Communication Bus: your applications will need to communicate and using a Bus is the preferred way.
Redundancy: necessary to guarantee that your system can sustain load and keep operating on crashes.
Health Checking: consistent health checking is necessary to guarantee all services are operating.
Self-healing: microservices will fail. Self-healing is the process of redeploying services when they crash.
Deployments, CI, CD: redeploying microservices is different than the traditional deployment.
You'll probably have to rethink your deployments, CI and CD.
Monitoring: monitoring should be centralized for distributed applications.
Alerting: it's a good practice to have alerting systems on events triggered from your system.
Serverless: allows you to build and run applications and services without managing servers.
FaaS - Functions as a service: allows you to develop, run, and manage application functionalities without maintaining the infrastructure.
Conclusion
In this post we reviewed the most important concepts about Docker, containers, virtualization and the whole ecosystem. As you probably realized from the length of this post, the ecosystem
around containers and microservices is huge - and keeps growing! We will cover
many of the topics addressed here in more detail in future posts, where we will start diving into the details of some of these technologies.
At this point, you probably used Docker and
Docker Hub already.
Docker Hub is the
world's most popular container registry and an amazing source of high-quality
software. But did you know that there are alternatives to it offered in the
cloud by Google, Amazon, Microsoft and others?
Today, let's learn about them.
After Docker Hub restricted its support for Open Source projects, this article is getting a lot of traction again. Hope it helps!
Container Registries
But first, let's review what container registries are.
Container registries are cloud-based repositories for storing and distributing Docker (and OCI-compatible) images. They provide a central place to store and share images, which can then be deployed to any environment that hosts containers (like Kubernetes or GKE for example).
Besides that, container registries can
build, store, secure, scan, replicate, and manage your images from fully
managed, geo-replicated instances, significantly reducing costs and maintenance efforts.
Container registries such as Docker Hub usually operate like
this:
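In practice, the interaction boils down to pulling, tagging and pushing images (a sketch; the registry and image names are placeholders):
docker pull nginx:latest                                      # pull an image from the registry
docker tag my-app:1.0 registry.example.com/team/my-app:1.0    # tag a local image for your registry
docker push registry.example.com/team/my-app:1.0              # push it so others can pull it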
Managed Container Registries?
Managed container registries are regular container registries hosted on the
cloud. However, they provide significant benefits and, contrary to what you
might think, are not expensive. Using a managed container registry is recommended, as
the offered features will save your team a lot of time.
Why use a managed container registry?
As with any other cloud services, there are benefits in using a managed
(cloud-based) container registry. The main reasons to use them are:
Fully-managed: by using fully managed registries, you can release
your ops team from maintaining your own repo
Private registries: keep images in private repositories, accessible
only to team members.
Secured: you can use your cloud provider's firewall to protect your services.
Lower latency: you want a minimum latency between your images and
your deployment targets.
Integrated security: it's common to have custom authentication,
role-based access control and virtual network integration
Integrated with your cloud: most managed container registries will
provide some integration with your cloud meaning that'll be easier to
share and deploy those images to your environments.
Automated builds: managed registries allow you to build container
images automatically after pushing to your remote repo.
CI/CD pipelines: some registries also offer CI/CD pipelines
that automatically build and deploy directly to Kubernetes and other
tools.
Auto-scaling: allows serving users and hosts wherever they are,
with multi-master geo-replication
Automated vulnerability scans: some registries will automatically
scan your images and alert you about vulnerabilities found.
Geo-replicated: got a team distributed around the world? A
geo-replicated container registry may speed things up for team members, as
images will be served from a location close to them.
Docker Hub
Docker Hub is the
world's most popular Docker container registry. With it you can create,
manage, and deliver your teams' container applications. Currently, the main
features offered by the paid version of Docker Hub are:
Fully-managed and highly available hosting: Docker Hub hosts and manages
your repo for you.
Public and Private repos: with the paid plan you can have public
and private repos.
Parallel Builds: multiple teams can build projects in parallel.
Security features: Docker Hub offers important security features
such as
vulnerability scanning,
encryption, TLS and role-based access controls.
One of the main advantages of Docker Hub is that it's where you'll get
official images for the most popular images such as CentOS, Python,
Go, Ubuntu, MariaDb, nginx, Node, Alpine, MongoDB and more!
Docker discontinued its support for Open Source projects so I can no longer recommend it.
For more information about Docker Hub, please
click here.
GitHub Container Registry
GitHub Container Registry is a software package hosting service from GitHub that allows you to store and manage your Docker images. It supports both public and private repositories and is integrated with GitHub, allowing you to quickly and easily deploy your images to cloud-based services.
GitHub Container Registry also integrates with GitHub Actions, providing an easy way to automate the build, test, and deploy process for your Docker images.
GitHub remains my favourite as it integrates nicely with your code (if you're using a GitHub repo) and is very generous with open source repositories.
You can store and manage Docker
and OCI images in the Container registry, which uses the package namespace https://ghcr.io.
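For example (a sketch; the username, image name and token are placeholders), pushing an image to ghcr.io looks like this:
echo "$GITHUB_TOKEN" | docker login ghcr.io -u my-username --password-stdin   # authenticate with a personal access token
docker tag my-app:1.0 ghcr.io/my-username/my-app:1.0
docker push ghcr.io/my-username/my-app:1.0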
Google Container Registry
Google Container Registry (GCR)
is Google's container registry. Like Docker Hub, GCR offers a fully managed
image registry allowing you to push/pull your images. Currently the main
features of GCR are:
Fully-managed and highly available hosting: GCR hosts and manages
your repo for you.
Extensible CI/CD integrations: so you can fully automate your
pipelines
Google Cloud integration: GCR offers built-in integration
with Google Cloud
Google Kubernetes Engine integration: GCR offers
Google Kubernetes Engine
integration. It uses the service account configured on the VM instances of
cluster nodes to push and pull images.
Security features: GCR offers important security features such
as vulnerability scanning,
encryption, TLS and role-based access controls.
For more information about Google Container Registry, please
click here.
Amazon Elastic Container Registry (ECR)
Amazon Elastic Container Registry (ECR)
- ECR is a fully-managed container registry that makes it easy for developers
to store, manage, and deploy their images. Currently the main features offered
by Amazon ECR are:
Fully-managed and highly available hosting: ECR hosts and manages
your repo for you.
AWS Marketplace: ECR can store your containers and those you buy
from AWS Marketplace
CI/CD integrations: so you can fully automate your pipelines
One of the most interesting features of ECR is its built-in integration with
Amazon Elastic Container Service (ECS). With it, you can run your containers directly in production, simplifying and
accelerating your workflow.
For more information about Amazon Elastic Container Registry, please
click here.
Azure Container Registry (ACR)
Azure Container Registry (ACR)
is another fully-managed Docker container registry allowing you to build,
store, secure, scan, replicate, and manage container images. ACR is the
recommended tool for those already running Azure services. Currently the main
features of Azure ACR are:
Fully-managed and highly available hosting: ACR hosts and manages
your repo for you.
Geo-replication: to efficiently manage a single registry across
multiple regions.
One of the main features of ACR is its geo-replication. With it you can enable
a registry to serve users and hosts across regions, synchronize artifacts and receive
notifications via webhooks.
According to Microsoft, global scaling looks like this:
For more information about Azure Container Registry, please
click here.
Quay
Quay
is offered by
Red Hat and allows you to
store your containers in private and public repos. Quay also lets you
automate your container builds, and it integrates with GitHub and others. Quay
also provides automated vulnerability scanning of containers, among other tools.
Currently the main features of Quay are:
Public and Private repos: with Quay you can have not only private
but also public repos to share your images with the world.
High availability and geo-replication:
Quay also offers geographic replication for the running of multiple
instances of Red Hat Quay across several regions and syncing between
data centers.
Robot accounts: Create credentials designed for deploying software
automatically.
Security features: such as authentication, SSL, etc.
Logging and auditing: Auditing is essential for everything in
your CI pipeline. Actions via API and UI are tracked.
CI/CD integrations: so you can fully automate your
pipelines.
Granular management: Complete control over who can access your
containers, track changes, and automatically scan for
vulnerabilities.
Public and private clouds: Quay is offered on its public cloud
or as an on-premises version (see below)
Security features: Quay offers important security features
such as vulnerability scanning, encryption, TLS and role-based access
controls.
If your organization needs it, Quay
can also be installed on premises
using OpenShift. This is
a very important feature for big organizations that run their own private clouds
and need to keep everything under their own infrastructure.
DigitalOcean Container Registry (DOCR)
Highlighted Feature: Integration with Digital Ocean Kubernetes
With DOCR you can build your container images on any machine and push them to
DigitalOcean Container Registry with the Docker CLI. DigitalOcean Kubernetes
integrates with it seamlessly to facilitate continuous deployment.
For more information about DigitalOcean Container Registry, please
click here.
Conclusion
In this post we reviewed several alternatives to Docker Hub. As the alternatives
discussed offer essentially the same features, the rule of thumb should be to use
what's most convenient for your team. As a guideline, you should choose the
service from your cloud provider, as it will integrate with the other products
you probably already use. If you're running a private cloud, Quay can be a good alternative.
The essential requirements to look for when choosing a container registry
should be: fully-managed hosting, private repositories, CI/CD integrations (so
you can automate your workflow) and robust security features.