
Wednesday, November 1, 2023

OWASP: Your Guide to Secure Web Development

With cyber threats on the rise, developers are increasingly expected to build trustworthy applications. Learn how OWASP can help.
Photo by Ante Hamersmit on Unsplash

Cyber threats are a trend that unfortunately will not go away any time soon. In fact, they will just keep growing. As developers, it's our duty to build solutions that are reliable and resistant to attacks, which is a complex undertaking.

Fortunately, the OWASP project provides a lot of information on how to secure applications. It's available for free, and is created and maintained by security experts. So let's learn more about it.

What is OWASP?

For starters, OWASP is an acronym for:
  • O – Open
  • W – Web 
  • A – Application
  • S – Security
  • P – Project

The Open Web Application Security Project is an online community that produces freely-available articles, methodologies, documentation, tools, and technologies in the field of web application security.

OWASP is a non-profit community of international security researchers and experts dedicated to improving the security of software, with a special focus on AppSec (application security).

What it offers

Contrary to what you might think, OWASP is not only about documentation. Here are some highlights of what the project offers:

  • A wide range of resources to help organizations mitigate security threats and reduce their exposure
  • Extensive documentation on cybersecurity practices
  • Tooling to learn, test and validate different aspects of security
  • Resources that help identify and mitigate security vulnerabilities in web applications and APIs
Some of my favourite resources are:
  • OWASP Top Ten (the ten most critical risks in AppSec)
  • OWASP Projects (an extensive and diverse compilation of projects and tools, as we’ll see)
  • Extensive technical documentation
  • Chapters (community for application security professionals around the world)
  • Conferences
  • Web Security Testing Guide (WSTG)
  • Education and Training
  • Industry Reports

Let's learn more about them.

When, How and Why to leverage OWASP

To keep it simple, you should use OWASP whenever you are building any application (client-facing or not) that interacts with data and serves users; in other words, for most projects deployed in production.

More importantly, you should leverage OWASP because:

  • Security and AppSec are HARD!
  • Security is a moving target
  • It offers a collection of best practices from security experts
  • It is continuously updated to cover the most popular attacks in AppSec
  • Btw, did I mention that security is HARD?

Don’t implement security-related features “your way”: most likely they won't be secure enough. Leverage well-established patterns such as those provided by OWASP.

Flagship Projects

So let's take a look at some flagship projects.

OWASP Top Ten

One of OWASP’s most popular projects, the OWASP Top 10 is a reference standard listing the most critical (and most common) web application security risks. The community updates the list regularly.

Active for over 20 years, the project receives contributions from the international community of security experts and researchers. One of its benefits is bringing awareness to the most critical attacks, as well as helping developers and security professionals prioritize their efforts in securing web applications.

Here are the top 10 risks for 2021:

OWASP Top 10 2021. Source: OWASP

OWASP Cheat Sheet Series

Another essential resource for building secure web applications, the OWASP Cheat Sheet Series provides easily accessible guides on security best practices for application developers and defenders to follow.

The project offers more than 80 cheat sheets, each a concise guide on a specific security topic.

You should leverage it as it helps developers and security professionals prioritize their efforts when securing web applications.

OWASP Dependency-Check

OWASP Dependency-Check is a Software Composition Analysis (SCA) tool suite that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities.
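As a hedged sketch of how a scan might look using the CLI distribution (the project name and paths are placeholders, and it assumes dependency-check.sh is on your PATH):

# scan a project's dependencies and write an HTML report of any known CVEs
dependency-check.sh --project "MyApp" --scan ./src --format HTML --out ./reports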

OWASP Juice Shop

OWASP Juice Shop is a very sophisticated (and deliberately insecure) web application for security training. It's also a great voluntary guinea pig for your security tools and DevSecOps pipelines!

Getting started with Juice Shop is easy! Check this GitHub page for more information.
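For example, if you have Docker installed, a local instance is a single command away:

# run Juice Shop locally, then browse to http://localhost:3000
docker run --rm -p 3000:3000 bkimminich/juice-shop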

OWASP Mobile Application Security

The OWASP Mobile Application Security project offers security standards for mobile apps and a comprehensive testing guide that covers the processes, techniques, and tools used during a mobile application security assessment.

OWASP Web Security Testing Guide

The OWASP Web Security Testing Guide project produces the premier cybersecurity testing resource for web application developers and security professionals. A PDF is available for free on GitHub.

Some highlights of WSTG:

  • Fantastic guide to testing the security of web applications and web services.
  • Created by security professionals and dedicated volunteers
  • Framework of best practices used by penetration testers all over the world.
  • 450+ pages of AppSec!
Don't forget to download your WSTG PDF directly from GitHub.

OWASP ZAP

One of my favourite ones, OWASP ZAP is the world’s most widely used web app scanner. Free and open source. Actively maintained by a dedicated international team of volunteers. ZAP is a free alternative to the very popular (and excellent) Burp Suite.

Some of the features available on OWASP ZAP:

  • Automated Scanning
  • Manual Testing
  • Spidering and Crawling
  • Active and Passive Scanning
  • Alerts and Reporting
  • Session Management
  • Fuzzer
  • Authentication Support
  • Plug-in Support
  • WebSocket Testing
  • Automation and Integration
  • Community and Updates
  • Multi-Platform Support
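To give a taste of the automated scanning feature, here's a hedged sketch of ZAP's baseline scan run through Docker (the image name reflects ZAP's current registry and may differ for your setup; the target must be one you're authorized to test):

# passively scan a target and write an HTML report to the current directory
docker run -v $(pwd):/zap/wrk/:rw -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://www.example.com -r report.html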

OWASP Amass

The OWASP Amass tool performs network mapping of attack surfaces and external asset discovery, using open source information gathering and active reconnaissance techniques.
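As a quick example, a basic subdomain enumeration looks like this (for a domain you're authorized to assess):

# enumerate subdomains and related assets for a domain
amass enum -d example.com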

Conclusion

As cyber threats grow, developers should protect their applications from increasingly complex and sophisticated attacks. For that, OWASP is an essential project to know, study and use.

Hope it helps.

Wednesday, June 1, 2022

Stress test your cloud applications with Azure Chaos Studio

Azure Chaos Studio makes it possible to stress test your applications directly from Azure, and can significantly help in your business continuity and disaster recovery (BCDR) strategies.

Azure Chaos Studio
Source: Azure

Azure Chaos Studio is a new service in Azure that allows you to test and improve the reliability of your applications. With it, teams can quickly identify weak spots in their architecture, addressing the enterprise goals of business continuity and disaster recovery (BCDR).

Subjecting applications to real or simulated faults allows observing how applications respond to real-world disruptions.

Running chaos experiments used to be a complex task that required deploying equally complex workloads. With Chaos Studio it becomes much simpler, since the service is available directly from the Azure portal.

Azure Chaos Studio - Console
Source: Azure

What is Chaos Engineering?

Chaos engineering is a practice that helps teams measure, understand and improve their cloud applications by subjecting those applications to failures in controlled experiments. This practice helps identify weak spots in your architecture which, when fixed, increase your service's resilience.

Why Chaos Engineering?

The problem that Chaos Studio tries to solve is not new. Disaster recovery and business continuity are usually treated very seriously by organizations as outages can significantly impact reputations, revenues, and much more.

That said, practicing chaos engineering is a must for organizations actively working on a business continuity and disaster recovery (BCDR) strategy. These drills ensure that applications can recover quickly and preserve critical data during failures.

Another important factor to consider is high availability (HA). Chaos engineering helps validate application resilience against regional outages, network configuration errors, high load, and more.

Features

Some of the most interesting features provided by Azure Chaos Studio are:

  • Test resilience against real-world incidents, like outages or high CPU utilization
  • Reproduce incidents to better understand the failure.
  • Ensure that post-incident repairs prevent the incident from recurring.
  • Prepare for a major event or season with "game day" load, scale, performance, and resilience validation.
  • Do business continuity and disaster recovery (BCDR) drills to ensure that your application can recover quickly and preserve critical data in a disaster.
  • Run high availability (HA) drills to test application resilience against region outages, network configuration errors and high stress events.
  • Develop application performance benchmarks.
  • Plan capacity needs for production environments.
  • Run stress tests or load tests.
  • Ensure that services migrated from an on-premises or other cloud environment remain resilient to known failures.
  • Build confidence in services built on cloud-native architectures.
  • Validate that live site tooling, observability data, and on-call processes still work in unexpected conditions.

How to get started

Getting started with Azure Chaos Studio is simple: just log into your Azure account and follow the steps in the official documentation.
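As a hedged sketch, once you've created an experiment in the portal, starting it from the command line might look like this (this assumes the Azure CLI chaos extension; the extension name and resource names are assumptions on my part):

az extension add --name chaos        # extension name is an assumption
az chaos experiment start --resource-group myResourceGroup --name myExperiment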


Monday, May 2, 2022

Managed Grafana now available on Azure

It's now possible to run Grafana natively on Azure. Read on to understand.

Source: Azure

Grafana, the most popular open-source analytics visualization tool, is now available on Azure as a managed service. With it, customers can run Grafana natively within the Azure cloud platform without needing to provision or manage the backend services required to run it.

Why use Grafana?

With Grafana, users can bring together logs, traces, metrics, and other disparate data from across an organization, regardless of where they are stored. With Azure Managed Grafana, the Grafana dashboards users are already familiar with are now integrated seamlessly with the services and security of Azure.

Features

Azure Managed Grafana is a fully managed service for analytics and monitoring solutions. It's supported by Grafana Enterprise, which provides extensible data visualizations. Quickly and easily deploy Grafana dashboards with built-in high availability and control access with Azure security.

Source: Azure

Azure Managed Grafana also provides a rich set of built-in dashboards for various Azure Monitor features to help customers easily build new visualizations. For example, some features with built-in dashboards include Azure Monitor application insights, Azure Monitor container insights, Azure Monitor virtual machines insights, and Azure Monitor alerts.

How to get started

Getting started with Grafana on Azure is easy: check Azure's Managed Grafana page and its documentation.
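As a hedged sketch, creating a workspace from the Azure CLI might look like this (assuming the Managed Grafana extension; resource names are placeholders):

az extension add --name amg          # Azure Managed Grafana extension
az grafana create --name myGrafana --resource-group myResourceGroup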


Thursday, March 17, 2022

Why use Vim

Depending on your preconceptions, Vim may look exotic or sexy. Let's review those assumptions and provide rational reasons to use this fantastic text editor.
Photo by Alex Knight on Unsplash

It may be possible that you've heard about Vim. It may be possible that you haven't. Depending on your background, it may even be possible that you have preconceptions about it. In this post, let's review those assumptions and provide concrete reasons to use this fantastic text editor.

This article is an adaptation of a post originally published by me on Vim4us. I'm re-publishing it here, with a few tweaks, for a wider audience.

Vim is ubiquitous

Vim has been around for almost thirty years. Due to its simplicity, ubiquity and low resource requirements, it's the preferred editor of sysadmins worldwide.

Easy to install

Vim is also easy to install on Windows and macOS, and is packaged in most Linux distros, meaning that even if it isn't installed on your system, Vim is one line from the terminal or two clicks from your software manager.
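For example (package names may vary slightly per platform):

sudo apt install vim      # Debian, Ubuntu
sudo dnf install vim      # Fedora, RHEL
brew install vim          # macOS with Homebrew
choco install vim         # Windows with Chocolatey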

Vim is lightweight

Differently from most editors, Vim is very lightweight. The installation package is only about 10 MB and, depending on your setup, memory consumption stays around 20 MB. Compare that with most text editors, especially Electron-based ones like Visual Studio Code: install size starts at 200 MB, storage requirements approach 1.5 GB, and memory consumption quickly reaches 1 GB (50 times more!), making them slow even on modern hardware.

If you're running a Mac, a low-end computer, a phone, or even a Raspberry Pi, Vim is definitely a good option for you.

Vim is stable

As previously said, Vim has been around for almost 30 years, and it will probably be around for at least two more decades. Learning Vim is an excellent investment as you will be able to use your knowledge for the next two decades at least.

Compare that to the editor you use today (Eclipse, Visual Studio, Sublime Text, Visual Studio Code) - can you really guarantee you'll be using them ten years from now?

Vim is language-independent

Vim works well with anything you want, as long as it's text. Vim handles most file formats by default, has locales, can be localized, supports right-to-left scripts such as Arabic and Hebrew, and comes with built-in support (including syntax highlighting) for most languages.

Vim respects your freedom

Vim does not contain any built-in telemetry. It's (unfortunately) common these days for companies to harvest usage statistics in the name of improving their products. Sysadmins trust that Vim will not reach out to the network to run ad-hoc requests.

Vim is efficient

Vim is brilliant in how it optimizes your use of the keyboard. We'll talk about that later but for now, understand that its combination of multiple modes, motions, macros and other brilliant features makes it light-years ahead of other text editors.

Thriving Ecosystem

Stop for a second and think about which feature you couldn't live without in your current text editor. The answer will probably be that Python or Go extension, meaning that what you'd miss is not actually the editor but its ecosystem.

Vim has a brilliant ecosystem. You'll find thousands of extensions covering anything you need. You can also host your extensions anywhere (on GitHub, for example) without being locked by any vendor. You could also host them in private/corporate repos just for your team or share on public directories like Vim Awesome.

Vim is ultra-customizable

Even if by default Vim has most of what you need, it's important to understand that Vim lets you change pretty much everything. For example, you can make temporary/local customizations (by using the Ex mode), permanent customizations (by changing your .vimrc) or even customizations based on file type, as the sketch below shows.
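Here's a minimal .vimrc sketch; the settings chosen are just illustrative:

" ~/.vimrc: permanent customizations
set number                                   " show line numbers
set expandtab shiftwidth=4                   " indent with 4 spaces
syntax on                                    " enable syntax highlighting
autocmd FileType yaml setlocal shiftwidth=2  " per-filetype customization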

Vim is always getting better

Vim is actively developed, meaning it keeps getting better. Vim users get security patches and new features all the time. Vim is also updated to accommodate the latest upgrades to modern operating systems while still supporting older ones!

Huge Community

Vim's community is huge and you can get help easily. These days, the most active discussions happen on Vim's mailing lists, Stack Exchange, IRC, YouTube and, of course, Reddit.

Extensive documentation

Learning how to learn Vim is the key to a continuous understanding of the tool and not getting frustrated. There are many ways to get help on Vim: using its built-in help system, using the man pages and obviously, accessing the communities listed above.

Vim is free

These days it may seem odd to say that Vim is free. But Vim's freedom goes beyond its price: you're also free to modify it to your needs and to deploy it wherever you want. Vim's developers also have a strong commitment to helping people in need around the world.

GUI-less

Vim also runs GUI-less, meaning it runs in your terminal. So you get a full-featured text editor on any system you're working on, regardless of whether it's a local desktop or a remote supercomputer. This feature is essential for sysadmins and developers who often need to modify text files on remote machines through an SSH connection.

Rich out-of-the-box toolset

Vim comes with fantastic tooling by default: powerful search, regular expression support, syntax highlighting, text sort, integrated terminal, integrated file manager, cryptography, color schemes, plugin management and much more. All without a single plugin installed!

Vim integrates into your workflow

Differently from other text editors, which force you into their way of working, Vim adjusts seamlessly to your workflow via powerful customization, extension support, integrated shell support and the ability to pipe data in and out of it.

Vim can be programmed

Want to go the extra mile? Vim also has its own language, called VimL (also known as Vimscript). With it you can create your own plugins and optimize the editor even further to your needs.

Vim will boost your productivity

There are multiple ways Vim will boost your productivity. First, Vim's extensive use of the home row saves you from having to reach for the arrow keys (or even worse, the mouse) to do your work. Second, with Vim you can quickly create macros to reproduce repetitive operations. Third, the combination of motions, plugins, custom shortcuts and shell integration will boost your productivity way more than you could imagine.

Vim will make you type better and faster

Being keyboard-based, Vim's home-row-centered workflow will help force you to type better. With Vim you'll realize that you probably move your hands way more than you should, and you will significantly increase your typing speed.

Vim will make you learn more

Most editors these days do too much. Yes, part of that is imposed on us by languages that require a lot of metadata (Java and C#, for example). One problem with that is that you end up relying on the text editor much more than you need to. Without access to Eclipse or Visual Studio, you may even start feeling impostor syndrome.

With Vim, despite it being just as capable, you'll feel closer to your work, resulting in a better understanding of what you're doing. You'll also realize that you learn more and better memorize the contents of what you're working on.

Conclusion

In this post we provided many reasons why one should learn Vim. Vim is stable, ubiquitous and supported by an engaged, growing community. Given all its features, Vim is definitely a good tool to learn now so you can harvest the benefits for decades to come.


Thursday, February 3, 2022

Build .NET apps on Google Cloud Functions

It's now possible to build serverless .NET apps on Google Cloud Functions

Source: Google Cloud Blog

Among the many benefits of using .NET on Google Cloud is the ability to build and run .NET apps on a serverless platform like Google Cloud Functions. Now that it's possible to run .NET apps on Cloud Functions, let's understand how it all works.

What is Cloud Functions?

Cloud Functions is Google Cloud’s Function-as-a-Service (FaaS) platform that allows developers to build serverless apps. Since serverless apps don't require you to provision or manage servers, Cloud Functions is a great fit for mobile or IoT backends, real-time data processing systems, video, image and sentiment analysis, and even things like chatbots or virtual assistants.

FaaS

To develop .NET apps that are compatible with Cloud Functions, Google has made available the Functions Framework for .NET on GitHub. The Functions Framework lets you write lightweight functions that run in many different environments, including Cloud Functions, Cloud Run, Knative-based environments, and your local development machine.

Building your C# App

Assuming you're using .NET Core, the first thing you'll need is to build and run a deployable container on your local machine. For that, make sure you have both Docker and the pack tool installed.
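Before building the container, you need a function to build. Here's a hedged sketch using Google's template package for the dotnet CLI (the same template package the event functions section below refers to):

dotnet new -i Google.Cloud.Functions.Templates   # install the Functions templates
mkdir HelloFunctions
cd HelloFunctions
dotnet new gcf-http                              # create an HTTP function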

Next, build a container from your function using the Functions buildpacks:

pack build \
  --builder gcr.io/buildpacks/builder:v1 \
  --env GOOGLE_FUNCTION_SIGNATURE_TYPE=http \
  --env GOOGLE_FUNCTION_TARGET=HelloFunctions.Function \
  my-first-function

Start the built container:

docker run --rm -p 8080:8080 my-first-function
# Output: Serving function...

Send a request to this function by navigating to localhost:8080. You should see Hello, Functions Framework.

Cloud Event Functions

After installing the same template package described above, use the gcf-event template:
mkdir HelloEvents
cd HelloEvents
dotnet new gcf-event

VB and F# support

The templates package also supports VB and F# projects. Just use -lang vb or -lang f# in the dotnet new command. For example, the HTTP function example above can be used with VB like this:
mkdir HelloFunctions
cd HelloFunctions
dotnet new gcf-http -lang vb

Running your function on serverless platforms

After you've finished your project, you can use the Google Cloud SDK to deploy to Google Cloud Functions from the command line with the gcloud tool.

Once you have created and configured a Google Cloud project (as described in the Google Cloud Functions Quickstarts) and installed the Google Cloud SDK, open a command line and navigate to the function directory. Use the gcloud functions deploy command to deploy the function.

For the quickstart HTTP function described above, you could run:

gcloud functions deploy hello-functions --runtime dotnet3 --trigger-http --entry-point HelloFunctions.Function

Note that other function types require different command line options. See the deployment documentation for more details.

Trying Cloud Functions for .NET

To get started with Cloud Functions for .NET, read the quickstart guide and learn how to write your first functions. You can even try it out with a Google Cloud Platform free trial.


Monday, January 3, 2022

Why use the terminal

The command-line (aka terminal) is a scary thing for most users. But understanding it can be a huge step in your learning journey and add a significant boost to your career in tech.

Photo by Tianyi Ma on Unsplash

Depending on your technical skills, the command-line interface (also known as CLI or terminal) may look scary. But it shouldn't! The CLI is a powerful and resourceful tool that every person aspiring to greater tech skills should learn and be comfortable with. In this article, let's review the many reasons why you should learn and use the command line, commonly (and often incorrectly) referred to as the terminal, shell, bash or CLI.

This article is an adaptation of another one originally published by me on Linux4us. I'm re-publishing it here, with a few tweaks, for a wider audience.

Ubiquitous

The command-line interface (CLI) is available in every operating system, not only in Linux. Very frequently, this is where developers and system administrators spend a lot of time. And if you want to work with Linux, development, the cloud or technology in general, you'd better start learning it.

Terminals are available in every operating system including Linux, Windows and Macs

Powerful

CLI-based apps are much more powerful than their GUI-based equivalents. That happens because GUIs are usually wrappers around libraries that power both the GUIs and the terminal apps. Very frequently, these libraries contain way more functionality than what's available in the graphical interface because, since software development takes time and costs money, developers only add the most popular features to GUI apps.

For example, take a look at the plethora of options that the GNU find tool provides us:

Does your GUI-based find tool have all those options?
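As a small sample of that power, a single find invocation can combine name, age and size filters:

# list log files older than 7 days and larger than 1 MB
find /var/log -name '*.log' -mtime +7 -size +1M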

Quicker

Common and repetitive tasks are also faster in the terminal, with the advantage that you will be able to repeat and even schedule these tasks so they run automatically, freeing you to do actual work and leaving the repetitive tasks to the computer.
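For example, scheduling a task is one cron line away (a hedged sketch; the script path is a placeholder):

# run a backup script every day at 02:00 (add this line via: crontab -e)
0 2 * * * /home/user/backup.sh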

For example, consider this standard development workflow:

  1. download code from GitHub
  2. make changes
  3. commit code locally
  4. push changes back to GitHub

If you were doing the above using a GUI-based git client (for example, Tortoise Git), the workflow would be similar to the below, taking you approximately 20 minutes to complete:

  1. Open Tortoise Git's web page
  2. Click Download
  3. Next -> Next -> Next -> Finish
  4. Right-click a folder in Windows Explorer (or Nautilus, or Finder) -> Select clone -> Paste the URL -> Click OK
  5. Wait for the download to complete -> Click OK
  6. Back to Windows Explorer -> Find File -> Open it
  7. Make your changes (by probably using GEdit, KEdit or Visual Studio Code) -> Save
  8. Back to Windows Explorer
  9. Right Click -> Commit
  10. Right Click -> Push
  11. Take a deep breath

In the terminal (for example, in Ubuntu), the workflow would be equivalent to the below and could be completed in less than 2 minutes:

sudo apt update && sudo apt install git -y   # install git
git clone <url>     # clone the GitHub repo locally
vim file            # edit the file using a text-based editor (vim or nano), then save
git commit -m <msg> # commits the file locally
git push  # push the changes back to our GitHub repo

Automation

Terminal/CLI-based tasks can be scripted (automated) and easily repeated, meaning that you will be able to optimize a big part of your workflow. Another benefit is that these scripts can be easily shared, exactly as professional developers do!

So let's continue the above example. Our developer realized she is wasting too much time in the GUI and would like to speed up her workflow even more. She learned some bash scripting and wrote the function below:

gcp ()
{
    msg="More updates";
    if [ -n "$1" ]; then
        msg=$1;
    fi;
    git add ./ && git commit -m "$msg" && git push
}

She's happy because now, as soon as she finishes her changes, she can run the command below from the terminal:

gcp "<commit-msg>"

What previously took 5 minutes is now done in 2 seconds (1.8 seconds to write the commit message and 0.2 to push the code upstream). A significant improvement in her workflow. Imagine how much more productive she will be over the course of her career!

It's important to always think about how you can optimize your workflow. These small optimizations add up significantly over time.

Lightweight

The CLI is not only faster and more lightweight than equivalent GUI-based applications, it's also quicker at running the same commands. For example, consider a Git client like Tortoise Git. It's supposed to be lightweight (which most GUI apps aren't), yet it takes 3 s to load completely and uses 10 MB of memory:

Our GUI-based git client TortoiseGit

Now take a look at its CLI equivalent. git status runs in 0.3 s and consumes less than 1 MB. In other words, 20 times more efficient memory-wise and 10 times faster.

A simple CLI command is 20x more efficient and 10x faster than its GUI equivalent

Disk Space Efficient

Another advantage of terminal apps over their GUI-equivalents is reduced disk space. For example, contrast these two popular apps. Can you spot the differences?

Application          Installation Size     Total Size            Memory Usage
Visual Studio Code   80 MB                 300 MB                500 MB (on sunny days)
Nano                 0.2 MB                0.8 MB                3 MB
Difference           400x more efficient   375x more efficient   160x more efficient

Extensible

Another important aspect is that the CLI is extensible. Skilled users can easily extend its basic functionality using built-in features like pipes and redirections, combining inputs and outputs from different tools.

For example, sysadmins could list the first two users in the system who use Bash as a shell, ordered alphabetically with:

cat /etc/passwd | grep bash | cut -d : -f 1 | sort | head -2

What's interesting about the above command is how we combined five different tools to get the results we need. Once you master the Linux terminal, you too will be able to use these tools effectively to get work done significantly faster!

This is a more advanced topic; we'll cover it in more detail in future posts.

Customizable

As you might expect, the terminal is extremely customizable. Everything from the prompt to functions (as seen above) and even custom keybindings can be customized. For example, in Linux, binding the shortcut Ctrl+V to open the Vim text editor in the terminal is simple. Add this to your .bashrc file:

bind '"\C-V":"vim\n"'

Extensive range of Apps

Contrary to what most newcomers think, the terminal has apps too! You will find apps for pretty much any use case, for example: file managers (Ranger), system monitors (htop), email clients (Mutt), music players, podcast clients and even web browsers.

The above list is far from comprehensive; it's just to give you an idea of what you'll find out there.

For example, here's the Castero Podcast app running on a terminal:

Source: GitHub

Professional Development

Want to work with Linux, as a developer or with the cloud? Another important aspect of using the terminal is that it will make you more ready for the job market. Since servers usually run Linux and don't have GUIs, you will end up having to use some of the above tools on your day-to-day work. Developers frequently use it to run repetitive tasks, becoming way more productive. So why not start now?

Learn more about your System

Hopefully at this point you realize that you will learn way more about your system and computers in general when you use the terminal. And I'm not talking solely to Linux users. Windows and Mac users will learn a lot too! This is the secret sauce that the most productive developers want you to know!

It's also a huge win for testing new tools, maintaining your system, installing software, fixing issues and tweaking as you wish.

Getting Started

Ready to get started on your terminal/CLI journey? Here's a video that may serve as a good intro: 

Conclusion

Every modern computer has a terminal. Learning it will save you time, allow you to automate common actions, make you learn more about your system, grow professionally and be more productive. Well worth the effort, isn't it?


Wednesday, December 1, 2021

Vimium, the hacker's browser

Vimium is an essential tool for those looking to increase their productivity, regardless of whether you're on Windows, Mac or Linux. Read on to understand.
Photo by James Pond on Unsplash

If you've read this blog before, you probably know my perfect setup: Fedora Linux, the i3 window manager (or Sway), the terminal, Ranger, Vim and lots, lots of automation. I got to this setup after meticulously searching for tools that could improve my workflow so I could be more productive while doing less. However, during that journey I realized that the browsing experience - which takes a lot of our productive time - wasn't as optimal as it could be, so I started looking for ways to optimize it as well.

It turns out that Vimium is the key ingredient in that setup. In this post, let's learn what Vimium is, what it offers, how to use it, and how you too can be more productive, regardless of what your perfect setup might be.

About Vimium

So what's Vimium? Vimium is a browser extension that provides keyboard shortcuts for navigating and controlling your browser, inspired by the Vim text editor.

But why use Vimium?

So why should you care for yet another browser extension? Because Vimium:
  • will increase your productivity: by allowing you to navigate the web without using the mouse.
  • makes you work faster: once you get used to Vimium you'll be able to accomplish work faster.
  • is highly customizable: allowing you to set your own keyboard shortcuts.
  • is simple to use: once you understand how it works, it'll be very intuitive.
  • has Vim-like keybindings: this is what makes Vimium feel immediately familiar to Vim users.
  • helps reduce your fatigue: during the day, we make thousands of movements between the keyboard and the mouse. Keeping your hands centered on the keyboard will save you a lot of energy.
  • is an active open-source project: mature and healthy open-source projects are important as they guarantee you'll receive updates, fixes and improvements. You can find its source code here.

Supported browsers

Currently Vimium runs on most browsers including Google Chrome, Firefox, Edge and Brave.

Why based on Vim?

Contrary to what you may have heard, Vim is a fantastic text editor. Vim emphasizes good typing practices by leveraging the keys located around the home row. The home row (where F and J sit) is the most efficient place for your fingers, causing less muscular stress and reduced arm movement. Vimium brings these concepts to the browser, transforming the traditional point-and-click browsing experience into keyboard-driven productivity.

Installing Vimium

Installing Vimium is very simple. Just open the app store for your browser and click Add extension (or equivalent) button on the extension page. Google Chrome users check this page, Firefox users can find Vimium here.
Installing Vimium on your browser should be as simple as navigating to the links above, clicking the Add extension button and confirming. No restart is necessary.

Using Vimium

With Vimium installed, let's start with the basics. The most essential shortcuts are:
  • f: pressing f will make Vimium highlight all hyperlinks. Entering the key opens the link on the same tab
  • F: same as f but opens on another tab
  • x: close the current tab
  • j: scroll down
  • k: scroll up
  • d: scroll down half a page
  • u: scroll up half a page
  • gg: scroll to top of the page
  • G: scroll to bottom of the page
  • H: go to the previous page
  • L: go to the next page
  • b: open a bookmark
  • /: search
Vimium does not run on all pages. If the V icon on your toolbar is grey, it's turned off. Vimium also does not run by default in Private Mode, but you can configure it to on the extension's settings page.

Managing Tabs

Vimium can also manage your tabs. The most used commands are:
  • x: close the current tab
  • F + link: opens link in another tab
  • J: previous tab
  • K: next tab
  • g<num>: goes to tab <num>
  • t: create new tab
  • yt: duplicate current tab
  • X: undo close tab

Getting Help

With Vimium installed, press ? to view the default shortcuts. You should see a screen like this:

A simple example

So let's do a simple example using just the keyboard. With Vimium installed, open its GitHub page and press f. You should see:
As you can see, all the yellow boxes contain letters. Typing them tells the browser to click the corresponding link. For example, if I pressed S, I'd be taken to the link that here points to, on the same tab. Need to continue working? Just open a new tab and go from there. Change tabs with J or K (uppercase), close with x, rinse and repeat.
Had I used F instead of f, typing S next would open here in another tab.

Advanced Features

As previously said, Vimium is also highly configurable. Because that's out of the scope of this post, I'll simply point you to the official configuration documentation. There's a lot more there, and once you get used to the tool you'll probably want to explore and customize it to your needs.

Conclusion

In this post we explained how using Vimium can increase your productivity and reduce your fatigue. I hope you are excited to try it out. If you want to learn other productivity hacks, check out the Ranger file manager and the Vim text editor. Together with Vimium, these tools will make your workflow way more productive.


Monday, November 1, 2021

Docker and Containers - Everything you should know

Much has been discussed about Docker, containers, virtualization, microservices and distributed applications. In this post, let's recap the essential concepts and review related technologies.
Photo by chuttersnap on Unsplash

Much has been discussed about Docker, microservices, virtualization and containerized applications. So much that most people probably haven't caught up. As the ecosystem matures and new technologies and standards come and go, the container ecosystem can be confusing at times. In this post we will recap the essential concepts and build a solid reference for the future.

Virtualization

So let's start with a bit of history. More or less 20 years ago, the industry saw big growth in processing power, memory and storage, along with a significant decrease in hardware prices. Engineers realized that their applications weren't utilizing the resources effectively, so they developed virtual machines (VMs) and hypervisors to run multiple operating systems in parallel on the same server.
Source: Resellers Panel
A hypervisor is computer software, firmware or hardware that creates and runs virtual machines. The computer where the hypervisor runs is called the host, and the VM is called a guest.

The first container technologies

As virtualization grew, engineers realized that VMs were difficult to scale, hard to secure, utilized a lot of redundant resources and maxed out at a dozen per server. Those limitations led to the first containerization tools listed below.
  • FreeBSD Jails: FreeBSD jails appeared in 2000, allowing the partitioning of a FreeBSD system into multiple subsystems. Jails was developed so that the same server could be shared securely among multiple users.
  • Google's lmctfy: Google also had its own container implementation called lmctfy (Let Me Contain That For You). According to the project page, lmctfy used to be Google’s container stack, whose efforts now seem to have moved to runc.
  • rkt: rkt was another container engine for Linux. rkt has ended along with CoreOS's transition into Fedora CoreOS. Most of the efforts on that front should now be happening in Podman.
  • LXC: released in 2008, the Linux Containers project (LXC) is another container solution for Linux. LXC provides a CLI, tools, libraries and a reference specification that's followed by Docker, LXD, systemd-nspawn and Podman/Buildah.
  • Podman/Buildah: Podman and Buildah are also tools to create and manage containers. Podman provides an equivalent Docker CLI and improves on Docker by neither requiring a daemon (service) nor requiring root privileges. Podman's available by default on RH-based distros (RHEL, CentOS and Fedora). 
  • LXD: LXD is another system container manager. Developed by Canonical, Ubuntu's parent company, it offers pre-made images for multiple Linux distributions and is built around a REST API. Clients, such as the command line tool provided with LXD itself then do everything through that REST API. 

Docker

Docker first appeared in 2008 as dotCloud and became open-source in 2013. Docker is by far the most used container implementation. According to Docker Inc., more than 3.5 million Docker applications have been deployed and over 37 billion containerized applications downloaded.

Docker grew so fast because it allowed developers to easily pull, run and share containers remotely via Docker Hub, with commands as simple as:
docker run -it nginx /bin/bash

Differences between containers and VMs

So what's the difference between containers and VMs? While each VM has its own kernel, applications, libraries and services, containers don't, as they share some of the host's resources. VMs are also slower to build, provision, deploy and restore. Since containers provide a way to run isolated services, are lightweight (some are only a few MBs), start fast and are easier to deploy and scale, they became today's standard.

The image below shows a visual comparison between VMs and Containers:
Source: ZDNnet

Why Containers?

Here are guidelines that could help you decide if you should be using containers instead of VMs:
  • containers share the operating system's kernel with other containers
  • containers are designed to run one main process, VMs manage multiple sets of processes
  • containers maximize the host's resource utilization 
  • containers are faster to run, download and start
  • containers are easier to scale
  • containers are more portable than VMs
  • containers are usually more secure due to the reduced attack surface
  • containers are easier to deploy 
  • containers can be very lightweight (some are just a few MBs)
Containers are not all advantages, though. They also bring many technical challenges and will require you not only to rethink how your system is designed but also to use different tools. Take a look at the Ecosystem section below to understand.

Usage of Containers

And how much are containers being used? According to a Cloud Native Computing Foundation survey, 84% of companies use containers in production today, a 15% increase from the previous year. Another good metric is provided by the Docker Index:

Open Collaboration

As the ecosystem stabilized, companies such as Amazon, Google, Microsoft and Red Hat collaborated on a shared format under the Open Container Initiative (OCI). OCI was created from standards and technologies developed by Docker, such as libcontainer. The standardization means that today you can run Docker and other OCI-compliant containers, such as Podman, on any OS.

The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation, is another significant entity in the area. CNCF hosts many of the fastest-growing open source projects, including Kubernetes, Prometheus, and Envoy. CNCF's mission is to promote, monitor and host critical components of the global technology infrastructure.

The Technologies

Now let's dive into the technologies used by Docker (and OCI containers in general). The image below shows a detailed overview of the internals of a container. For clarity, we'll break the discussion into user space and kernel space.

User space technologies

In userland, Docker and other OCI containers essentially utilize these technologies:
  • runc: runc is a CLI tool for spawning and running containers. runc is a fork of libcontainer, a library developed by Docker that was donated to the OCI and includes all modifications needed to make it run independently of Docker. 
  • containerd: containerd is a project developed by Docker and donated to the CNCF that builds on top of runc adding features, such as image transfer, storage, execution, network and more.
  • CRI: CRI is the containerd plugin for the Kubernetes Container Runtime Interface. With it, you could run Kubernetes using containerd as the container runtime. 
  • Prometheus: Prometheus is an open-source systems monitoring and alerting toolkit. Prometheus is an independent project and member of the Cloud Native Computing Foundation.
  • gRPC: gRPC is an open source remote procedure call system developed by Google. It uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts.
  • Go: yes, some of the tools are developed in C but Go shines in the area. Most of the open-source projects around containers use Go including: runc, runtime-tools, Docker CE, containerd, Kubernetes, libcontainer, Podman, Buildah, rkt, CoreDNS, LXD, Prometheus, CRI, etc. 

Kernel space technologies

In order to provide isolation, security and resource management, Docker relies on the following features from the Linux Kernel:
  • Union Filesystem (or UnionFS, UFS): UnionFS is a filesystem that allows files and directories of separate file systems to be transparently overlaid, forming a single file system. Docker supports several implementations, including btrfs and zfs.
  • Namespaces: Namespaces are a feature of the Linux kernel that partitions kernel resources so that one set of processes sees one set of resources while another set of processes sees a different set. Specifically, Docker uses the pid, net, ipc, mnt and uts namespaces.
  • Cgroups: Cgroups allow you to allocate resources — such as CPU time, system memory, network bandwidth, or combinations of these resources — among groups of processes running on a system. 
  • chroot: chroot changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name files outside the designated directory tree.
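To get a feel for namespaces, you can experiment with util-linux's unshare tool (a small sketch; requires root):

# start a shell in new PID and mount namespaces with its own /proc
sudo unshare --pid --fork --mount-proc bash
# inside the new namespace, ps sees only this shell and its children
ps aux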

Docker Overview

You've probably installed Docker on your machine, pulled images and executed them. Three distinct tools participate in that operation: two local Docker tools and a remote container registry. On your local machine, the two tools are:
  • Docker client: this is the CLI tool you use to run your commands. The CLI is essentially a wrapper to interact with the daemon (service) via a REST API.
  • Docker daemon (service): the daemon is a backend service that runs on your machine. The Docker daemon is the tool that performs most of the jobs such as downloading, running and creating resources on your machine.
The image below shows how the client and the daemon interact with each other:
Source: Docker Overview
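You can verify the client/daemon split yourself: the CLI is a wrapper over the daemon's REST API, which you can query directly (assuming the default Unix socket location):

# ask the Docker daemon for its version through its REST API
curl --unix-socket /var/run/docker.sock http://localhost/version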

Remote Registry

And what happens when you push your images to a container registry such as Docker Hub? The next image shows the relationship between the client, the daemon and the remote registry.
Source: Docker Overview

Images and Containers

Moving lower on the stack, it's time to take a quick look at Docker images. Internally, a Docker image can look like this:

Important concepts about images and containers that you should know:
  • Images are built in layers, utilizing the union file system.
  • Images are read-only. Modifications made by the user are stored in a separate writable layer managed by the Docker daemon, and are removed as soon as you remove the container.
  • Images are managed using docker image <operation> <imageid>
  • An instance of an image is called a container.
  • Containers are managed with docker container <operation> <containerid>
  • You can inspect details about your image with docker image inspect <imageid>
  • Images can be created with docker commit, docker build or Dockerfiles
  • Every image has to have a base image; scratch is the empty base image.
  • Dockerfiles are templates to script images. Developed by Docker, they became the standard for the industry.
  • The docker tool allows you to not only create and run images but also to create volumes, networks and much more.
For more information about how to build your images, check the official documentation.
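As a hedged illustration, here's a minimal Dockerfile (the file and image names are placeholders):

FROM alpine:3.18                        # every image starts from a base image
RUN apk add --no-cache curl             # each instruction adds a layer
COPY app.sh /usr/local/bin/app.sh       # copy a script into the image
ENTRYPOINT ["/usr/local/bin/app.sh"]    # the container's main process

You would then build and run it with docker build -t my-app . followed by docker run --rm my-app.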

Container Security

Due to the new practices introduced by containers, new security measures had to be applied. By default, containers rely heavily on the security measures of the host operating system's kernel. Docker applies the principle of least privilege to provide isolation and reduce the attack surface. In essence, the best practices around container security are:
  • sign your containers
  • only use images from trusted registries
  • harden the host operating system
  • enforce the principle of least privilege and do not elevate access to devices
  • offer centralized logging and monitoring
  • run automated vulnerability scanning, as the example below shows
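For instance, automated scanning can be as simple as pointing a scanner like Trivy (one option among many; the image name is a placeholder) at an image:

# scan an image for known vulnerabilities
trivy image my-app:latest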

The Ecosystem

Since this post is primarily about containers, I'll defer discussion of parts of the ecosystem to future posts. However, it's important to list the main areas that people working with containers, microservices and distributed applications should learn:
  • Container Registries: remote registries that allow you to push and share your own images.
  • Orchestration: orchestration tools deploy, manage and monitor your microservices.
  • DNS and Service Discovery: with containers and microservices, you'll probably need DNS and service discovery so that your services can find and talk to each other.
  • Key-Value Stores: provide a reliable way to store data that needs to be accessed by a distributed system or cluster.
  • Routing: routes the communication between microservices.
  • Load Balancing: load balancing in a distributed system is a complex problem. Consider specific tooling for your app.
  • Logging: microservices and distributed applications will require you to rethink your logging strategy so they're available on a central location.
  • Communication Bus: your applications will need to communicate and using a Bus is the preferred way.
  • Redundancy: necessary to guarantee that your system can sustain load and keep operating on crashes.
  • Health Checking: consistent health checking is necessary to guarantee all services are operating.
  • Self-healing: microservices will fail. Self-healing is the process of redeploying services when they crash.
  • Deployments, CI, CD: redeploying microservices is different than the traditional deployment. You'll probably have to rethink your deployments, CI and CD.
  • Monitoring: monitoring should be centralized for distributed applications.
  • Alerting: it's a good practice to have alerting systems on events triggered from your system.
  • Serverless: allows you to build and run applications and services without managing servers.
  • FaaS - Functions as a service: allows you to develop, run, and manage application functionalities without maintaining the infrastructure.

Conclusion

In this post we reviewed the most important concepts around Docker, containers, virtualization and the surrounding ecosystem. As you probably realized from the length of this post, the ecosystem around containers and microservices is huge - and keeps growing! We will cover much of what was addressed here in more detail in future posts.

In the next posts, we will start diving into the details of some of these technologies.


About the Author

Bruno Hildenbrand      
Principal Architect, HildenCo Solutions.