Thursday, December 12, 2019

An in-depth review of the RavenDB Cloud

The RavenDB Cloud may be an excellent choice for developers looking for a SaaS database in the cloud.

The RavenDB team recently launched the RavenDB Cloud. Since I have been working with RavenDB for some time, I thought it would be interesting to investigate the RavenDB Cloud more in depth.

In this article we will learn:

  1. About RavenDB;
  2. About cloud integrations (AWS, Azure, Google Cloud);
  3. How to create an account;
  4. How to deploy your free AWS instance;
  5. How to manage your server;
  6. How to interact with it through code;
  7. Potential risks;
  8. What's next?

About RavenDB

I’m a big fan of RavenDB. I've been using it in production for the last 5 years and the database has been fast, reliable and secure. Given that it also provides a robust and friendly C# API and lots of interesting features, it was definitely a good choice for our product.

If you want to know more about my RavenDB experience, please click here.

RavenDB 4 introduces many welcome enhancements. Below I highlight my favourites:
  • Management Studio – much faster, customizable and intuitive.
  • Speed – Raven 4 is faster than 3.5. All Raven 4 features tested (DB imports, queries, patches and custom implementations) showed significant improvements over the previous version.
  • Security – authentication now happens via certificates instead of usernames/passwords, and the database offers encrypted storage and backups out of the box.
  • New Server Dashboard – the new dashboard offers a holistic overview of the cluster, nodes and databases deployed on a cloud account.
  • Clustering – Raven 4 works in a cluster fashion instead of an instance fashion. This brings better performance, stability and consistency for applications.
  • Auto-Backups – you can set up the database to run periodic backups (incremental or not). This feature is also present on the RavenDB Cloud, helping to reduce the reliance on custom jobs and/or scripts. Backups can be encrypted and uploaded to different servers, including blob storage on AWS and Azure.
  • Ongoing Tasks – the new Manage Ongoing Tasks interface simplifies the configuration and deployment of important services such as ETL, SQL Replication and Backups.
  • RQL – RQL is the new Raven Query Language, a mix of LINQ and JavaScript. Much clearer and more intuitive than Lucene. See the Queries section for more information.
With all that covered, let’s now review some of the most interesting aspects of the RavenDB Cloud.

Licensing

Currently, three types of licenses are available: Free, Developer and Production. The main differences are:
  • Only one free instance per cloud account
  • The free version runs only on AWS US East 1
  • Some features (such as SQL replication) are only available on the Production version
If you would like to test RavenDB, remember that you can also run a local database.
Other aspects of the free version (as of December 2019) are shown below:

Services

Which services are available on the free version? In summary, a free RavenDB Cloud license allows you to:
  • Create a new RavenDB instance
  • Manage the RavenDB server and databases
  • Configure some aspects of the database (others were not available on the free version)
  • Import an existing database
  • Query data using a simple console tool
  • Partially test the SQL replication feature
  • Partially test the backup feature

AWS, Azure and Google Cloud Integration

The RavenDB Cloud can also be deployed on AWS and Azure’s most popular regions, and support for Google Cloud is in the works. That means the database can sit in the same datacenter as your application, reducing the latency between your services.

Pricing

The price varies by tier and time utilization. This is the estimated pricing for the Development tier (Dec 09, 2019):

Creating an Account

The process to create an account is simple. Just go to cloud.ravendb.net and click Get started for free:
Enter your email and proceed with the setup. You should get an access token to access your created account.

Creating your Instance

To create your database, log in to your account utilizing the email token and click Add Product:
Specify the required information:
And review your request to proceed with the deployment:

Deployment

With all the above information submitted, the deployment process starts. It took me around 2 minutes to have the new cluster provisioned. Once deployed, you should see it on the Raven Cloud portal:

Accessing the Instance

To access that cluster, click the Manage button, which takes us to the new Management Studio also available on RavenDB 4. New users will be required to install a new certificate. Once logged in, here’s what the new RavenDB Studio looks like:
Some of the interesting features of the RavenDB Studio are:
  • a nice overview of our databases on the right, from which you can quickly view failed indexes, alerts and errors
  • database telemetry, including CPU, memory, storage and indexing
  • database management
  • a global overview of the database cluster
  • real-time cluster monitoring
  • a customizable interface

Creating Databases

Let's now create and import databases. The process to create a new database is simple. Click Databases -> New Database:

Enter the database name and some other properties and the database is quickly created. The deployment of a new database takes less than 5 seconds. An empty database utilizes approximately 60 MB on disk.

Costs

Unfortunately I can't provide an estimate on costs, but I'd like to note that you should also consider costs for:
  • networking: data transfer charges will vary and will likely increase your bill.
  • storage: backup storage charges will also vary and will probably increase your bill.

Replication

Replication is configured directly through Raven Studio, either on creation or in the settings. The database administrator sets how many nodes of the cluster they'd like to use, chooses between dynamic/manual replication, and Raven handles all the rest. This is an important feature as it keeps your database up in case one node within your cluster fails.

Importing Databases

I also tested the database import and, more importantly, whether it would be easy to migrate data between Raven 3.5 and Raven 4. Luckily, the process of importing databases didn’t change much and a new RavenDB 4 accepts most of the imported data successfully. This is what the import process looks like:
For the record, my cloud instance imported 1.5 million records in just over 2 minutes and 40 seconds:

Managing your Server

From the Manage your Server section you'll have access to tools such as cluster, client configuration, logs, certificates, backups, traffic, storage, queries and more. You can see what's available below to manage your cluster:

Database Tools

Under Manage Ongoing Tasks you will also find interesting database-specific resources:

Backups

Backups are handled directly from Management Portal -> Database -> Settings -> Manage Ongoing Tasks tool:

I was pleasantly surprised that they offer backups to Amazon S3 buckets and Azure Blob Storage by default:
Once set up, you'll see that the automatic backup runs periodically:

Backup Considerations

Other important considerations regarding backups are:
  • The Free and Production tiers are regularly and automatically backed up.
  • You can define your own custom backup tasks, as you would with an on-premises RavenDB server.
  • A mandatory backup task stores a full backup every 24 hours and an incremental backup every 30 minutes.
  • Backups created by the mandatory routine are kept in RavenDB's own cloud; you have no direct access to the files.
  • You can view and restore them using your portal's Backups tab and the Management Studio.
  • RavenDB offers 1 GB of backup storage per product per month for free.
  • Backup storage usage is measured once a day, and you'll be charged each month based on your average daily usage.

External Replication

We can also configure external replication by selecting a Database / Settings / Manage Ongoing Tasks / Add Task / External Replication. The screenshot below shows the options:

SQL Replication

The Raven Cloud also supports SQL replication. Unfortunately this feature was not available on the free version. From what I could test, the feature didn’t change much from Raven 3.5 and runs well.
The next step is to write a transformation script. For example:
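A script along these lines (an illustrative sketch based on RavenDB's SQL ETL scripting model; the collection, table and property names are assumptions) maps each document to a row in a SQL table:

// For each Order document, compute the total cost and load a row
// into the "Orders" SQL table (loadTo<TableName> targets the configured table)
var orderData = {
    Id: id(this),
    Company: this.Company,
    TotalCost: 0
};

for (var i = 0; i < this.Lines.length; i++) {
    var line = this.Lines[i];
    orderData.TotalCost += line.PricePerUnit * line.Quantity;
}

loadToOrders(orderData);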
Once this is done, Raven will automatically replicate its records to the remote SQL database. This is excellent for reporting and, in case you need it, for querying your NoSQL data from a traditional SQL database.

Scheduled Backups

Scheduled backups can also be customized and are available at Database / Settings / Manage Ongoing Tasks / Add Task:

Scaling

Being clustered by default, RavenDB 4 can be easily scaled via the Portal. The documentation describes in detail how it can be configured. Below is a screenshot provided by RavenDB showing how it should work (the feature isn't available on the free tier):

Security

The RavenDB Cloud offers strong security features, including:
  • Authentication: RavenDB uses X.509 certificate-based authentication. All access happens via certificates, and all instances are encrypted using HTTPS / TLS 1.2 / X.509 certificates.
  • IP restriction: you can choose which IP addresses are allowed to contact your server.
  • Database Encryption: implemented at the storage level, with XChaCha20-Poly1305 authenticated encryption using 256-bit keys.
  • Encryption at Rest: the raw data is encrypted and unreadable without possession of the secret key.
  • Encrypted Backups: your mandatory backup routines produce encrypted backup files.

For more information, please read RavenDB on the Cloud: Security.

Cluster API

There's also a Cluster API for managing the cluster, which allows your DevOps team to script cluster operations.

For example, we can dynamically add a node to a cluster by running the following PowerShell script:
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$clientCert = Get-PfxCertificate -FilePath <path-to-pfx-cert>
Invoke-WebRequest -Method Put -URI "http://<server-url>/admin/cluster/node?url=<node-url>&tag=<node-tag>&watcher=<is-watcher>&assignedCores=<cores>" -Certificate $clientCert

Or with the equivalent in cURL:
curl -X PUT "http://<server-url>/admin/cluster/node?url=<node-url>&tag=<node-tag>&watcher=<is-watcher>&assignedCores=<cores>" --cert <path-to-cert>

Database API

Since we’re talking DevOps, it’s important to note that the Cluster API only manages nodes on the cluster. By itself, that feature still isn’t sufficient to bootstrap a new working environment, since databases, indexes and data would also be required.
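For illustration, here's a minimal sketch (my own, with placeholder names, not an official sample) of how indexes can also be defined and deployed from code so a fresh environment can be scripted end to end:

using System.Linq;
using Raven.Client.Documents;
using Raven.Client.Documents.Indexes;

// Placeholder entity and index used only for this example
public class Order
{
    public string Company { get; set; }
}

public class Orders_ByCompany : AbstractIndexCreationTask<Order>
{
    public Orders_ByCompany()
    {
        // Map each order to the fields we want indexed
        Map = orders => from o in orders
                        select new { o.Company };
    }
}

// Deploys the index to the store's default database:
// new Orders_ByCompany().Execute(store);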

Querying the Cloud Database

A major change happened in Raven 4: RQL replaces Lucene as the default language for queries and patches. If you don’t run RavenDB yet, you shouldn’t be alarmed. But for folks migrating from RavenDB 3.5, this will potentially have a significant impact, requiring big changes to the code.

The good news is that RQL comes with important changes and multiple improvements on the new Management Studio. The UI is now friendlier, faster and simpler to use, query and export data.

RQL’s syntax is a mix of .NET’s LINQ and JavaScript. Overall, it’s elegant, clean and simple to use. It also makes querying and patching the Raven database simpler. However, for users currently relying on Lucene, this may represent a risk as those queries will have to be migrated to RQL (and subsequently, extensively tested).

Further Reference:
Breaking the language barrier
Querying: RQL - Raven Query Language

Running queries on the Portal

Querying using RQL from the Portal is straightforward. .NET developers should recognize the language since it's very similar to LINQ:
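For instance, a simple query looks like this (the collection and properties below are illustrative):

from Employees
where FirstName = 'Nancy'
select FirstName, LastName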

Customizing Queries

The Studio allows us to choose which columns should be returned by the results table:

Patching Data

The patch tool accepts RQL. For example, this is how we apply a simple patch using the new syntax:
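Something along these lines (illustrative collection and properties):

from Orders as o
where o.ShipTo.Country = 'Brazil'
update {
    o.Freight = 0;
}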

Managing the database using the C# API

When we consider a database on the cloud, we should ask how good the support for automation is. RavenDB provides a powerful C# API; below I show some of its operations (source here):
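As a minimal sketch (the URL, database name and certificate path below are placeholders, not the complete tool), connecting to a cloud instance and running a few management operations looks roughly like this:

using System;
using System.Security.Cryptography.X509Certificates;
using Raven.Client.Documents;
using Raven.Client.Documents.Operations;
using Raven.Client.ServerWide;
using Raven.Client.ServerWide.Operations;

// Connect to the cloud cluster using the client certificate
var store = new DocumentStore
{
    Urls = new[] { "https://a.free.mycluster.ravendb.cloud" }, // placeholder URL
    Database = "Demo",                                         // placeholder database
    Certificate = new X509Certificate2("client-cert.pfx")      // placeholder path
};
store.Initialize();

// Create a new database on the cluster
store.Maintenance.Server.Send(
    new CreateDatabaseOperation(new DatabaseRecord("NewDatabase")));

// Read basic statistics from the current database
var stats = store.Maintenance.Send(new GetStatisticsOperation());
Console.WriteLine($"Documents: {stats.CountOfDocuments}");

// Delete the database and its data
store.Maintenance.Server.Send(
    new DeleteDatabasesOperation("NewDatabase", hardDelete: true));

store.Dispose();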

Risks

With every new technology, there are risks. It’s important to understand that even if RavenDB is a mature technology (and their developers are bright!), there are risks that should be considered with this and any new platform. I highlight:
  • Costs: I wasn't able to determine the overall cost, mainly because all values provided by RavenDB are estimates. If cost is an important requirement for the migration, a more in-depth evaluation should be performed.
  • Performance: I didn't invest much time testing the performance of the Raven Cloud. The good news is that Raven 4 is much faster than 3.5 and is potentially offered on the same cloud provider / region as your application.
  • Hidden Costs: as previously said, all prices listed are estimates. It’s probable that other costs will be added to your bill at the end of the month.
  • RQL – RQL is the new way to run queries against the Raven 4 database. However, due to the amount and complexity of some of our queries relying on Lucene (the old advanced way of querying the Raven database), migrating all the complex queries to RQL will be a challenge in terms of the time and testing efforts necessary.
  • Major changes on the API: ignore this if you're new to RavenDB. But if you were using RavenDB 3.5, there are significant changes in the RavenDB 4 API. Potentially broken dependencies include: business logic (if implicitly coupled to RavenDB 3.5), indexes, tests and tools. Also, Lucene queries, Map-Reduce indexes, patches and logic that contains bulk-insert operations will likely have to be upgraded.
  • NServiceBus: the Raven 4 API depends on libraries that conflict with NServiceBus's. So it's possible that a RavenDB upgrade will also require an NServiceBus upgrade.

For Further Investigation

My short experience with the RavenDB Cloud was solid. However, I would like to highlight other topics that could potentially be researched in the future:
  • Full Cost Estimate – all the costs in this post are estimates and subject to variation. Most of these estimates were provided by RavenDB on the Raven Cloud website. It’s highly probable that in a real production environment, costs will be higher. But for what the Raven Cloud provides, I still find their prices very attractive.
  • Performance Benchmarks – I personally didn’t do any performance benchmarks when testing the Raven Cloud. Based on this exercise, I did notice that both the local and the cloud versions of RavenDB 4 showed a good increase in overall performance.
  • Security – No security tests were performed, as they were outside the scope of this spike. My understanding is that security is way beefier on Raven 4. But how secure is it?
  • SQL Integration – The free version doesn’t support SQL replication. It’s a very important feature for those that need some sort of reporting, and probably a good reason to move to the dev/prod subscriptions.
  • Backup/Restore – The backup/restore feature wasn’t tested because the only available option for the AWS free version was S3 storage. Worth investigating if you're considering using the Raven Cloud in production. My experience with a local install of Raven 4 is that it’s reliable and super fast!
  • Smuggler – The smuggler tool is available in the Raven 4 API. I built a simple console tool to manage databases and import/export data; the source code is available here (a minimal sketch of the Smuggler API follows this list).
  • Cluster API – since the free version does not include clustering, I couldn’t test the API. However, since the Raven APIs are extensive and well documented, I don’t expect any problems with that.
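For reference, a database export with the Smuggler API looks roughly like this (a sketch; the output path is a placeholder):

using System.Threading.Tasks;
using Raven.Client.Documents;
using Raven.Client.Documents.Smuggler;

public static class SmugglerExample
{
    // Exports the store's database to a dump file that can later be imported
    public static async Task ExportAsync(IDocumentStore store)
    {
        var operation = await store.Smuggler.ExportAsync(
            new DatabaseSmugglerExportOptions(), "demo.ravendbdump");
        await operation.WaitForCompletionAsync();
    }
}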

Conclusion

This ends the Raven Cloud evaluation, thanks for reading! Hopefully this quick look at these features helped clarify what RavenDB offers. For me, RavenDB is a very strong alternative in the NoSQL market and its cloud brings significant benefits for teams looking to reduce costs.

I dare say that, for all it provides, RavenDB Cloud is a strong contender against MongoDB Atlas, Elasticsearch and Azure CosmosDB.


Monday, October 7, 2019

Definition of Ready

While not part of the official Scrum Guide, the Definition of Ready can be a valuable asset for Agile teams.
Over the years, we've seen stories being cancelled, deferred, incorrectly estimated and deemed incomplete, causing big frustration for development teams. Why? Excluding technical impediments or poor architectural decisions, a big problem could be in how the requirements were collected and transformed into a story. Other reasons could also be: incomplete or missing requirements and unmet external dependencies.
Given that all of the above contribute to wrong estimates and growing dissatisfaction in development teams, we'd like to share a template that could potentially be helpful for teams suffering from the above issues: a template for a Definition of Ready.

Definition of Ready

The Definition of Ready (DOR) specifies the entry criteria that a feature currently in the backlog must meet before it's possible to work on it. A Definition of Ready enables the team to specify pre-conditions that must be fulfilled before a story is estimated. The goal of this document is to provide a template that:
  • Helps the development team properly estimate stories without inferring or discussing what features should look like during the estimation session.
  • Helps story writers identify the minimum requirements that should be addressed, by addressing gaps and unknowns in stories that lead to rework, changes of scope or disruptive changes.
  • Provides clear requirements so that stories are better and more commonly understood by the team.
  • Helps identify and prevent problems early enough so they don’t surface after development starts.
  • Reduces the time spent discussing stories. With clear requirements and dependencies met, the development team shouldn’t need to ask questions that potentially modify the scope and the size of the work.
  • Identifies dependencies and impacts, reducing risk.

Requirements

We recommend the following requirements are satisfied to meet the Definition of Ready:
  • The story provides a description, the personas affected and clear acceptance criteria.
  • If the story requires front-end modifications, designs, mock-ups, diagrams and explicit changes for any screens affected by the story are provided.
  • All external dependencies have been resolved and documented.
  • If necessary, copies are written by the author. This decreases external dependencies on stakeholders and helps build shared knowledge within the team.
  • Necessary spikes were already addressed and no technical questions remain.
  • No business-related outstanding questions exist. They usually increase the scope of the story and invalidate the previous estimates.
For each story meeting the Definition of Ready, the development team can provide an estimate without going into too much detail or skipping unmet dependencies/gaps in the contents of the story. With the estimate provided, the story can then be assigned to a sprint. We would also like to propose that the maximum size for stories be equivalent to half of the team’s velocity.

Checklist Template

The following checklist could also serve as a guideline to assert that the Definition of Ready is met. For each requirement, record Y/N and any comments:
  • The story has an objective description including who, what and why (e.g. affected systems, objective screen names, URLs, required data changes, etc.)?
  • The affected personas were listed?
  • The story has sufficient acceptance criteria?
  • If front-end changes are required, were the copies provided?
  • If front-end changes are required, were the wires provided?
  • Are there outstanding questions?
  • Were all the technical impediments (usually identified as spikes or unknowns by the dev team) removed?
  • Are all the statements in alignment with each other (no conflicting statements)?
  • Is the vocabulary in use non-ambiguous?
  • Is the story potentially releasable?
  • Does the story have dependencies on other stories? Which? Why?
  • Do you understand what business value the story provides?

The next sections provide a better understanding of some of the above requirements.

Acceptance Criteria

While the purpose of this document is to help with writing stories/requirements, the items below could also be relevant when writing acceptance criteria:
  1. What are the inputs and outputs?
  2. What changes are expected when the story is done?
  3. Can you provide a list of high-level test cases related to the story that are understood by all team members?
  4. Not expected: lists of requirements, business rules, etc.
Expected format:
  • Be generic, in the form of bullet points
  • Avoid duplicated, conflicting or ambiguous statements
  • Have at most 6 requirements; more than that and the story is potentially too big

Wires

Wires are very important for understanding, estimating and developing the story. I propose that all stories that require front-end modifications provide wires and copies before they can be estimated by the development team.

But not every requirement requires wires. As an example, these are requirements that don’t require mockups:
  • Changes on the business logic on the backend
  • New jobs or changes in them
  • Email Copy changes
  • Changes on the database (patches, new columns without front-end requirements, etc)

Copies

If your team usually gets stories with copies like “Please insert your title here”, “Placeholder for copy”, etc., we recommend that the author of the story writes the copy for the front-end and emails, so that we don't rely on the business to provide them, because:
  • changing copies at the last minute forces tests to be redone and requires last-minute builds, deployments and regressions that could be avoided.
  • depending on the business to provide the copy usually delays the start of the story by the development team.
Providing the copies ahead of time will help developers better estimate the story. Sometimes copies describe features the developers wouldn't have thought of had the copies not been provided.

Dependencies

As previously discussed, the team should ensure that external dependencies are met (except for copies). That includes business logic, external dependencies and potentially technical requirements. Unknown requirements, complexities and technical dependencies should be excluded from the estimates, keeping the discussions about the actual requirements and not about what they should be. In case there are unknowns, the team should reject the story, proposing spikes or follow-up discussions.

Changes

Note that the objective of the Definition of Ready is not to create hard dependencies and roadblocks for the stories to be estimated. That said, stories are expected to accommodate slight changes, as it’s normal and expected that some refinement happens during development, and we are open to that.

For the Development Team

In order to accommodate potentially disruptive changes (see the section Changes for more details) during the development of a story, I'd like to propose that the development team adheres to the following practices:
  • all stories are developed on a separate branch. That would protect the team in case either disruptive changes are required for a given story or if the development team cannot finish the story on time.
  • any significant disruptive change is evaluated and if big enough, the team will request that the story is removed from the sprint
  • spikes are created for the complex technical dependencies identified when discussing the stories
  • the maximum size for stories is equivalent to half of the team’s velocity
  • each story is treated as a potentially releasable feature

Conclusion

Remember that the Definition of Ready is not present in the official Scrum Guide and is not part of Scrum. But given how frequently teams miss deadlines, get invalid/incomplete requirements and struggle to understand the value of stories, a Definition of Ready could be worth adopting, helping the team and the business build a more robust framework that results in more satisfaction on both ends.

Check our Definition of Ready

As always, our Definition of Ready is available in our GitHub repo.


Monday, August 5, 2019

How I fell in love with i3

Understand what i3 is and how it can drastically change how you use your Linux desktop.

I've been using the i3 window manager for a couple of years and would like to share some thoughts about it. But first let's understand what i3 is and how it can drastically change how you use your Linux desktop.

What is i3?

The official documentation describes i3 as:
a tiling window manager, completely written from scratch. The target platforms are GNU/Linux and BSD operating systems, our code is Free and Open Source Software (FOSS) under the BSD license. i3 is primarily targeted at advanced users and developers.
But what's a tiling window manager?

Tiling Window Managers

A tiling window manager is a program that runs on top of your operating system's graphical user interface (GUI) and auto-manages your windows for you. The most common way users interact with their computers these days is via desktop managers (GNOME, KDE, XFCE, etc). Those programs include tools to set wallpapers, login managers, and facilities to drag and move windows around and interact with other running windows and services.
Source: DeviantArt

Differences

So what are the differences between a tiling window manager and a desktop manager? Many. For simplicity, tiling window managers:
  • are way simpler than full desktop managers
  • consume way less resources
  • require you to setup most things yourself
  • auto-place windows on the desktop
  • automatically split window space
  • do not allow dragging or moving windows around
  • always use 100% of the allocated space
  • are easily customizable
  • allow managing desktop applications using the keyboard
  • can be configured to pre-load specific configurations

Why i3

Working with i3 may be a radical shift in how we use our computers, so why should one switch from traditional desktop environments like GNOME, KDE, MATE or Cinnamon to i3? In summary, you should consider i3 because it:
  • will make you more productive
  • is simple, concise
  • is lightweight
  • is fast, super fast
  • is not bloated, not fancy (but can be)
  • is extremely customizable allowing you to configure it the way you like
  • reduces context switching, saving you time and brain power since you will stop wasting time dragging and searching for windows around
  • allows you to manage your workspace entirely using the keyboard
  • has vim-like keybindings (yes, this is a plus!!)
  • has easy support for vertical and horizontal splits, and parent containers. 
  • improves your battery life
  • can integrate with other tools of your system
  • will make you feel less tired after a day of work
  • will make you learn the GNU/Linux operating system better
  • will make you use the terminal and terminal-based tools more
So let's review some of the main reasons to switch to i3.

A Beautiful Desktop

i3 will make your desktop beautiful. Through its simplicity you will discover a more uniform and elegant experience. For example, take a look at this beautiful Arch desktop running i3. See how all applications integrate seamlessly. No overlapping windows, no pixels wasted.
Source: Reddit

Productivity

My productivity increased significantly using i3. Why? Because its keyboard-friendly nature made me largely stop using the mouse. Yes, I still have to use it, but now I try to keep that to a minimum. Today, 90% of my work can be easily accomplished via keystrokes.
Source: Reddit

Efficiency

On traditional desktop environments, we spend a lot of time dragging windows around and alt-tabbing between them. i3 saves hundreds of alt-tabs and hand-right-hand-left movements to reach the mouse. That's a lot of context switching saved, and a lot of efficiency gained!

Less Fatigue

i3 will also reduce your fatigue. Why? Arm right, arm left: that involuntary movement we do thousands of times a day to reach the mouse adds a lot of fatigue to our bodies, and it's one of the main reasons we feel exhausted after using the computer for a day. With i3, you'll keep your hands on the home row of your keyboard and move your arms less to accomplish the tasks you need. I probably feel less tired after a day of work on my Fedora box at home than after a couple of hours on Windows.

Highly Customizable

Unless you're super minimalist, you will likely want to customize your i3. There are a lot of tutorials out there and I urge you to pick one specific to your distro. In general, people add a different color scheme, change icons, fonts, the toolbar, and the GNOME theme when applicable. Some examples can be seen here.
Source: Reddit
Source: Reddit

The i3 configuration is simple to read, understand, share and modify. Don't like a keybinding? Open your ~/.config/i3/config file and make your changes. For example, custom bindings along these lines launch your most-used applications (illustrative commands, adapt them to your setup):
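# Illustrative bindings in ~/.config/i3/config ($mod is usually the Super key)
bindsym $mod+Return exec i3-sensible-terminal
bindsym $mod+d exec dmenu_run
bindsym $mod+Shift+b exec firefox
bindsym $mod+Shift+n exec nautilus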

Easy to get started

i3 is available in the repositories of Fedora, Ubuntu, Arch and other major distros. That said, installation should be straightforward using your package manager (see below). After you start i3 for the first time, you are prompted for an initial configuration that will set the basics for you to get rolling.
After installation, you'll be prompted with this screen on your first login

Compatible with GNOME/KDE tools

Be assured that you will still be able to use all your GUI applications with i3. Firefox, Chromium, Calculator, Nautilus, GNOME Settings or GIMP: everything should be available and accessible through the default dmenu.
Source: https://i3wm.org/screenshots/

You will use the terminal more

I realized that with i3 I've been using the terminal more and more. I replaced most of the visual GUI applications with tools like:
  • system management: systemctl, dnf, journalctl, etc
  • networking: nmcli, ifconfig, iwconfig, netstat, etc
  • process management: top, htop, etc
  • text editor: Vim
  • text manipulation: sed, awk
  • search: fzf, find, grep
  • file management: ranger, xargs

You may not realize it at first, but once you memorize the commands and rely less on the mouse and on graphical applications (which by design are less feature-rich), you will become more confident using your system and will improve and accelerate your workflow. Then you learn more, and the cycle repeats.

You will learn new tools

You will also learn new tools. And because you'll be using the terminal more and more, you will probably change your whole workflow and realize you're more productive in the terminal. For example, these are the tools I'm using more and more:
  • Vim - my main text editor. Adheres very well to the i3 workflow.
  • Mutt - not perfect, but a very decent email client for the terminal
  • Ranger - a fantastic file manager for the terminal!
  • rtv - Reddit on the terminal
  • w3m/lynx/links - Terminal-based web browsers
  • Tmux - essential with WSL and on an SSH session, but not a strong requirement for i3 users
  • fzf - a fantastic command-line fuzzy finder. Also available as the fzf.vim plugin
  • Grep - powerful search from the command line
  • Awk, Sed - utilities to manipulate streams

Better performance, less memory

Computational performance is like free beer, we never say no =). GNOME was already fast on my notebook but i3 makes it even faster. Add to that less memory consumption (my system running i3 utilizes around 400 MB of memory, while GNOME consumes 1 GB) and you realize how performant your machine can be! And it gets even better on old hardware paired with XFCE, LXDE or LXQt.

You will learn more about Linux

Using i3 made me learn more about the Linux system and the GNU tools, because I drastically shifted how I do my work on my Linux box toward tools such as grep, Vim, Tmux, ranger and mutt. I've also stopped and finally learned how to work well with sed, awk, systemd, firewalld, networkd, auditctl and lots of other system tools that I had never bothered with.

Installing i3

If i3 resonated with you, let's see how to install it.

Installing on Fedora

sudo dnf install i3 i3status dmenu i3lock xbacklight feh conky

Installing on Ubuntu

sudo apt update
sudo apt install i3

Logging in

Assuming the installation was successful, log out and, before logging in again, remember to change the session toggle to i3:

Source: Fedora Magazine
On your first login, you should be presented with this screen that will automatically generate a configuration for your user:

Next Steps

The best way to get started with i3 (and its sibling Sway) is of course, by using Fedora. The community has produced two spins with the basic setups called Fedora i3 Spin and Fedora Sway Spin. Please check those pages for more information.

Test it on a VM

Once you read the documentation, I'd recommend installing it on the VM hypervisor of your choice (Hyper-V, VirtualBox and VMware Workstation are the most popular). Please give yourself some time to familiarize yourself with the proposal before giving up.

Read the docs

The first thing you should do is read and understand the documentation well. i3's official documentation is an excellent, very thorough resource. YouTube, GitHub and the i3wm community on Reddit are also great resources to get started and learn how to tweak your setup.

Get used to it

Once you're comfortable with the setup, consider doing some of these:

  • Get used to using <mod>+enter to start your terminal
  • Map the applications you use the most to i3 bindings (see Customization above for some examples)
  • Configure your toolbar to add/remove the information you need
  • Keep learning more about i3. If you're struggling, use it for some time before removing it.
  • Once you start getting comfortable with it, start replacing GUI-based applications with TUI-based applications (those that run on the terminal)
  • Consider changing your workflow to optimize repetitive actions (using aliases, for example; see the sketch after this list)
  • Continue learning and tweaking your config files until your productivity goes up
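For example, a few shell aliases (the names and commands are just illustrations) added to your ~/.bashrc:

# Shorten frequently typed commands
alias v='vim'
alias gs='git status'
alias up='sudo dnf upgrade --refresh'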

Tweak

Next, feel free to tweak i3 as much as you need! In case the defaults don't appeal to you (they probably won't), remember: you can always change them. For example, it's simple to:
  • change the toolbar: i3blocks or polybar
  • add padding between tiles (windows): i3-gaps
  • add fancy UI transitions with compton
  • enhance your desktop background: conky, feh
  • replace your application launcher: rofi

Conclusion

Let me be clear: i3 is not for everyone. If you're a mouse person, or if you don't like spending time configuring your desktop, learning new tools and using the terminal, don't bother with i3. Linux desktop environments are amazing and have everything a user needs out of the box.

But if you want to be more productive, learn your Linux system better and configure it the way you want, I urge you to try i3. Set aside some time to learn the default key bindings, learn how to configure it, and use it for a couple of weeks. Don't give up before that. Let your muscle memory work 😉.


About the Author

Bruno Hildenbrand      
Principal Architect, HildenCo Solutions.