Monday, December 17, 2018

Accessing the Entity Framework context in the background in .NET Core

You got the "Cannot access a disposed object" error from Entity Framework Core. What should you do?
Photo by Caspar Camille Rubin on Unsplash

You tried to access Entity Framework Core's db context on a background thread in .NET Core and got this error:
Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
This exception happens because the dependency injection container disposes the scoped DbContext as soon as the request to the controller completes. That's the right approach and the default behaviour, as we don't want those resources hanging open for longer than needed. But before going forward, I'd like you to know that most times you don't need (or want) to access EF Core from a background thread. A few good explanations are described here. And, if you need an introduction, I'd also recommend reading Scott Hanselman's introduction on the topic.

However, such an approach may sometimes be necessary. For example, I came across this issue writing an MVP, a proof of concept where a Vue.js chat room communicated with a SignalR Core backend using EF Core, running on Linux. In my opinion, MVPs and proofs of concept are the only acceptable use cases for this solution. As always, the default approach should be accessing the service via the injected dependency.

Enough talking. Let's take a look at how to address the problem.

IServiceScopeFactory

The key to doing this is IServiceScopeFactory. Available in the Microsoft.Extensions.DependencyInjection NuGet package, IServiceScopeFactory provides us a singleton from which we can resolve services through DI the same way the .NET Core framework does for us.

Microsoft describes it as:
A factory for creating instances of IServiceScope, which is used to create services within a scope. Create an IServiceScope which contains an IServiceProvider used to resolve dependencies from a newly created scope.

The Implementation

The implementation is divided into three steps:
  1. Inject the IServiceScopeFactory singleton on your controller
  2. Pass the instance of IServiceScopeFactory to your background task or thread
  3. Resolve the service from the background task

Step 1 - Inject IServiceScopeFactory in your controller

First, you need to inject IServiceScopeFactory into your controller:
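A minimal sketch of what that could look like (the controller name is illustrative):

public class ChatController : Controller
{
    private readonly IServiceScopeFactory _serviceScopeFactory;

    // IServiceScopeFactory is registered by the framework itself,
    // so it can be constructor-injected like any other service
    public ChatController(IServiceScopeFactory serviceScopeFactory)
    {
        _serviceScopeFactory = serviceScopeFactory;
    }
}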

Step 2 - Pass it to your background thread

Then, some code in your controller kicks off the background thread/task and passes the factory along. For example:
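A hedged sketch following the chat example (the action, the ChatMessage type and the SaveMessageAsync helper are hypothetical):

[HttpPost]
public IActionResult Send(ChatMessage message)
{
    // hand the singleton factory, not the request-scoped DbContext,
    // to the background task
    Task.Run(() => SaveMessageAsync(_serviceScopeFactory, message));
    return Accepted();
}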

Step 3 - Resolve the service from the background task

And finally, when your background task runs, create a scope and have the framework initialize the EF context for you:
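A minimal sketch, assuming a hypothetical ChatContext registered via AddDbContext (GetRequiredService comes from Microsoft.Extensions.DependencyInjection):

private static async Task SaveMessageAsync(IServiceScopeFactory scopeFactory, ChatMessage message)
{
    // create a brand-new scope; the container builds a fresh,
    // undisposed DbContext inside it
    using (var scope = scopeFactory.CreateScope())
    {
        var context = scope.ServiceProvider.GetRequiredService<ChatContext>();
        context.Messages.Add(message);
        await context.SaveChangesAsync();
    }
}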
And because IServiceScopeFactory is a singleton that is never disposed with the request, resolving the context from that new scope won't throw the dreaded:
at Microsoft.EntityFrameworkCore.DbContext.CheckDisposed()
at Microsoft.EntityFrameworkCore.DbContext.Add[TEntity](TEntity entity)
at Microsoft.EntityFrameworkCore.Internal.InternalDbSet`1.Add(TEntity entity)

Conclusion

While you shouldn't use this as a pattern to process background tasks, there are situations where it's necessary. Since there isn't much documentation around IServiceScopeFactory, I thought it was worth documenting. Hope it helps!


Monday, December 3, 2018

Simplifying Razor logic with C# Local Functions in ASP.NET Core

How can we use C# local functions to reduce boilerplate in our ASP.NET Core pages?

One common requirement developers face is adding custom CSS classes to links dynamically. In this post, I want to show how to do it cleanly in ASP.NET Core by combining two important C# language features: local functions and string interpolation.

Local Functions

Starting with C# 7.0, Microsoft introduced local functions to C#. Local functions are:
private methods of a type that are nested in another member. They can only be called from their containing member. Local functions can be declared in and called from: Methods, Constructors, Property accessors, Event accessors, Anonymous methods, Lambda expressions, Finalizers and Other local functions.

A simple example of how local functions are used is listed below. Note the AppendPathSeparator function: that's what C# calls a local function:
private static string GetText(string path, string filename)
{
    var sr = File.OpenText(AppendPathSeparator(path) + filename);
    var text = sr.ReadToEnd();
    return text;

    // Declare a local function.
    string AppendPathSeparator(string filepath)
    {
        if (!filepath.EndsWith(@"\"))
            filepath += @"\";

        return filepath;
    }
}

Now let's look at how we could use local functions in Razor.

Local functions in ASP.NET Core Razor

As we'd expect, Razor understands local functions. As an example, let's look at a common requirement: adding custom classes to menus depending on the URL the user is on. There are multiple ways to do this; in this post, I want to show how to do it cleanly in ASP.NET Core using local functions. The solution takes three steps:
  1. Get the controller name
  2. Define the GetClass function to build the CSS class based on the controller name and link
  3. Use the function in our link

Step 1 - Get the controller name

In ASP.NET Core, we access the controller name with:
    var controllerName = ((ControllerActionDescriptor)ViewContext.ActionDescriptor).ControllerName;

Step 2 - Define the GetClass function

The next step is to define a local function to toggle the CSS class based on the controller name. This function takes the controller name as a parameter and appends the class active to the default nav-link using string interpolation, another nice C# feature:
    string GetClass(string controller)
    {
        return $"nav-link { (controllerName == controller ? "active" : "" ) }";
    }

Step 3 - Use the GetClass function in our link

The last step is to use the function in our cshtml templates:
<a class="@GetClass("Users")" asp-controller="Users" asp-action="Index">Users</a>

Source Code

Here's the full source. See how clean the code gets just by using this simple local function and string interpolation in a Bootstrap template layout:
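Since the embedded source isn't shown here, below is a minimal sketch of what the combined layout block could look like (the menu entries are illustrative):

@using Microsoft.AspNetCore.Mvc.Controllers
@{
    var controllerName = ((ControllerActionDescriptor)ViewContext.ActionDescriptor).ControllerName;

    // local function: builds the css class for a nav link,
    // marking it active when it points at the current controller
    string GetClass(string controller)
    {
        return $"nav-link { (controllerName == controller ? "active" : "" ) }";
    }
}
<ul class="navbar-nav">
    <li class="nav-item"><a class="@GetClass("Home")" asp-controller="Home" asp-action="Index">Home</a></li>
    <li class="nav-item"><a class="@GetClass("Users")" asp-controller="Users" asp-action="Index">Users</a></li>
</ul>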

Conclusion

In this post we examined how to use two very useful C# language features - local functions and string interpolation - to simplify the development of our Razor templates. Just remember not to violate DRY: if you need the same function elsewhere, move it to an extension method so you can reuse it.


Monday, November 26, 2018

JavaScript caching best practices in ASP.NET

Looking to improve the performance of your site with a few simple tips? Read on.

One of the first things a developer should do when inspecting the performance of their ASP.NET website is to understand how the site's static assets are served and whether they are correctly cached by the visitor's browser. In this post, let's review a solution that works well, is built into the ASP.NET framework, and helps you reduce costs and improve the performance of your site.

How caching works

Before getting to the solution, it's important to understand how web browsers cache static resources. In order to improve performance and reduce network usage, browsers usually cache JavaScript and other external resources a web page depends on. However, under certain circumstances, modifications to those files won't be automatically detected by the browser, resulting in users seeing stale content.

Approaching the Problem

A common solution to this problem is to append a query string parameter to the referenced JavaScript files. When that suffix changes, the browser treats the new URL as a new resource (even if the file name is the same), issuing a new request and refreshing the content. Whenever the resource changes, the query string changes and the browser automatically re-fetches it, so users always see the most up-to-date content. Also note that we can use this approach for all linked resources: JavaScript files, CSS, images and even icons.
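For example, a reference like the one below (the version token is illustrative) is re-fetched by the browser whenever the value of v changes:

<script src="/Scripts/site.js?v=20181126"></script>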

So how do we ensure that our pages output references like the one shown above? In ASP.NET, the solution is to use bundles.

Bundling and Minification

According to Microsoft,
Bundling and minification are two techniques you can use in ASP.NET 4.5 to improve request load time. Bundling and minification improves load time by reducing the number of requests to the server and reducing the size of requested assets (such as CSS and JavaScript.)
Another benefit of bundling is that, by default, the ASP.NET framework pairs it with aggressive client-side caching:
Bundles set the HTTP Expires Header one year from when the bundle is created. If you navigate to a previously viewed page, Fiddler shows IE does not make a conditional request for the bundle, that is, there are no HTTP GET requests from IE for the bundles and no HTTP 304 responses from the server. You can force IE to make a conditional request for each bundle with the F5 key (resulting in a HTTP 304 response for each bundle). You can force a full refresh by using ^F5 (resulting in a HTTP 200 response for each bundle.)

Bundling in practice

Before reviewing the actual solution, let's compare a page serving bundled content with one serving unbundled content, and understand how the browser behaves in each case.

Reviewing a request with bundled content

Let's take a look at a request using bundles to understand this a little better:
From the above, note that:
  • in yellow, the auto-added suffix - we'll see later why it's there and how to produce it ourselves;
  • in red, a confirmation that the resource was cached by the browser and loaded locally, without issuing any request to the server;
  • the time column on the right shows the time the browser spent loading each resource - for most files it was 0ms (or, "from cache").
Now, if you expand the bundle request, you can see that the cache is set to one year from the request date, meaning the file won't be re-fetched for the next 365 days:

Serving unbundled content

Now, if you look at a request to a page serving unbundled content, you'll see that the resources are also being cached (see the Size column). However, individual requests are still issued for each referenced file, causing two problems:
  1. In HTTP 1.1, there's a practical limit of about 6 concurrent requests per domain. In that case, after the 6th request is issued, the others wait until a slot frees up.
  2. Because file names don't change between deploys, the browser reuses the cached version even if the file was modified (seeing the same file name, it assumes the resource didn't change). Updates to those files won't be automatically picked up, so users see stale content.

Bundling in ASP.NET

Because the ASP.NET framework already provides bundling, simply creating bundles and referencing them from our pages solves the issue. In ASP.NET we do it by registering our bundles in the Application_Start method of the Global.asax.cs file:
protected void Application_Start()
{
    // ...
    BundleConfig.RegisterBundles(BundleTable.Bundles);
    // ...
}
Where the RegisterBundles method looks like:
public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new StyleBundle("~/Content/css")
        .Include("~/Content/images/icon-fonts/entypo/entypo.css")
        .Include("~/Content/site.css"));
}
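Since this post is about JavaScript, note that script bundles work the same way; a hedged sketch (the bundle name and file paths are illustrative):

bundles.Add(new ScriptBundle("~/bundles/site")
    .Include("~/Scripts/jquery-{version}.js")
    .Include("~/Scripts/site.js"));

We then render it in our views with @Scripts.Render("~/bundles/site") (or @Styles.Render for the CSS bundle above), which emits the versioned URL for the bundle.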
Now, build and deploy your site. Just don't forget to remove debug="true" from the compilation element of your web.config:
<compilation targetFramework="4.7.2" />
Whenever that content changes or a new deploy happens, the query string value for each bundle is automatically modified by the framework, ensuring your users always get your latest JavaScript, optimized, without the risk of being served an obsolete version.

Conclusion

In this post we reviewed some practices for increasing the performance of your site. Caching, bundling and minification are standard practices developers should consider to improve the performance of their ASP.NET websites.


Monday, November 19, 2018

5 tools for Azure Development on Linux

If you develop on Azure and Linux, here are 5 tools you should be using.
Photo by Barn Images on Unsplash

Azure may not be the primary choice for developers using non-Microsoft technologies these days, but it's important not to ignore it, especially now that Linux dominates Azure. The ever-growing number of services, the price, the global reach, the APIs and the features are awesome! If you haven't tried it, I'd recommend testing it out using the free credits available. So let's review some of the tools available for developers.

Development - Visual Studio Code

Developing for Azure with Linux is fun! On Windows, my default development editor is Visual Studio. On Linux, I usually use Visual Studio Code for development and Vim for everything else. VS Code is a multi-platform, extremely customizable editor with an integrated debugger, Git support and IntelliSense. If you haven't used it yet, I urge you to take a look at its features. Here's what it looks like on Ubuntu:

Installing Visual Studio Code on Ubuntu/Debian-based distributions

The easiest way to install Visual Studio Code for Ubuntu/Debian-based distributions is to download and install the .deb package (64-bit), either through the graphical software center if it's available, or through the command line with:

sudo dpkg -i <file>.deb
sudo apt-get install -f # Install dependencies

Installing Visual Studio Code on RHEL/CentOS/Fedora

On RHEL, CentOS and Fedora, you'll have to add Microsoft's repository, update the package cache and install it using dnf:
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo'
dnf check-update
sudo dnf install code

For other distros, different OS or troubleshooting, please check the installation page.

Database - Azure Data Studio

SQL Server, very popular both in Azure and on-premises, can be accessed via Azure Data Studio. Available for Linux, Windows and Mac, its installation is straightforward. Let's take a look.

Installing Azure Data Studio on Debian/Ubuntu systems

Download the .deb package and run the script below:
cd ~
sudo dpkg -i ./Downloads/azuredatastudio-linux-<version string>.deb
azuredatastudio

Installing Azure Data Studio on RHEL/CentOS/Fedora systems

Download the .rpm package and run the script below:
cd ~
sudo yum install ./Downloads/azuredatastudio-linux-<version string>.rpm
azuredatastudio

In case you need dependencies, please check this page.

Using Azure Data Studio

Once Azure Data Studio is installed, you're ready to connect to your database. For more detailed information on the topic, please click here.

Storage - Azure Storage Explorer

Azure Storage Explorer is another excellent and fundamental tool for Azure development. The main features are:
  • queue storage management
  • blob storage management
  • file storage management
  • Cosmos DB management
  • Data Lake management

Installation (all Distros)

These are the pre-requisites for the installation:

Download the .tar.gz from the Azure Storage Explorer home page and run:
# install libgnome-keyring (RHEL/CentOS/Fedora specific)
dnf install libgnome-keyring

# extract the files
tar -xvf StorageExplorer-linux-x64.tar.gz

# run it
./StorageExplorer

DevOps - Azure CLI

If you plan to build automation scripts that work against the Azure Resource Manager, the Azure CLI is the cross-platform tool you need. With the CLI you can do pretty much everything you can do through the Azure Portal, but through scripts, from the command line.

Installing the Azure CLI

This page contains the necessary information to install the CLI. Let's quickly review how to install it on Ubuntu/Fedora.

Installing the CLI on Ubuntu

# 1. Modify your sources list
AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | \
    sudo tee /etc/apt/sources.list.d/azure-cli.list

# 2. Get the Microsoft signing key:
curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# 3. Install the CLI:
sudo apt-get update
sudo apt-get install apt-transport-https azure-cli

Installing the Azure CLI on RHEL/CentOS/Fedora

# 1. Import the Microsoft repository key.
sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc

# 2. Create local azure-cli repository information.
sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'

# 3. Install with the yum install command.
sudo yum install azure-cli

Note: if you already added Microsoft's package repo, you will not need to run step 1.

After steps 1 and 2 above, my local install was as simple as running dnf install azure-cli from the terminal, as root:

And, after running the az command, I get a warm welcome message and am ready to interact with Azure from the command line:
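For example (the commands below are a small illustrative sample):

# log in and list your resource groups
az login
az group list --output table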

DevOps - Azure PowerShell

Azure PowerShell provides a set of cmdlets to manage your Azure resources. Via a local installation, we can interact with Azure using PowerShell. The only requirement is... PowerShell. Let's see how to install it on Linux.

Installing PowerShell on Linux

After adding the Microsoft repos as previously noted, PowerShell is available for installation by running dnf install powershell as root and confirming.
After the installation ends, just type pwsh in the terminal and you're running a full PowerShell session (apart from that ugly traditional blue background =))

Once PowerShell is installed, to install the Azure PowerShell modules simply type Install-Module Az in your PowerShell terminal as root and accept; then you're ready to interact with Azure using PowerShell.
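For example (a hedged sketch; output will vary with your subscription):

# sign in and list your resource groups
Connect-AzAccount
Get-AzResourceGroup | Select-Object ResourceGroupName, Location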

Conclusion

In this post I presented multiple ways to interface with Azure using extremely productive development tools. We also saw how easy it is to install these tools and quickly build a very powerful development environment. Yes, I know, we have Vim 😁 but sometimes some nice, extensible and productive tools won't hurt, will they?


Monday, November 12, 2018

Windows Subsystem for Linux, the best way to learn Linux on Windows

Want to learn Linux but don't know how/where to start? WSL may be a good option.
In 2018, Microsoft released the Windows Subsystem for Linux (WSL). WSL lets developers run the GNU/Linux shell on a Windows 10 PC, a very convenient way to access the beloved tools, utilities and services Linux offers without the overhead of a VM.
WSL is also the best way to learn Linux on Windows!

About WSL

Currently WSL supports the Ubuntu, Debian, SUSE and Kali distributions and can:
  • run bash shell scripts
  • run GNU/Linux command-line applications such as vim, emacs and tmux
  • run programming languages like JavaScript, Node.js, Ruby, Python, Golang, Rust, C/C++, C# & F#, etc.
  • run background services like SSH, MySQL, Apache and lighttpd
  • install additional software using your distribution's own package manager
  • invoke Windows applications
  • access your Windows filesystem

Installing WSL on Windows 10

Installing WSL is covered by Microsoft in this article and is as easy as two steps.

Step 1 - Run a Powershell Command

On your Windows PC, you will need to run this PowerShell command as Administrator (shift + right-click):
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
After the installation ends, restart your PC.

Step 2 - Install WSL from the Windows Store

After the reboot, WSL can be installed through the Windows Store. To open the Windows Store on your Windows 10, click:
Start -> Type Store -> Click on the Windows Store:
Then type "Linux" in the search box and you should get results similar to this:

Click on the icon, accept the terms and Windows will download and install WSL for you.

Running WSL

When you first run your installed distro, you will be prompted to enter a username and password. Once done, you'll get a cool Linux terminal to start playing with. You can even have multiple distros installed on your Windows 10 machine; on mine, I installed Debian and Ubuntu.

Using the Terminal

Okay, so now that we have access to our Linux shell, what next? Let's go through these use cases:
  • accessing your Windows files
  • accessing internet resources
  • installing software

Accessing Windows Files

WSL mounts your Windows files on the /mnt/c mount point. To verify on yours, type mount at the command prompt and look for C: in the output; your Windows files should be there.
In case you don't know Linux, listing files is done with ls. This is the content of my C drive as seen from WSL:
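For example (illustrative):

# list your Windows C: drive from inside WSL
ls /mnt/c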

Accessing the Internet

Your WSL instance should have access to the internet. Testing it is as simple as pinging Google; you can also verify your network info with ifconfig:
 
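A quick illustrative check (ifconfig may require the net-tools package on some distros):

# test connectivity
ping -c 4 google.com

# inspect your network configuration
ifconfig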

Installing Software

Installing software on Ubuntu/Debian is done with the apt command. For example, this is how we search for packages:
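An illustrative search, using Ruby as the example package:

apt search ruby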
To install packages, use apt-get install. For example, to install Ruby on the Ubuntu WSL, run the command below:
sudo apt-get install ruby-full

Using git

We can leverage apt and install git with:
sudo apt-get install git
... # apt installs git
git --help # to get help
And I'd recommend learning to use it from the terminal. Atlassian has an excellent tutorial to learn git.

Getting Help

Need help? The man tool is there for you. For example, we could run the command below to get help on git:
man git

Additional tip: try the new Windows Terminal

And if you want to invest more time in your WSL, I'd suggest installing the new Windows Terminal. Download the latest release from GitHub and install it on your box. It's very customizable and hosts shells for WSL, PowerShell, Azure CLI and the traditional Windows terminal.

What's next?

Now that you know how to locate your files, have access to the internet and installed some software, I'd recommend that you:

Conclusion

Congratulations! You have WSL installed on your machine and a Linux terminal to start playing with. Now what? The first thing I'd recommend is to get comfortable with basic system commands, understand the filesystem, learn to add/remove software and run administrative tasks from the terminal. WSL is perfect for users who want to learn Linux and for those who spend a lot of time on Windows but need access to a Linux terminal.

If you want to know more about my setup, here's why I use Fedora Linux with the fantastic i3 window manager on the desktop and CentOS on servers. Happy hacking!


    Tuesday, November 6, 2018

    Why use Fedora

    Ubuntu may be the most popular Linux distribution, but Fedora may be a better alternative for your desktop. Read on to understand.
    Photo by Clem Onojeghuo on Unsplash

    Fedora Linux is a fantastic operating system. In this post I'd like to explore the reasons why I use and recommend Fedora as my daily driver, and present some of the reasons why I think it could be your next (and last) Linux distribution.

    It's 2023 and there is endless praise for Fedora on YouTube. TL;DR: since this post was written (7 years ago), Fedora has become the most recommended Linux distro for users who want the best Linux experience. Hope you enjoy the article!

    My personal journey

    Everyone has their own Linux journey. In the beginning it's common for people to try out many distributions until they settle on one they're comfortable with. In the end, it's all about trade-offs. So let me tell you mine and how I got to choosing Fedora as my main Linux distribution.

    I always used Linux... in dual-boot mode. My journey started two decades ago when I was introduced to Slackware Linux. What I loved in Slackware was the level of granularity with which I could tweak my system. But I soon realized that Slackware required a lot of hands-on work and quickly moved to Red Hat Linux (before RHEL), which provided a more streamlined experience with its yum/RPM package system.

    Since then I've tried (in that order): Mandriva, Fedora Core, openSUSE and, more recently, Ubuntu, Mint, elementary OS, Xubuntu, Lubuntu, Arch, Manjaro and Solus. Yes, call me a distrohopper. Then I decided to retry Fedora 24 and check how it had progressed. Eight years later, Fedora remains my primary desktop.

    With all that said, let me describe some of the reasons I think you should consider using Fedora on your desktop.

    Why I chose Fedora

    So let's explore some honest reasons to use Fedora.

    Simplicity

    Simplicity definitely comes first. It's one of the characteristics I enjoy the most in Fedora and the reason I often recommend it to new users. From the installation to the default UI, everything feels integrated, accessible, fast and intuitive. Plus, the default GNOME desktop is pretty solid, its animations are smooth and the performance is really good.

    A Fedora workstation running Gnome 3.30 / Source

    Stability

    Fedora is very stable. And contrary to what people think, Fedora is not a beta-testing environment for RHEL. Packages are tested on its development channel, validated and, once stable with all dependencies met, released to millions of users. Be sure that for each update you get, hundreds of hours of tests were performed.

    Security

    Fedora shines on the security front. We know that online security is about much more than an anti-virus: keeping the system up to date is as important as having a working firewall and safe web browsing habits. Fedora ships with a robust SELinux integration and, given its "First" foundation, Fedora users get more up-to-date software and kernel updates than on any other non-rolling-release distro.
    For those who don't know, SELinux was developed by the NSA and is the standard access control tool for critical services, blocking suspicious behaviour by default.

    Privacy

    Linux users shouldn't be concerned about privacy. Or should we? Fedora discloses its privacy policy publicly and adheres to it, in line with its core principles: Freedom, Friends, Features and First. Unlike some other distributions, Fedora has never sent your desktop search results to e-commerce sites, nor does it contain intrusive telemetry that cannot be turned off.

    Development-Friendly

    Developers are pretty much covered on the development side: Go, Rust, Docker, Swift, Python, Node.js, Ruby and Java, among others, can all be found in the repos. Anything else? You will find it there.

    Performance

    Fedora performs. My 5-year-old laptop boots in less than 10 seconds. The performance is mainly due to more up-to-date software and to keeping the OS running only the necessary resources. Yes, there are faster options and yes, GNOME is not as light as other window managers (though it got much better as of GNOME 3.36), but it's still the best choice for the average user.

    Freedom

    With Fedora, free alternatives are chosen over proprietary code and content, limiting the effects of proprietary or patent-encumbered code on the project. Releases are predictable and include only 100% free software.

    Community-Driven

    Unlike distros that are subject to private interests, Fedora is driven by a community of dedicated and passionate enthusiasts. All communication is open to the public and everyone is invited to collaborate! The Fedora communities on the internet are pretty welcoming and you'll always find someone willing to help you.

    Reliability

    The Fedora project serves as the base for RHEL and CentOS, so it needs to be mature and stable. It's a very serious Linux desktop operating system used daily by millions of users and servers around the world. It needs to be reliable. Personally, I've had zero problems in my last 5 years of distribution upgrades.

    Cutting-Edge Software

    Fedora is frequently mentioned as the distribution that explores the outer limits of what Linux can do. That's no news to Fedora users, since "First" is one of the foundations adopted by the project. Fedora repos usually land features way earlier than Ubuntu and its derivatives (elementary, Mint and Pop!_OS), which IMHO is as good as it gets without being a rolling release. And remember, you can always add different repos or install software through universal Linux packages like Flatpak.

    Frequent Updates

    Fedora updates are frequent. Often multiple times a week I get software upgrades covering security, performance and stability, and even kernel updates. It's also common to get multiple kernel updates per release, which usually means running a more mature, stable and secure kernel. It also means better performance and hardware support.

    Impressive Software Selection

    The default package repository contains all the software you will ever need. Plus, in Fedora, software is managed by the powerful DNF package manager. For example, this is the number of packages available from the repository, run from my XFCE instance:
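    A quick illustrative way to get a similar count yourself:

    # count the packages available in the enabled repos
    dnf list available | wc -l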
    DNF also handles the distribution upgrades, which happen twice a year.

    Custom Software Repositories

    The amount of software available in the official Fedora repo is incredible; probably everything you need can be found there. But if the package selection in the free/default repos doesn't cover you, you can still use RPM Fusion to install software that doesn't adhere to Fedora's requirements.

    Not to mention Copr (Cool Other Package Repo), a Fedora project that makes building and managing third-party package repositories easy. Copr is hosted under the Fedora Infrastructure and allows developers to create repos that can be shared with users.

    Excellent Hardware Support

    Fedora has excellent support for hardware. I don't have cutting-edge hardware, nor am I a gamer, so I can't comment on that, but for most people both the Fedora installer and the kernel are very good at recognizing and activating hardware. That's probably a good reason why most people stop distrohopping when they get to Fedora: most issues (including lack of hardware support) disappear.

    Incubator for new features

    The Fedora community creates many of the technical features that have made Linux powerful, flexible, and usable for a wide spectrum of millions of users, administrators, and developers worldwide. In fact, this is in line with the project's Mission and Foundations:
    The Fedora community prefers approaches that benefit the progress of free software in the future over those that emphasize short term ease of use.
    Some of the features developed with Fedora include:
    • The Linux kernel - there are hundreds of kernel hackers worldwide using Fedora. The most famous of them is Linus Torvalds, the creator of Linux ("It just works").
    • Wayland - a new display system replacing the venerable X.org.
    • XWayland - an X server that runs as a Wayland client, so legacy X applications keep working.
    • systemd - Linux's default init system, used by 95% of the modern and most popular distros.
    • GNOME - a free and open-source desktop environment for Unix-like operating systems.
    • The GTK toolkit - a cross-platform widget toolkit for creating graphical user interfaces.
    • PipeWire - a new audio and video subsystem.
    • Flatpak - universal application packaging.
    • The Cockpit project - manage your server in a web browser and perform system tasks with a mouse.
    • Anaconda - a free and open-source system installer used by Fedora, RHEL, CentOS and other Linux distributions.
    • Podman - a tool to create and maintain containers.
    • Buildah - a tool to create and manage container images.
    • Silverblue - a next-gen OS for the desktop.
    • Fedora CoreOS - a next-gen cloud appliance.
    • Modularity - a mechanism for making multiple versions of software available to your system.

    Two upgrades per year

    Because Fedora releases happen twice a year, you'll get big system updates (including GNOME, GCC and base libs) with each release. It's the best way to have up-to-date software without the complexities and issues of rolling-release distros. Upgrading is as simple as two clicks in the Software app, or running the commands below in your terminal:
    sudo dnf upgrade --refresh
    sudo dnf install dnf-plugin-system-upgrade
    sudo dnf system-upgrade download --releasever=XXX
    sudo dnf system-upgrade reboot

    Universal Linux Packages

    You know you can install software using the package repos. Another option is Snap or Flatpak, and installation is as simple as opening Software, searching, and clicking Install.

    RHEL

    Many people ignore this, but another strong reason to use Fedora is that it's RHEL's upstream. You'll be working with yum/DNF, systemd and SELinux, and testing what will be available in the next RHEL, the leading Linux-based OS on servers. Invaluable knowledge to have.

    Variants 

    Don't like GNOME? Fedora also ships different spins including KDE, Xfce, LXQt, MATE, Cinnamon and LXDE, all available in both 64-bit and 32-bit versions. It even has support for ARM and IoT.

    Fedora in some of its variants
    You can also find, in Labs, variants targeted at designers, astronomers, scientists and musicians.

    A multitude of options for your desktop

    Don't like the default? You can always change it. For example, check this post to learn more about the multitude of desktops and window managers available on Fedora 31. Among my favorites:
    • i3 (and its Wayland alternative Sway) - my personal choice ❤!
    • XFCE - which I also use on my cloud instances, including my Ubuntu instance on Azure
    • KDE - an excellent desktop for your Linux PC
    • GNOME - I also have it installed at home as a fallback in case my i3 breaks (never happened 😊)
    • And many more - counting 38 alternatives to customize your desktop
    Still on tiling window managers: for a couple of years now I've been using the i3 window manager (shown in the screenshot below) and I love it! For more information, check my post on why i3 is awesome, how to install it and why it will change the way you use your Linux desktop.
    Source: DeviantArt

    So, Fedora is perfect?

    Of course not! No software is perfect as no software will ever be bug-free. However, it's important to consider that open-source operating systems are extremely complex software bundles composed of thousands of packages developed by volunteers worldwide.

    Fedora provides the best balance of what matters most: open-source software, up-to-date packages and a strong focus on stability, security and privacy. Plus, you can find in Fedora and its derivatives everything you're looking for. Need a more stable environment for your server or container? Go with Fedora Server or CentOS. Want to go even more cutting edge on the desktop? Try Silverblue. Need a lightweight alternative for that old netbook? Go with Fedora XFCE. Using ARM on a Pi or a Pinebook Pro? Try Fedora ARM!

    Fedora also shines in its community aspect. The community is very open to onboarding newcomers and volunteers. As previously mentioned, everyone is invited to collaborate, even if it's just writing articles for the Fedora Magazine. All communication is shared online and the teams are structured to help newcomers through mentoring programs.

    Conclusion

    Fedora is a polished, modern, stable, secure and privacy-focused GNU/Linux operating system. With all its variants, I'm pretty sure there will be something for you too! If you have never used it, I suggest you try it, regardless of your technical background. If you use another distro, I kindly recommend that you download Fedora, install it on a VM and give it a try. And don't hesitate to send a big thank you to the @Fedora community on Twitter.


    Monday, October 29, 2018

    Tips for passing the PSD exam

    Looking for help with the Scrum.org PSD exam? Take a look at these tips.
    Not sure if you're aware, but I'm a Certified Scrum Master. Back in 2014 I wanted to work as a Scrum Master, but it turns out I love development work, so I never exercised that role. Lately, however, I've been working on a project and realized that, as a developer, I wanted to help my company improve its Scrum. This time, instead of pursuing the PSM II, I decided to go for the Professional Scrum Developer certification.

    The Professional Scrum Developer certification

    But what is the Professional Scrum Developer (PSD) certification? According to Scrum.org:
    Professional Scrum Developers are members of the Scrum Development Team and demonstrate knowledge and understanding of Scrum and their ability to build software using Scrum in real-world situations.  The value of certification is intimately tied to the demonstration of knowledge needed to achieve it. By that measure, the PSD assessment is significantly more valuable than available alternatives for Scrum.

    Why you should consider the PSD exam

    I think every developer who works (or wants to work) in an agile team should try to get this certificate. Why? Because most teams think they do Scrum right. My experience (as a contractor and employee) is that it's rare to find a team that does Scrum the right way and a company that respects and understands the Scrum framework.

    And on the flip side, the certificate shows recruiters and co-workers that the developer has studied, understands and knows the foundations of the Scrum framework, validated by Scrum.org, which is managed by Ken Schwaber, one of the fathers of Scrum.

    What to Study

    First, understand the Scrum Guide. It is the most important resource in Scrum. It may seem simple, but you'll see how much each word in there makes sense (and makes a difference). Remember, Scrum is difficult to master. Second, especially for developers, study topics related to agile practices, Extreme Programming and how they apply to Scrum. It's amazing how many concepts were imported into Scrum from XP, for example, and most people have no idea they aren't required.

    My suggestions are:
    • How to use scrum in a development project
    • Working within a Scrum Team
    • Why and how a Definition of Done is important
    • General development practices
    • Agile architecture practices to slice features
    • Test driven development
    • Agile requirement management practices
    • DevOps tools in Scrum
    • TDD, BDD, ATDD, CI, CD, Code Quality

    Tips for passing the exam

    Okay so let's take a look at some tips to pass the exam.

    Tip 1: Read and understand well the Scrum Guide

    The Scrum Guide is the most important document in the Scrum framework. You should read it multiple times, carefully reflecting on each word. Understand the Scrum foundations well, including the associated roles, events, artifacts and rules. I suggest reading it 3 or 4 times before doing any assessment.

    Tip 2: Practice with the Open Assessments

    You should make exhaustive use of the Open Assessments. Basically, they are a subset of the questions you may encounter in the exam. I suggest you only move to the next step once you consistently score 100% on all assessments for at least 3 days.

    Suggestion: after studying the Scrum Guide, do a couple of assessments. Validate each of your answers against the assessment answer, then be sure to review it in the Scrum Guide and understand the reasoning behind it.

    Tip 3: Explore related content

    Once you understand the concepts and the Scrum Guide well, it's time to search for related resources on the web. A good starting point is the Scrum.org web site:
    Also try to find exercise questions and try them. My approach was, before seeing the answer, to try to answer the question in my head and only then look at the response. It helped me memorize and understand better. However, don't trust all the answers you see online. Be critical and reflect on whether an answer (apart from those in the Open Assessments) is correct. I saw multiple errors around. Be critical!

    Tip 4: Study developer-specific content

    Apart from Scrum foundations, developers will find lots of technical questions. For developers, you can find on this page a lot of relevant information. The recommended bibliography is:

    Tip 5: Exam Time

    Okay, so you prepared well, understood the whole Scrum Guide and know the foundations behind the development-related questions you saw. How should you deal with exam time?
    • Review a couple of questions in the morning, just so the information is fresh in your mind
    • Take a few hours and go do something completely unrelated to the exam. Relax
    • Grab a coffee and start the exam
    • Pay attention to the remaining time, but don't be concerned: you have 60 minutes to answer 80 questions, which is more than enough since the questions are usually short
    • Don't waste too much time on any one question; when in doubt, bookmark it and move on to the next
    • Use the bookmark feature to mark the questions you're not sure about and come back to them later
    • Remember: keep calm

    Final Thoughts

    Even if you don't consider taking the exam, just studying the Scrum Guide and doing the online assessments is a big step. It's common to see teams violating some of the foundations of the Scrum framework, and the result is usually unsatisfied team members, lack of visibility and dysfunctional projects. Knowing that, why not reserve some time to review how your understanding of Scrum is doing?

    See Also

    For more posts about Agile on this blog, please click here.

    About the Author

    Bruno Hildenbrand      
    Principal Architect, HildenCo Solutions.