Fedora Server: Expanding Throughout the Galaxy

History

Three years ago, Fedora embarked on a new initiative that we collectively refer to as Fedora.next. As part of this initiative, we decided to start curating deliverable artifacts around specific use-cases rather than the one-size-fits-all approach of Fedora 20 and earlier. One of those specific use-cases was to meet the needs of “server administrators”. And thus, the Fedora Server Edition was born.

One of the earliest things that we did after creating the Fedora Server Working Group (Server WG from here on) was to perform what in the corporate world might be called a “gap analysis”. What this means is that we looked at Fedora from the perspective of the server administrator “personas” we had created and tried to understand their pain points (particularly in contrast to how things function on competitive platforms such as Microsoft Windows Server).

The most obvious gap that we identified was the relative difficulty of getting started with Fedora Server Edition at all. With Microsoft Windows Server, the first experience after logging in is to be presented with a tool called Server Manager that provides basic (graphical) information about the system as well as presenting the user with a list of things that they might want this server to do. It then walks them through a guided installation of those core features (such as domain controller services, remote desktop services and so on). With Fedora, a default install would get you a bash prompt with no guidance; typing “help” would only lead to the extremely expert-focused help documentation for the bash shell itself.

OK, advantage Windows here. So how do we address that? Server WG had agreed early on that we were not willing to require a desktop environment for server systems. We instead set our sights on a fledgling project called Cockpit, which was gaining traction and looked to provide an excellent user experience without requiring a local display – it’s a web-based admin console and so can be accessed by users running the operating system of their choice.

Once Cockpit was established as the much-friendlier initial experience for Fedora Server, we started to look at the second part of the problem that we needed to solve: that of simplified deployment of key infrastructure components. To that end, we started the development of a tool that we could integrate with the Cockpit admin console and that would provide the actual deployment implementation. What we came up with was a Python project that we called rolekit, which would provide a fairly simple local D-Bus API that Cockpit would be able to call out to in order to deploy the requested services.
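
To give a flavor of what that integration looks like from the client side, here is a minimal sketch of how a tool like Cockpit could talk to rolekit over the system bus using dbus-python. The bus name and object path shown are assumptions made for illustration; consult the rolekit documentation for the authoritative interface.

import dbus

# Connect to the system bus, where rolekit exposes its deployment API.
bus = dbus.SystemBus()

# Assumed bus name and object path for the rolekit service.
proxy = bus.get_object("org.fedoraproject.rolekit1", "/org/fedoraproject/rolekit1")

# Introspect the object to list the methods it actually exposes rather
# than guessing at their names and signatures.
introspectable = dbus.Interface(proxy, "org.freedesktop.DBus.Introspectable")
print(introspectable.Introspect())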

While our intentions were good, rolekit faced two serious problems:

  • The creation of the roles was complicated and manual, requiring careful curation and attention to make sure that they continued to work from release to release of Fedora.
  • The Cockpit Project became very popular and its limited resources became dedicated to serving the needs of their other consumers, leaving us unable to get the integration of rolekit completed.

The second of these issues remains and will likely need to be addressed, but that will be a topic for another day. The remainder of this blog entry will discuss our plans for how to improve the creation and maintenance of roles.

Ansible Galaxy

Ansible Galaxy describes itself as “[Y]our hub for finding, reusing and sharing the best Ansible content”. What this means is that the Ansible project runs a public software service enabling the sharing of GitHub repositories containing useful Ansible roles and playbooks for deploying software services.

The Galaxy hub contains literally thousands of pre-built server roles for Fedora, Red Hat Enterprise Linux and other systems with more being added every day. With such a large community made available to us, the Server WG has decided to explore the use of Ansible Galaxy as the back-end for our server role implementation, replacing rolekit’s custom (and redundant) implementation.

As part of this effort, I attended the Ansible Contributor Conference and AnsibleFest this week in Brooklyn, New York. I spent a great deal of time talking with Chris Houseknecht about ways in which we could enhance Ansible Galaxy to function for our needs.

Required Galaxy Enhancements

There are a few shortcomings to Galaxy that we will need to address before we can implement a complete solution. The first of these is assurance: there is currently no way for a consumer of a role to judge its suitability. Specifically, we will want there to be a way for Fedora to elevate a set of roles (and specific versions of those roles) to a “recommended” state. In order to do this, Galaxy will be adding support for third-party signing of role versions. Fedora will become a “signing authority” for Ansible Galaxy, indicating that certain roles and their versions should be consumed by users of Fedora.

We will also add filtering to the Galaxy API to enable clients to limit their searches to only those roles that have been signed by a particular signing authority. This will be useful for limiting the list that we expose to users in Cockpit.
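
As a rough sketch of how a client such as Cockpit might use that filter, the snippet below queries the public Galaxy API with Python’s requests library. The “signed_by” parameter is purely hypothetical here, standing in for whatever name the Galaxy developers ultimately choose for the signing-authority filter.

import requests

GALAXY_ROLES = "https://galaxy.ansible.com/api/v1/roles/"

# Ask Galaxy for roles, limiting the results to those signed by the
# "fedora" signing authority. The "signed_by" parameter does not exist
# yet; it is a placeholder for the proposed enhancement.
response = requests.get(GALAXY_ROLES, params={"signed_by": "fedora", "page_size": 20})
response.raise_for_status()

for role in response.json().get("results", []):
    print(role.get("name"))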

The other remaining issue with Ansible is that there is currently no way to execute an Ansible playbook through an API; at present it must be done by executing the Ansible CLI tool. Fedora will be collaborating with Ansible on this (see below).
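
Until such an API exists, the pragmatic workaround is simply to shell out to the CLI. Something along these lines would do it; the playbook name is, of course, made up:

import subprocess

# Run a locally generated playbook against the local machine. There is no
# supported Python API for this today, so we invoke the ansible-playbook
# CLI directly and let its exit code report success or failure.
subprocess.check_call([
    "ansible-playbook",
    "-i", "localhost,",   # trailing comma: a literal one-host inventory
    "-c", "local",
    "deploy-server-role.yml",   # hypothetical generated playbook
])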

Required Fedora Enhancements

In Fedora, we will need to provide a useful UI for this new functionality. This will most likely need to happen in the Cockpit project, and we will have to find resources to handle this.

Specifically, we will need:

  • UI to search the Galaxy API, filtering by role signatures and other tags.
  • UI for an “answer file” for satisfying required variables in the roles.
  • UI for interrogating a system for what roles have been applied to it.

In addition to Cockpit UI work, we will need to provide infrastructure within the Fedora Project to provide our signatures. This will mean at minimum having secure, audited storage of our private signing key and a tool or service that performs the signing. In the short term, we can allow a set of trusted users to do this signing manually, but in the longer term we will need to focus on setting up a CI environment to enable automated testing and signing of role updates.

Lastly, as mentioned above, we will need to work on an API that Cockpit can invoke to fire off the generated Ansible playbook. This will be provided by Fedora (likely under the rolekit banner) but may be eventually absorbed into the upstream Ansible project once it matures.

Self-Signed SSL/TLS Certificates: Why They are Terrible and a Better Alternative

A Primer on SSL/TLS Certificates

Many of my readers (being technical folks) are probably already aware of the purpose and value of certificates, but in case you are not familiar with them, here’s a quick overview of what they are and how they work.

First, we’ll discuss public-key encryption and public-key infrastructure (PKI). It was realized very early on in human history that sometimes you want to communicate with other people in a way that prevents unauthorized people from listening in. All throughout time, people have been devising mechanisms for obfuscating communication in ways that only the intended recipient would be able to understand. This obfuscation is called encryption, the data being encrypted is called plaintext and the encrypted data is called ciphertext. The cipher is the mathematical transformation that is used to turn the plaintext into the ciphertext and relies upon one or more keys known only to trusted individuals to get the plaintext back.

Early forms of encryption were mainly “symmetric” encryption, meaning that the cipher used the same key for both encryption and decryption. If you’ve ever added a password to a PDF document or a ZIP file, you have been using symmetric encryption. The password is a human-understandable version of a key. For a visual metaphor, think about the key to your front door. You may have one or more such keys, but they’re all exactly alike and each one of them can both lock and unlock the door and let someone in.

Nowadays we also have forms of encryption that are “asymmetric”. What this means is that one key is used to encrypt the message and a completely different key is used to decrypt it. This is a bit harder for many people to grasp, but it works on the basic mathematical principle that some actions are much more complicated to reverse than others. (A good example I’ve heard cited is that it’s pretty easy to figure out the square of any number with a pencil and a couple minutes, but most people can’t figure out a square-root without a modern calculator). This is harder to visualize, but the general idea is that once you lock the door with one key, only the other one can unlock it. Not even the one that locked it in the first place.

So where does the “public” part of public-key infrastructure come in? What normally happens is that once an asymmetric key-pair is generated, the user will keep one of those two keys very secure and private, so that only they have access to it. The other one will be handed out freely through some mechanism to anyone at all that wants to talk to you. Then, if they want to send you a message, they simply encrypt their message using your public key and they know you are the only one who can decrypt it. On the flip side, if the user wanted to send a public message but provide assurance that it came from them, they can also sign a message with the private key, so that the message will contain a special signature that can be decrypted with their public key. Since only one person should have that key, recipients can trust it came from them.
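
For the programmatically inclined, here is a small sketch of both operations using the Python cryptography library. The key size and padding choices are just reasonable defaults for illustration, not a recommendation.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate an asymmetric key pair. The private half stays secret;
# the public half can be handed out to anyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Anyone holding the public key can encrypt a message that only the
# private key holder can decrypt.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"meet me at noon", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)

# Conversely, the private key holder can sign a message that anyone with
# the public key can verify; verify() raises an exception if the
# signature does not match.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(b"this really came from me", pss, hashes.SHA256())
public_key.verify(signature, b"this really came from me", pss, hashes.SHA256())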

Astute readers will see the catch here: how do users know for certain that your public key is in fact yours? The answer is that they need to have a way of verifying it. We call this establishing trust and it’s exceedingly important (and, not surprisingly, the basis for the rest of this blog entry). There are many ways to establish trust, with the most foolproof being to receive the public key directly from the other party while looking at two forms of picture identification. Obviously, that’s not convenient for the global economy, so there needs to be other mechanisms.

Let’s say the user wants to run a webserver at “www.mydomain.com”. This server might handle private user data (such as their home address), so a wise administrator will set the server up to use HTTPS (secure HTTP). This means that they need a private key and a matching public key; the public key, bundled with identity information, is what we call a certificate. The common way to do this is for the user to contact a well-known certificate authority and purchase a signature from them. The certificate authority will do the hard work of verifying the user’s identity and then sign their webserver certificate with the CA’s own private key, thus providing trust by way of a third-party. Many well-known certificate authorities have their public keys shipped by default in a variety of operating systems, since the manufacturers of those systems have independently verified the CAs in turn. Now everyone who comes to the site will see the nice green padlock on their URL bar that means their communications are encrypted.

A Primer on Self-Signed Certificates

One of the major drawbacks to purchasing a CA signature is that it isn’t cheap: the CAs (with the exception of Let’s Encrypt) are out there to make money. When you’re developing a new application, you’re going to want to test that everything works with encryption, but you probably aren’t going to want to shell out cash for every test server and virtual machine that you create.

The solution to this has traditionally been to create what is called a self-signed certificate. What this means is that instead of having your certificate signed by a certificate authority, you sign the certificate with its own private key. The problem with this approach is that web browsers and other clients that verify the security of the connection will be unable to verify that the server is who it says it is. In most cases, the user will be presented with a warning page informing them that the identity of the server cannot be verified. When setting up a test server, this is expected. Unfortunately, however, clicking through and saying “I’m sure I want to connect” has a tendency to form bad habits in users and often results in them eventually clicking through when they shouldn’t.

It should be pretty obvious, but I’ll say it anyway: Never use a self-signed certificate for a production website.

One of the problems we need to solve is how to avoid training users to ignore those warnings. One way that people often do this is to load their self-signed certificate into their local trust store (the list of certificate authorities that are trusted, usually provided by the operating system vendor but available to be extended by the user). This can have some unexpected consequences, however. For example, if the test machine is shared by multiple users (or is breached in a malicious attack), then the private key for the certificate might fall into other hands that would then use it to sign additional (potentially malicious) sites. And your computer wouldn’t try to warn you because the site would be signed by a trusted authority!

So now it seems like we’re in a Catch-22 situation: If we load the certificate into the trusted authorities list, we run the risk of a compromised private key for that certificate tricking us into a man-in-the-middle attack somewhere and stealing valuable data. If we don’t load it into the trust store, then we are constantly bombarded by a warning page that we have to ignore (or in the case of non-browser clients, we may have to pass an option not to verify the server’s certificate), in which case we could still end up in a man-in-the-middle attack, because we’re blindly trusting the connection. Neither of those seems like a great option. What’s a sensible person to do?

Two Better Solutions

So, let’s take both of the situations we just learned about and see if we can locate a middle ground somewhere. Let’s go over what we know:

  • We need to have encryption to protect our data from prying eyes.
  • Our clients need to be able to trust that they are talking to the right system at the other end of the conversation.
  • If the certificate isn’t signed by a certificate in our trust store, the browser or other clients will warn or block us, training the user to skip validation.
  • If the certificate is signed by a certificate in our trust store, then clients will silently accept it.
  • Getting a certificate signed by a well-known CA can be too expensive for an R&D project, but we don’t want to put developers’ machines at risk.

So there are two better ways to deal with this. One is to have an organization-wide certificate authority rather than a public one. This should be managed by the Information Technology (IT) staff. Then, R&D can submit their certificates to the IT department for signing and all company systems will implicitly trust that signature. This approach is powerful, but can also be difficult to set up (particularly in companies with a bring-your-own-device policy in place). So let’s look at another solution that’s closer to the self-signed approach.

The other way to deal with it would be to create a simple site-specific certificate authority for use just in signing the development/test certificate. In other words, instead of generating a self-signed certificate, you would generate two certificates: one for the service and one to sign that certificate. Then (and this is the key point – pardon the pun), you must delete and destroy the private key for the certificate that did the signing. As a result, only the public key of that private CA will remain in existence, and it will only have ever signed a single service. Then you can provide the public key of this certificate authority to anyone who should have access to the service and they can add this one-time-use CA to their trust store.
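
As a concrete illustration, here is roughly what that looks like with the Python cryptography library. The hostname, validity period and file names are placeholder choices, and the sscg tool described below does essentially this with considerably more care.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.datetime.utcnow()

# 1. Create a throwaway CA, entirely in memory.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Throwaway signing CA")])
ca_cert = (x509.CertificateBuilder()
           .subject_name(ca_name).issuer_name(ca_name)
           .public_key(ca_key.public_key())
           .serial_number(x509.random_serial_number())
           .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
           .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
           .sign(ca_key, hashes.SHA256()))

# 2. Create the service key and have the throwaway CA sign its certificate.
svc_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
svc_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.mydomain.com")])
svc_cert = (x509.CertificateBuilder()
            .subject_name(svc_name).issuer_name(ca_name)
            .public_key(svc_key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now).not_valid_after(now + datetime.timedelta(days=3650))
            .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"www.mydomain.com")]),
                           critical=False)
            .sign(ca_key, hashes.SHA256()))

# 3. Write out only the CA certificate and the service key/certificate.
#    The CA private key (ca_key) is never written anywhere; when this
#    script exits it is gone, so this CA can never sign anything else.
with open("ca.crt", "wb") as f:
    f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
with open("service.crt", "wb") as f:
    f.write(svc_cert.public_bytes(serialization.Encoding.PEM))
with open("service.key", "wb") as f:
    f.write(svc_key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))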

Now, I will stress that the same rule holds true here as for self-signed certificates: do not use this setup for a production system. Use a trusted signing authority for such sites. It’s far easier on your users.

A Tool and a Tale

I came up with this approach while I was working on solving some problems for the Fedora Project. Specifically, we wanted to come up with a way to ensure that we could easily and automatically generate a certificate for services that should be running on initial start-up (such as Cockpit or OpenPegasus). Historically, Fedora had been using self-signed certificates, but the downsides I listed above gnawed at me, so I put some time into it and came up with the private-CA approach.

In addition to the algorithm described above, I’ve also built a proof-of-concept tool called sscg (the Simple Signed Certificate Generator) to easily enable the creation of these certificates (and to do so in a way that never drops the CA’s private key onto a filesystem; it remains in memory). I originally wrote it in Python 3 and that version is packaged for use in Fedora today. This past week, as a self-assigned exercise to improve my knowledge of Go, I rewrote sscg in that language. It was a fun project and had the added benefit of removing the fairly heavyweight dependency on Python 3. I plan to package the Go version for Fedora 25 at some point in the near future, but if you’d like to try it out, you can clone my GitHub repository. Patches and suggestions for functionality are most welcome.

Sausage Factory: Multiple Edition Handling in Fedora

First off, let me be very clear up-front: normally, I write my blog articles to be approachable by readers of varying levels of technical background (or none at all). This will not be one of those. This will be a deep dive into the very bowels of the sausage factory.

The Problem

Starting with the Fedora.next initiative, the Fedora Project embarked on a journey to reinvent itself. A major piece of that effort was the creation of different “editions” of Fedora that could be targeted at specific user personas. Instead of having a One-Size-Fits-Some Fedora distribution, we would produce an operating system for “doers” (Fedora Workstation Edition), for traditional infrastructure administrators (Fedora Server Edition) and for new, cloudy/DevOps folks (Fedora Cloud Edition).

We made the decision early on that we did not want to produce independent distributions of Fedora. We wanted each of these editions to draw from the same collective set of packages as the classic Fedora. There were multiple reasons for this, but the most important of them was this: Fedora is largely a volunteer effort. If we started requiring that package maintainers had to do three or four times more work to support the editions (as well as the traditional DIY deployments), we would quickly find ourselves without any maintainers left.

However, differentiating the editions solely by the set of packages that they deliver in a default install isn’t very interesting. That’s actually a problem that could have been solved simply by having a few extra choices in the Anaconda installer. We also wanted to solve some classic arguments between Fedora constituencies about what the installed configuration of the system looks like. For example, people using Fedora as a workstation or desktop environment in general do not want OpenSSH running on the system by default (since their access to the system is usually by sitting down physically in front of a keyboard and monitor) and therefore don’t want any potential external access available. On the other hand, most Fedora Server installations are “headless” (no input devices or monitor attached) and thus having SSH access is critical to functionality. Other examples include the default firewall configuration of the system: Fedora Server needs to have a very tightened default firewall allowing basically nothing in but SSH and management features, whereas a firewall that restrictive proves to be harmful to the usability of a Workstation.

Creating Per-Edition Default Configuration

The first step to managing separate editions is having a stable mechanism for identifying what edition is installed. This is partly aesthetic, so that the user knows what they’re running, but it’s also an important prerequisite (as we’ll see further on) to allowing the packaging system and systemd to make certain decisions about how to operate.

The advent of systemd brought with it a new file that describes the installed system called os-release. This file is considered to be authoritative for information identifying the system. So this seemed like the obvious place to extend with information about which edition is running as well. We therefore needed a way to ensure that the different editions of Fedora would produce a unique (and correct) version of the os-release file depending on the edition being installed. We did this by expanding the os-release file to include two new values: VARIANT and VARIANT_ID. VARIANT_ID is a machine-readable unique identifier that describes which edition of Fedora is installed. VARIANT is a human-readable description.
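
For example, on a system installed from Fedora Server media, the os-release file carries lines along these lines (the other fields are omitted here):

VARIANT="Server Edition"
VARIANT_ID=server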

In Fedora, the os-release file is maintained by a special RPM package called fedora-release. The purpose of this package is to install the files onto the system that guarantee this system is Fedora. Among other things, this includes os-release, /etc/fedora-release, /etc/issue, and the systemd preset files. (All of those will become interesting shortly).

So the first thing we needed to do was modify the fedora-release package such that it included a series of subpackages for each of the individual Fedora editions. These subpackages would be required to carry their own version of os-release that would supplant the non-edition version provided by the fedora-release base package. I’ll circle back around to precisely how this is done later, but for now accept that this is true.

So now that the os-release file on the system is guaranteed to contain the appropriate VARIANT_ID, we needed to design a mechanism by which individual packages could make different decisions about their default configurations based on this. The full technical details of how to do this are captured in the Fedora Packaging Guidelines, but the basic gist of it is that any package that wants to behave differently between two or more editions must read the VARIANT_ID from os-release during its %posttrans (post-transaction) phase of package installation and put a symlink to the correct default configuration file in place. This needs to be done in the %posttrans phase because, due to the way that yum/dnf processes the assorted RPMs, there is no other way to guarantee that the os-release file has the right values until that time. This is because it’s possible for a package to install and run its %post script after the fedora-release package has been installed but before the fedora-release-EDITION package has.
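
In very rough Python pseudocode (the real scriptlets live in the package spec file, and the package name and paths here are invented purely for illustration), the %posttrans logic amounts to this:

import os

def read_variant_id(path="/etc/os-release"):
    # Pull VARIANT_ID out of os-release; fall back to a generic default
    # when the field is absent (a non-edition install).
    with open(path) as f:
        for line in f:
            if line.startswith("VARIANT_ID="):
                return line.split("=", 1)[1].strip().strip('"')
    return "default"

variant_id = read_variant_id()

# Point the package's active configuration at the edition-specific file.
target = "/usr/share/examplepkg/example.conf.{}".format(variant_id)
link = "/etc/examplepkg/example.conf"
if os.path.lexists(link):
    os.unlink(link)
os.symlink(target, link)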

That all assumes that the os-release file is correct, so let’s explore how that is made to happen. First of all, we created a new directory in /usr/lib called /usr/lib/os.release.d/ which will contain all of the possible alternate versions of os-release (and some other files as well, as we’ll see later). As part of the %install phase of the fedora-release package, we generate all of the os-release variants and then drop them into os.release.d. We will then later symlink the appropriate one into /usr/lib/os-release and /etc/os-release during %post.

There’s an important caveat here: the /usr/lib/os-release file must be present and valid in order for any package to run the %systemd_post scripts to set up their unit files properly. As a result, we need to take a special precaution. The fedora-release package will always install its generic (non-edition) os-release file during its %post section, to ensure that the %systemd_post scripts will not fail. Then later if a fedora-release-EDITION package is installed, it will overwrite the fedora-release one with the EDITION-specific version.

The more keen-eyed reader may have already spotted a problem with this approach as currently described: What happens if a user installs another fedora-release-EDITION package later? The short answer was that in early attempts at this: “Bad Things Happened”. We originally had considered that installation of a fedora-release-EDITION package atop a system that only had fedora-release on it previously would result in converting the system to that edition. However, that turned out to A) be problematic and B) violate the principle of least surprise for many users.

So we decided to lock the system to the edition that was first installed by adding another file: /usr/lib/variant, which is essentially just a copy of the VARIANT_ID line from /etc/os-release. The %post script of each of the fedora-release subpackages (including the base subpackage) checks whether this file exists. If it does not, the %post script of a fedora-release-EDITION package will create it with the appropriate value for that edition. If processing reaches all the way to the %posttrans script of the fedora-release base package (meaning no edition package was part of the transaction), then it will write the variant file at that point to lock the system into the non-edition variant.
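
Schematically, the locking behavior looks like the sketch below, again as illustrative Python rather than the actual scriptlet language:

import os

VARIANT_FILE = "/usr/lib/variant"

def lock_variant(variant_id):
    # Only the first writer wins: once an edition (or the non-edition
    # base) has claimed the system, later installs leave it alone.
    if os.path.exists(VARIANT_FILE):
        return
    with open(VARIANT_FILE, "w") as f:
        f.write("VARIANT_ID={}\n".format(variant_id))

# A fedora-release-EDITION %post would call something like
# lock_variant("server"), while the fedora-release base %posttrans,
# reached only when no edition package was part of the transaction,
# falls back to the non-edition value.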

There remains a known bug with this behavior, in that if the *initial* transaction actually includes two or more fedora-release-EDITION subpackages, whichever one is processed first will “win” and write the variant. In practice, this is unlikely to happen, since all of the install media are curated to include at most one fedora-release-EDITION package.

I said above that this “locks” the system into the particular release, but that’s not strictly true. We also ship a script along with fedora-release that will allow an administrator to manually convert between editions by running `/usr/sbin/convert-to-edition -e <edition>`. Effectively, this just reruns the steps that the %post of that edition would run, except that it skips the check for whether the variant file is already present.

Up to now, I’ve talked only about the os-release file, but the edition-handling also addresses several other important files on the system, including /etc/issue and the systemd presets. /etc/issue is handled identically to the os-release file, with the symlink being created by the %post scripts of the fedora-release-EDITION subpackages or the %posttrans of the fedora-release package if it gets that far.

The systemd presets are a bit of a special case, though. First of all, they do not replace the global default presets, but they do supplement them. This means that what we do is symlink an edition-specific preset into the /usr/lib/systemd/system-preset/ directory. These presets can either enable new services (as in the Server Edition, where it turns on Cockpit and rolekit) or disable them (as in Workstation Edition, where it shuts off OpenSSH). However, due to the fact that systemd only processes the preset files during its %post phase, we need to force systemd to reread them after we add the new values.

We need to be careful when doing this, because we only want to apply the new presets if the current transaction is the initial installation of the fedora-release-EDITION package. Otherwise, an upgrade could override choices that the user themselves have made (such as disabling a service that defaults to enabled). This could lead to unexpected security issues, so it has to be handled carefully.

In this implementation, instead of just calling the command to reprocess all presets, we parse the preset files and process only the units mentioned in them. (This is out of an abundance of caution, in case any package other than systemd is changing the default enabled state, such as a third-party RPM that runs `systemctl enable httpd.service` in its %post section.)
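
A simplified sketch of that parsing step might look like the following. The preset file name is made up, and the real implementation runs inside the package scriptlets rather than as a standalone script:

import glob
import subprocess

units = []
for preset_path in glob.glob("/usr/lib/systemd/system-preset/*-server-edition.preset"):
    with open(preset_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Preset lines look like "enable cockpit.socket" or
            # "disable sshd.service"; we only care which unit is named.
            parts = line.split()
            if len(parts) >= 2 and parts[0] in ("enable", "disable"):
                units.append(parts[1])

# Re-apply the preset policy for exactly the units named in the edition's
# preset file, leaving every other unit's current state untouched.
for unit in units:
    subprocess.call(["systemctl", "preset", unit])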

Lastly, because we are using symlinks to manage most of this, we had to write the %post and %posttrans scripts in the built-in Lua implementation carried by RPM. This allowed us to call posix.symlink() without having to add a dependency on coreutils to do so in bash (which would have resulted in a circular dependency and broken installations). We wrote this as a single script that is imported by the RPM during the SRPM build phase. This script is actually copied by rpmbuild into the scriptlet sections verbatim, so the script must be present in the dist-git checkout on its own, not just as part of the exploded tarball. So when modifying the Lua script, it’s important to make sure to modify the copy in dist-git as well as the copy upstream.

Using OpenLMI to join a machine to a FreeIPA domain

People who have been following this (admittedly intermittent) blog for a while are probably aware that in the past I was heavily-involved in the SSSD and FreeIPA projects.

Recently, I’ve been thinking a lot about two topics involving FreeIPA. The first is how to deploy a FreeIPA server using OpenLMI. This is the subject of my efforts in the Fedora Server Role project and will be covered in greater detail in another blog post, hopefully next week.

Today’s topic involves enrollment of FreeIPA clients into the domain from a central location, possibly the FreeIPA server itself. Traditionally, enrolling a system has been a “pull” operation, where an admin signs into the system and then requests that it be added to the domain. However, there are many environments where this is difficult, particularly in the case of large-scale datacenter or cloud deployments. In these cases, it would be much better if one could script the enrollment process.

Additionally, it would be excellent if the FreeIPA Web UI (or CLI) could display a list of systems on the network that are not currently joined to a domain and trigger them to join.

There are multiple problems to solve here. The first of course is whether OpenLMI can control the joining. As it turns out, OpenLMI can! OpenLMI 1.0 includes the “realmd” provider, which acts as a remote interface to the ‘realmd’ service on Fedora 20 (or later) and Red Hat Enterprise Linux 7.0 (or later).

Now, there are some prerequisites that have to be met before using realmd to join a domain. The first is that the system must have DNS configured properly such that realmd will be able to query it for the domain controller properties. For both FreeIPA and Active Directory, this means that the system must be able to query for the _ldap SRV entry that matches the domain the client wishes to join.
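
A quick way to confirm that prerequisite from Python is to look up the SRV record directly with the dnspython module; the domain name below is a placeholder:

import dns.resolver  # provided by the dnspython package

# The client must be able to resolve the _ldap SRV record for the domain
# it intends to join; if this query fails, realmd will not be able to
# locate the domain controller either.
answers = dns.resolver.query("_ldap._tcp.domainname.test", "SRV")
for record in answers:
    print("LDAP server: {} port {}".format(record.target, record.port))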

In most deployment environments, it’s reasonable to expect that the DNS servers provided by the DHCP lease (or static assignment) will be correctly configured with this information. However, in a development or testing environment (with a non-production FreeIPA server), it may be necessary to first reconfigure the client’s DNS setup.

Since we’re already using OpenLMI, let’s see if we can modify the DNS configuration that way, using the networking provider. As it turns out, we can! Additionally, we can use the lmi metacommand to make this very easy. All we need to do is run the following command:

lmi -h <client> net dns replace x.x.x.x

With that done, we need to do one more thing before we join the domain. Right now, the realmd provider doesn’t support automatically installing the FreeIPA client packages when joining a domain (that’s on the roadmap). So for the moment, you’re going to want to run

lmi -h <client> sw install freeipa-client

(Replacing ‘freeipa-client’ with ‘ipa-client’ if you’re talking to a RHEL 7 machine).

With that done, now it’s time to use realmd to join the machine to the FreeIPA domain. Unfortunately, in OpenLMI 1.0 we do not yet have an lmi metacommand for this. Instead, we will use the lmishell Python scripting environment to perform the join (don’t worry, it’s short and easy to follow!):

c = connect('server', 'username', 'password')
realm_obj = c.root.cimv2.LMI_RealmdService.first_instance()
realm_obj.JoinDomain(Domain='domainname.test', User='admin', Password='password')

In these three lines, we are connecting to the client machine using OpenLMI, getting access to the realm object (there’s only one on a system, so that’s why we use first_instance()) and then calling the JoinDomain() method, passing it the credentials of a FreeIPA administrator with privileges to add a machine, or else passing None for the User and a pre-created one-time password for domain join as the Password.

And there you have it, barring an error we have successfully joined a client to a domain!

Final thoughts: I mentioned above that it would be nice to be able to discover unenrolled systems on the network and display them. For this, we need to look into extending the set of attributes we have available in our SLP implementation so that we can query on this. It shouldn’t be too much work, but it’s not ready today.

The Forgotten “F”: A Tale of Fedora’s Foundations

Lately, I’ve been thinking a lot about Fedora’s Foundations: “Freedom, Friends, Features, First”, particularly in relation to some very sticky questions about where certain things fit (such as third-party repositories, free and non-free web services, etc.)

Many of these discussions get hung up on wildly different interpretations of what the “Freedom” Foundation means. First, I’ll reproduce the exact text of the “Freedom” Foundation:

“Freedom represents dedication to free software and content. We believe that advancing software and content freedom is a central goal for the Fedora Project, and that we should accomplish that goal through the use of the software and content we promote. By including free alternatives to proprietary code and content, we can improve the overall state of free and open source software and content, and limit the effects of proprietary or patent encumbered code on the Project. Sometimes this goal prevents us from taking the easy way out by including proprietary or patent encumbered software in Fedora, or using those kinds of products in our other project work. But by concentrating on the free software and content we provide and promote, the end result is that we are able to provide: releases that are predictable and 100% legally redistributable for everyone; innovation in free and open source software that can equal or exceed closed source or proprietary solutions; and, a completely free project that anyone can emulate or copy in whole or in part for their own purposes.”

The language in this Foundation is sometimes dangerously unclear. For example, it pretty much explicitly forbids the use of non-free components in the creation of Fedora (sorry, folks: you can’t use Photoshop to create your package icon!). At the same time, we regularly allow the packaging of software that can interoperate with non-free software; we allow Pidgin and other IM clients to talk to Google and AOL, we allow email clients to connect to Microsoft Exchange, etc. The real problem is that every time a question comes up against the Freedom Foundation, Fedora contributors diverge into two armed camps: the hard-liners who believe that Fedora should never under any circumstances work (interoperate) with proprietary services and the folks who believe that such a hard-line approach is a path to irrelevance.

To make things clear: I’m personally closer to the second camp than the first. In fact, in keeping with the subject of this post, I’d like to suggest a fifth Foundation, one to ultimately supersede all the rest: “Functional”. Here’s a straw-man phrasing of this proposal:

Functional means that the Fedora community recognizes this to be the ultimate truth: the purpose of an operating system is to enable its users to accomplish the set of tasks they need to perform.

With this in place, it would admittedly water down the Freedom Foundation slightly. “Freedom” would essentially be reduced to: the tools to reproduce the Fedora Build Environment and all packages (source and binary) shipped from this build system must use a compatible open-source license and not be patent-encumbered. Fedora would strive to always provide and promote open-source alternatives to existing (or emerging) proprietary technologies, but would accept that attracting users means not telling them that they must change all of their tools in order to make the switch.

The “Functional” Foundation should be placed above the other four and be the goal-post that we measure decisions against: “If we make this change, are we reducing our users’ ability to work with the software they want/need to?”. Any time the answer to that question would be “yes”, we have to recognize that this translates into lost users (or at the very least, users that are working around our intentions).

Now, let me be further clear on this: I am not in any way advocating the use of closed-source software or services. I am not suggesting that we start carrying patent-encumbered software. I think it is absolutely the mission of Fedora to show people that FOSS is the better long-term solution. However, in my experience a person who is exposed to open source and allowed to migrate in their own time is one who is more likely to become a lifelong supporter. A person who is told “if you switch to Fedora, you must stop using Application X” is a person who is not running Fedora.

Proposal: FreeIPA Role for Fedora Servers Using Cockpit

At last week’s Fedora Server Working Group meeting, we encountered some confusion as to what exactly a role should look like. We seemed to be of differing opinions, technologically, on how to implement the roles. I recommended that we take a step back and try to design the user experience that a role should accomplish first. I recommended that we probably wanted to take one example and provide a user story for it and use that as a straw-man to work out the more general cases of server roles.

Why not change the world?

I have always been interested in science, technology and (most of all) computers. These are things that I always loved, even though they were sometimes difficult. I loved math and science class in school; I read science-fiction and fantasy novels in all of my spare time. I was the nerdy kid at school that was bullied and mocked. It would have been so easy to just give in and be “like everyone else”. I could have stopped reading. I could have played more sports.

But I didn’t. I knew what it was that I loved. I knew that what I wanted was more important than what they wanted. Most of all, I wanted to change the world.

No one in history has ever changed the world by being what others wanted them to be. They have changed the world by daring to laugh at conventional wisdom and try something new. They have changed the world by standing up and defying the notion that “might makes right” or that “going along to get along” is the right course of action.

This is the sentiment that drove me into my open source career. I chose this path in my life because I see it as a way to effect real change in the world. This is a change that really has happened in my lifetime and continues to happen. It flies in the face of the history of business.

At its heart, the open source movement is an extension of science. Sir Isaac Newton wrote “If I have seen further it is by standing on the shoulders of giants.”. One of the greatest minds in history acknowledged that his contributions to our greater understanding came not from his sole vision but from the fact that thousands of great and lesser minds had worked together to create a world into which his particular spark could ignite change.

The philosophy of open source is the same. It is a mechanism for enabling thousands of people to work together for a common goal. Classic software development has always occurred in closed environments that jealously protect their secrets. Essentially, it has operated on the same philosophies as invention and manufacturing have over the years. It is a difficult habit to break, especially when it seems like you’re giving something up.

That is the core piece that most people have trouble grasping, in my experience. While you are giving up exclusive control, you are gaining something much more valuable: you’re expanding the base level. You are feeding those giants so that you and others can see further and strive higher still. Where once you were an inventor standing on one giant’s shoulders, now you look down and can see the truth for what it is: that giant is standing on other, bigger giants. And on it goes, all the way (“It’s giants all the way down”, to borrow from a popular joke).

Once that realization is made, things become more clear. When you stand taller and can see further, you realize that your efforts can have greater impact than you once believed. Instead of the miserly hoarding of knowledge, you can share it with the world and see what they will do with it too. It won’t always be pretty and it won’t always match your original intentions, but it will always be greater than the sum of its parts. And in a small way, you will have changed the world.

One thing that gets forgotten a lot of the time when folks talk about changing the world is that they talk about massive shifts in direction. They talk about those moments in history where you can look back and say, “There! That’s the moment when everything changed!”. There will always be a few such moments every few centuries, but the truth is that most change happens glacially.

Open source is one of those glacial changes. I don’t use that term to suggest that it moves too slowly. Rather, I chose it to evoke the immensity, inevitability, and incredible moving inertia that drives it. A glacier may take decades, centuries or millennia to move across the land, but nothing can truly stand against that oncoming tide. This is the legacy of open source: the tide of progress.

Over the last twenty-two years of Linux, we’ve seen a great many strides made in the march of progress. When people ask what I work on and I say “Linux” or “open source”, they often give me a blank look. I usually follow up with the following sentence: “Do you have a device with an on-switch that doesn’t start with the letter I? Chances are, it’s running Linux or at least some open source code”. Every time I say this, I boggle at the magnitude of it. All those years of hard work from so many talented individuals have paid off to such a degree that open source technologies are now the default, rather than the challenger or also-ran. Moreover, it has become so ubiquitous that the public at large is using it without ever caring or needing to know about it.

In a very real way, the open source community has changed the world. It continues to do so every day. I get to be a part of that, and I won’t pretend that the idea doesn’t make me a little heady and giddy.

I started this blog entry with the question “Why not change the world?”, but that was a little bit misleading. We’ve already succeeded. The real question has to be “How will you change it next?”.