A Hacker’s Guide to Moving Linux Services into Containers

Background

For years, I floundered around with moving my own blog, ticket system, and wiki into containers. Literally, ticket #627, Migrate Crunchtools to Containers, has been open in Request Tracker since March 11th, 2017. It’s embarrassing to admit given how deeply I have been involved with containers at Red Hat. From the early days of Docker in RHEL 7 (circa 2014), to building OpenShift 3 on Kubernetes instead of Gears – from launching CRI-O as a Technology Preview in OpenShift 3.7 to launching the Container Tools module with Podman, Buildah, and Skopeo in RHEL 8 – from the acquisition of CoreOS, including quay.io (one of my favorite services), to launching Red Hat Universal Base Image – I have been deeply using, testing, and even driving the roadmap for container technologies at Red Hat. Yet it took me until 2020 to finish ticket #627.

Sure, I’ve built tons of demos, done tons of experiments, and thought for hours and hours about how to migrate things. I’ve had hundreds, if not thousands, of migration conversations with customers and community members, but I just couldn’t find time to convert my personal Linux services. So don’t feel bad if you are still staring, longingly, from afar, at other people’s fancy containerized services – I’ve only recently gotten over the hump myself.

I’ve finally done it, and I want to share my technical solution as well as break down my conscious design decisions. I hope it will get you over the hump toward moving your own services into containers. The goal is to give you some solid, concise, technical guidance. Here are the three services we are going to take a hard look at in this article:

SERVICE          PURPOSE        TECHNOLOGY
WordPress        Blog           Apache, PHP, FastCGI Process Manager (FPM), MariaDB, WordPress
Mediawiki        Wiki           Apache, PHP, FastCGI Process Manager (FPM), MariaDB, Cron, Mediawiki
Request Tracker  Ticket System  Apache, FastCGI, Perl, MariaDB, Postfix, Cron, Request Tracker

These are very common Linux services, and they seem so deceptively simple at first glance. But the truth is, they’re really not. You need senior Linux skills to containerize them well. These three services are only a sample, but deep inspection of these containerized services should provide a foothold for your own work.

Methodology

If you search Google, you will find pages and pages of blog posts, white papers, and articles. A quick look at five or ten of the results will lead you to the same three main options:

  • Lift & shift
  • Refactor
  • Rewrite

To be clear, these are the same three options that people have had for many years. As an aside, before mainframes there was no portability: you had to rewrite your application from scratch for every different computer (A Brief History in Code Portability). With the advent of operating systems and standardized programming languages, some permutation of the same three options has existed. Here are some examples:

  • Mainframe to Unix – mostly rewrite
  • Unix to Linux – lift & shift, refactor, rewrite
  • Bare metal to virtual machines – mostly lift & shift
  • Virtual machines to cloud – mostly refactor, rewrite
  • Windows to Linux – lift & shift, refactor, rewrite
  • Linux processes to containerized Linux processes – lift & shift, refactor, rewrite

In fact, if you search for migrating to containers, the second article you’ll find is one I wrote, delving into these three options and some techniques on how to analyze architecture, security and performance: (Best practices for migrating to containerized applications – 11 pages). Also, here’s a presentation I did covering the same topic: “Containers for Grownups: Migrating Traditional & Existing Applications” (video & slides).
For simplicity, I will briefly touch on the most essential parts of the above white paper and presentation. The vast majority of the software that we use today was designed and written before Linux containers, so even when you write (or rewrite) software from scratch, you need the same skills. These skills will come naturally for Linux Systems Administrators and Architects. Now, let’s dig into what I did specifically for my own services.

For my blogs, wiki and ticketing system, rewriting and refactoring were completely out of the question. Now, you may be asking yourself, why don’t you just move to Jira for ticketing, wordpress.com for your blogs, and some free service for your wiki? Well, I can’t move for the same reasons most large enterprise businesses can’t move. There is way too much data embedded in my services – from learning Jiu Jitsu, to home projects, to changing the differential in my 2005 Crossfire SRT 6, everything I have done over the last 10+ years is embedded in these Linux services. They are essentially an extension of my brain. There are nearly 1000 tickets in Request Tracker, 800 pages in my wiki, and over 200 articles on my two blogs. In fact, much like a large enterprise, I purposefully chose Mediawiki because I know that it will exist for as long as Wikipedia exists, and will likely outlive me. I literally plan on writing my last Mediawiki entry a few days before I kick off, so I just need Mediawiki to be around for another 20 or 30 years 🙂 Given my business needs, I chose lift & shift, with a little, teeny, tiny bit of refactor mixed in.

Now, let’s move on to the level of effort. Here are some guidelines on how difficult different services are to move:

                EASY                   MODERATE                                 DIFFICULT
Code            Completely Isolated    Somewhat Isolated                        Self Modifying
                (single process)       (multiple processes)                     (e.g. Actor Model)
Configuration   One File               Several Files                            Anywhere in Filesystem
Data            Saved in Single Place  Saved in Several Places                  Anywhere in Filesystem
Secrets         Static Files           Network                                  Dynamic Generation of Certificates
Network         HTTP, HTTPS            TCP, UDP                                 IPSEC, Highly Isolated
Installation    Packages, Source       Installer and Understood Configuration  Installers (install.sh)
Licensing       Open Source            Proprietary                              Restrictive & Proprietary
Recoverability  Easy to Restart        Fails Sometimes                          Fails Often

Once you have decided between lift & shift, refactor, or rewrite, you need to gauge the level of effort, because even new applications (including microservices) rely on programming languages and Linux daemons written before containers existed. Luckily, most Unix and Linux services are designed in a modular way that makes them conducive to separating code, configuration and data. This separation is an absolute necessity when you move any software into containers. Furthermore, you need to think about installation (and updates, in the case of WordPress), secrets, and recoverability. For a slightly deeper dive on the above table, see: Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization.

Separating services into code, configuration and data requires a strong set of Linux skills. With a few hours’ investment in learning the key concepts of containers, a strong Linux sysadmin or architect can productively start to move services into containers. For a deeper dive into the skills a Linux admin needs to learn Linux containers, see: Lab: Linux Container Internals 2.0.
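Concretely, every service in this article follows the same layout convention on the container host. Here’s an illustrative sketch of the tree (the real files are in the GitHub repository linked near the end of this article):

/srv/wordpress.crunchtools.com/
├── code/      # software as downloaded from upstream (e.g. WordPress itself); no customizations
├── config/    # every customized config file, bind mounted read-only into the container
└── data/      # writable state: database files, uploads, logs, and backups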
Now, let’s dig into our first Linux service…

Service #1: WordPress

It seems so deceptively easy. It’s a standard LAMP stack, but there are a few pitfalls we want to avoid. Containers are really two things: container images at rest, and Linux processes at runtime. So let’s take a look at both parts – build & run.

Build

WordPress needs PHP and a web server. The most common configuration is to use Apache (or Nginx) with PHP FastCGI Process Manager (php-fpm) and a PHP interpreter. In fact, a general purpose container image can be constructed for almost any PHP based application including WordPress and Mediawiki. Here’s an example of how to build one with Red Hat Universal Base Image:

FROM registry.access.redhat.com/ubi8/ubi-init
MAINTAINER fatherlinux <[email protected]>
RUN yum install -y mariadb-server mariadb php php-apcu php-intl php-mbstring php-xml php-json php-mysqlnd crontabs cronie iputils net-tools;yum clean all
RUN systemctl enable mariadb
RUN systemctl enable httpd
RUN systemctl disable systemd-update-utmp.service
ENTRYPOINT ["/sbin/init"]
CMD ["/sbin/init"]

The ubi-init image is configured out of the box to run systemd in the container. This makes it easy to run a few commands at install time and rely on the subject matter expertise embedded in the Linux distribution. As I’ve argued for years, the quality of the container image, as well as the supply chain hygiene, is more important than producing the absolute smallest individual images we can (Container Tidbits: Can Good Supply Chain Hygiene Mitigate Base Image Sizes?). We need to consider the entire size of our supply chain, not the individual images, so I chose the ubi-init image.
Notice how simple the Containerfile (Dockerfile) is? That’s because we are relying on the packagers to start the services correctly. See also: Do Linux Distributions Still Matter with Containers? Emphatically, yes!
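If you want to follow along, a minimal sketch of the build step looks like this (it assumes the Containerfile above is saved in the current directory; the tag matches the one referenced in the unit files below):

# Build the general purpose Apache/PHP image used by both WordPress and Mediawiki
podman build -t localhost/httpd-php .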
It’s a fairly simple build, so let’s move on to the tricky stuff at runtime.

Run

Like traditional services on traditional servers, running our containers with systemd gives us a convenient way to start them when we boot our container host, or to restart them when a container is killed (recoverability in the table above). Let’s dissect our systemd unit file to better understand the design decisions and some of the advantages of running services in containers:

[Unit]
Description=Podman container - wordpress.crunchtools.com

[Service]
Type=simple
ExecStart=/usr/bin/podman run -i --read-only --rm -p 80:80 --name wordpress.crunchtools.com \
-v /srv/wordpress.crunchtools.com/code/wordpress:/var/www/html/wordpress.crunchtools.com:Z \
-v /srv/wordpress.crunchtools.com/config/wp-config.php:/var/www/html/wordpress.crunchtools.com/wp-config.php:ro \
-v /srv/wordpress.crunchtools.com/config/wordpress.crunchtools.com.conf:/etc/httpd/conf.d/wordpress.crunchtools.com.conf:ro \
-v /srv/wordpress.crunchtools.com/data/wp-content:/var/www/html/wordpress.crunchtools.com/wp-content:Z \
-v /srv/wordpress.crunchtools.com/data/logs/httpd:/var/log/httpd:Z \
-v /srv/wordpress.crunchtools.com/data/mariadb/:/var/lib/mysql:Z \
--tmpfs /etc \
--tmpfs /var/log/ \
--tmpfs /var/tmp \
localhost/httpd-php
ExecStop=/usr/bin/podman stop -t 3 wordpress.crunchtools.com
ExecStopPost=/usr/bin/podman rm -f wordpress.crunchtools.com
Restart=always

[Install]
WantedBy=multi-user.target

First and foremost, notice that we are running this entire container read-only and with the --rm option, making it ephemeral. The container is deleted every time it is stopped. This forces us to split up our code, configuration, and data, and save them on external mounts, or they will be lost. This also gives us the ability to kill the container to pick up config file changes, like a normal service (more on this later). Apache, PHP FPM and MariaDB run side by side in the container, conveniently allowing them to communicate over private sockets in the container. For such a simple service, there is no need to scale MariaDB and Apache separately, so there’s no need to split them up.

Notice that we split the code, configuration and data into separate directories and bind mounts. The main Apache, PHP, and PHP FPM binaries come from the httpd-php container image built on Red Hat Universal Base Image, while the WordPress code comes from the code/wordpress bind mount. In many containers, all of the code will come from the container image (see Request Tracker later). The code/wordpress directory just houses the WordPress PHP code downloaded from wordpress.org (https://wordpress.org/download/). None of our personal data or customizations are saved in the code/wordpress directory, but we purposefully made it a separate, writable bind mount to allow WordPress to auto-update itself at runtime. This is contrary to typical best practices with containers, but it’s a very convenient feature for a popular, public-facing web service which is under constant attack and receives security updates frequently. Architecting it this way gives us over-the-air updates without having to rebuild the container image. Making services as driverless as possible is definitely useful.

Now, look at the config lines. Every customized config file is bind mounted into the container read-only. This is a solid security upgrade over traditional LAMP servers (virtual machines or bare metal). It prevents the use of some WordPress plugins that try to change wp-config.php, but most sysadmins would want to disable those anyway. This “could” be made read-write if some of our users really needed those plugins.

Next, notice the data directory. We bind mount three different sub directories. All of them are writable:

  • data/wp-content – this directory has our personal data and customizations in it. This includes things like WordPress themes, plugins, and uploaded files (images, videos, mp3s, etc.). It should also be noted that this is a WordPress Multi-User (MU) site, so multiple sites save their data here. A WordPress administrator could log in and create new sites if necessary.
  • data/logs – we want our Apache logs outside the container so that we can track down access problems and errors, or do analytics. We could also use these logs to reconstruct what happened should somebody hack in. A write-only mount option might be useful here.
  • data/mariadb – this is our writable directory for MariaDB. Most of our secrets are stored in the database, and this directory has permissions set correctly for the mysql user/group. This gives us process-level isolation in the container, similar to a normal LAMP server. There is even a bit of a security upgrade, because this MariaDB instance only has data for WordPress in it. Hackers can’t break into WordPress and get to our wiki or Request Tracker, which have their own separate instances of MariaDB.

Next, let’s take a look at the --tmpfs mounts. These enable systemd to run properly in a read-only container. Any data written to these mounts will be automatically deleted when the container stops. This makes everything outside of our bind mounts completely ephemeral. Other modifications could be made to capture /var/log/messages, or other logs, if desired.
For backups within WordPress, we rely on UpdraftPlus. UpdraftPlus offers the advantage of backing up everything from a WordPress MU site, including themes, plugins, files, and the database – it can even push the backup to remote storage like Dropbox or pCloud (through WebDav). This is a common design pattern with higher-level applications like WordPress. Often, databases, CRMs, etc. will have their own backup utilities or ecosystems of third-party backup software. Relying on this existing software is still useful in containers.
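For completeness, here’s a hedged sketch of wiring the unit file above into systemd (the unit file name is my assumption; any name ending in .service works):

# Install the unit file, then enable and start the service
sudo cp wordpress.crunchtools.com.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now wordpress.crunchtools.com.service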

Service #2: Mediawiki

Next, we’ll tackle Mediawiki, since it’s also an Apache, PHP FPM, and PHP based service.

Build

Mediawiki runs in a container image built from the exact same Containerfile (like a Dockerfile). Notice one small thing not mentioned in the WordPress section: we install crontabs and cronie. Unlike WordPress, which has an advanced backup utility, with Mediawiki we must dump the MariaDB database to get backups, so we need cron.

FROM registry.access.redhat.com/ubi8/ubi-init
MAINTAINER fatherlinux <[email protected]>
RUN yum install -y mariadb-server mariadb php php-apcu php-intl php-mbstring php-xml php-json php-mysqlnd crontabs cronie iputils net-tools;yum clean all
RUN systemctl enable mariadb
RUN systemctl enable httpd
RUN systemctl disable systemd-update-utmp.service
ENTRYPOINT ["/sbin/init"]
CMD ["/sbin/init"]

Other than its use of cron, Mediawiki does not rely on anything special in the httpd-php container image.

Run

Now, let’s take a look at how running Mediawiki differs slightly from WordPress:

[Unit]
Description=Podman container - learn.fatherlinux.com

[Service]
Type=simple
ExecStart=/usr/bin/podman run -i --read-only --rm -p 8080:80 --name learn.fatherlinux.com \
-v /srv/learn.fatherlinux.com/code/mediawiki:/var/www/html/learn.fatherlinux.com:ro \
-v /srv/learn.fatherlinux.com/config/LocalSettings.php:/var/www/html/learn.fatherlinux.com/LocalSettings.php:ro \
-v /srv/learn.fatherlinux.com/config/learn.fatherlinux.com.conf:/etc/httpd/conf.d/learn.fatherlinux.com.conf:ro \
-v /srv/learn.fatherlinux.com/config/htpasswd:/etc/httpd/conf.d/htpasswd:ro \
-v /srv/learn.fatherlinux.com/config/root-crontab:/var/spool/cron/root:ro \
-v /srv/learn.fatherlinux.com/data/mariadb/:/var/lib/mysql:Z \
-v /srv/learn.fatherlinux.com/data/images/:/var/www/html/learn.fatherlinux.com/images:Z \
-v /srv/learn.fatherlinux.com/data/skins/:/var/www/html/learn.fatherlinux.com/skins:Z \
-v /srv/learn.fatherlinux.com/data/logs/httpd:/var/log/httpd:Z \
-v /srv/learn.fatherlinux.com/data/backups/:/root/.backups:Z \
--tmpfs /etc \
--tmpfs /var/log/ \
--tmpfs /var/tmp \
localhost/httpd-php
ExecStop=/usr/bin/podman stop -t 3 learn.fatherlinux.com
ExecStopPost=/usr/bin/podman rm -f learn.fatherlinux.com
Restart=always

[Install]
WantedBy=multi-user.target

We run the container with --read-only and --rm just like WordPress, making it ephemeral. But notice that we bind mount code/mediawiki read-only as well. We could have built another layered image and embedded the Mediawiki code into that layer, but we decided to bind mount it instead, because many PHP apps use a pattern like WordPress where the code directory is expected to be writable at runtime. This design decision purposefully gives us the option to make the code directory read-only or writable depending on the PHP web application we are putting in a container. The same httpd-php image can be used for all of them, thereby reducing the size of our software supply chain. If we update glibc, OpenSSL, Apache, PHP FPM, or PHP to fix security issues, all of our PHP applications inherit the fixes when they are restarted. In a perfect world, we would constantly rebuild this httpd-php image in a CI/CD system with a good test harness for continual updates.
The configuration files, like WordPress, are bind mounted into the container read-only at runtime. Again, this is a great security upgrade from a standard LAMP server.
There are more data directories bind mounted into Mediawiki; here’s why:

  • data/mariadb – this is straightforward. The reasons are identical to WordPress.
  • data/images – stores images, PDFs and other files uploaded into the wiki.
  • data/skins – Like WordPress, Mediawiki was designed before containers, so its developers could never have anticipated the needs of technologies like containers. Unlike WordPress, Mediawiki ships with pre-populated skins in the code/mediawiki/skins directory. This bind mount is a copy of that data combined with our custom skins, mounted read/write so that we can add new skins if we like. In the future, this will likely be solved with a “-v skins:skins:O” overlay option to podman, which will allow us to “overlay” our custom data on top of the existing code/mediawiki/skins data that comes with the initial code download.
  • data/logs – Like WordPress, we want access to our logs outside of the container.
  • data/backups – Unlike WordPress, we must use a cron job to dump the MariaDB database on a schedule (a sketch of such an entry follows this list). Those backups are put in this directory, then copied off site by the container host.
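As a sketch, the root-crontab file bind mounted above might contain an entry like the following (the schedule, credentials, and file naming are illustrative assumptions, not the actual contents):

# Hypothetical nightly database dump into the bind-mounted backup directory
# (percent signs must be escaped as \% in crontab entries)
30 2 * * * /usr/bin/mysqldump --all-databases > /root/.backups/mediawiki-$(date +\%F).sql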

Service #3: Request Tracker

This service might be the trickiest, because both the build and the run are fairly sophisticated.

Build

Unlike WordPress and Mediawiki, which run on a single layered image on top of a base image, Request Tracker uses two layers on top of a base image. Let’s look at each one and why we did it this way.
The first layer is built quite similarly to the httpd-php image. It adds the basic services needed for a Perl based web application: Apache, the FastCGI module, Perl, MariaDB, cron, and some basic utilities for troubleshooting:

FROM registry.access.redhat.com/ubi8/ubi-init
MAINTAINER fatherlinux <[email protected]>
RUN yum install -y httpd mod_fcgid perl mariadb-server mariadb crontabs cronie iputils net-tools;yum clean all
RUN systemctl enable mariadb
RUN systemctl enable httpd
RUN systemctl enable postfix
RUN systemctl disable systemd-update-utmp.service
ENTRYPOINT ["/sbin/init"]
CMD ["/sbin/init"]

The second layer is where things get pretty sophisticated. Request Tracker uses a lot of Perl modules from CPAN. Many of these modules are compiled with gcc and take a long time to install. It also took a lot of work to nail down all of these dependencies to get Request Tracker to install successfully. Historically, we would have captured this in a script somewhere, but with containers, we can have it all in one Containerfile. It’s very convenient.

The next thing you should notice about this file is that it’s a multi-stage build. Podman and Buildah can absolutely do multi-stage builds and they can be extremely useful for applications like Request Tracker. We could have bind mounted in directories like we did with WordPress and Mediawiki, but we chose a multi-stage build instead. This will give us portability and speed if we need to rebuild this somewhere else.
Multi-stage builds can be thought of as capturing the development server and the production server in a single build file. Historically, development servers were actually the hardest to automate. Since the early days of CFEngine in the mid-1990s, developers refused to use version control and would add anything they wanted to development servers to make them work. Often, they didn’t even know what they added to make a build complete. This was actually rational when you had long lived servers that were well backed up, but it always caused pain when Systems Administrators had to “upgrade the dev server.” It was a nightmare to get builds to function on a brand new server with a fresh operating system.

With multi-stage builds, we capture all of the build instructions and even cache layers that are constructed. We can rebuild this virtual development server anywhere we like.

FROM registry.access.redhat.com/ubi8/ubi-init
FROM localhost/httpd-perl AS localhost/rt4-build
MAINTAINER fatherlinux <[email protected]>
RUN yum install -y expat-devel gcc;yum clean all
RUN cpan -i CPAN
RUN cpan -i -f GnuPG::Interface
RUN cpan -i DBIx::SearchBuilder \
ExtUtils::Command::MM \
Text::WikiFormat \
Devel::StackTrace \
Apache::Session \
Module::Refresh \
HTML::TreeBuilder \
HTML::FormatText::WithLinks \
HTML::FormatText::WithLinks::AndTables \
Data::GUID \
CGI::Cookie \
DateTime::Format::Natural \
Text::Password::Pronounceable \
UNIVERSAL::require \
JSON \
DateTime \
Net::CIDR \
CSS::Minifier::XS \
CGI \
Devel::GlobalDestruction \
Text::Wrapper \
Net::IP \
HTML::RewriteAttributes \
Log::Dispatch \
Plack \
Regexp::Common::net::CIDR \
Scope::Upper \
CGI::Emulate::PSGI \
HTML::Mason::PSGIHandler \
HTML::Scrubber \
HTML::Entities \
HTML::Mason \
File::ShareDir \
Mail::Header \
XML::RSS \
List::MoreUtils \
Plack::Handler::Starlet \
IPC::Run3 \
Email::Address \
Role::Basic \
MIME::Entity \
Regexp::IPv6 \
Convert::Color \
Business::Hours \
Symbol::Global::Name \
MIME::Types \
Locale::Maketext::Fuzzy \
Tree::Simple \
Clone \
HTML::Quoted \
Data::Page::Pageset \
Text::Quoted \
DateTime::Locale \
HTTP::Message \
Crypt::Eksblowfish \
Data::ICal \
Locale::Maketext::Lexicon \
Time::ParseDate \
Mail::Mailer \
Email::Address::List \
Date::Extract \
CSS::Squish \
Class::Accessor::Fast \
LWP::Simple \
Module::Versions::Report \
Regexp::Common \
Date::Manip \
CGI::PSGI \
JavaScript::Minifier::XS \
FCGI \
PerlIO::eol \
GnuPG::Interface \
LWP::UserAgent >= 6.02 \
LWP::Protocol::https \
String::ShellQuote \
Crypt::X509
RUN cd /root/rt-4.4.4;make testdeps;make install

# Deploy
FROM localhost/httpd-perl AS localhost/rt:4.4.4
RUN yum install -y postfix mailx;yum clean all
COPY --from=localhost/rt4-build /opt/rt4 /opt/rt4
COPY --from=localhost/rt4-build /usr/lib64/perl5 /usr/lib64/perl5
COPY --from=localhost/rt4-build /usr/share/perl5 /usr/share/perl5
COPY --from=localhost/rt4-build /usr/local/share/perl5 /usr/local/share/perl5
COPY --from=localhost/rt4-build /usr/local/lib64/perl5/ /usr/local/lib64/perl5/
RUN chown -R root.bin /opt/rt4/lib;chown -R root.apache /opt/rt4/etc
ENTRYPOINT ["/sbin/init"]
CMD ["/sbin/init"]

The second stage, in this multi-stage build, constructs the virtual production server. By splitting this into a second stage, we don’t have to install development tools like gcc or expat-devel in the final, production image. This reduces the size of our image and also reduces the size of the software supply chain in network exposed services, potentially reducing the chances of somebody doing something nasty with our container, should they hack in.

We only install the mail utilities in this second stage, which defines the second layer of our production image for Request Tracker. We could have installed these utilities in the httpd-perl layer, but many other Perl applications won’t need mail utilities.

Another convenience of multi-stage builds is that we don’t have to rebuild all of those Perl modules every time we want to update the Perl interpreter, Apache, or MariaDB for security patches.

Run

Now, like WordPress and Mediawiki, let’s take a look at some of the tricks we use at runtime:

[Unit]
Description=Podman container - rt.fatherlinux.com
Documentation=man:podman-generate-systemd(1)

[Service]
Type=simple
ExecStart=/usr/bin/podman run -i --rm --read-only -p 8081:8081 --name rt.fatherlinux.com \
-v /srv/rt.fatherlinux.com/code/reminders:/root/reminders:ro \
-v /srv/rt.fatherlinux.com/config/rt.fatherlinux.com.conf:/etc/httpd/conf.d/rt.fatherlinux.com.conf:ro \
-v /srv/rt.fatherlinux.com/config/MyConfig.pm:/root/.cpan/CPAN/MyConfig.pm:ro \
-v /srv/rt.fatherlinux.com/config/RT_SiteConfig.pm:/opt/rt4/etc/RT_SiteConfig.pm:ro \
-v /srv/rt.fatherlinux.com/config/root-crontab:/var/spool/cron/root:ro \
-v /srv/rt.fatherlinux.com/config/aliases:/etc/aliases:ro \
-v /srv/rt.fatherlinux.com/config/main.cf:/etc/postfix/main.cf:ro \
-v /srv/rt.fatherlinux.com/data/mariadb:/var/lib/mysql:Z \
-v /srv/rt.fatherlinux.com/data/logs/httpd:/var/log/httpd:Z \
-v /srv/rt.fatherlinux.com/data/logs/rt4:/opt/rt4/var:Z \
-v /srv/rt.fatherlinux.com/data/backups:/root/.backups:Z \
--tmpfs /etc \
--tmpfs /var/log/ \
--tmpfs /var/tmp \
--tmpfs /var/spool \
--tmpfs /var/lib \
localhost/rt:latest
ExecStop=/usr/bin/podman stop -t 3 rt.fatherlinux.com
ExecStopPost=/usr/bin/podman rm -f rt.fatherlinux.com
Restart=always

[Install]
WantedBy=multi-user.target

A couple of simple observations. We still bind mount some code into the container for reminders, a small, home-grown set of scripts that send emails and generate weekly, monthly, and annual tickets. Like Mediawiki, all of the config files are bind mounted read-only, giving us a solid upgrade to security. Finally, the data directories are read-write, just like our other containers.

Further Analysis

Let’s tackle a few last subjects that aren’t specific to any one of our containerized Linux services.

Recoverability

Recoverability is something we have to consider carefully. By using systemd, we get solid recoverability, on par with regular Linux services. Notice that systemd restarts my services without blinking an eye:

podman kill -a
55299bdfebea23db81f0277d45ccd967e891ab939ae3530dde155f550c18bda9
87a34fb86f854ccb86d9be46b5fe94f6e0e15322f5301e5e66c396195480047a
c8092df3249e5b01dc11fa4372a8204c120d91ab5425eb1577eb5f786c64a34b

Look at that, the services restarted:

podman ps
CONTAINER ID  IMAGE                       COMMAND     CREATED       STATUS                     PORTS                   NAMES
33a8f9286cee  localhost/httpd-php:latest  /sbin/init  1 second ago  Up Less than a second ago  0.0.0.0:80->80/tcp      wordpress.crunchtools.com
37dd6d4393af  localhost/rt:4.4.4          /sbin/init  1 second ago  Up Less than a second ago  0.0.0.0:8081->8081/tcp  rt.fatherlinux.com
e4cc410680b1  localhost/httpd-php:latest  /sbin/init  1 second ago  Up Less than a second ago  0.0.0.0:8080->80/tcp    learn.fatherlinux.com

This is quite useful for making config file changes. We can simply edit the config file on the container host, or push it with something like Ansible, and then kill all of the containers with the podman kill -a command. Because we are using systemd, it will gracefully handle restarting the services. This is very convenient.
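For example, a hypothetical config change workflow looks like this (paths and names are taken from the WordPress unit file above):

# Edit a bind-mounted config file on the container host...
sudo vi /srv/wordpress.crunchtools.com/config/wordpress.crunchtools.com.conf
# ...then kill the container; Restart=always in the unit file starts a fresh copy
sudo podman kill wordpress.crunchtools.com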

Tips and Tricks

It can be tricky to get software to run within a container, especially when you want it to run read-only. You are constraining the process in ways it wasn’t necessarily designed for. As such, here are some tips and tricks. First, it’s useful to install some standard utilities in your containers. In this guide, we installed iputils and net-tools so that we could troubleshoot our containers. For example, with Request Tracker, I had to troubleshoot the following entry in /etc/aliases, which generates tickets from emails:

professional:         "|/opt/rt4/bin/rt-mailgate --queue 'Professional' --action correspond --url http://localhost:8081/"

The tools curl, ping, and netstat were all extremely useful because we are also using external DNS and CloudFlare.
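As a sketch of what that troubleshooting looks like in practice (the container name comes from the unit file above; the exact commands are illustrative):

# Get a shell inside the running container
podman exec -it rt.fatherlinux.com /bin/bash
# Inside the container: is Apache listening on the expected port?
netstat -tlnp
# Can we reach the rt-mailgate URL referenced in /etc/aliases?
curl -I http://localhost:8081/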
Next up is podman diff, which I used extensively to get containers running read-only. You can run the container in read-write mode and repeatedly check podman diff to see which files have changed. Here’s an example:

podman diff learn.fatherlinux.com
C /var
C /var/spool
C /var/spool/cron
A /var/spool/cron/root
C /var/www
C /var/www/html
A /var/www/html/learn.fatherlinux.com
C /root
A /root/.backups

Notice that podman will tell us which files have changed since the container started. In this case, every file that we care about is either on a tmpfs or a bind mount. This enables us to run this container as read-only.

Moving to Kubernetes

Taking a hard look at Kubernetes is a natural next step. Using a command like podman generate kube will get us part of the way there, but we still need to figure out how to manage persistent volumes, as well as backups on those persistent volumes. For now, we’ve decided that Podman + systemd provides a nice foundation. All of the work that we have done with splitting up the code, configuration and data is requisite to getting us to Kubernetes.
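As a sketch of that first step (the output file name is arbitrary):

# Emit Kubernetes YAML describing the running WordPress container
podman generate kube wordpress.crunchtools.com > wordpress.crunchtools.com.yaml

The generated YAML is only a starting point; the volumes it describes still have to be mapped onto real persistent volumes, which is the open question mentioned above.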

Notes on Environment

My environment is a single virtual machine running at Linode.com with 4GB of RAM, 2 CPUs, and 80GB of storage. I was able to upload my own custom image of RHEL 8 to serve as the container host. Other than setting the hostname and pointing DNS through CloudFlare, I really didn’t have to make any other changes to the host. All of the important data is in /srv, which would make the host extremely easy to replace if it were to fail. Finally, the /srv directory on the container host is completely backed up.
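As an illustration of that last point, backing up /srv can be as simple as something like the following (the destination host is a made-up placeholder, and the application-level backups described earlier remain the safer copies of the databases):

# Hypothetical off-host copy of all code, configuration, and data
rsync -a /srv/ backup.example.com:/backups/container-host/srv/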
If you are interested in looking at the configuration files and directory structure of /srv, I have saved the code here in GitHub: https://github.com/fatherlinux/code-config-data

Biases

Like everyone, I have biases, and I think it’s fair to disclose them. I served as a Linux systems administrator for much of my career before coming to Red Hat. I have a bias towards Linux, and towards Red Hat Enterprise Linux in particular. I also have a bias towards automation, and towards the psychology of how to make that automation accessible to new contributors.

One of my earliest frustrations as a sysadmin was working on a team with 1000 Linux web servers (doing eCards in web 1.0) where documentation for how to contribute to the automation was completely opaque and had no reasoning documented for why things were the way they were. We had great automation, but nobody considered the psychology of how to introduce new people to the automation. It was sink or swim.
The goal of this blog is to help people get over that hump, while at the same time making it almost self-documenting. I think it’s critically important to consider the human inputs and robot outputs of automation. See also: Bootstrapping And Rooting Documentation: Part 1

Conclusion

This is the first blog entry I have written, published, and that you’re now reading, all from within a container (so meta). It seems so easy to move a common service like WordPress into containers, but it’s really not. The flexible and secure architecture outlined in this article marshals the skills of a senior Linux administrator or architect to move from a regular LAMP server to OCI compliant containers. This guide leveraged a container engine called Podman, but the design decisions would work equally well with Docker, while also preparing your services for Kubernetes. Separating your code, configuration, and data is a requisite step for moving on to Kubernetes. It all starts with solid, foundational Linux skills.

Some of the design decisions highlighted in this article purposefully challenge some common misconceptions within the container community – things like using systemd in a container, or focusing only on the smallest base image you can find without paying attention to the entire software supply chain – but the end product is simple to use and provides a workflow quite similar to a traditional LAMP server, requiring a minimal cognitive load for traditional Linux systems administrators.

Some of the design decisions made in this article are a compromise, and imperfect, but I made them because I understand both the pressures of a modern DevOps culture and the psychology of operations and development teams. I wanted to provide the flexibility to get more value out of containers. This set of services should be useful as a model for how to migrate many of your own services into containers, simplifying their management, upgrades, and recovery. This not only helps existing Linux admins, but also the future cohorts who will inherit these services – including the future version of me who will have forgotten all of the details 🙂 These containerized services are essentially self-documenting, in a style that is conducive to a successful DevOps culture.
As always, please, please, please leave comments, questions, or even challenges to my logic below…
Originally published at crunchtools.com: https://crunchtools.com/moving-linux-ser…-into-containers/

15 comments on “A Hacker’s Guide to Moving Linux Services into Containers”

  1. I guess the major disparity for me is where to draw the line with respect to separation of responsibility. Your approach, to me anyway, seems very sysadmin centric. In other organisations (mine included) there is a strong desire to follow a ‘shift-left’ DevSecOps culture (and org structure) that is designed not only to devolve and empower application development teams to build, test and deploy (and maintain) their application stacks but also to leverage their primary skill set and not allow that to expand into areas that are considered less germane to the goal of shipping business revenue generating code. In my experience those teams are highly resistant to going in that direction, as are the business product owners. As an example, if I have a team of people writing .Net core business applications ready for deployment into containers and possibly K8s, I don’t see those guys writing docker files that require them to understand Systemd, unit files, et al. I totally get that the simplistic approaches that are often touted miss some of the resilience, scalability and maintainability concerns that you are talking to, but OTOH, rather than regress into acceptance that we need ‘full stack’, ‘multi-skilled’, ‘T-shaped’ .. or whatever the current flavour of pointless labels we want to use to describe the nirvana developer capabilities to be, we should instead be looking to leverage higher abstractions to make at least some of that unnecessary (with a purposeful nod to not expecting K8s or anything else to auto-magically solve all your non-functionals … you still need to pay attention to that regardless).

    So for me, since you were partly talking about what trail we leave behind for others to follow, it’s not really just about a good ‘run book’ or even automation (although both of those are critical if you do want to separate Operational Run Support from Engineering), it’s still heavily influenced by skills you are prepared to pay for and maintain internally, the availability of those skills in the marketplace (and retaining staff that have them or you have funded to get them), the flexibility and agility business absolutely require today to deal with high churn short time to market delivery of features … I could go on. What I haven’t conspicuously mentioned is tech. Listen, I’m a tech guy, but even I accept that a very large part, perhaps even the major part of these decisions isn’t really about tech. I have worked in IT for close to 40 years and done robotics and Enterprise systems at global financial services and quite a lot in-between. I have been considered multi-skilled as well as a domain specialist lots of times. However, changing up tech and accompanying skills is the only constant 🙂

    I wanted to ask about the Red Hat UBI images. I can see merit in that and, over the years, have heard many a CISO make noises about how it would be beneficial from a security posture perspective to approach container images in much the same way as we look at the host OS on full-fat VMs. I guess for me the questions are around licensing and scope. Nobody likes to pay for stuff these days, but security can be one of those areas where there is a small appetite. So in your case, did you need to license the use of those images from the Red Hat registry? On scope, clearly DockerHub (and other public registries) have a huge number of images, many from trusted vendors. Many orgs implement image assurance and run-time vulnerability prevention policies and processes to avoid being the next headline news of a company that has compromised customer data. Also, none of us really want to take ownership of and maintain custom images which we can get directly from a primary vendor (think Microsoft, Oracle, HashiCorp, Rancher, whatever). But are those orgs going to publish their images using a UBI base?

    Regards

    Fraser.

    1. So, interestingly, I don’t really disagree with anything in your first two paragraphs. I think you might be reading into the philosophy that I’m embedding in this blog, because the arguments seem to be orthogonal. In the context of this conversation, I think the .Net applications you are talking about are more in the category of “Refactor” and/or “Rewrite.” It sounds like these are applications written today and targeted towards a cloud native environment. Whereas I’m really discussing bringing along applications that expect a POSIX interface. I would call WordPress, MediaWiki and Request Tracker cloud immigrants, not cloud native 😉

      That said, even with cloud native apps, you want the code, configuration and data to be split up to be managed by container images (code), configuration (Kube config maps) and data (Kube persistent volumes). That doesn’t change. What would change, IMHO, is that I would never bind mount the code like I did with WordPress. That was a “cloud immigration” move to get WordPress to work. That said, until WordPress gets a rewrite, I’m stuck. That’s true for 100,000s of applications that people all over the world are dealing with, and many of these people still want to move them into containers for better management. I would call this IT Optimization more than Cloud Native. My 2c.

      On your third paragraph, I’m the Product Manager at Red Hat for UBI, so I’m probably the right person to answer that question – also a warning, I’m biased 🙂 I have to be careful how I answer your license question. Red Hat does not license software; all of these images are open source, so users must comply with the open source licenses of the code within them, just like Red Hat. That means doing things like making the source code available (yes, the world is doing it wrong) for every image stuffed in a registry. Red Hat is making that easier in the container world [1].

      The only rules Red Hat places on these container images are based on trademarks, and the only restriction is not to distribute to countries which have trade embargoes on the cryptographic components, etc. Our FAQ [2] is pretty good. Our EULA [3] is also pretty readable. Feel free to shoot other questions into [email protected].

      [1]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index#getting_ubi_container_image_source_code
      [2]: https://developers.redhat.com/articles/ubi-faq/#redistribution
      [3]: https://www.redhat.com/licenses/EULA_Red_Hat_Universal_Base_Image_English_20190422.pdf

  2. I’m trying to follow along here – as a nonroot user with the fedora 32 base container image – and I must be missing something. Maybe what I’m missing is that I need to be root… I can build the container image with a Containerfile similar to your example for mariadb. But when I try to launch it, I end up with a Permission denied error:
    Failed to create /init.scope control group: Permission denied
    Failed to allocate manager object: Permission denied
    [!!!!!!] Failed to allocate manager object.
    Exiting PID 1…

    Is this because I’m trying to run podman as non-root? – If so, is there an easy way to get around this issue?

    1. Hi,

      I just left you a comment about an error I was getting while starting up a mariadb container, similar to your setup here for wordpress. – The issue was with SELinux as described in this Red Hat KB “https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman” The relevant text being:
      “setsebool -P container_manage_cgroup on” — Just thought I’d let you know that I figured it out, in case you wanted to make a note.

      Thanks!
