I manage my daily-driver workstation as a container image. Not a pet, not a snowflake — a Containerfile in a git repo that builds a complete RHEL 10 desktop with everything I need: GNOME Workstation, dev tools, VS Code, Claude Code, Google Chrome, libvirt, rclone backups, and the Red Hat VPN. When I want to change something, I edit the Containerfile, push to GitHub, and a CI pipeline builds and pushes a new image to Quay.io. Then bootc upgrade on my workstation pulls it down.
This is image mode for RHEL, and it’s how I think we should all be managing our systems. Here’s exactly how I set it up — including every workaround I hit along the way.
Why Manage a Workstation as a Bootc Image?
If you’ve ever rebuilt a laptop and spent a day getting it back to where it was, you already know the problem. Even with Ansible playbooks and dotfile repos, there’s drift. Packages get added manually. Configs get tweaked. The system you’re running slowly diverges from anything reproducible.
Bootc changes the equation. Your OS is defined in a Containerfile, built with Podman, and stored as an OCI image in a container registry. The bootc tool on the running system pulls new images and stages them for the next boot. Your workstation becomes a versioned, reproducible artifact — same as any other container image you build in CI.
The benefits are concrete:
- Reproducibility: Blow away the machine, reinstall, bootc switch to your image, and you're back (see the commands sketched after this list).
- Auditability: Every change is a git commit. You can diff what changed between versions.
- Automation: CI builds your image on every change. Weekly rebuilds pick up base image security updates automatically.
- Rollback: Bootc keeps the previous image. Bad update? Boot into the last known good version.
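Here is a rough sketch of what that loop looks like on the running machine. The image path is a placeholder; substitute wherever your registry and repository actually live:

```bash
# Point an existing bootc installation at your image (one-time adoption)
sudo bootc switch quay.io/<your-namespace>/rhel10-bootc:latest

# Show which image is booted, which is staged, and what the rollback target is
sudo bootc status

# Something broke after an update? Queue the previous deployment for the next boot
sudo bootc rollback
```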
The Starting Point: A Containerfile on Disk
Before this project, my image was just loose files in a directory — a Containerfile, some repo configs in etc/, systemd units, and an rclone config. No version control, no CI, no automated builds. I was building locally with podman build whenever I remembered to.
The Containerfile starts from registry.redhat.io/rhel10/rhel-bootc and layers on:
```Dockerfile
FROM registry.redhat.io/rhel10/rhel-bootc

# Register with RHSM (secrets mounted at build time, never in image layers)
RUN --mount=type=secret,id=activation_key \
    --mount=type=secret,id=org_id \
    subscription-manager register \
      --activationkey=$(cat /run/secrets/activation_key) \
      --org=$(cat /run/secrets/org_id)

# Workstation group + dev tools
RUN dnf groupinstall -y --nodocs Workstation && \
    dnf update -y && \
    dnf install -y --nodocs --nogpgcheck \
      /tmp/rpms/*.rpm \
      cockpit git vim-enhanced nodejs24 rust cargo \
      qemu-kvm libvirt virt-install \
      code google-chrome-stable google-cloud-cli \
      rclone gh uv ...

# Unregister from RHSM and clean up
RUN subscription-manager unregister && \
    rm -rf /var/run /var/log/*.log ...
```
The goal was simple: get this into a private GitHub repo with CI/CD that builds on every change and pushes to Quay.io.
Step 1: Git Init and the .gitignore That Matters
The directory had a 3.1GB anaconda ISO in bootiso/, build logs, and — critically — an rclone.conf file containing a pCloud OAuth token. The .gitignore needed to exclude all of these:
```gitignore
# Build artifacts
bootiso/
manifest-anaconda-iso.json
image-build.log

# Secrets - must not be baked into images
etc/rclone.conf

# Backup files
Containerfile~

# Local Claude state
.claude/
```
I initially committed rclone.conf to the repo since it was going to be private. That turned out to be a mistake — more on that below.
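With the .gitignore above in place, the initial setup was roughly this sketch (the repo name is hypothetical, and the gh repo create flags assume a reasonably recent gh CLI):

```bash
git init

# Confirm the sensitive and bulky paths are actually excluded before adding anything
git check-ignore -v etc/rclone.conf bootiso/

git add .
git commit -m "Initial import of workstation Containerfile"

# Create the private GitHub repo and push in one step
gh repo create rhel10-bootc --private --source=. --push
```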
Step 2: The GitHub Actions Pipeline
The build workflow triggers on changes to Containerfile, etc/, systemd/, config.toml, or rpms/:
```yaml
name: Build and Push rhel10-bootc

on:
  push:
    branches: [main]
    paths:
      - 'Containerfile'
      - 'etc/**'
      - 'systemd/**'
      - 'config.toml'
      - 'rpms/**'
  workflow_call:
  workflow_dispatch:
```
A separate weekly-rebuild.yml calls this workflow every Monday at 6am UTC to pick up base image updates:
```yaml
on:
  schedule:
    - cron: '0 6 * * 1'
  workflow_dispatch:

jobs:
  rebuild:
    uses: ./.github/workflows/build.yml
    secrets: inherit
```
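Both workflows also declare workflow_dispatch, so a build can be kicked off by hand with the gh CLI rather than by pushing a throwaway commit:

```bash
# Trigger the main build workflow manually
gh workflow run build.yml

# Find the new run and follow it as it executes
gh run list --workflow=build.yml --limit 1
gh run watch
```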
The Workarounds: What Actually Broke
Here’s where theory meets reality. Every one of these issues required a workaround that you won’t find in the “getting started” docs.
Workaround 1: RHSM Activation Key for CI Builds
GitHub-hosted runners are Ubuntu machines. They can pull from registry.redhat.io with a service account login, but inside the container during podman build, there are no RHEL entitlements. The dnf groupinstall Workstation command fails because BaseOS and AppStream repos aren’t available.
The solution is an RHSM activation key. The Containerfile uses RUN --mount=type=secret to register at the start and unregister at the end. The activation key and org ID are passed via podman build --secret so they never appear in any image layer:
```yaml
- name: Build image with Podman
  env:
    RHSM_ACTIVATION_KEY: ${{ secrets.RHSM_ACTIVATION_KEY }}
    RHSM_ORG_ID: ${{ secrets.RHSM_ORG_ID }}
  run: |
    printf '%s' "$RHSM_ACTIVATION_KEY" > activation_key.txt
    printf '%s' "$RHSM_ORG_ID" > org_id.txt
    podman build \
      --secret id=activation_key,src=activation_key.txt \
      --secret id=org_id,src=org_id.txt \
      --tag $IMAGE_NAME:latest .
    rm -f activation_key.txt org_id.txt
```
You can create an activation key at https://console.redhat.com/insights/connector/activation-keys. Note: the --activationkey flag takes the key’s name, not its UUID — even if the name happens to be a UUID.
Workaround 2: Internal COPR RPMs Bundled in the Repo
My Containerfile installs redhat-internal-NetworkManager-openvpn-profiles from an internal COPR repo at coprbe.devel.redhat.com. This hostname doesn’t resolve from GitHub runners.
The fix: download the RPMs locally and commit them to an rpms/ directory in the repo. They're tiny (13KB and 10KB), and the Containerfile installs them with a glob:
```Dockerfile
COPY rpms/ /tmp/rpms/
RUN dnf install -y --nogpgcheck /tmp/rpms/*.rpm ...
```
Watch out for transitive dependencies — the VPN profiles RPM required redhat-internal-cert-install, which also came from the internal repo. I had to bundle both. Note: since these RPMs contain internal certificates and VPN configurations, make sure your container registry repo is private — otherwise anyone who pulls the image can extract them from the layers.
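Refreshing those bundled RPMs is a manual step. From a machine that can actually reach the internal COPR repo (and has it enabled, plus dnf-plugins-core installed for dnf download), something like this works:

```bash
# Download the profiles RPM and its internal dependency into rpms/
dnf download --resolve --destdir rpms/ \
    redhat-internal-NetworkManager-openvpn-profiles \
    redhat-internal-cert-install

git add rpms/*.rpm
git commit -m "Bump bundled internal VPN RPMs"
```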
Workaround 3: Quay.io Robot Account Login
The redhat-actions/podman-login@v1 GitHub Action failed with “invalid username/password” when using a Quay.io robot account, despite the credentials being correct. Replacing it with a direct podman login command worked immediately:
```yaml
- name: Log in to quay.io
  run: |
    podman login quay.io \
      -u '${{ secrets.QUAY_USER }}' \
      -p '${{ secrets.QUAY_PASSWORD }}'
```
Robot account usernames use the format namespace+robotname (e.g., fatherlinux+github_ci). Make sure you’re using the robot token, not your personal password.
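It's worth sanity-checking the robot credentials locally before they go anywhere near GitHub secrets. A quick test, using the example robot name from above and whatever variable holds your token:

```bash
# Read the token from stdin instead of passing it with -p on the command line
printf '%s' "$QUAY_ROBOT_TOKEN" | podman login quay.io \
    -u 'fatherlinux+github_ci' --password-stdin

# Confirm which account podman thinks is logged in
podman login --get-login quay.io
```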
Workaround 4: Runner Disk Space
A RHEL Workstation image with dev tools, Google Cloud SDK, Chrome, VS Code, and libvirt is big. The GitHub runner ran out of disk space committing the image layer — the default Ubuntu runners only have ~14GB free.
The fix is to purge pre-installed software before building:
```yaml
- name: Free up runner disk space
  run: |
    sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc /opt/hostedtoolcache
    sudo docker image prune --all --force
```
This frees ~30GB, which is plenty.
The Security Lesson: Don’t Bake Secrets Into Images
The original Containerfile had this line:
```Dockerfile
COPY etc/rclone.conf /etc/rclone.conf
```
That rclone.conf contained a pCloud OAuth access token. Since the Quay.io image was public, anyone could pull the image and extract the token.
The fix had four parts:
- Remove the COPY from the Containerfile — rclone.conf lives on the host filesystem and persists across bootc upgrades, since bootc doesn't overwrite existing /etc files.
- Remove it from git and add it to .gitignore — git rm --cached etc/rclone.conf
- Rewrite history — the token was in previous commits. Even in a private repo, I used git filter-repo --invert-paths --path etc/rclone.conf to purge it from all history, then force-pushed (the full sequence is sketched below).
- Revoke and regenerate the token — rclone config reconnect pcloud: issues a new OAuth token, invalidating the old one.
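Pulled together, the cleanup sequence looked roughly like this sketch (the remote URL is a placeholder; note that git filter-repo drops the origin remote as a safety measure, so it has to be re-added before the force push):

```bash
# Stop tracking the file going forward; it stays on disk for rclone to use
git rm --cached etc/rclone.conf
git commit -m "Stop tracking rclone.conf"

# Purge the file from every commit in history
git filter-repo --invert-paths --path etc/rclone.conf --force

# filter-repo removes the remote, so re-add it and force-push the rewritten history
git remote add origin git@github.com:<you>/<repo>.git
git push --force origin main

# Invalidate the leaked OAuth token and fetch a fresh one
rclone config reconnect pcloud:
```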
The broader lesson: any file in a container image is readable by anyone who can pull it. This bit me twice — once with an OAuth token in rclone.conf, and again with Red Hat internal certificates bundled in RPMs. Treat container images like public artifacts unless the registry repo is explicitly private. Use build secrets (--mount=type=secret) for credentials, runtime injection for config files, and private repos when the image itself contains sensitive material.
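To make that concrete, here is how trivially anyone could have grabbed the config out of the public image, without even starting a container (image path is a placeholder):

```bash
podman pull quay.io/<your-namespace>/rhel10-bootc:latest

# Create a stopped container and copy the file straight out of its filesystem
podman create --name peek quay.io/<your-namespace>/rhel10-bootc:latest
podman cp peek:/etc/rclone.conf ./leaked-rclone.conf
podman rm peek
```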
The Full Secret Inventory
The pipeline needs six GitHub secrets:
| Secret | Purpose |
|---|---|
| REGISTRY_REDHAT_IO_USER | Pull base image from registry.redhat.io |
| REGISTRY_REDHAT_IO_PASSWORD | Pull base image from registry.redhat.io |
| QUAY_USER | Push built image (robot account) |
| QUAY_PASSWORD | Push built image (robot token) |
| RHSM_ACTIVATION_KEY | Register with RHSM during build |
| RHSM_ORG_ID | Red Hat org ID for RHSM registration |
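Assuming each value is exported in your shell under the same name, the whole set can be loaded into the repo in one shot with the gh CLI, run from inside the repo directory:

```bash
for s in REGISTRY_REDHAT_IO_USER REGISTRY_REDHAT_IO_PASSWORD \
         QUAY_USER QUAY_PASSWORD RHSM_ACTIVATION_KEY RHSM_ORG_ID; do
  # ${!s} is bash indirect expansion: the value of the variable named in $s
  gh secret set "$s" --body "${!s}"
done
```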
Results
The pipeline runs in about 18 minutes total:
- Disk cleanup: ~1 min
- Podman build (Workstation + all packages): ~13 min
- Push to Quay.io: ~4 min
Weekly Monday rebuilds ensure the image picks up base image security patches without any manual intervention. And every Containerfile change triggers a new build automatically.
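On the workstation side, consuming those builds stays a small habit. A sketch of the Monday routine:

```bash
# See whether the registry has a newer image than what's currently booted
sudo bootc upgrade --check

# Download and stage it, then pick it up on the next boot
sudo bootc upgrade
sudo systemctl reboot
```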
The whole thing is 20 files in a git repo — a Containerfile, some config files, two bundled RPMs, and two GitHub Actions workflows. My workstation is now a versioned, CI-built artifact. No more snowflakes.
