Request Tracker 4.4 reached end of life in November 2025. I’d been running RT 4.4.4 in a container on one of my Linode servers since 2020, and the upgrade to RT 6.0.2 had been sitting in my backlog for months. It’s the kind of task that’s never urgent until it is. You and I both know this would have sat on the to-do list for another couple of years. But things have changed…
I decided to use Claude Code as my pair programmer for the entire upgrade. I wanted to see if an AI co-pilot could actually handle a real infrastructure project end to end: planning, building container images, debugging build failures, migrating production, upgrading a database through 26 schema versions, and wiring up monitoring. I also wondered how long it would take.
Here’s what happened.
The Starting Point
My RT setup runs as a containerized service on a Linode VM. The architecture is straightforward:
- A multi-stage Containerfile builds RT with all its CPAN dependencies
- GitHub Actions builds the image and pushes to quay.io
- The container runs via podman + systemd with bind-mounted config files
- Cloudflare handles HTTPS termination
- MariaDB runs inside the container with data persisted via bind mounts
The old stack was a UBI 8 base image, RT 4.4.4, and a bunch of config that had accumulated over six years. It’s a problem a lot of Red Hat customers share, and running my own infrastructure gives me real empathy for them. The target: UBI 10, RT 6.0.2, and a clean CI/CD pipeline.
Step 1: A New Base Image for UBI 10
The first thing I needed was a new base image. My old ubi8-httpd-perl image provided Apache, mod_fcgid, Perl, and MariaDB on UBI 8. I needed the same stack on UBI 10.
Claude Code examined my existing UBI 8 Containerfile and produced a UBI 10 equivalent. The first interesting discovery: RHEL 10 uses Simple Content Access (SCA), which means subscription-manager attach --auto no longer exists. On RHEL 10, registration alone enables content access. No separate attach step needed. The old command just prints a help message and exits with an error code.
This is the kind of thing that burns you in CI/CD. Your local build works (if you run RHEL on your laptop or desktop like I do, you may not even use subscription-manager in a local container build), but GHA fails because the secret-mounted credentials trigger the registration path. Claude caught it after the first build failure, understood the SCA change, and removed the obsolete command.
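A minimal sketch of what the builder-stage change looks like. The secret id, credential file format, and variable names here are assumptions, not my actual pipeline; the point is that the `attach` step disappears on RHEL 10:

```dockerfile
# RHEL 8 era (old): register, then attach a subscription
#   subscription-manager register --username "$RH_USER" --password "$RH_PASS"
#   subscription-manager attach --auto   # on RHEL 10 this prints help and exits non-zero

# RHEL 10 with Simple Content Access: registration alone enables content
RUN --mount=type=secret,id=rh-creds \
    . /run/secrets/rh-creds && \
    subscription-manager register --username "$RH_USER" --password "$RH_PASS" && \
    dnf install -y gcc make expat-devel openssl-devel mariadb-connector-c-devel && \
    subscription-manager unregister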
The base image builds in about 90 seconds on GHA with caching. Published to quay.io/crunchtools/ubi10-httpd-perl.
Step 2: Building RT 6.0.2
RT’s CPAN dependency tree is enormous: roughly 60 modules that need to be compiled and installed. The Containerfile uses a multi-stage build: the builder stage registers with RHSM, installs build dependencies (gcc, make, expat-devel, openssl-devel, mariadb-connector-c-devel), compiles all the CPAN modules, then downloads and builds RT 6.0.2. The runtime stage copies the compiled artifacts into a clean image with just postfix added.
A few build challenges worth noting:
DBD::mysql vs. GCC 14: RHEL 10 ships GCC 14, which is stricter about pointer types. DBD::mysql 4.050 has a my_bool/_Bool pointer type mismatch that GCC 14 treats as a hard error. The fix: PERL_MM_OPT="DEFINE=-Wno-error=incompatible-pointer-types". Not pretty, but it works, and it’s isolated to the builder stage.
mysql_config symlink: mariadb-connector-c provides mariadb_config but DBD::mysql looks for mysql_config. One symlink fixes it.
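A sketch of how the two fixes combine in the builder stage. The `cpanm` invocation and symlink target path are assumptions; the `PERL_MM_OPT` define and the `mariadb_config`/`mysql_config` rename are the parts taken from the actual fix:

```dockerfile
# GCC 14 promotes the my_bool/_Bool pointer mismatch in DBD::mysql 4.050
# to a hard error; demote it back to a warning for the build only.
ENV PERL_MM_OPT="DEFINE=-Wno-error=incompatible-pointer-types"

# DBD::mysql's Makefile.PL shells out to mysql_config, but
# mariadb-connector-c only ships mariadb_config. One symlink bridges it.
RUN ln -s /usr/bin/mariadb_config /usr/local/bin/mysql_config && \
    cpanm --notest DBD::mysql
```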
CPAN layer caching: I split the CPAN installs into four cached layers: core framework deps, web stack, email/crypto, and utilities. This means incremental rebuilds only recompile what changed. A full uncached build takes about 50 minutes; a cached rebuild takes about 90 seconds.
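The layering idea, sketched as Containerfile fragments. The specific module groupings below are illustrative (a handful of real RT dependencies per group), not my exact lists:

```dockerfile
# Each RUN line is its own cached layer; editing one group only
# invalidates that layer and the ones after it, not the whole 50-minute build.
RUN cpanm --notest DBI DBD::mysql DateTime           # core framework deps
RUN cpanm --notest Plack HTML::Mason CGI::PSGI       # web stack
RUN cpanm --notest GnuPG::Interface MIME::Entity     # email/crypto
RUN cpanm --notest Text::WikiFormat CSS::Squish      # utilities
```

Ordering matters: put the slowest, most stable group first so day-to-day changes only rebuild the cheap layers at the bottom.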
Claude wrote a comprehensive test suite (tests/test-image.sh) with both static tests (file existence, package installation, config verification) and runtime tests (start the container with systemd, wait for MariaDB and RT to initialize, verify the web UI responds). This runs in GHA on every push.
Step 3: The DatabaseType Bug
Here’s where it got interesting. RT 6.0.2’s documentation says you can set DatabaseType to MariaDB for MariaDB backends. So that’s what I did. The container built fine. The tests passed. Then I ran the database upgrade on production and hit: "Not implemented".
The error came from RT::Handle::Indexes. Claude dug into the RT source code and found the problem: RT 6.0.2’s Handle.pm has approximately eight conditional branches that check $db_type eq 'mysql' but the MariaDB type was only added to the config parser, not wired into all the code paths. Indexes, InsertData, DropDatabase, and several other methods simply don’t handle it.
The fix was simple: set DatabaseType back to mysql. This works perfectly with MariaDB server via DBD::mysql. The database doesn’t care what you call it in the config file.
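In RT_SiteConfig.pm terms, the working configuration looks like this (the host and database name below are placeholders, not my production values):

```perl
# "mysql" here means "speak the MySQL wire protocol via DBD::mysql";
# it works fine against a MariaDB server. 'MariaDB' parses in RT 6.0.2
# but several Handle.pm code paths don't handle it.
Set($DatabaseType, 'mysql');
Set($DatabaseHost, 'localhost');
Set($DatabaseName, 'rt6');
```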
This is the kind of bug that’s hard to catch in testing because it only manifests during specific operations (like a schema upgrade). An AI co-pilot that can read source code and trace execution paths is useful here.
Step 4: The Database Upgrade
RT’s upgrade tool handles all intermediate schema migrations automatically. My database needed to go from 4.4.4 through 26 schema versions to reach 6.0.2. But the upgrade process had its own set of challenges:
Working directory matters: rt-setup-database --action upgrade looks for ./etc/upgrade/ relative to the current directory. Inside a container, you need podman exec -w /opt/rt6 to set the working directory, or it fails with “Couldn’t read dir ‘./etc/upgrade’.”
Interactive prompts in a non-TTY: The upgrade script prompts for a database password, the source version, and confirmation to proceed. In a container exec, there’s no TTY. I had to use --upgrade-from 4.4.4 --upgrade-to 6.0.2 flags and pipe an input file with an empty first line (for the blank password) followed by “y” for confirmations.
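Put together, the working invocation looks roughly like this. The container name and the exact number of `y` confirmations are assumptions; the flags, the `-w /opt/rt6` working directory, and the blank-line-for-empty-password trick are the real moving parts:

```shell
# Answer file: blank first line = empty DB password, then "y" confirmations.
printf '\ny\ny\n' > /tmp/upgrade-answers

# -w sets the working directory so ./etc/upgrade/ resolves;
# -i feeds the answer file to the non-TTY prompts.
podman exec -i -w /opt/rt6 rt \
    sbin/rt-setup-database --action upgrade \
    --upgrade-from 4.4.4 --upgrade-to 6.0.2 < /tmp/upgrade-answers
```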
Once Claude got past the tooling issues, all 26 schema versions processed without error. This is exactly the kind of tedious, annoying work where Claude shines. It SSH’d into the server, hit the first error, adjusted the flags, hit the next error, adjusted the input method, and kept iterating until the upgrade completed. No frustration, no losing track of which approach it had already tried. Just methodical problem solving on a task that would have had me swearing at a terminal.
Step 5: Post-Upgrade Surprises
Stale Template Cache: RT uses HTML::Mason, a Perl-based templating engine, to render its web UI. Mason compiles templates into Perl code and caches them on disk for performance. After the upgrade, RT threw “Not an ARRAY reference at /opt/rt4/share/html/Elements/Tabs line 608.” Notice the path: /opt/rt4. The cached compiled templates from RT 4 were sitting in a persistent bind-mounted volume (mason_data/) and still referenced the old paths. Clearing mason_data/obj/* and mason_data/cache/* fixed it instantly. A reminder that container upgrades don’t just mean swapping images if you have persistent volumes with cached state.
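The cache clear itself is two commands. The host-side bind-mount path below is an assumption; the `obj/` and `cache/` subdirectories are the ones that held the stale RT 4 compiled templates:

```shell
# Clear Mason's compiled-template and cache dirs in the persistent volume,
# then restart so RT 6 recompiles everything against /opt/rt6 paths.
rm -rf /srv/rt/mason_data/obj/* /srv/rt/mason_data/cache/*
systemctl restart rt-container
```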
OOM Kills: After running for a while, RT started throwing sporadic 500 errors. Apache’s error log showed mod_fcgid: error reading data, FastCGI server closed connection. The host’s dmesg told the real story: Memory cgroup out of memory: Killed process (rt-server.fcgi) total-vm:255156kB, anon-rss:234660kB. Each RT 6 FCGI worker uses about 235MB. The old 2GB container limit wasn’t enough headroom. Bumping to 3GB resolved it.
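Sketched as a systemd unit fragment (the unit name, volume path, and trailing arguments are placeholders; the only change that matters is the memory flag):

```
# /etc/systemd/system/rt-container.service (fragment)
# Each RT 6 FCGI worker holds roughly 235MB resident, so a 2GB cgroup
# limit left too little headroom for several workers plus MariaDB.
[Service]
ExecStart=/usr/bin/podman run --rm --name rt --memory=3g \
    -p 80:80 -v /srv/rt/mason_data:/opt/rt6/var/mason_data ...
```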
Port Standardization: The old RT setup used port 82 inside the container, a holdover from when RT ran on a shared IP address. Since every other containerized service uses port 80 internally, I standardized RT to match. This triggered a fun chain reaction: the Apache auth config used Allow from 127.0.0.0/8 with Satisfy Any, but podman’s port mapping means requests arrive from 10.88.0.1 (the bridge gateway), not localhost. Had to add the podman network to the Allow directive.
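The resulting Apache auth block looks something like this (the location and exact CIDR are assumptions; 10.88.0.0/16 is podman's default bridge subnet):

```apache
# Requests mapped through podman's bridge arrive from 10.88.0.1,
# not 127.0.0.1, so localhost-only Allow rules silently deny them.
<Location />
    Order deny,allow
    Deny from all
    Allow from 127.0.0.0/8
    Allow from 10.88.0.0/16
    Satisfy Any
</Location>
```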
Step 6: Zabbix Monitoring
The existing Zabbix monitoring for RT was all container-level: CPU, memory, network stats from the docker/podman stats API. No application-level checks. I needed to wire up checks for the individual services inside the container and map them to the existing Zabbix service tree.
Claude created six monitoring items using the existing UserParameter framework (which uses podman exec to run checks inside containers): Apache httpd process count, MariaDB status, Postfix status, RT FastCGI process count, web health check (HTTP status on port 80), and CloudFlare HTTPS reachability.
Each item got a corresponding trigger with problem tags (host, service, scope) that map into the Zabbix service tree. When MariaDB goes down inside the RT container, the “MariaDB” service node under “rt.fatherlinux.com” turns red. The Zabbix MCP server runs in read-only mode, so write operations were done via direct API calls.
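A sketch of what a few of those UserParameter checks could look like in the agent config. The key names and container name are assumptions; the pattern of `podman exec` running the check inside the container is the real framework:

```
# /etc/zabbix/zabbix_agentd.d/rt.conf (illustrative)
UserParameter=rt.mariadb.status,podman exec rt mysqladmin ping >/dev/null 2>&1 && echo 1 || echo 0
UserParameter=rt.fcgi.procs,podman exec rt pgrep -c -f rt-server.fcgi
UserParameter=rt.web.health,podman exec rt curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:80/
```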
The AI Co-Pilot Experience
What worked well: Claude Code handled the entire lifecycle: reading existing Containerfiles and config files, producing new ones that followed established patterns, debugging build failures by reading CI logs, SSH-ing into production servers to run migrations, and iterating through problems in real time. For a couple of hours, it ran with permission prompts disabled, executing commands autonomously without asking for approval at each step. When the DatabaseType bug hit, it read the RT source code, identified the incomplete conditional branches, and proposed the correct fix. When OOM kills caused sporadic 500 errors, it correlated Apache error logs with dmesg output and identified the root cause. It treated the whole thing as one continuous engineering session, maintaining context across dozens of files and servers.
What needed human judgment: I had to fine-tune the plan in three or four places before we got started, but once the plan was developed, Claude ran to completion. Along the way, it still needed me for architectural decisions (should we use port 80 or keep 82?), risk assessment (is it safe to run the database upgrade now?), and knowing when to stop iterating on a cosmetic issue (the HTML converter warning, since none of the supported converters are packaged in RHEL 10). The AI proposed reasonable defaults but deferred to me on anything that could break production.
The maintenance angle: This is the kind of work that AI hype usually ignores. It’s not greenfield development. It’s not a chatbot. It’s upgrading a six-year-old Perl application from one major version to another, across two OS generations, with a live database, production traffic, and monitoring requirements. And having an AI co-pilot that can hold the full context of a complex migration in its head, read error logs, and iterate through solutions without getting tired or frustrated is valuable.
The entire upgrade, from first Containerfile to production deployment with monitoring, ran for a couple of hours and finished after I had gone to bed. I don’t know how long it would have taken without Claude Code, but I know it would have involved a lot more browser tabs and a lot more context switching.
