---
# image mode Gave Me the Confidence to Go Fully Agentic

**URL:** https://crunchtools.com/image-mode-gave-me-the-confidence-to-go-fully-agentic/
Date: 2026-03-12
Author: fatherlinux
Post Type: post
Summary: I’ve been running Claude Code on my RHEL 10 workstation for a few months now, and I have to admit, with some embarrassment, I often run it with the ominous --dangerously-skip-permissions option. It reads and writes files, executes shell commands, installs packages, modifies system configs, all without asking permission first. I’ve been letting an AI…
Categories: Articles
Tags: AI/ML, Generative AI, RHEL, Security, Systems Administration
Featured Image: https://crunchtools.com/wp-content/uploads/2026/03/image-mode-agentic-thumbnail.png
---

I've been running Claude Code on my RHEL 10 workstation for a few months now, and I have to admit, with some embarrassment, I often run it with the ominous `--dangerously-skip-permissions` option. It reads and writes files, executes shell commands, installs packages, modifies system configs, all without asking permission first. I've been letting an AI agent have more or less free rein over my daily driver machine, and I've slowly been gaining confidence, to the point where it feels like less and less of a big deal. I know this is counter to the popular narratives, counter to [what I've previously written about](https://www.infoworld.com/article/4119285/ai-agents-and-it-ops-cowboy-chaos-rides-again.html), but the models feel aligned, and image mode (bootc) gives me extra confidence. I would never recommend this for production systems in an enterprise environment, but I think it's indicative of a hidden value proposition in RHEL image mode.

That probably sounds reckless, and I think it would be, if I were running a traditional Linux install. We've all been there, where you remove a package or misconfigure a service and end up spending your evening doing filesystem surgery instead of whatever you were actually trying to accomplish. But my workstation runs image mode for RHEL, and I think that made all the difference in my willingness to let the agent loose.

When you work with AI agents, you naturally gravitate toward tools that have built-in change control, things like git where you can always revert a bad commit, or Google Docs and wikis where there's a version history behind every edit. These tools give you transactions, and that's why they feel safe. Nobody worries about letting an AI draft something in Google Docs because the worst case is rolling back to yesterday's version.

What I keep coming back to is that image mode gives you that same kind of transactional confidence, but for the whole operating system. My workstation image is defined in a Containerfile and [built in GitHub Actions](https://crunchtools.com/ci-cd-for-image-mode-rhel/), so the whole thing gets pushed to a container registry every time I make a change. If Claude Code manages to wreck something in the running system, I reboot and I'm back to a known-good state. I've actually never had to do it, but the fact that I could is the whole point.
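For concreteness, here's a rough sketch of what that workflow looks like on the machine itself. The registry URL is a placeholder, not my actual image, and the Containerfile behind it is just a base image plus whatever packages you layer on in CI:

```shell
# The image itself is defined in a Containerfile (roughly:
#   FROM a rhel-bootc base image, then RUN dnf install ... )
# and built/pushed by CI. On the workstation, bootc manages
# deployments of that image:

sudo bootc switch ghcr.io/example/my-workstation:latest  # track the CI-built image (placeholder URL)
sudo bootc upgrade     # stage the newest build; it takes effect on the next reboot
sudo bootc rollback    # queue a return to the previous known-good deployment
```

That last command is the safety net: the previous deployment stays on disk, so "reboot and I'm back to a known-good state" is a one-liner, not a rescue-media evening.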

I think that safety net is what pushed me down the fully agentic rathole in a way I wouldn't have gone on a traditional system. When the OS itself is under the same kind of change control as your code, you stop treating the machine as something precious and fragile. You start treating it more like a git repo, something you can experiment with and rebuild when it goes sideways. And I think that changes how willing you are to hand the keys to an AI agent.

I'm usually pretty skeptical of the hype in this space, but I have a feeling this is going to be a bigger deal than it seems right now. You can see the trend everywhere, with Apple trying to wire agents into macOS, Microsoft pushing Copilot into everything, and coding assistants getting more autonomous by the month. We're stumbling toward a future where these agents manage a lot more of our computing environment, and the question of what happens when one of them breaks something is a real problem, not an academic one.

Image mode isn't a security framework for constraining agents, and I confess that it doesn't solve for /etc or /var (which don't have transactional guarantees). There are other hard problems too, like prompt injection, and serious work addressing them, like [Airlock](https://github.com/crunchtools/mcp-airlock), which provides three-layer prompt injection defense for AI tool calls. But image mode is another layer of defense in depth. And I think it might be the most immediately useful one: it's the confidence to let the agent try things, knowing you can always get back to where you started.
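Since /etc sits outside the transactional part of the system, one habit worth mentioning (this is a standard ostree command, nothing specific to my setup) is checking what has drifted from the image's defaults:

```shell
# List files under /etc that differ from the deployed image's defaults.
# On an image mode (bootc/ostree) system, this shows exactly what was
# changed outside the parts of the OS a reboot would restore.
sudo ostree admin config-diff
```

Anything it lists is state the next reboot won't undo, so that's the part I still review by hand after an agent session.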
