DevOps Toolchain: Problems with Automated Deployment, Data & Workflow


Automated deployment is obviously not new, but until recently there was not much push for it in the open source world. Lately the idea of DevOps, or Ops 2.0, has been gaining ground. We are starting to think about our deployment and provisioning methods more like software engineers: we are developing tools to help us provision and deploy code, config, content, and data to our web infrastructures. In fact, with cloud, our config and content are our infrastructure.

Circular Data Problem

Automated deployment of code/config/content (CCC) through a toolchain is in common use today. Generally, a developer, systems administrator, or designer hand-crafts the CCC artifacts; when satisfied with the new artifact, they send it off to operations for deployment. This is a clean workflow and can be accommodated by a toolchain. The problem is with data. Traditionally data is synonymous with database, but in this case I am also referring to service data, such as machine names and service descriptions used to populate Nagios, Cacti, DHCP, DNS, etc.

The problem occurs when data from the system is used to generate artifacts in the system. For example, a server/machine inventory may be used to generate DNS or Nagios configuration artifacts. The workflow generally follows this pattern:

Server/Service Inventory -> Create Nagios configuration -> Deploy Nagios configuration

This is a natural workflow for physical/virtual servers and follows the standard Qualitative to Quantitative Workflow. The configuration artifact creation phase can be difficult to automate: there will be a different build script for Nagios, Cacti, DNS, firewalls, proxies, load balancers, etc. Once a configuration build tool is implemented, the workflow functions well for provisioning new servers. Provisioning a new server is by its very nature a qualitative input, so adding the machine name and its services to an inventory is fine; it adds only a finite amount of work to the life cycle of the newly provisioned server.
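A minimal sketch of such a configuration build script may make the pattern concrete. The inventory format, hostnames, and check commands below are assumptions for illustration, not a real site layout; a real build tool would read the inventory from a file or database and write the rendered config to Nagios's config directory.

```python
# Sketch: render Nagios host and service definitions from a simple
# inventory list. Inventory contents are illustrative assumptions.

inventory = [
    {"name": "web01", "address": "10.0.0.11", "services": ["http", "ssh"]},
    {"name": "db01", "address": "10.0.0.21", "services": ["mysql", "ssh"]},
]

def render_host(host):
    # Emit one "define host" block, then one "define service" block
    # per service the host provides.
    lines = [
        "define host {",
        f"    host_name  {host['name']}",
        f"    address    {host['address']}",
        "    use        generic-host",
        "}",
    ]
    for svc in host["services"]:
        lines += [
            "define service {",
            f"    host_name            {host['name']}",
            f"    service_description  {svc}",
            f"    check_command        check_{svc}",
            "    use                  generic-service",
            "}",
        ]
    return "\n".join(lines)

config = "\n\n".join(render_host(h) for h in inventory)
print(config)
```

The same shape works for DNS zone files, DHCP reservations, or Cacti device lists; only the `render_host` body changes per service, which is exactly why each service tends to need its own build script.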

This workflow can break down if/when a server or website is renamed or, worse, re-purposed. This is not a complete breakdown, though: renaming something is by its very nature qualitative, so the overhead of changing the inventory list manually is acceptable. Automating this process is difficult for the same reason.

The workflow completely breaks down with cloud. The goal is to provision a new machine for which no information is yet available. This hinders automated deployment and requires significant tooling to work around. The Nagios configuration build tool may have a key to access the new servers that will be provisioned, but the information needed for a full deployment will not exist until the machines are provisioned. A queuing system will need to be put into place, and the service configuration build tool will have to be made aware of it.
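The queuing workaround could be sketched as follows. The function names and the boot-hook mechanism are hypothetical; the point is only that instances announce themselves after they exist, and the build tool drains the queue into its inventory before regenerating config.

```python
import queue

# Sketch of the queuing workaround: newly provisioned cloud instances
# announce themselves (e.g. via a boot hook), and the configuration
# build tool drains the queue before each rebuild. Names are illustrative.

provisioned = queue.Queue()

def on_instance_boot(name, address):
    # Called once the instance exists and has an address.
    provisioned.put({"name": name, "address": address})

def rebuild_config(inventory):
    # Drain pending announcements into the inventory, then rebuild
    # a (toy) configuration from the full inventory.
    while not provisioned.empty():
        inventory.append(provisioned.get())
    return [f"host {h['name']} address {h['address']}" for h in inventory]

inventory = []
on_instance_boot("cloud-web-7f3a", "10.1.0.14")
print(rebuild_config(inventory))
```

This is the "made aware" part: the build tool can no longer treat the inventory as static input; it has to consume events from the provisioning side.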

This problem is not exclusive to Nagios; it may exist for any network service, such as DNS, monitoring, data acquisition, switching, routing, firewalling, proxying, and load balancing. The tools to handle self-modifying network services are not here yet. The logical next step would be to create a standard for this kind of service: perhaps a network service for network service changes, with a standard API that many different pieces of hardware and software could conform to.
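No such standard exists, so the following is purely a hypothetical sketch of what a shared interface for network service changes might look like: every backend (DNS, monitoring, load balancer, ...) implements the same small set of operations, so one toolchain can drive them all. All class and method names here are invented for illustration.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of a "standard API for network service changes".
# Each service backend implements the same interface; none of these
# names correspond to a real standard or product.

class NetworkServiceBackend(ABC):
    @abstractmethod
    def add_node(self, name, address): ...

    @abstractmethod
    def remove_node(self, name): ...

    @abstractmethod
    def apply(self):
        """Regenerate and reload this service's configuration."""

class DnsBackend(NetworkServiceBackend):
    def __init__(self):
        self.records = {}

    def add_node(self, name, address):
        self.records[name] = address

    def remove_node(self, name):
        self.records.pop(name, None)

    def apply(self):
        # A real backend would write a zone file and reload the server;
        # here we just return the rendered records.
        return [f"{n} IN A {a}" for n, a in self.records.items()]

dns = DnsBackend()
dns.add_node("web01", "10.0.0.11")
print(dns.apply())
```

A Nagios or firewall backend would implement the same three methods, which is what would let the provisioning toolchain stay ignorant of each service's config format.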


  • When data in the system is used to modify config artifacts, there will be a problem analogous to self-modifying code. This is a common problem with network services.
  • Network services can have complex interactions that require non-trivial tooling to solve
  • The network service inter-dependencies can be so complex that they require resolution with an infinite grammar, such as a programming language
  • Programming is essential to solve these problems and a tool chain can only get you part of the way there
  • This is a common caveat for both the “configuration build tool” and the “configuration management/deployment tool”: most configuration build tools were not designed to take automated input from a configuration management/deployment tool, and vice versa


2 Responses to “DevOps Toolchain: Problems with Automated Deployment, Data & Workflow”

  1. scott July 8, 2010 at 3:57 pm #

    You can do a lot of this with Puppet or Chef.

