Source: http://www.hightechinthehub.com/2012/04/virtuous-cycle-of-devops/
Each of the benefits described in Joe's article relies on the assumption that you are building on a standardised and consistent base. Providing a standardised configuration at the server level lets you deliver flexibility at the higher layers of the stack, because the basic layers are configured the same way and can be treated as a single piece. Standardisation allows you to deliver value-added flexibility where innovation is most beneficial to the business: at the application layer. This concept is summed up really well in an article on the SkyDingo blog called "DevOps: Flexible Configuration Management? Not So Fast!", where the authors argue that by limiting flexibility at the infrastructure level (gratuitous flexibility) you increase flexibility at the application level (value-added flexibility).
Sysadmins: this is not just for developers. Having standardised and consistent configurations across your fleet allows you to respond with agility to changing requirements at the infrastructure layer, because you know that all the servers are configured the same way and that any action taken should behave uniformly. (Murphy will still throw you edge cases from time to time, based on things outside your control; that is why you need solid automated tests.)
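As a sketch of what "configured the same way" means in practice, assuming a Puppet-managed fleet (the file paths are standard OpenSSH locations, but the module layout and service name are illustrative), a single managed file resource keeps a config byte-identical on every node, because each agent run converges any drifted copy back to the canonical one:

```puppet
# Illustrative resource: every node gets a byte-identical sshd_config,
# so a fleet-wide action behaves the same on each host.
file { '/etc/ssh/sshd_config':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
  source => 'puppet:///modules/ssh/sshd_config',  # one canonical copy
  notify => Service['ssh'],                       # reload when it changes
}

service { 'ssh':
  ensure => running,
  enable => true,
}
```

A host whose config has been hand-edited is put back in line on the next agent run, which is exactly the uniformity the automated tests then get to rely on.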
There are numerous examples where this sort of approach is applicable, but the main one of benefit to system administrators is the application of security or bug fixes. If your environment is standardised, it becomes a relatively simple exercise to test a new fix in a lab environment and then roll it out to your fleet. On the other hand, if you do not have a standardised environment, rolling out any fix becomes a configuration-by-configuration (or worse, server-by-server) exercise. Knowing how your servers will respond to a new configuration requirement is the difference between being able to patch hundreds of servers at a time and handling them one by one.
Ask yourself: if a zero-day patch for SSH were released tomorrow, how would you handle it? If the answer is "roll it out by hand", you have already lost. These are the sorts of things that differentiate small-scale thinking about individual systems from large-scale thinking about an infrastructure ecosystem.
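To make the contrast concrete: on a Puppet-managed fleet, a minimal sketch of that rollout is a one-line version bump in a shared manifest, verified in a lab first and then picked up by every agent on its next run (the package name is the Debian/Ubuntu one; the version string is invented for illustration):

```puppet
# Illustrative fix rollout: pin the patched build fleet-wide.
# The version string is hypothetical; test it in a lab environment first.
package { 'openssh-server':
  ensure => '5.9p1-5ubuntu1.1',  # bump this single line to roll the fix out
}

service { 'ssh':
  ensure    => running,
  subscribe => Package['openssh-server'],  # restart sshd when the package changes
}
```

The point is not the specific resource syntax but that the change is made once, in one place, and converges everywhere, instead of being repeated by hand on each server.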
In startups these days, working from a standard configuration is a basic assumption, and through the use of tools like Puppet, Chef and CFEngine it is becoming more and more mainstream. If you look at the names of some of the companies sending people to PuppetConf and the upcoming ChefConf, these ideas are catching on in very large enterprises too, and that is a very different space with very different requirements.
In green-field environments, it is relatively easy to control configuration drift if you built the environment correctly from the start using configuration management practices. In legacy environments, it is not that easy: you have existing servers built by different people over a number of years, new operating system releases leave the older ones behind, and technical debt piles up if left unchecked. Pulling that all together is really difficult, and as configuration management tools are adopted in large enterprises, this is something that should be spoken about more openly at conferences. This is not a solved problem, not by a long shot.
I know of one investment bank (not my current employer, but I'd love to work there) that rebuilds its global infrastructure every night to ensure absolute consistency and avoid configuration drift. While I personally think that is a little over the top, achieving that level of control over, and confidence in, your infrastructure is really the pinnacle of system administration, regardless of whether you are a startup or a bank that has been around for 200 years.
If you are not using a configuration management system, pick one, use it, and get on to more interesting things, like adding value for your business.