DevOps needs a layered approach – not just process or automation

With any new, emerging area, the tendency is for advocates of each new approach to attempt to invalidate or minimize all earlier approaches. Sometimes this is appropriate, but rarely is progress so clear-cut. In this vein, I would like to comment on Phil Cherry’s post on DevOps.com. First off, I appreciate Phil’s addition to the discussion, and I think his delineation between automation approaches is very interesting. However, the devil is in the details. Here are the highlights of my views on this:

Package-based automation

As a former BladeLogic guy, I would be remiss if I didn’t correct a few points in Phil’s analysis. Phil may be confusing OS vendor packages (RPMs, MSIs, etc.) with configuration management packages. Systems like BladeLogic build packages based on some sort of configuration object system. In other words, the server is represented as a set of configuration objects (a registry key, a setting in a config file, etc.) in a particular state. The packages are usually represented as desired states for configurations based on that same object model. There is no reason those packages have to be applied “all in one go”, since packages can be chained and included in larger jobs with decision conditions. That said, I agree that this type of automation is, for the most part, per-server based.
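To make that concrete, here is a minimal sketch of the model I’m describing – configuration objects, desired-state packages, and jobs that chain packages behind decision conditions. All of the class names are my own invention, not BladeLogic’s actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ConfigObject:
    """A single configuration item: a registry key, a config-file setting, etc."""
    path: str
    desired_value: str

    def is_compliant(self, server_state: dict) -> bool:
        return server_state.get(self.path) == self.desired_value

    def apply(self, server_state: dict) -> None:
        server_state[self.path] = self.desired_value

@dataclass
class Package:
    """A desired state expressed over the same configuration object model."""
    name: str
    objects: list[ConfigObject] = field(default_factory=list)

    def apply(self, server_state: dict) -> None:
        for obj in self.objects:
            if not obj.is_compliant(server_state):
                obj.apply(server_state)

@dataclass
class Job:
    """Packages chained with decision conditions -- not applied 'all in one go'."""
    steps: list[tuple[Package, Callable[[dict], bool]]] = field(default_factory=list)

    def run(self, server_state: dict) -> None:
        for package, condition in self.steps:
            if condition(server_state):
                package.apply(server_state)

# Chain two packages: the second only runs if the first took effect.
pkg_conf = Package("app-config", [ConfigObject("/etc/app.conf:port", "8443")])
pkg_fw   = Package("firewall",   [ConfigObject("fw:allow-port", "8443")])

job = Job(steps=[
    (pkg_conf, lambda state: True),
    (pkg_fw,   lambda state: state.get("/etc/app.conf:port") == "8443"),
])
job.run(server_state={})
```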

Application Understanding

I do agree that the automation models Phil defines don’t understand multi-server dependencies or really know what an “application” is. What Phil ignores in this context is that there are other automation approaches that do bridge this multi-system gap by building on the automation platforms. In particular, the trends within virtualization and cloud have pushed vendors to create multi-server, application-focused automation platforms. You can find solutions from established vendors like BMC or VMware, from open-source platforms like Puppet with OpenStack, as well as from startups like ElasticBox. Bottom line, it is a vast oversimplification to limit an overview of DevOps-capable automation to tools with only a server heritage. This area of automation is clearly evolving and growing, and it deserves a more holistic overview.
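As a rough illustration of what “application-focused” means here – a sketch of the idea, not any vendor’s actual API – an application model maps tiers to servers and captures the cross-tier dependencies that drive deployment order:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# tier: (servers behind it, tiers it depends on)
application = {
    "database": (["db01", "db02"], []),
    "app":      (["app01", "app02", "app03"], ["database"]),
    "web":      (["web01", "web02"], ["app"]),
}

def deploy_tier(tier: str, servers: list) -> None:
    # In a real platform this would call down into the per-server automation.
    print(f"deploying {tier} to {', '.join(servers)}")

# Resolve the multi-server dependencies, then deploy tier by tier.
order = TopologicalSorter({tier: set(deps) for tier, (_, deps) in application.items()})
for tier in order.static_order():
    deploy_tier(tier, application[tier][0])
```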

How does process fit in?

As John Willis, and others, have said many times before, culture and process are just as much a part of a DevOps approach as basic automation. So, it is appropriate for Phil to end with a process-based approach. Clearly, rolling out an application requires an understanding of the end-to-end process, how steps are related, and how services are dependent. I do feel, though, that Phil left out a few key points:

Process Management and Deployment Automation are not the same

I feel like Phil blurs the line between managing the process of release, which is a people-process issue, and managing the deployment of an application. The latter involves pulling together disparate automation with a cross-server, application-focused view. Process management, on the other hand, deals with the more holistic problem of driving the release of an application from development all the way to production. Both are needed, but they aren’t the same thing.

What about coordination?

One of the biggest drivers of DevOps is getting Dev and Ops to coordinate and collaborate on application releases. This means driving Dev visibility forward into Ops, and Ops visibility back into Dev. It isn’t just about creating well-aligned deployment processes, but also about managing the entire release process from the code repository to production. This means we need to encapsulate pre-prod and prod processes, as well as non-system activities (like opening change tickets).
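To sketch what that encapsulation might look like – step names here are hypothetical – a release pipeline treats system steps and non-system steps (like change tickets) as peers in one sequence that both Dev and Ops can see:

```python
def build_from_repo():
    print("build: pull and compile from the code repository")

def deploy_preprod():
    print("deploy: push the build to pre-prod")

def run_acceptance_tests():
    print("test: run the acceptance suite in pre-prod")

def open_change_ticket():
    print("process: open a change ticket for the production push")

def deploy_prod():
    print("deploy: push the approved build to production")

release_pipeline = [
    build_from_repo,
    deploy_preprod,
    run_acceptance_tests,
    open_change_ticket,   # a non-system activity, still part of the pipeline
    deploy_prod,
]

for step in release_pipeline:
    step()
```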

What about planning?

Releasing and managing applications isn’t just about the here and now. It is also about planning for the future. Any process-oriented approach has to allow not only for the coordination of deployment processes, but also for the establishment of clear, flexible release processes visible to all stakeholders. In particular, a process management system should provide visibility to the decision makers as well as the executors. Applications clearly affect non-technical activities like marketing and executive planning, so it is important that business leaders be able to understand where releases stand, and when important features are likely to arrive.

What we need is a layered approach

Bottom line, we need to solve all of the application release issues – process, deployment, and automation. In the spirit of DevOps, those “layers” can be solved somewhat independently, with loose coupling between them. We drive the people-process coordination, encapsulate all of the complexity necessary to deploy an application, and then drive the low-level automation necessary to actually implement it. All of these work together to create a full approach to Application Release Automation. Any solution that ignores a layer risks solving one problem and then creating a whole new set of them.
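One way to picture that loose coupling – purely illustrative, with interface names of my own invention – is three layers where each one only talks to the layer beneath it through a narrow interface, so any layer can be swapped out independently:

```python
class AutomationLayer:
    """Low-level, per-server automation (scripts, configuration packages)."""
    def run(self, task: str, server: str) -> None:
        print(f"automation: {task} on {server}")

class DeploymentLayer:
    """Application-focused: maps an app release onto server-level tasks."""
    def __init__(self, automation: AutomationLayer):
        self.automation = automation

    def deploy(self, app: str, servers: list) -> None:
        for server in servers:
            self.automation.run(f"deploy {app}", server)

class ProcessLayer:
    """People-process coordination: drives a release from stage to stage."""
    def __init__(self, deployment: DeploymentLayer):
        self.deployment = deployment

    def release(self, app: str, stages: dict) -> None:
        for stage, servers in stages.items():
            print(f"process: approving {app} for {stage}")
            self.deployment.deploy(app, servers)

process = ProcessLayer(DeploymentLayer(AutomationLayer()))
process.release("storefront", {"pre-prod": ["qa01"], "prod": ["web01", "web02"]})
```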

IT Automation Curator for DevOps – Part 2 – Collect and Catalog

This topic is far too interesting and deep to cover in just one blog post. So, I am going to split the discussion into a few sections. I’ll use my proposed “job description” for an IT Automation Curator as a starting point:

  • Collect existing automation, and then Catalog it where others can find it
  • Develop new automation based on requirements from IT
  • Train others on how to use the automated processes
  • Maintain the existing automation

This first step – collect and catalog – is where I have seen many automation efforts stumble. The natural inclination of most techies (myself included) is to jump right into developing automation, no matter what is already in place. As I learned the hard way, that is a bad idea. So, I will give a few reasons why this step is important:

Reason #1: If you don’t know about all the automation in place, you don’t really understand how your data center is operating

It’s great that you developed that new automated process that auto-magically deploys a set of configurations for you. But are you sure that other scripts or tools won’t change or corrupt it? Most IT teams have scripts strewn all over the place – some well known, some the detritus of sysadmins past. They may be scheduled centrally or on individual servers. This is very hard to get a grip on. There are a few tools out there, but it is hard to ensure that you have found all the automation spread across all your systems. This is just another reason why you need to control access and even rebuild some servers from scratch (hopefully in an automated way).
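As a starting point for the “collect” step, here is the kind of discovery script I have in mind – a sketch only, with paths that are common defaults you would adjust for your own environment:

```python
import os
import pwd
import subprocess

def collect_crontabs() -> dict:
    """Gather per-user crontabs (requires permission to read them)."""
    found = {}
    for user in pwd.getpwall():
        result = subprocess.run(
            ["crontab", "-l", "-u", user.pw_name],
            capture_output=True, text=True,
        )
        if result.returncode == 0 and result.stdout.strip():
            found[user.pw_name] = result.stdout
    return found

def collect_scripts(roots=("/usr/local/bin", "/opt/scripts")) -> list:
    """Walk likely script locations and record shell, Python, and Perl scripts."""
    scripts = []
    for root in roots:
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith((".sh", ".py", ".pl")):
                    scripts.append(os.path.join(dirpath, name))
    return scripts

if __name__ == "__main__":
    print("users with crontab entries:", sorted(collect_crontabs()))
    print("scripts found:", collect_scripts())
```

Run something like this across your servers, feed the results into a central catalog, and you at least have a map of what is quietly running out there.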

Most IT operations teams also have multiple automation tools in play. Each siloed team has its preferred tool, which it guards jealously. Overall, this is not a good approach. The more tools you have, the harder it is to standardize automation and create efficient end-to-end processes. At a minimum, all of these tools need to be documented and managed centrally.

Reason #2: Don’t duplicate work and ignore experience

A lot of the automation in place may not be optimal, but it was most likely built to solve the same problems you will need to solve later. Tossing it out, or just ignoring it, is essentially disregarding the combined experience of the IT team. Even if you rebuild it in a better tool and in a more efficient way, the lessons learned will be valuable.

There is also an important lesson here about prioritization. Just because you can make an automated process more elegant or more efficient doesn’t mean you should. More often than not, you will have no end of automation projects to look at. Why spend your time on what already works? What is important is to apply automation judiciously, where it provides the most value for the business.

Reason #3: More sharing will always lead to better results

Fostering a culture of sharing automation – essentially an open-source culture – will ensure that everyone has access to the best work on offer and doesn’t re-invent the wheel, and it will allow for continual improvement. That last point is crucial. The idea is not for the automation curator to control all the automation per se. They should be a catalyst for better automation, whether they build it themselves or not. So, it is important to leave your ego at the door and admit that your automation becomes better when you let others critique it and improve on it.

Bottom line, having a central place to share and continually improve automation is essential. This will most likely affect your choice of automation platforms as well. If you can’t share and improve, then you will be hobbling yourselves.

So, how do you do this in your own environment? Do you have ideas about the best way to go about it? Any success stories?

Is the DevOps community setting the bar too low for automation tools?

I have been in the automation business – particularly configuration and application automation – for a while now. It is very good to see how the current trend of DevOps is pushing IT departments to really and truly embrace automation, and not just for the server. All that said, I feel a little like the mainframe guys do about cloud when I see blog posts like this. UrbanCode is now boasting that configuration-only deployments are the best thing since the first shell script emerged from the primordial ooze of UNIX. Really?! Configuration management systems should boast about being able to figure out on the fly what needs to be deployed, and then deploying only that – not forcing their users to figure that out for them and calling it a feature.

Have we really taken a step back in the configuration automation industry, to a point where boasting about functions that should have been in your product years ago substitutes for substantive contributions? And if this is the new normal, is it working? This kind of relabeling and repackaging of old ideas is not new to configuration automation software. Oh wait, now my scripts are “compiled scripts”. Or: my scripts are failing, so I moved from scripts to METAscripts written in a METAscripting language that only a METAcommunity knows :). And now I am rockin’ 25-service environments (who cares that the “last generation” tools are known to manage 1,000+ service environments). Bottom line, can the current batch of self-styled DevOps automation tools really hang tight in the concrete jungle of enterprise IT operations?
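To be concrete about what “figuring out on the fly” means – a toy illustration, not any product’s implementation – the system diffs desired state against current state and pushes only the delta:

```python
def compute_delta(current: dict, desired: dict) -> dict:
    """Return only the items that are missing or out of date."""
    return {
        name: version
        for name, version in desired.items()
        if current.get(name) != version
    }

current_state = {"app.war": "1.3", "db-schema": "42", "app.conf": "a1b2"}
desired_state = {"app.war": "1.3", "db-schema": "43", "app.conf": "c3d4"}

# Only db-schema and app.conf get deployed; app.war is already correct.
for item, version in compute_delta(current_state, desired_state).items():
    print(f"deploying {item} -> {version}")
```

The tool does the figuring, not the user – that is the baseline I think we should be holding vendors to.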

Don’t get me wrong. It is this very willingness to thumb one’s nose at one’s predecessors, upon whose shoulders one currently stands, that is at the core of innovation. As Picasso said, “Good artists copy, great artists steal.” So, do we look to the purveyors of software for the solution to the problem?

The simple answer is: no. The reason a lot of this happens, in my opinion, is that each new group in IT that finds itself tackling a new problem rarely looks backwards, or even in the next cubicle over, for solutions that have already worked. And with DevOps in particular, there are some questions that the users (IT departments) should be asking the vendors, and themselves. Not every IT department out there will need or want the same solution, but they owe it to themselves to be thorough. So, what does an IT department do to make the right decision?

1) You have to weigh the short-term needs of the immediate problem (say, small-scale DevOps) against the longer-term rollout (DevOps in full production). Many poor IT decisions are made on the basis of “cool feature”-itis, rather than the mundane process of choosing what makes the best sense for the business.

2) Use business metrics. Every IT purchasing decision should be made on the basis of sound business metrics (we will save X% in costs, increase revenue by Y%, etc.). That means you need to invite those MBA graduates from the other office over to the team. I know – you don’t want them to bean count you into oblivion. Just realize that they speak the right language to get the project funded. And make them pay for lunch. I’ll sketch an example of this kind of metric right after this list.

3) Hold the vendors (including us) accountable for the statements that we make. We should deliver references and case studies to back up our case. And, if those metrics stand up, you can use them for your business case.
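To make point 2 concrete, here is a back-of-the-envelope example of the kind of metric I mean – every number below is made up purely for illustration:

```python
annual_ops_cost = 2_000_000    # current cost of manual release work ($/year)
expected_savings_rate = 0.15   # "we will save 15% in costs"
tool_cost = 150_000            # annual license plus implementation ($/year)

annual_savings = annual_ops_cost * expected_savings_rate
roi = (annual_savings - tool_cost) / tool_cost
print(f"annual savings: ${annual_savings:,.0f}, first-year ROI: {roi:.0%}")
# -> annual savings: $300,000, first-year ROI: 100%
```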

Bottom line, set the bar HIGHER, DevOps community. You owe it to yourselves, and your business, to expect more from your vendors.

Reposted from BMC Communities