DevOps needs a layered approach – not only process or automation

With any new, emerging area, the tendency is for advocates of each new approach to attempt to invalidate or minimize all earlier approaches. Sometimes this is appropriate, but rarely is progress so clear-cut. In this vein, I would like to comment on Phil Cherry’s post on DevOps.com. First off, I appreciate Phil’s addition to the discussion. I think his delineation between automation approaches is very interesting. However, the devil is in the details. Here are the highlights of my views on this:

Package-based automation

As a former BladeLogic guy, I would be remiss if I didn’t correct a few points in Phil’s analysis. Phil may be confusing OS Vendor packages (RPMs, MSIs, etc.) with configuration management packages. Systems like BladeLogic build packages based on some sort of configuration object system. In other words, the server is represented as a set of configuration objects (a registry key, setting in a config file, etc.) in a particular state. The packages are usually represented as desired states for configurations based on that same object model. There is no reason that those packages have to be applied “all in one go”, since packages can be chained and included in larger jobs with decision conditions. That said, I agree that this type of automation is per-server based, for the most part.
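To make the configuration object model concrete, here is a minimal sketch in Python – with entirely hypothetical names, not BladeLogic’s actual API – of a desired-state package applied per server, and of packages chained into a larger job behind a decision condition:

```python
# A "package" declares the desired state of configuration objects
# (registry keys, config-file settings, etc.) as key/value pairs.
# Applying it only changes objects that deviate from the desired state.

def apply_package(current_state, package):
    """Apply only the config objects that are out of compliance; return what changed."""
    changed = []
    for key, desired in package.items():
        if current_state.get(key) != desired:
            current_state[key] = desired
            changed.append(key)
    return changed

def run_job(current_state, packages, condition=lambda state: True):
    """Chain packages into a larger job; a decision condition can gate later steps."""
    for package in packages:
        if not condition(current_state):
            break  # packages need not be applied "all in one go"
        apply_package(current_state, package)
    return current_state

server = {"ssl.enabled": "false", "max_connections": "100"}
web_pkg = {"ssl.enabled": "true", "max_connections": "500"}
changed = apply_package(server, web_pkg)
assert changed == ["ssl.enabled", "max_connections"]
# A compliant re-run touches nothing:
assert apply_package(server, web_pkg) == []
```

The point of the sketch is the per-server, desired-state semantics: applying a package twice is a no-op, and the decision condition is what lets packages be chained rather than forced into a single monolithic run.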

Application Understanding

I do agree that Phil’s definition of automation models don’t understand multi-server dependencies or really know what an “application” is. Phil does ignore in this context that there are other automation approaches that do bridge this multi-system approach by building on the automation platforms. In particular, the trends within virtualization and cloud have pushed vendors to create multi-server, application-focused automation platforms. You can find solutions with established vendors like BMC or VMWare, with open-source platforms like Puppet with OpenStack, as well as with startups like ElasticBox. Bottom line, it is vast oversimplification to limit an overview of DevOps-capable automation to automation tools with a server-heritage only. This area of automation is clearly evolving and growing, and deserves a more holistic overview.

How does process fit in?

As John Willis, and others, have said many times before, culture and process are just as much a part of a DevOps approach as basic automation. So, it is appropriate for Phil to end with a process-based approach. Clearly, rolling out an application requires an understanding of the end-to-end process, how steps are related, and how services are dependent. That said, I do feel that Phil left out a few key points.

Process Management and Deployment Automation are not the same

I feel that Phil blurs the line between managing the process of release, which is a people-process issue, and managing the deployment of an application. The latter involves pulling together disparate automation with a cross-server, application-focused view. Process management, on the other hand, deals with the more holistic problem of driving the release of an application from development all the way to production. Both are needed, but they aren’t the same thing.

What about coordination?

One of the biggest drivers of DevOps is getting Dev and Ops to coordinate and collaborate on application releases. This means driving Dev visibility forward into Ops, and Ops visibility back into Dev. It isn’t just about creating well-aligned deployment processes, but also about managing the entire release process from the code repository to production. This means we need to encapsulate pre-prod and prod processes, as well as non-system activities (opening change tickets, for example).

What about planning?

Releasing and managing applications is not just about the here and now. It is also about planning for the future. Any process-oriented approach has to allow not only for the coordination of deployment processes, but also for the establishment of clear and flexible release processes visible to all stakeholders. In particular, a process management system should provide visibility to the decision makers as well as the executors. Applications clearly affect non-technical activities like marketing and executive planning, so it is important that business leaders be able to understand where releases stand, and when important features are likely to arrive.

What we need is a layered approach

Bottom line, we need to solve all of the application release issues – process, deployment, and automation. In the spirit of DevOps, those “layers” can be solved somewhat independently, with loose coupling between them. We drive the people-process coordination, encapsulate all of the complexity necessary to deploy an application, and then drive the low-level automation necessary to actually install the application. All of these work together to create a full approach to Application Release Automation. Any solution that ignores a layer risks solving one problem and creating a whole new set of them.

IT Automation Curator – Good for techies, good for business, good for DevOps

Recently my thoughts have been going back to a concept I like in the seminal IT operations book, The Visible Ops Handbook (by Gene Kim, Kevin Behr, and George Spafford). I have been doing a lot of thinking about how Lean, DevOps, Agile, etc. are changing IT culture, or at least pressing for change. Properly leveraged automation is a big part of that change process – which makes me think of the passage in Visible Ops where the authors discuss changing the behavior of senior IT staff:

“Their mastery of configurations continually increases while they integrate it into documented and repeatable processes. We jokingly refer to this phenomenon as ‘turning firefighters into curators’ […]”*

As a former IT techie myself, I get the need to challenge oneself in the often routine and monotonous world of IT. Personally, I think that is a lot of the grass-roots impetus behind the DevOps movement, and the adoption of open-source automation tools. Creating automation is a way of turning the mind-numbingly mundane into something exciting and intellectually challenging. So far so good. Boredom leads to sinking morale and productivity – and poor morale is bad for business.

So, what’s not to like? In short, it goes back to focus and sustainability. No, I’m not talking green-energy windmills. How do you sustain and focus the efforts of these budding automation aficionados? Left to their own devices, they will likely create lots of useful, but narrowly directed, scripts, packages, etc. All of these will be focused on the problems they face on a daily basis. Problems outside the automation guru’s gaze will most likely remain unsolved.

So, this is where the idea from Visible Ops comes to the rescue. The answer is that we pull these gurus out of their day-to-day grind in the IT trenches, and make them automation curators. Now, I know that many of you hear curator and think of an older man in a tweed jacket, peering over horn-rimmed glasses, waxing rhapsodic about the various manufacturer stamps on 18th-century American chamber pots. So, as interesting as early American port-a-potties may be, let’s look at the definition of curator:

curator – one who has the care and superintendence of something (Merriam-Webster Dictionary)

Clearly tweed is not mentioned. In all seriousness, museum curators do much more than merely talk about old things. Considering the Smithsonian’s own description, curators:

  • Acquire new items for the collection
  • Research the collection
  • Display the collection
  • Maintain the collection

So, if we work off the Smithsonian’s “model”, I suggest that an IT Automation Curator would:

  • Collect existing automation, and then catalog it where others can find it
  • Develop new automation based on requirements from IT
  • Train others on how to use the automated processes
  • Maintain the existing automation
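As a sketch of the “collect and catalog” duty, here is what a minimal, searchable automation catalog might look like in Python – all field names and entries are illustrative, not taken from any particular tool:

```python
# A tiny catalog of automation artifacts, so colleagues can discover
# and reuse existing work instead of re-scripting it from scratch.

catalog = [
    {"name": "rotate_web_logs", "kind": "script", "owner": "ops",
     "tags": ["logging", "linux"], "doc": "Rotates and archives web server logs."},
    {"name": "provision_db_vm", "kind": "job", "owner": "dba",
     "tags": ["provisioning", "database"], "doc": "Builds a standard database VM."},
]

def find_automation(tag):
    """Return the names of cataloged automation matching a tag."""
    return [entry["name"] for entry in catalog if tag in entry["tags"]]

assert find_automation("logging") == ["rotate_web_logs"]
```

Even a catalog this simple beats the usual alternative – useful scripts scattered across home directories where only their authors know they exist.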

This kind of role is exactly what I wish someone had offered me early in my career. I would have jumped at it. It would have been a great new challenge for me, I would have been creating value for the business, and IT would have been more efficient. And this isn’t really a new idea. Software developers have long needed to share code snippets and concepts with each other, and to define the interfaces between code as well. The trick here is that the Automation Curator needs to take an active role both in building the best automation and in promoting the proper use of automation in IT.

One last comment. We might ask if this would be better classified as an Automation Librarian. I think that is a good question. At the end of the day, the existence of the position is more important than what you call it. However, in my mind the concept of curator leans more towards the acquisition, development, and training part. The words Library and Librarian in IT seem to lean more towards the maintenance and storage part of the equation (notwithstanding what traditional librarians actually do). Curator is also a cool word.

So, why aren’t more IT shops doing this? What do you think?

This is the first part of a multi-part series. Check out the other parts:

* Kim, Gene; George Spafford; Kevin Behr (2005-06-15). The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps (Kindle Locations 917-919). IT Process Institute, Inc. Kindle Edition.

Can you do DevOps without automation?

I was reading Tom Parish’s interview with Lori MacVittie in the DevOps Leadership Series, and I came across her statement that DevOps and automation are not, in fact, tied at the hip. I think that asking if DevOps needs automation to be successful is a really good question. In fact, it may be one of the essential questions – which comes first, process or automation? Lori clearly lands on the process side, and I tend to agree.

I always make the analogy to a hammer and nails. After repairing my fence last year, I discovered what a wonderful thing a nail gun is compared to a hammer and nails. What would have taken me 20-30 minutes took me 5 minutes – bam, bam, bam. But nail guns are also dangerous. You can’t shoot yourself in the foot with a hammer… I think automation is a lot like that. Automating something with no firm process in mind is like handing a nail gun to someone who immediately nails their foot to the floor – in record time. And I am not innocent of this. At one point in my career I was fascinated with the power of JScript on Windows when combined with Active Directory (yes – I am geeking out – bear with me). I could write one script that would execute across tens of servers in a way that was far more difficult in UNIX at the time. But that was the problem. I tested a script on a live environment, and almost took out a live site (we won’t discuss just how close I came…).
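The lesson from that near-miss can be sketched as a simple safety pattern: multi-server automation that defaults to a dry run, and only touches live systems when explicitly told to. The names here are illustrative, not from any real tool:

```python
# Multi-server automation guarded by a dry-run default. The process
# decision (run for real or not) is explicit, not an accident of testing.

def run_on_servers(servers, command, live=False):
    """Preview what would happen by default; execute only when live=True."""
    results = []
    for server in servers:
        if live:
            # A real implementation would dispatch the command here.
            results.append(f"EXECUTED {command} on {server}")
        else:
            results.append(f"DRY-RUN would run {command} on {server}")
    return results

preview = run_on_servers(["web01", "web02"], "restart-app")
assert all(line.startswith("DRY-RUN") for line in preview)
```

The design choice is the point: making “live” an opt-in flag forces the operator to confront the process question before the nail gun fires.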

Now, the counterpoint is that well-tested automation following a good process can’t be beaten. I remember back in my BladeLogic days when a sysadmin was intent on convincing me that she could make the same change to 20 UNIX servers, one after another, and be more accurate than an automated tool. Doubtful. But say she could. That only makes sense on the smallest of scales. If she had automated that, she could have spent her time on something actually useful. Imagine that. And isn’t that the point – to make ourselves more productive, so we can make the business (and ourselves) more successful?

That is why applying a methodology like Kanban to DevOps is so promising. Instead of blindly applying automation with cool tools, you look at your processes holistically, and apply automation systematically where it provides the most value. That is exactly what manufacturing has done over the last few years.

So, yes. You can do DevOps without automation. But that’s the wrong question. Can you do DevOps without well-understood process and culture change? No. And don’t apply automation unless you tackle that as well.

Is the DevOps community setting the bar too low for automation tools?

I have been in the automation business – particularly configuration and application automation – for a while now. It is very good to see how the current trend of DevOps is pushing IT departments to really and truly embrace automation – and not just for the server. All that said, I feel a little like the mainframe guys do about cloud when I see blog posts like this. UrbanCode is now boasting that configuration-only deployments are the best thing since the first shell script emerged from the primordial ooze of UNIX. Really?! Configuration management systems should boast about being able to figure out on the fly what needs to be deployed, and then deploying only that – not forcing their users to figure that out for them and calling it a feature.

Have we really taken a step back in the configuration automation industry, to a point where boasting about functions that should have been in your product years ago substitutes for substantive contributions? And if this is the new normal, is it working? This kind of relabeling and repackaging of old ideas is not new to configuration automation software. Oh wait, now my scripts are “compiled scripts”. Or – my scripts are failing, so I moved from scripts to METAscripts written in a METAscripting language that only a METAcommunity knows :). And now I am rockin’ 25 service environments (who cares that the “last generation” tools are known to manage 1,000+ service environments). Bottom line, can the current batch of self-styled DevOps automation tools really hang tight in the concrete jungle of enterprise IT operations?
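For clarity, here is the behavior I am arguing should be table stakes, sketched in Python with illustrative names: the tool itself computes which artifacts differ from what is already deployed, and deploys only those – the user never has to work that out by hand.

```python
# Deploy-only-the-diff: compare fingerprints of the desired release
# against what is already on the server, and plan only the delta.
# The short hash strings stand in for real artifact checksums.

def plan_deployment(deployed, desired):
    """Return only the artifacts that are new or whose content differs."""
    return {name: digest for name, digest in desired.items()
            if deployed.get(name) != digest}

deployed = {"app.war": "aaa1", "app.properties": "bbb2"}
desired  = {"app.war": "aaa1", "app.properties": "ccc3", "schema.sql": "ddd4"}
plan = plan_deployment(deployed, desired)
# Only the changed config file and the new artifact are in the plan:
assert plan == {"app.properties": "ccc3", "schema.sql": "ddd4"}
```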

Don’t get me wrong. This very willingness to thumb one’s nose at the predecessors upon whose shoulders one stands is at the core of innovation. As Picasso said, “Good artists copy, great artists steal.” So, do we look to the purveyors of software for the solution to the problem?

The simple answer is no. The reason a lot of this happens, in my opinion, is that each new group in IT that finds itself tackling a new problem rarely looks backwards, or even in the next cubicle over, for solutions that have already worked. And with DevOps in particular, there are some questions that the users (IT departments) should be asking the vendors, and themselves. Not every IT department out there will need or want the same solution, but they owe it to themselves to be thorough. So, what does an IT department do to make the right decision?

1) You have to weigh the short-term needs of the immediate problem (say, small-scale DevOps) against the longer-term rollout (DevOps in full production). Many poor IT decisions are made on the basis of “cool feature”-itis, rather than the mundane process of choosing what makes the best sense for the business.

2) Use business metrics. Every IT purchasing decision should be made on the basis of sound business metrics (We will save X% in costs, increase revenue by Y%, etc.). That means you need to invite those MBA graduates from the other office over to the team. I know – you don’t want them to bean count you into oblivion. Just realize that they speak the right language to get the project funded. And make them pay for lunch.

3) Hold the vendors (including us) accountable for the statements that we make. We should deliver references and case studies to back up our case. And, if those metrics stand up, you can use them for your business case.

Bottom line, set the bar HIGHER, DevOps community. You owe it to yourselves, and your business, to expect more from your vendors.

Reposted from BMC Communities