Freedom of choice – Three Ways DevOps is Revolutionizing Enterprise Software

“I’m free to do what I want, any old time”

– “I’m Free” – Rolling Stones

Here comes the Revolution

Anyone reading the tech news these days can see that something profound is happening with enterprise software. Venture capital money is flowing into enterprise startups (including Sumo Logic). Old stalwarts like Dell and BMC are likely to be taken private in order to rework their business models. The stocks of other software titans are being punished for clinging to old, proprietary models. Most of the old school crowd are trying to improve their image by buying younger, more attractive companies – like IBM with UrbanCode. So, exactly what is happening?

Some of it is clearly not new. The big fish in the enterprise pond are always gobbling up the small innovators, hopefully improving the lot of the larger company. Why then are some calling this the Golden Age of Enterprise Software? Many will point to BYOD, Cloud, Big Data, DevOps, etc. Personally, I think there is a more subtle trend here. More than ever before, software developers have tools at their fingertips that allow them to deliver software quickly and efficiently, and at the same time they are being held more responsible for the performance of that application.

In the best-case scenarios, this has led to highly disruptive and innovative practices that shatter the enterprise software model (take Etsy and Netflix as two prominent examples). Instead of an operations team passively receiving poorly tested code from unengaged developers, you get highly automated, constantly adapting architectures deftly driven forward by highly skilled, highly motivated DevOps teams. I am not saying something new here. What I find interesting is the disruptive effect this is having on enterprise software in general. Here are three general trends I see:

 

1. The Rebirth of Automation

This is the DevOps trend that most point to, dominated by Puppet Labs and Opscode (Chef). It seems 90%+ of DevOps discussions start here. The deft combination of flexibility, expandability, and community appeals to a development mindset already conditioned by the open source movement. The idea of “Infrastructure as Code” is a natural extension of first virtualization, then cloud computing. It is now so easy to create new “servers” that there is no excuse not to completely automate the build and maintenance of those servers.
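To make “Infrastructure as Code” concrete, here is a minimal Python sketch of the idea (the resource model and names are my own hypothetical invention – real tools like Puppet and Chef use their own, much richer DSLs): the desired state of a server is declared as data, and an engine converges the machine toward it.

```python
# A minimal, hypothetical sketch of the "Infrastructure as Code" idea:
# desired state is declared as data, and an engine converges the system
# toward it. Real tools (Puppet, Chef) use richer DSLs and resource models.
import os

desired_state = [
    {"type": "directory", "path": "/tmp/demo-app", "mode": 0o755},
    {"type": "file", "path": "/tmp/demo-app/app.conf",
     "content": "port=8080\nlog_level=info\n"},
]

def converge(resources):
    """Bring the system to the declared state, idempotently."""
    for res in resources:
        if res["type"] == "directory":
            if not os.path.isdir(res["path"]):
                os.makedirs(res["path"], mode=res["mode"])
                print("created", res["path"])
        elif res["type"] == "file":
            current = None
            if os.path.exists(res["path"]):
                with open(res["path"]) as f:
                    current = f.read()
            if current != res["content"]:
                with open(res["path"], "w") as f:
                    f.write(res["content"])
                print("updated", res["path"])

converge(desired_state)   # safe to run repeatedly - only drift is corrected
```

Running it twice changes nothing the second time – that idempotence is what makes it safe to build and maintain servers entirely from code.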

2. The “Re-discovery” of the Customer

The proven theories of lean manufacturing have long stalked IT in the form of concepts like lean software development and Six Sigma, and some of the DevOps community is trying hard to bring these concepts to IT Operations. Underlying this is the growing importance of consumer and user satisfaction. The switching costs are so low, and the channels of feedback so abundant, that companies can no longer afford to ignore their users. This means that the lessons learned by the automotive industry – eliminate everything that doesn’t provide value to the customer – are now essential for the IT industry. This is not good news for legacy software companies associated with the image of uncaring, passive IT departments. It is also fueling the rise of cost-effective solutions like SaaS (are million-dollar software solutions gathering dust providing customer value?).

3. Measure Everything

As the DevOps movement takes on more of Lean thinking, the importance of measurement rises. In the seminal book on DevOps – “The Phoenix Project” – monitoring is a central theme. We see this in the real world with Etsy’s efforts. They are monitoring thousands of metrics with statsd, providing insight into every part of their application. So, what’s different here? In the old world, what you monitor is dictated to you by software vendors who deliver generic metrics meant to fit all customers. In the new world order, developers can add metrics directly into their logs, or through a tool like statsd, and monitor exactly what they want. In the spirit of open source, it is more important to get what you need (custom, relevant metrics) than to get it in a pretty package. In essence, this means the old Application Performance Monitoring (APM) tools may be headed for a rude awakening. Why do you even need an APM tool if you can pump custom metrics to a generic graphing tool, or a log analysis tool? Well, I am not sure that you do…
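To make this concrete, here is a minimal Python sketch of what “pumping custom metrics” looks like over the statsd wire protocol – plain UDP datagrams of the form “name:value|type”. The metric names are hypothetical examples; real code would usually use a statsd client library rather than raw sockets.

```python
# Minimal sketch: emitting custom metrics using the statsd wire protocol.
# statsd accepts plain-text UDP datagrams of the form "name:value|type",
# where type is "c" (counter), "ms" (timing), or "g" (gauge).
# Metric names below are hypothetical examples.
import socket
import time

STATSD_HOST, STATSD_PORT = "localhost", 8125
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_metric(name, value, metric_type):
    sock.sendto(f"{name}:{value}|{metric_type}".encode(),
                (STATSD_HOST, STATSD_PORT))

# Count an event the moment it happens in application code.
send_metric("checkout.completed", 1, "c")

# Time an operation and report it in milliseconds.
start = time.time()
# ... do some work here ...
send_metric("checkout.duration", int((time.time() - start) * 1000), "ms")

# Report a current level, e.g. queue depth.
send_metric("orders.queue_depth", 42, "g")
```

Because the protocol is fire-and-forget UDP, instrumentation like this costs almost nothing at runtime, which is exactly what makes “measure everything” practical.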

These points are only one small part of what is changing, and I don’t claim to know exactly what the future holds for IT software vendors. What is obvious, though, is that the barriers to entry for innovation are low, and the money willing to chase it is plentiful, so this is definitely a golden age – just not for the old school, perhaps…

* Picture of statsd graph from Etsy’s blog – Code as Craft


DevOps needs a layered approach – Not only process or automation

With any new, emerging area, the tendency is for advocates of each new approach to attempt to invalidate or minimize all earlier approaches. Sometimes this is appropriate, but rarely is progress so clear cut. In this vein, I would like to comment on Phil Cherry’s post on DevOps.com. First off, I appreciate Phil’s addition to the discussion. I think his delineation between automation approaches is very interesting. However, the devil is in the details. Here are the highlights of my views:

Package-based automation

As a former BladeLogic guy, I would be remiss if I didn’t correct a few points in Phil’s analysis. Phil may be confusing OS vendor packages (RPMs, MSIs, etc.) with configuration management packages. Systems like BladeLogic build packages based on some sort of configuration object system. In other words, the server is represented as a set of configuration objects (a registry key, a setting in a config file, etc.) in a particular state. The packages are usually represented as desired states for configurations based on that same object model. There is no reason that those packages have to be applied “all in one go”, since packages can be chained and included in larger jobs with decision conditions. That said, I agree that this type of automation is per-server based, for the most part.
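To illustrate the distinction, here is a minimal Python sketch of the configuration-object idea (the class and names are my own hypothetical invention, not BladeLogic’s actual model): a package is just a set of desired states compared against a server’s current state, so only drifted objects are touched.

```python
# Hypothetical sketch of a configuration-object model: a "package" is a
# set of desired states for named configuration objects on one server.
# Names and structure are illustrative, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class ConfigObject:
    kind: str      # e.g. "registry_key" or "config_setting"
    name: str      # e.g. "httpd.conf:MaxClients"
    desired: str   # the value the object should have

package = [
    ConfigObject("config_setting", "httpd.conf:MaxClients", "256"),
    ConfigObject("registry_key", r"HKLM\Software\App\LogLevel", "info"),
]

# The server's current state, keyed by object name (normally discovered live).
current_state = {
    "httpd.conf:MaxClients": "150",
    r"HKLM\Software\App\LogLevel": "info",
}

def apply(package, state):
    """Apply only the objects that differ - packages need not run all in one go."""
    for obj in package:
        if state.get(obj.name) != obj.desired:
            state[obj.name] = obj.desired   # a real system would edit the server
            print(f"remediated {obj.kind} {obj.name} -> {obj.desired}")

apply(package, current_state)
```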

Application Understanding

I do agree that the automation models in Phil’s definition don’t understand multi-server dependencies or really know what an “application” is. What Phil ignores in this context is that there are other automation approaches that do bridge multiple systems by building on the automation platforms. In particular, the trends within virtualization and cloud have pushed vendors to create multi-server, application-focused automation platforms. You can find solutions with established vendors like BMC or VMware, with open-source platforms like Puppet with OpenStack, as well as with startups like ElasticBox. Bottom line, it is a vast oversimplification to limit an overview of DevOps-capable automation to tools with only a server heritage. This area of automation is clearly evolving and growing, and deserves a more holistic overview.

How does process fit in?

As John Willis, and others, have said many times before, culture and process are just as much a part of a DevOps approach as basic automation. So, it is appropriate for Phil to end with a process-based approach. Clearly, rolling out an application requires an understanding of the end-to-end process, how steps are related, and how services are dependent. I do feel that Phil left out a few key points:

Process Management and Deployment Automation are not the same

I feel like Phil blurs the line between managing the process of release, which is a people-process issue, and managing the deployment of an application. The latter involves pulling together disparate automation with a cross-server, application-focused view. Process management, on the other hand, deals with the more holistic problem of driving the release of an application from development all the way to production. They are both needed, but they aren’t the same thing.

What about coordination?

One of the biggest drivers of DevOps is getting Dev and Ops to coordinate and collaborate on application releases. This means driving Dev visibility forward into Ops, and Ops visibility back into Dev. It isn’t just about creating well-aligned deployment processes, but also managing the entire release process from the code repository to production. This means we need to encapsulate pre-prod and prod processes, as well as non-system activities (like opening change tickets, etc.).

What about planning?

Releasing and managing applications isn’t just about the here and now. It is also about planning for the future. Any process-oriented approach has to allow not only for the coordination of deployment processes, but also for the establishment of clear and flexible release processes visible to all stakeholders. In particular, a process management system should provide visibility to the decision makers, as well as the executors. Applications clearly affect non-technical activities like marketing and executive planning, so it is important that business leaders be able to understand where releases stand, and when important features are likely to arrive.

What we need is a layered approach

Bottom line, we need to solve all of the application release issues – process, deployment, and automation. In the spirit of DevOps, those “layers” can be solved somewhat independently, with loose coupling between them. We drive the people-process coordination, encapsulate all of the complexities necessary to deploy an application, and then drive the low-level automation necessary to actually implement the application. All of these work together to create a full approach to Application Release Automation. Any solution that ignores a layer risks solving one problem, and then creating a whole new set of them.
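As a rough illustration of the layering, here is a hypothetical Python sketch (all names are mine, not any product’s) showing the three layers with loose coupling: a process layer that handles people-process gates and non-system activities, a deployment layer that understands the application across servers, and an automation layer that acts on individual servers.

```python
# Hypothetical sketch of the three loosely coupled layers described above.
# Each layer only talks to the one below it through a narrow interface.

def run_automation(server, task):
    """Automation layer: low-level, per-server actions (config, packages)."""
    print(f"[automation] {task} on {server}")

def deploy_application(app, servers):
    """Deployment layer: application-aware, cross-server orchestration."""
    for server in servers:
        run_automation(server, f"install {app}")
    run_automation(servers[0], f"migrate database for {app}")

def release(app, servers, approvals):
    """Process layer: people-process gates around the technical work."""
    if not all(approvals.values()):
        pending = [who for who, ok in approvals.items() if not ok]
        print(f"[process] release blocked, awaiting approval from: {pending}")
        return
    print(f"[process] change ticket opened for {app}")   # non-system activity
    deploy_application(app, servers)
    print(f"[process] release of {app} recorded for stakeholders")

release("storefront-2.4", ["web01", "web02"],
        approvals={"ops": True, "product": True})
```

Because each layer talks to the one below it through a narrow interface, any layer can be swapped – for example, replacing the automation layer with Puppet runs – without rewriting the others.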

IT is War : DevOps & ITIL through the lens of military history

Climbing into the DevOps vs. ITIL debate is like stepping into a minefield, at least from my vantage point. Both have serious-minded proponents, and engender the kind of passion that lesser methodologies only dream of. But, after this very issue came up multiple times recently, I really felt compelled to think more about it. Most of the attempts at reconciling the two haven’t resonated with me. So, I have tackled this issue the way that appeals most to me – using military analogies.

Military history is probably my favorite part of the huge topic of history. I particularly enjoy understanding how advances in weapons, tactics, and organization have influenced the course of events. The evolution of IT over the years has been a lot about adjusting process and tactics to the available tools and competitors. The evolution of military strategy boils down to the same thing, so I think a quick look at the evolution of military tactics over time can shed some real light on the DevOps & ITIL debate.

Discipline and Organization beats Chaos

Success in war usually favors the bold – those who embrace new technology and change their tactics to deal with new situations and threats. A few days ago I watched a documentary on the massive defeat of the invading Persian army by the Greek city states in 490 B.C. What many people don’t know is that it was the technological advantage of iron lances and large shields, and the highly coordinated group maneuver called the phalanx, that gave the Greeks the edge. Moving as one unit, with shields interlocked and spears pointed front, the Greek hoplites were an unstoppable force cutting a swath through the Persians.

Fast forward over two thousand years. The Europeans, with their endless wars of the 17th, 18th, and 19th centuries, made an art form of the highly trained foot soldier with a musket. Napoleon represents the pinnacle of that art. Coordinated volleys of musket fire from trained soldiers, supported by well-placed cannon and fast-moving light cavalry, could wipe out less well-equipped and trained troops. Napoleon also worked at a scale unheard of before his time, mastering logistics for hundreds of thousands of troops in the field. With his armies, Napoleon was able to dominate Europe for 20 years, and his ideas lasted longer.

Tools can obsolete Tactics

The effectiveness of these large-scale maneuvers was upset by the advances of the 19th century, particularly during the U.S. Civil War. The vastly increased accuracy of rifled muskets and cannon, and the high rate of fire afforded by caplock (percussion cap) mechanisms, meant that the magnificent infantry charges of the 18th century only made for easier targets and mind-blowing casualty rates. This reached a peak with the mindless violence of trench warfare in World War I.


And again, armies adapted. Along with the introduction of the tank, it was small group tactics that won the day and broke the stalemate in World War I France. Instead of ordering breathtakingly stupid charges into the face of machine gun fire, small squads of soldiers could adapt to the circumstances and advance more rapidly. World War II continued the refinement of those tactics. This didn’t mean that large-scale coordination wasn’t still necessary. Artillery and air support still needed to be coordinated with the soldiers on the ground. Tanks and mobile infantry could move quickly and pack a powerful punch (which the Germans perfected with Blitzkrieg).

Very interesting, but what’s the point?

Other than indulging my need to geek out with talk of weaponry, this rapid flyby of history does have a point. Successful armies over time have adapted their tactics and tools to meet the threats at hand. The ancient Greeks and Napoleon used organization and discipline to overwhelm their enemies. In the face of the devastating weapons of the 20th century, successful armies used more flexible and fast-moving tactics to dominate their slower-moving enemies. All of these militaries adapted to their circumstances and made the most of the tools they had.

I don’t think IT is all that different. ITIL made a lot of sense when confronted with the chaos of IT operations, and the need to provide stable services for a business questioning the value being derived from their investment. With well-documented processes and coordination, IT departments could confront and conquer the chaos.

Conversely, DevOps has arisen in the wake of the pressures exerted by a hyper-competitive business environment and hard-to-please users with no end of choices. Just like the soldiers facing rifled muskets at Gettysburg and those facing machine-gun nests in French trenches, IT operations teams trying to please the 21st century Internet user can’t march into battle with the highly coordinated, but rigid, maneuvers of ITIL. By the time they perform the service management equivalent of a pivot, the business has lost customers and revenue. In the words of U.S. General George S. Patton of World War II fame –

“A good plan violently executed now is better than a perfect plan next week”.

On the other hand, DevOps teams need a backdrop of coordinated services (e.g. cloud services or automation) to enable their agile methods, just like the U.S. Marines in World War II needed artillery and aerial support.

So, what’s the takeaway? We should never compare methodologies in a vacuum. Any methodology needs to solve today’s problems, not yesterday’s. The whole point of a methodology is to provide a way to repeat the successes of the past. So, you need to find the von Clausewitz who has succeeded where you want to succeed, and follow their lead. The methodology that best helps you meet your goals today is the right one every time.

The Mythical Application Owner – And Why They Matter to DevOps

I have been on both the customer and the vendor side of Application Management over the last decade. So, I was surprised how much trouble my team and I had really defining the Application Owner as a sales target. With our collective backgrounds, I expected it to be a breeze. Of course, the CxOs, VPs of Operations, and others are well-known entities. So why is the application owner so difficult to find? One reason is that the definitions are all over the place.

For example, one definition on the IT Law Wiki says:

An application owner is the individual or group with the responsibility to ensure that the program or programs, which make up the application, accomplish the specified objective or set of user requirements established for that application, including appropriate security safeguards.

Snore… I could be in charge of Microsoft Office for my company and fit that technically focused, but inherently logical, definition. NIST has a similarly technically oriented definition. Now, in all seriousness, we can see that part of the problem is that I really mean a business application owner, not a technical owner. In that vein, a definition much more to my liking can be found in this blog entry from Nick Spanos on his Lean IT blog. In the spirit of lean methodology, he brings in business/customer outcomes, business processes, etc.

So, who cares? I think anyone who cares about DevOps should care. As DevOps expands from a grassroots movement to a business-changing phenomenon, it needs to continue to develop thought patterns that appeal to those footing the bill. This is where so much good thinking in IT falls down – the inability to articulate the value to the business. That said, I don’t think it has to be complicated. I see two overriding characteristics:

They live, eat, and breathe application revenue

For every revenue-generating application out there, there is someone being measured on that revenue. For a small company, that might be the CEO. For a Fortune 500 company, there could be dozens of applications, each with an owner. Regardless of industry, somebody goes to sleep at night worrying about whether whatever.com is measuring up to expectations.

They care (or should care) about customer value

Customers pay the bills, so worrying about revenue means obsessing about customers. I think that one of the common themes for companies successfully implementing DevOps is the overriding need to deliver customer value. This is, coincidentally, very much in line with lean methodology – which is why lean and DevOps are so good together. An application that is always in danger of losing customers and competitiveness is fertile ground for DevOps. If your application isn’t in a competitive space, why would you even bother?!

Pretty much everything else is going to differ by industry. An application owner at SuperCoolStartup dot com will likely be very different than the app owner at BigRetail dot com or the MBA-educated person running SomethingOrOtherFinancial dot com. What they should have in common is the commitment to increasing revenue by increasing customer value.

Now, one caveat: not every application has revenue. I would argue that the lack of the profit imperative makes a wrenching change like DevOps much more difficult, but you can substitute “mission” or “fundraising” in most of these points.

So, why does this matter? DevOps practitioners, and IT organizations in general, have an unprecedented opportunity to make IT matter to the business more than it ever has before. IT can become an enabler for revenue, rather than a pure cost center. However, to do that, they need to learn to express the value of DevOps in revenue and customer-focused terms, and begin to make the case directly to the application owner. Over time, I think the pendulum of power is swinging from the CIO and VP operations to the application owners. So, IT needs to make some new friends!