Freedom of choice – Three Ways DevOps is Revolutionizing Enterprise Software

“I’m free to do what I want, any old time”

– “I’m Free” – Rolling Stones

Here comes the Revolution

Anyone reading the tech news these days can see that something profound is happening with enterprise software. Venture capital money is flowing into enterprise startups (including Sumo Logic). Old stalwarts like Dell and BMC are likely to be taken private in order to rework their business models. The stocks of other software titans are being punished for clinging to old, proprietary models. Most of the old school crowd are trying to improve their image by buying younger, more attractive companies – like IBM with Urbancode. So, exactly what is happening?

Some of it is clearly not new. The big fish in the enterprise pond are always gobbling up the small innovators, hopefully improving the lot of the larger company. Why then are some calling this the Golden Age of Enterprise Software? Many will point to BYOD, Cloud, Big Data, DevOps, etc. Personally, I think there is a more subtle trend here. More than ever before, software developers have tools at their fingertips that allow them to deliver software quickly and efficiently, and at the same time they are being held more responsible for the performance of that application.

In the best case scenarios, this has led to highly disruptive and innovative practices that shatter the enterprise software model (take Etsy and Netflix as two prominent examples). Instead of an operations team passively receiving poorly tested code from unengaged developers, you get highly automated, constantly adapting architectures deftly driven forward by highly skilled, highly motivated DevOps teams. I am not saying anything new here. What I personally find interesting is the disruptive effect this is having on enterprise software in general. Here are three general trends I see:

 

1. The Rebirth of Automation

This is the DevOps trend that most point to, dominated by Puppet Labs and Opscode (Chef). It seems 90%+ of DevOps discussions start here. The deft combination of flexibility, extensibility, and community appeals to a development mindset already conditioned by the open source movement. The idea of “Infrastructure as Code” is a natural extension of first virtualization, then cloud computing. It is so easy to create new “servers” now that there is no excuse not to completely automate the build and maintenance of those servers.
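The core of the “Infrastructure as Code” idea can be sketched in a few lines of Python: describe the desired state declaratively, then converge to it idempotently, so running the same “recipe” twice is safe. (This is a toy illustration of the principle, not how Puppet or Chef actually work internally; the config file path and contents are hypothetical.)

```python
import os
import tempfile

# A hypothetical config file standing in for real infrastructure.
conf_path = os.path.join(tempfile.mkdtemp(), "demo_app.conf")

# Desired state: each "resource" declares what should be true,
# not the steps to get there.
desired_files = {
    conf_path: "port=8080\nworkers=4\n",
}

def apply_state(resources):
    """Idempotently converge each file to its desired content."""
    changed = []
    for path, content in resources.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:
            with open(path, "w") as f:
                f.write(content)
            changed.append(path)
    return changed

first = apply_state(desired_files)   # creates the file
second = apply_state(desired_files)  # nothing to do -- safe to re-run
```

The second run reporting no changes is the whole point: automation you can run on a schedule without fear is automation that keeps servers from drifting.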

2. The “Re-discovery” of the Customer

The proven theories of lean manufacturing have long stalked IT in the form of concepts like lean software development and six sigma. And some of the DevOps community is trying hard to bring these concepts to IT Operations. Underlying this is the growing importance of consumer and user satisfaction. The switching costs are so low, and the modes of feedback so verbose, that companies can no longer afford to ignore their users. This means that the lessons learned by the automotive industry – eliminate everything that doesn’t provide value to the customer – are now essential for the IT industry. This is not good news for legacy software companies associated with the image of uncaring, passive IT departments. It is also fueling the rise of cost-effective solutions like SaaS (do million-dollar software solutions gathering dust provide customer value?).

3. Measure Everything

As the DevOps movement absorbs more of Lean thinking, the importance of measurement rises. In the seminal book on DevOps – “The Phoenix Project” – monitoring is a central theme. We see this in the real world with Etsy’s efforts. They are monitoring thousands of metrics with statsd, providing insight into every part of their application. So, what’s different here? In the old world, what you monitor is dictated to you by software vendors who deliver generic metrics to fit all customers. In the new world order, developers can add metrics directly into their logs, or through a tool like statsd, and monitor exactly what they want to. In the spirit of open source, it is more important to get what you need (custom, relevant metrics) than to get it in a pretty package. In essence, this means that the old Application Performance Monitoring (APM) tools may be headed for a rude awakening. Why do you even need an APM tool if you can pump custom metrics to a generic graphing tool, or a log analysis tool? Well, I am not sure that you do…
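Part of statsd’s appeal is how trivially simple its wire format is: a metric is just a UDP datagram of the form name:value|type. Here is a minimal Python sketch of emitting custom metrics that way (the metric names are hypothetical; 8125 is the conventional statsd port):

```python
import socket

def statsd_format(name, value, metric_type):
    """Render a metric in the plain-text statsd wire format."""
    return f"{name}:{value}|{metric_type}"

def send_metric(name, value, metric_type="c", host="127.0.0.1", port=8125):
    """Fire-and-forget a custom metric over UDP, statsd-style."""
    payload = statsd_format(name, value, metric_type).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()

# A developer can instrument exactly what matters to them:
send_metric("checkout.completed", 1, "c")       # a counter
send_metric("checkout.duration_ms", 187, "ms")  # a timer
```

Because it is fire-and-forget UDP, instrumenting code this way costs almost nothing in performance or complexity, which is exactly why developers can afford to measure everything.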

These points are only one small part of what is changing, and I don’t claim to know exactly what the future bodes for IT software vendors. What is obvious, though, is that the barriers to entry for innovation are low, and the money willing to chase it is plentiful, so this is definitely a golden age – just not for the old school, perhaps…

* Picture of statsd graph from Etsy’s blog – Code as Craft


3 Reasons why DevOps isn’t changing IT faster

The recent article from Luke Kanies (of Puppet Labs) on Wired.com really got me thinking. Like Luke, I have had an interesting vantage point over the last decade or so to observe the changing nature of systems administration – including spending time as a sysadmin myself. From my graduate physics department, to Loudcloud, EDS, BladeLogic, BMC, and now Sumo Logic, I have seen the best and the not-so-great IT shops and how they operate. Not all of the IT teams I saw in action were, or are now, adopting best practices like automation and DevOps. The trend that Luke points out means that operations teams that continue to languish in constant firefighting mode, relying on ad-hoc scripts and the sweat off the admin’s brow, are increasingly, and obviously, out of step with the direction of the industry.

So, why aren’t all operations teams, and the techies themselves, falling over each other to embrace tools like Puppet, Sumo Logic, and other DevOps/automation tools? Clearly some organizations are embracing them. Why not all of them? I have a few of my own ideas here, and I would like to hear yours as well.

1. The move from “Artist” to “Manager” is not natural

Back when I started in IT, most IT admins “owned” a small collection of devices or applications. Server admins owned a handful of servers, database admins a few databases, network admins a few switches and firewalls, etc. They controlled access to their systems jealously, and took personal pride in their operation. They were artists, and their systems, and the way those systems were managed, were an art form. As IT budgets have shrunk, and the load on IT has increased, this level of care has become impossible.

Yet, you still find many admins jealously guarding their root access privileges, instead of moving to a shared responsibility model with other admins. Why? I think it is the same reason why I feel such satisfaction after cooking a meal, building new shelves in my garage, or fixing a leaky faucet. I did it myself, and it feels good to start and finish something. Participating in automated processes can be deeply unsatisfying. That is why admins need to learn new skills, and find new pride in the quality of their automation and in steadily improving it.

2. Using Automation may seem like losing control

One conversation from my IT past sticks in my mind more than any other. I was on site with a customer, trying to explain the benefits of automation to a group of systems administrators. One system admin floored me by insisting that she could more accurately, and more quickly, make changes to 20 UNIX servers than I could ever do with automation. It was like some modern version of John Henry calling me out to a man vs. machine contest, and subtly decrying the inhumanity of my automation tool. I can’t even remember my answer now, but this perspective is at the root of much of the push-back against automation and DevOps. Instead of looking at the business outcome – better experience and value for the customer – some frustrated system admins see these new ideas as a direct affront to the quality of their work. This is precisely why I think the fundamental shift in DevOps is from an internal IT focus to an external customer focus. That way admins can measure their success by customer impact. Not an easy change to make, but it is essential for IT’s continued relevance.

3. Change seems hard/bad/unnatural/unneeded

Isn’t the root of the resistance here really the natural tendency to resist change? On the other hand, how many times have operations teams been assured that the latest IT fad will reduce their workload and improve quality, only to see the opposite happen? So what’s different about DevOps? I could write a whole blog entry just on that, but a few things come to mind. First, the focus is on customer value, which greatly simplifies priorities. Second, it’s all about outcomes, not process for the sake of process. Finally, it is all about continuous improvement driven by the experience of the people on the frontline. This means that, going forward, admins must be rewarded for doing things that increase customer value, rather than for putting out fires or pleasing angry executives.

So, it all comes back to culture – surprise, surprise. I think this will be the primary challenge of DevOps going forward. How do we overcome the IT culture so resistant to change, while providing an attractive way for all of those systems administrators to breathe easily in their new roles?

DevOps needs a layered approach – Not only process or automation

With any new, emerging area, the tendency is for advocates of each new approach to attempt to invalidate or minimize all earlier approaches. Sometimes this is appropriate, but rarely is progress so clear cut. In this vein, I would like to comment on Phil Cherry’s post on DevOps.com. First off, I appreciate Phil’s addition to the discussion. I think his delineation between automation approaches is very interesting. However, the devil is in the details. Here are the highlights of my views on this:

Package-based automation

As a former BladeLogic guy, I would be remiss if I didn’t correct a few points in Phil’s analysis. Phil may be confusing OS vendor packages (RPMs, MSIs, etc.) with configuration management packages. Systems like BladeLogic build packages based on some sort of configuration object system. In other words, the server is represented as a set of configuration objects (a registry key, a setting in a config file, etc.) in a particular state. The packages are usually represented as desired states for configurations based on that same object model. There is no reason that those packages have to be applied “all in one go”, since packages can be chained and included in larger jobs with decision conditions. That said, I agree that this type of automation is per-server based, for the most part.

Application Understanding

I do agree that the automation models Phil defines don’t understand multi-server dependencies or really know what an “application” is. Phil ignores, in this context, other automation approaches that do bridge this multi-system gap by building on the automation platforms. In particular, the trends within virtualization and cloud have pushed vendors to create multi-server, application-focused automation platforms. You can find solutions with established vendors like BMC or VMware, with open-source platforms like Puppet with OpenStack, as well as with startups like ElasticBox. Bottom line, it is a vast oversimplification to limit an overview of DevOps-capable automation to tools with a server heritage only. This area of automation is clearly evolving and growing, and deserves a more holistic overview.

How does process fit in?

As John Willis, and others, have said many times before, culture and process are just as much a part of a DevOps approach as basic automation. So, it is appropriate for Phil to end with a process-based approach. Clearly rolling out an application requires an understanding of the end-to-end process, how steps are related, and how services are dependent. I do feel that Phil left out a few key points.

Process Management and Deployment Automation are not the same

I feel like Phil blurs the line between managing the process of release, which is a people-process issue, versus managing the deployment of an application. The latter involves pulling together disparate automation with a cross-server/application-focused view. Process management, on the other hand, deals with the more holistic problem of driving the release of an application from development all the way to production. They are both needed, but they aren’t the same thing.

What about coordination?

One of the biggest drivers of DevOps is getting Dev and Ops to coordinate and collaborate on application releases. This means driving Dev visibility forward into Ops, and Ops visibility back into Dev. It isn’t just about creating well-aligned deployment processes, but also managing the entire release process from the code repository to production. This means we need to encapsulate pre-prod and prod processes, as well as non-system activities (like opening change tickets, etc.).

What about planning?

Releasing and managing applications isn’t just about the here and now. It is also about planning for the future. Any process-oriented approach has to allow not only for the coordination of deployment processes, but also for the establishment of clear and flexible release processes visible to all stakeholders. In particular, a process management system should provide visibility to the decision makers, as well as the executors. Applications clearly affect non-technical activities like marketing and executive planning, so it is important that business leaders be able to understand where releases stand, and when important features are likely to arrive.

What we need is a layered approach

Bottom line, we need to solve all of the application release issues – process, deployment, and automation. In the spirit of DevOps, those “layers” can be solved somewhat independently, with loose coupling between them. We drive the people-process coordination, encapsulate all of the complexities necessary to deploy an application, and then drive the low-level automation necessary to actually implement the application. All of these work together to create a full approach to Application Release Automation. Any solution that ignores a layer risks solving one problem, and then creating a whole new set of them.

Kaizen and the Art of DevOps Automation Maintenance

And now we come to the most “boring” part. Right? Maintenance. The death of joy for the innovator. Or is it? I don’t think so. Continuous innovation is at the core of DevOps and Lean methodology. Maintenance is essential to keeping the spirit of DevOps strong, and automation that isn’t improving will grow stale and useless.

So, let’s review the IT Automation Curator’s job description one last time:

  • Collect existing automation, and then Catalog it where others can find it (See Part 2)
  • Develop new automation based on requirements from IT (See Part 3)
  • Train others on how to use the automated processes (See Part 4)
  • Maintain the existing automation

Going back to Lean methodology, we can look to the idea of Continual Improvement, or Kaizen. There are three main areas from Masaaki Imai’s 1986 book Kaizen: The Key to Japan’s Competitive Success:

  • Reflection of processes. (Feedback)
  • Identification, reduction, and elimination of suboptimal processes. (Efficiency)
  • Incremental, continual steps rather than giant leaps. (Evolution)

Use Metrics and Reporting for Feedback

How can you improve if you don’t know how far you’ve come and how well you are doing? That’s like going on a diet without ever weighing yourself or even looking in the mirror. Healthy weight loss involves such small changes every week that it would be discouraging if you didn’t look at changes over the long term (I speak from experience). So, why do so few companies and automation vendors include metrics and reporting on automation efficiency?! The whole point of implementing automation is to reduce waste, reduce costs, and increase velocity. But you have no context for understanding whether you have succeeded if you don’t have metrics and reporting (I talked about this in a previous post).

Ruthlessly eliminate sub-optimal processes

The whole point of this process is to get rid of wasted effort and time – muda. The hard part here is that once you agree to continually improve your automated processes, you have to be ruthless in your evaluation of their efficiency. That means there are no sacred cows. Just because somebody smart and dedicated invested hours of their life into creating something doesn’t mean it can’t be improved or scrapped entirely. The whole team has to be dedicated to, and incentivized towards, efficiency and continuous improvement. This point is important – these kinds of improvements come from the grassroots, not the top. If the people in the trenches aren’t bought into continual improvement, it won’t work.

Baby Steps, not Leaps of Faith

I know the hardest part of this approach for me is the gradualism. I like to grandiloquently solve grandiose problems with lofty and visionary solutions. The problem is that most of those involve large amounts of kool-aid, and they are never finished. The truly mature IT organization has to keep its eye on the goals of the business, and relentlessly reduce muda – step by tortuous step. We can refer back to the weight loss analogy. You lose weight through all of the small victories – Do I really need that donut? One serving is good enough. But bringing us back to our first point – small victories only show up as victories when you can measure your long-term progress. Otherwise it looks like tentative, timid, risk-averse behavior.

And so, we reach the last chapter of my IT Automation Curator series. It has been a lot of fun writing it, and I hope that you enjoyed it as well. I am looking forward to continuing to explore how the proven methods of lean and agile can be applied to DevOps and Operations overall.

Training others in the dark arts of DevOps Automation

If you are the Automation hero, why would you EVER share that stage? You are basically reducing your value to the organization by sharing your secrets. Right? Wrong! You are actually doing yourself a lot of harm, as I discussed in the first blog post. How can you move on to other exciting challenges if you have to maintain your work of automation genius?

That is why the IT Automation Curator’s job description has training as a core requirement:

  • Collect existing automation, and then Catalog it where others can find it (See Part 2)
  • Develop new automation based on requirements from IT (See Part 3)
  • Train others on how to use the automated processes
  • Maintain the existing automation

I had a great comment from jamesmarcus in the first blog post. Here is what he said:

“As a Director of IT I look to tools that promote easy automation, documentation, and best practices. I try to design networks and setups with the “if I disappear” rule in mind. Meaning another sys admin of lesser knowledge should be able to look at my work and understand how and why we did something in a certain way”.

I think this is a great perspective, and it is at the core of why I included training in my job description. Very few programmers, sysadmins, and other IT techies enjoy documenting their work. I don’t either – when it is after the fact. It is so mind-numbing to document your automation after you are already done and want to move on. So, that brings us to our first point.

Build your Automation to be well-documented and re-usable

While performing amazing feats of scripting judo can impress your colleagues and get you kudos online, it is not a good long-term objective. One thing I learned early as a programmer is that creating incredibly efficient and elegant code seemed great, but it was really bad if even I couldn’t figure out what I had done a year later. That all comes down to writing great comments while you are writing the code. I know this may seem basic, but I have seen too many IT organizations with automation scripts, packages, etc. that no one understands anymore. This is essentially a guarantee that the automation in question will be left alone to become outdated, brittle, and even “dangerous”. And if you are the only one that understands it, then it is your burden to bear.

So, back to the training function of the IT Automation Curator. How can you possibly train people if your automation is over-complicated, un-documented, and impenetrable to mere mortals? Only with difficulty, and no one (especially you) will enjoy the experience. By documenting your automation very well as you write it, and building it to be as straightforward and simple as possible, you increase your chance of handing it off successfully. Writing well-documented and straightforward automation has to be part of your process – bottom line.

Find automation disciples, and train them in the dark arts

While I see no reason why you can’t “teach” a class on automation as part of this role, I don’t think that is optimal, or even desirable, for most people (stage fright, anyone?). I have always envisioned a much more personal approach to automation training. Not every sysadmin or IT techie extraordinaire will have an aptitude for, or interest in, designing automation. The right person has a somewhat rare combination of programming know-how, patience, troubleshooting skills, and IT systems knowledge. Obviously, everyone will use the automation, but only a few will write it.

The IT Automation Curator should be a mature, senior IT operator that has an eye for spotting talent. Like Mr. Miyagi in Karate Kid, you can watch for the young IT admin with lots of promise and fire in their belly, but unable to conquer the IT problems with their lousy karate skills. In all seriousness, I think mentoring promising candidates on automation best practices is more enjoyable and effective than the typical shotgun approach. The best part is that you can let the young upstart take care of the boring automation bits, while you save the best for yourself!

So, in summary:

  1. You can’t pass on automation that is impenetrable to anyone but yourself
  2. One-on-one mentoring is a much more effective way to pass on your automation skills and knowledge

So, here is a parting challenge for all of you out there that actually remember Karate Kid. What might the IT Automation equivalent be of Mr. Miyagi’s “catch a fly with chopsticks” trick?

Developing Automation with Lean Methodology

So, now we are on what I consider the most fun part of the IT Automation Curator’s job description (see the first blog post) – developing new automation. Let’s first review the job description as I outlined it:

  • Collect existing automation, and then Catalog it where others can find it (See Part 2)
  • Develop new automation based on requirements from IT
  • Train others on how to use the automated processes
  • Maintain the existing automation

There is lots of great technical material out there about creating automation, and I won’t try to duplicate it here. And I also won’t push a particular toolset, though I have my opinions, of course. I am convinced that most automation projects fail not because of problems with the toolset, but rather due to problems with the approach. That doesn’t mean that one tool isn’t better than another – quite the contrary. The difference is just how successful you can be when you do it right.

To Automate or not to Automate – that is the question.

I think choosing what to automate is much more important than how you choose to achieve the automation. The best model we have for that, in my opinion, is Lean methodology, and Lean Software Development in particular. The Poppendiecks, creators of Lean Software Development, have as their first principle Optimize the Whole. The whole point of this step is to eliminate waste, known as muda in Lean methodology. Muda is “anything which does not provide customer value”. This is such a simple, yet revolutionary, concept. What if every sysadmin asked himself whether the script he was working on added customer value? Would they even know how to answer the question?

So, how do you answer that question? You look at the whole, end-to-end, process. This means no silos, no team-centric thinking, no “that isn’t my job”. What does it take to deliver a service to the end user/customer, and where are we wasting time and resources? In Lean Methodology you figure this out by drawing a value stream map. In the context of manufacturing, the source of lean methodology, it meant going from factory to factory, supplier to supplier, and putting together the complete picture of how a product is built and delivered.
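To make the value stream idea concrete, here is a minimal Python sketch for a hypothetical “ship a small feature” process: each step gets an elapsed time and a judgment on whether it adds value the customer would pay for. The steps and numbers are invented for illustration; everything not value-adding is muda.

```python
# (step name, elapsed hours, adds customer value?)
value_stream = [
    ("write code",             8,  True),
    ("wait for review",       24,  False),
    ("manual test pass",       6,  True),
    ("wait for change board", 72,  False),
    ("deploy to production",   1,  True),
]

def efficiency(stream):
    """Fraction of total elapsed time spent on value-adding work."""
    total = sum(hours for _, hours, _ in stream)
    value = sum(hours for _, hours, adds_value in stream if adds_value)
    return value / total

ratio = efficiency(value_stream)
muda_hours = sum(h for _, h, v in value_stream if not v)
```

In this invented example only 15 of 111 hours add value – the two “wait” steps dwarf the actual work, which is exactly the kind of finding a value stream map is meant to surface.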

So, what does that mean for our Automation Curator? Typically lean, or agile, software development stops at delivering the product to IT Operations. In the spirit of DevOps, that needs to stop. IT organizations need to start looking at the delivery of a service to a customer from requirements all the way through to production. That is the only context where it makes any sense to uncover muda. If you view IT Operations in isolation, you could very well create a highly efficient IT operation that doesn’t deliver good services.

The IT Automation Curator has to focus on the automation projects that provide value to the customer – delivering new applications and features faster, restoring services faster, providing a better customer experience, etc.

What does success look like?

Once you have automated a process, how do you even know that you have achieved your goal of eliminating waste? That’s where measurement and reporting come in. If you have no way of measuring the effects of your hard work, how do you even know if you have been successful? You don’t. That means part of developing any automation has to be about measuring the impact on the customer. For example, say you just automated the full-stack build of an application from scratch. How much more quickly can you deliver updated applications, or recover from problems? Or you have automated the application release process. Did that reduce downtime and errors during new releases?
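As a sketch of what such a report might compute, compare release durations sampled before and after automating a deployment (the numbers below are made up purely for illustration):

```python
# Hypothetical release durations in minutes, before and after
# automating the deployment process.
before = [95, 110, 88, 140, 102]
after = [12, 15, 11, 14, 13]

def mean(xs):
    """Arithmetic mean of a list of samples."""
    return sum(xs) / len(xs)

# Fraction of release time eliminated by the automation.
improvement = 1 - mean(after) / mean(before)
```

Even a crude before/after number like this gives you something to advertise to the business; without it, all that hard work is invisible.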

Once you have those reports – advertise them. Don’t be shy about showing how you are increasing value for the end customer. So many IT departments have been forced to show value by how much they can cut out of their budget. How much more would the business appreciate an IT department that demonstrably increases value for the customer?

I think this approach is so much more effective for determining what to automate and how. This singular customer and business focus is bound to make the IT Automation Curator a valuable member of the team. A lot more valuable than the “IT Hero” that corrects a preventable problem with heroic effort. I am also hoping to see this result-focused approach take over from the tools-focused approach so prevalent in DevOps today. If DevOps is to be relevant for more than just a few years, it must encourage behavior that adds value to the customer. Why? Because the customer is King!

IT Automation Curator for DevOps – Part 2 – Collect and Catalog

This topic is far too interesting and deep to cover in just one blog post. So, I am going to split the discussion into a few sections. I’ll use my proposed “job description” for an IT Automation Curator as a starting point:

  • Collect existing automation, and then Catalog it where others can find it
  • Develop new automation based on requirements from IT
  • Train others on how to use the automated processes
  • Maintain the existing automation

This first step of collect and catalog is where I have seen many automation efforts stumble. The natural inclination of most techies (myself included) is to jump right into developing automation, no matter what is in place. As I learned the hard way, that is a bad idea. So, I will give a few reasons why this step is important:

Reason #1: If you don’t know about all the automation in place, you don’t really understand how your data center is operating

It’s great that you developed that new automated process that auto-magically deploys a set of configurations for you. Are you sure that other scripts or tools won’t change it or corrupt it? Most IT teams have scripts strewn all over the place – some well known, some the detritus of sysadmins past. They may have been scheduled centrally or on individual servers. This is very hard to get a grip on. There are a few tools out there, but it is hard to ensure that you have found all the automation spread over all the systems. This is just another reason why you need to control access and even re-build some servers from scratch (hopefully in an automated way).
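Even a crude inventory pass is a useful start. The sketch below (pure Python, with a throwaway demo tree standing in for a real server’s filesystem) walks a directory and catalogs anything that looks like an ad-hoc script, so it can at least be reviewed centrally. The extensions and demo file names are hypothetical:

```python
import os
import tempfile

SCRIPT_EXTENSIONS = {".sh", ".py", ".pl", ".ps1"}

def catalog_scripts(root):
    """Return sorted relative paths of every script-like file under root."""
    found = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] in SCRIPT_EXTENSIONS:
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                found.append(rel)
    return sorted(found)

# Demo: a throwaway tree standing in for a real server.
demo = tempfile.mkdtemp()
os.makedirs(os.path.join(demo, "cron.daily"))
open(os.path.join(demo, "cron.daily", "rotate_logs.sh"), "w").close()
open(os.path.join(demo, "cleanup.py"), "w").close()
open(os.path.join(demo, "README.txt"), "w").close()  # not a script

inventory = catalog_scripts(demo)
```

A real pass would also have to cover crontabs, scheduler jobs, and the automation tools themselves – which is exactly why the catalog step is harder, and more valuable, than it first looks.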

Most IT operations teams also have multiple automation tools in play. Each silo-ed team has their preferred tool, which they guard jealously. Overall, this is not a good approach. The more tools you have, the harder it is to standardize automation and create efficient end-to-end processes. At a minimum, all of these tools need to be documented and managed centrally.

Reason #2: Don’t duplicate work and ignore experience

A lot of the automation in place may not be optimal, but it was most likely built to solve the same problems you will need to solve later. Tossing it out, or just ignoring it, is essentially disregarding the combined experience of the IT team. Even if you rebuild it in a better tool, and in a more efficient way – the lessons learned will be valuable.

There is also an important lesson here about prioritization. Just because you can make an automated process more elegant or more efficient doesn’t mean you should. More often than not, you will have no end of automation projects to look at. Why spend your time on what already works? What is important is to apply automation judiciously, where it provides the most value for the business.

Reason #3: More sharing will always lead to better results

Fostering a culture of sharing automation, essentially an open-source culture, will ensure that everyone has access to the best work on offer, that they don’t re-invent the wheel, and it will allow for continual improvement. That last point is crucial. The idea is not for the automation curator to control all the automation per se. They should be catalysts for making better automation, whether they do it or not. So, it is important to leave one’s ego at the door, and admit that your automation becomes better when you let others critique it and improve on it.

Bottom line, having a central place to share and continually improve automation is essential. This will most likely affect your choice of automation platforms as well. If you can’t share and improve, then you will be hobbling yourselves.

So, how do you do this in your own environment? Do you have ideas about the best way to go about it? Any success stories?

IT Automation Curator – Good for techies, good for business, good for DevOps

Recently my thoughts have been going back to a concept I like in the seminal IT operations book, The Visible Ops Handbook (By Gene Kim, Kevin Behr, and George Spafford). I have been doing a lot of thinking about how Lean, DevOps, Agile, etc. are changing IT culture, or at least pressing for change. Properly leveraged automation is a big part of that change process – which makes me think of the passage in Visible Ops where the authors discuss changing the behavior of senior IT staff:

“Their mastery of configurations continually increases while they integrate it into documented and repeatable processes. We jokingly refer to this phenomenon as ‘turning firefighters into curators’ […]”*

As a former IT techie myself, I get the need to challenge oneself in the often routine and monotonous world of IT. Personally, I think that is a lot of the grass-roots impetus behind the DevOps movement, and the adoption of open-source automation tools. Creating automation is a way of turning the mind-numbingly mundane into something exciting and intellectually challenging. So far so good. Boredom leads to sinking morale and productivity – and poor morale is bad for business.

So, what’s not to like? In short, it comes back to focus and sustainability. No, I’m not talking green-energy windmills. How do you sustain and focus the efforts of these budding automation aficionados? Left to their own devices, they will likely create lots of useful but narrowly focused scripts, packages, etc., all aimed at the problems they face on a daily basis. Problems outside the automation guru’s gaze will most likely remain unsolved.

So, this is where the idea from Visible Ops comes to the rescue. The answer is to pull these gurus out of their day-to-day grind in the IT trenches and make them automation curators. Now, I know that many of you hear “curator” and think of an older man in a tweed jacket, peering over horn-rimmed glasses, waxing rhapsodic about the various manufacturer stamps on 18th-century American chamber pots. So, as interesting as early American port-a-potties may be, let’s look at the definition of curator:

curator – one who has the care and superintendence of something (Merriam-Webster Dictionary)

Clearly tweed is not mentioned. In all seriousness, museum curators do much more than merely talk about old things. Considering the Smithsonian’s own description, curators:

  • Acquire new items for the collection
  • Research the collection
  • Display the collection
  • Maintain the collection

So, if we work off the Smithsonian’s “model”, I suggest that an IT Automation Curator would:

  • Collect existing automation, and catalog it where others can find it
  • Develop new automation based on requirements from IT
  • Train others on how to use the automated processes
  • Maintain the existing automation
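To make the “collect and catalog” duty concrete, here is a minimal sketch of what a curator’s first cataloging tool might look like. It walks a shared scripts directory, pulls a one-line description out of each script’s header comment, and prints a searchable index. The directory layout, file extensions, and the `# description:` header convention are all hypothetical assumptions for illustration, not a standard; adapt them to your own shop.

```python
#!/usr/bin/env python3
"""Sketch of an automation catalog: index the shared scripts directory.

Hypothetical convention assumed here: every shared script carries a
header comment line such as:
    # description: rotate and compress the nginx logs
"""

import os
import re
import sys

# Matches the assumed "# description: ..." header line.
DESC_RE = re.compile(r"^#\s*description:\s*(.+)", re.IGNORECASE)


def catalog(root):
    """Return a {script_path: description} index for scripts under root."""
    index = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Only index recognizable script types (an assumption).
            if not name.endswith((".sh", ".py", ".rb")):
                continue
            path = os.path.join(dirpath, name)
            desc = "(no description - a curator should fix this)"
            with open(path, encoding="utf-8", errors="replace") as fh:
                for line in fh:
                    m = DESC_RE.match(line)
                    if m:
                        desc = m.group(1).strip()
                        break
            index[path] = desc
    return index


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, desc in sorted(catalog(root).items()):
        print(f"{path}: {desc}")
```

Even a toy like this forces the cultural questions that matter: where does shared automation live, what metadata must every script carry, and who chases down the scripts with no description at all. The catalog is just the artifact; the curation is the conversation around it.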

This is exactly the kind of role I wish someone had offered me early in my career. I would have jumped at it. It would have been a great new challenge for me, I would have been creating value for the business, and IT would have been more efficient. And this isn’t really a new idea: software developers have long shared code snippets and concepts with each other, and they have long defined the interfaces between pieces of code. The trick here is that the Automation Curator needs to take an active role both in building the best automation and in promoting the proper use of automation in IT.

One last comment. We might ask whether this role would be better classified as an Automation Librarian. I think that is a good question. At the end of the day, the existence of the position matters more than what you call it. However, to my mind the concept of curator leans more towards the acquisition, development, and training part, while the words “library” and “librarian” in IT lean more towards the maintenance and storage part of the equation (notwithstanding what traditional librarians actually do). Curator is also a cooler word.

So, why aren’t more IT shops doing this? What do you think?

This is the first part of a multi-part series. Check out the other parts:

* Kim, Gene; George Spafford; Kevin Behr (2005-06-15). The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps (Kindle Locations 917-919). IT Process Institute, Inc. Kindle Edition.

Let’s talk about maturity in automation

I encourage all of you to check out John Allspaw’s new blog series on automation. I am excited about changing the conversation in our industry from tool-happy discussions to how IT can mature in its use of automation. It doesn’t matter if you are pushing DevOps, Lean, ITIL, whatever. The point is that the IT industry needs to grow up and sit at the adults’ table. Too much of our economy rides upon the success of what we do for a living to let ourselves get distracted by the coolness of what we do, and forget the seriousness of what it means.

Push on, Mr. Allspaw. Now I will delve into those massive papers you referred to in your blog. I feel like I am in graduate school again. Too bad there isn’t any Jolt Cola in the house.