Wednesday 6 August 2014

Why Continuous Delivery doesn't go far enough


My team “owns” (ensures they are supported and promotes best practice for) a number of different tools and technologies in the DevOps space, including:

  • Puppet Enterprise
  • IBM UrbanCode Deploy
  • Jenkins
  • TeamCity
  • TFS
  • SVN
  • Artifactory
  • Selenium
  • Octopus Deploy

...and numerous others.

That’s quite a long list for a Continuous Delivery tool chain (made worse by the natural divide between the Java and .NET worlds).

Tools that don’t integrate become silos of their own and encourage a divide between different disciplines (such as Dev and Ops).

It’s important to see the aggregate efficiency of your delivery pipeline, and having a thousand different tools involved makes this difficult.  For example:

  • How many regression tests do we run a day?
  • What is the ratio of bugs caught in CI to those caught in SysTest?
  • Is there a relationship between the speed of a build and the number of deployments we do in a sprint?
  • ...and so on.

Only by capturing metrics like these can we highlight bottlenecks, attack them, and improve our efficiency.
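
Even extracting one of these numbers from one tool is a useful start. As a minimal sketch (the Jenkins URL, job name and credentials below are hypothetical placeholders), the Jenkins JSON API can be queried for recent build results to produce a failure ratio and an average build time:

    import requests  # third-party HTTP library

    # Hypothetical Jenkins instance, job and credentials -- substitute your own.
    JENKINS = "https://jenkins.example.com"
    JOB = "my-app-ci"

    # Jenkins exposes build history as JSON; 'tree' limits the fields returned
    # and {0,50} restricts the query to the 50 most recent builds.
    url = f"{JENKINS}/job/{JOB}/api/json?tree=builds[result,duration]{{0,50}}"
    builds = requests.get(url, auth=("user", "api-token")).json()["builds"]

    # Builds still running have no result yet, so exclude them.
    finished = [b for b in builds if b["result"] is not None]
    if finished:
        failures = sum(1 for b in finished if b["result"] != "SUCCESS")
        avg_ms = sum(b["duration"] for b in finished) / len(finished)
        print(f"last {len(finished)} builds: {failures / len(finished):.0%} failed, "
              f"average duration {avg_ms / 1000:.0f}s")

Do the same for each tool in the chain and the pipeline-wide questions above stop being guesswork.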

Continuous Delivery is a process to improve the delivery of software change into production. 

For me that doesn’t go far enough.  Are we saying we don’t care about the software once our changes have gone live?

We have this nice, shiny Continuous Delivery pipeline to deliver value to the business – usually in the form of increasing revenues somehow.

So, ultimately it’s not really about builds, testing and deployments.  It’s about the impact our change team has on the bottom line.

For example...
  • What is the relationship between regression tests carried out in SysTest and the number of errors that appear in production?
  • What is the relationship between production errors and revenue collected?
  • What is the relationship between how many story points we put in a sprint and the sentiment of our customers?
For me, this is the ultimate expression of DevOps.  We are aligning our delivery capabilities and processes in order to meet shared goals (in this example, increased revenue).
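
To make that concrete, here is a toy sketch (the per-sprint figures are invented purely for illustration) of measuring how strongly one series tracks another with a plain Pearson correlation:

    # Hypothetical per-sprint figures -- invented for illustration only.
    story_points = [21, 34, 28, 40, 25, 38]   # committed per sprint
    prod_errors  = [3, 7, 4, 11, 2, 9]        # production errors, same sprints

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(f"story points vs production errors: r = {pearson(story_points, prod_errors):.2f}")

A value near +1 would suggest that bigger sprints ship more defects.  Correlation rather than causation, of course, but exactly the kind of evidence that starts the right conversation.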

I’m currently designing a proof of concept with Splunk to collate all these different types of metrics from the many environments and tools in our delivery pipeline.  We should then be able to understand the relationships between the capabilities in the team and the decisions we make.
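
As a sketch of how the collation side might work (the log path and field names here are assumptions, not the finished design), each tool in the pipeline could append structured JSON events to a log file that a Splunk forwarder already monitors, giving one searchable place to join builds, tests and deployments together:

    import json
    import time

    # Hypothetical log path monitored by a Splunk forwarder.
    PIPELINE_LOG = "/var/log/pipeline/events.log"

    def log_pipeline_event(stage, **fields):
        """Append one pipeline event (a build, test run, deployment...)
        as a JSON line for Splunk to index."""
        event = dict(time=time.time(), stage=stage, **fields)
        with open(PIPELINE_LOG, "a") as f:
            f.write(json.dumps(event) + "\n")

    # Example: record a deployment so it can later be correlated with
    # production error rates or regression-test counts for the same sprint.
    log_pipeline_event("deploy", app="my-app", env="production",
                       sprint="2014-S16", result="success")

Once every stage emits events in a common shape, the relationship questions above become Splunk searches rather than archaeology.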

Isn’t it powerful to have data which says to the business “stop putting too much change in a sprint”?  

Or that prioritising new features over technical debt is counterproductive to the bottom line?

I don’t expect that this will be an easy thing to accomplish, and we can only infer these kinds of relationships.  However, the potential benefits are huge.

1 comment:

  1. Very nice article Joff. At least the pain point has a name now but it's been something plaguing the industry for years. Traditional IT is the barrier IMO. I believe to remedy it and remedy it quickly one needs to partner with industry experts, like Avanade (excuse the punt :)). Co-source, outsource or even outsource for a period and then transition back when your team is operating efficiently. As you say, once the benefits are realised and the bottom line is affected we might finally have an organisation where all of our priorities align.
