Thursday 30 October 2014

Spiderman and Azure


I've been really impressed by Azure at the TechEd conference in Barcelona this week.  We are starting our Cloud journey in 2015 so it's been great to see how far things have moved on in the last year.

Because I'm a geek I signed up to the free Azure trial (http://azure.microsoft.com/en-us/pricing/free-trial/) and had a play around for an hour.

In 1 hour I created a load balanced pair of web servers and a geo-located SQL cluster (i.e. one node in Europe and the other in the US).

1 hour.

That's ridiculous.

I'm not sure we even have the technical skills to do something like that, let alone the time it would take (literally months).

1 hour from a cold start with no real experience of doing it before.

So that's all very good.

However, earlier in the week the excellent David Chappell (http://www.davidchappell.com/) talked about the Cloud and how with great power comes great responsibility (just like Spider-Man).  Couldn't agree more.

If it's that easy to do and you aren't careful, your company will be spinning up machines all over the shop and things will quickly get out of control.

Just because you can spin up infrastructure quickly doesn't necessarily mean you should...


One way of getting around this might be to put some kind of dashboard over the top of the Azure capabilities (I think System Center can do something like this) which automates builds and adds in things like approvals.  I really don't like having to nanny individuals, but the reality is that if you let people build servers and environments manually (as well as install applications), things will get out of sync pretty quickly.

And your credit card bill will be substantial (oops, I forgot to turn off those 100 test servers I built last year).


Automating Azure


I'm not speaking from a point of great experience, so my thoughts on Microsoft's automation approach might be entirely misguided.  Apologies, Microsoft, if this is not the case, but I'm not convinced their automation approach for Azure is right.

I'm a big fan of idempotence, infrastructure as code and desired state.

I can talk about the above 3 for quite some time so I won't bore you here (if you aren't already!).
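To make the first of those three concrete, here's a tiny sketch of idempotence in plain Python (nothing Azure-specific; the config file and setting name are made up).  The point is that you can run it once or a hundred times and the end state is identical:

```python
import os
import tempfile

def ensure_line(path, line):
    """Idempotently ensure `line` appears in the file at `path`.

    The existence check is what makes this idempotent: re-running
    it never appends a duplicate, so the end state is stable.
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line not in existing:
        with open(path, "a") as f:
            f.write(line + "\n")

# Demo: apply the same "desired state" twice; the file doesn't grow.
path = os.path.join(tempfile.mkdtemp(), "app.conf")
ensure_line(path, "max_connections=100")
ensure_line(path, "max_connections=100")
with open(path) as f:
    print(f.read().splitlines())  # ['max_connections=100']
```

Most hand-rolled scripts fail exactly this test: a bare append without the check adds another copy of the line on every run, which is why re-running them against an existing environment is so dangerous.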

Microsoft's approach is heavily reliant on PowerShell for Azure automation jobs.  These jobs are basically scripts and therefore contain logic, and without a very skilled developer they are not idempotent.  Moreover, it's difficult to version control these scripts as it seems you enter the PowerShell code directly in the Azure portal (I may be wrong here!).

Being procedural scripts, it's difficult to see what impact script v1.1 has over script v1.0.

I would turn to Puppet but it doesn't have the maturity to hook into the vast Azure API.

So I'm going to be a little stuck with PowerShell for the time being.

PowerShell is very powerful but really I just want to define the state of an environment/server and let the software worry about how to get there (a la Puppet).
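To illustrate what "define the state and let the software worry about how to get there" might look like, here's a toy reconciliation sketch.  It's plain Python with invented server names, not any real Puppet or DSC API:

```python
# A toy "desired state" engine: you declare what should exist,
# and the engine derives the create/delete actions needed to get
# there from what currently exists.

def plan(current, desired):
    """Return the actions needed to move `current` to `desired`."""
    actions = []
    for server in sorted(desired - current):
        actions.append(("create", server))
    for server in sorted(current - desired):
        actions.append(("delete", server))
    return actions

current = {"web01", "web02", "test-old"}   # what's running now
desired = {"web01", "web02", "web03"}      # what the manifest says

for verb, name in plan(current, desired):
    print(verb, name)
# create web03
# delete test-old
```

This is the core trick of desired-state tools: you version control the declaration, and the engine derives the procedural steps, so a diff between v1.0 and v1.1 of the manifest tells you exactly what will change.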

In order to make the migration from on-premise to Cloud (or use Cloud for DR), we also aren't just talking about VMs.  What about VLANs, F5 load balancers and all the other stuff that makes up an application stack?

Puppet can define these, and if I could potentially use the same manifests for on-premise and the cloud, that would be very exciting.  The ability to abstract the implementation away into a manifest is the dream.

I'm not sure if that's a deficiency in Puppet or Azure (or both).  Either way there is another opportunity for Microsoft here.

So please Mr (or Mrs) Microsoft extend Desired State Configuration so it can cover more than just servers and allow those same manifests to cover private-cloud and public-cloud.  It needs to be agnostic of whether you are using VMware, Hyper-V, F5, Alteon etc.

Write once, run everywhere.
   

Tuesday 28 October 2014

GatesOps

Love-hate
It might be super-unfashionable to say but I have to admit to being somewhat of a Microsoft-phile over the years. 

I've made a decent living from consulting around their technologies over the last 16 years for starters (thanks for paying the mortgage Mr Gates!).

In any dealings I've had with them, they have been nice individuals and have gone out of their way to make me feel welcome.

Not to say I don't have any criticisms.  Quite the opposite.

Anything to do with licensing and Microsoft then just forget it. 

I also hate how far they are behind the curve nowadays - always picking up what others have already started; GUI operating systems, Cloud computing and Tablets are just a few examples.

DevOps?
This brings me on nicely to DevOps.

I'm currently in Barcelona at their annual TechEd event and took part today in their DevOps focus group.  Surprisingly for such a "focus group", I was really the only one around the table with any kind of mature DevOps culture at work (and really we are still only beginners, tbh). 

Whether this is representative of their customers I don't know. What it says about their customers I also don't know...

Anyway, that is a bit disappointing; I'm always keen to hear from other adopters, especially ones that are further down the path than we are.

I digress.

It strikes me that Microsoft are in a fantastic position to sew the marketplace right up.  They have tools and technologies that range from cradle to grave: from requirements analysis through to development, testing, provisioning, release, support etc...

Add in that they also cover on premise, hybrid and cloud solutions then you have the vast majority of the application life-cycle and deployment scenarios.

I can't think of any other companies that are in the same privileged position.

However, they aren't embracing this and they are missing a big opportunity.  If they want to be the premier Cloud player then they need to make it easy for people to get from idea in someone's head to the production space in a fast, repeatable and easy fashion.  Anyone can provide a load of compute power.  The real differentiator is being able to facilitate the whole pipeline; laptop to production.

There are a lot of different tools along their chain - the vast majority of which are very Microsoft-centric, meaning it's difficult (or at least harder) to integrate your favourite technologies if they aren't from Microsoft.

Tools that don't integrate become silos of information and reduce visibility and transparency across teams;  ultimately leading to reduced collaboration.

The sheer number of different tools and solutions makes it difficult to know where to start. 

Do I use TFS or Git?  Or both? Should I put System Center in first and get the Ops and provisioning bit right?  Oh hang on, I want to use the Azure Pack but I use VMware?

Prompt head explosion.

My boss said something today and he's quite right.  If we knew where to begin with all this stuff and it arrived out of the box, then we'd probably buy everything Microsoft have.  It's just too darn hard and too darn complex.  Microsoft seem to expect that you are Microsoft-only and on the very latest everything.  Not realistic.

I hate this term, but if there was a "single pane of glass" across your SDLC with the ability to be agnostic of technologies, then Microsoft would be onto a winner.

With a Microsoft "something" orchestrating the life-cycle, we could integrate what we have right now and phase out the non-MS stuff over time rather than having to blow our data-center up and start again.

Credibility

If Microsoft were really embracing DevOps then they wouldn't be as late to the game as they are, and so uninspiring in some of their technologies. 

For example, the release tools in Visual Studio 2013 are so far behind the likes of UrbanCode Deploy it's a bit embarrassing.  PowerShell DSC (Desired State Configuration) is so far behind Puppet it's redic (as they say in Essex).  Why would I want to configure just my Windows servers when Puppet can control network devices and a whole range of other stuff too?

I think its the same as a personal trainer. 

I wouldn't ask a fat personal trainer to train me.

If you don't live and breathe it then you can't be credible for trying to sell it.

P.S. yes Microsoft I am available to help you put this right in exchange for a small island or something like that (I hear the Bahamas are nice)  :)



Monday 6 October 2014

Service virtualisation

You may have heard of Service Virtualisation (SV) recently.  It's an idea that seems to be getting some traction, especially in the Continuous Delivery space.

Not to be confused with Operating System virtualisation technologies such as Hyper-V and VMware.

My personal definition of SV is the productionisation of stubbing (granted this is a bit simplistic).

Technically SV can mock too but I didn't really want to get into a discussion of what is a mock and what is a stub :)

Imagine you have an application that is a composite of multiple other systems you have in your company.  It retrieves data from a web service, reads a file from an FTP site and runs a SQL query against a database from a different application and imports the data.

That's potentially problematic as all these systems need to be available.

What happens if the people running the web service want to take it down for maintenance?  Does that mean you can't develop and test your application?

What happens if you don't have all these interfaces all the way through your environment stack?

What happens if you are reliant on a change that isn't going to be ready for another 10 weeks in one of these interfaces?  Are you going to be blocked from development for 10 weeks?

These are all the kinds of questions that Service Virtualisation hopes to address.

It provides an abstraction of other systems (such as SOA platforms) or even resources, such as flat files, that you might not have access to.

It can record the interactions between your code and the other system so that it's easy and quick to create the stubbed interface.  Commonly SV applications also provide an interface so you can change the values in these recorded stubs easily through the GUI.
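As a sketch of that record-then-replay idea, here's a toy in-process version in Python.  The class and service names are invented for illustration; real SV products do this over the wire against HTTP/SOA endpoints:

```python
# Toy record/replay: first pass records real responses keyed by
# request, later passes replay them so the real system no longer
# needs to be available.

class VirtualService:
    def __init__(self, real_service=None):
        self.real = real_service   # callable, or None when unavailable
        self.recordings = {}       # request -> recorded response

    def call(self, request):
        if request in self.recordings:        # replay a recording
            return self.recordings[request]
        if self.real is None:
            raise LookupError(f"no recording for {request!r}")
        response = self.real(request)         # record from the real system
        self.recordings[request] = response
        return response

# "Real" downstream system (imagine a rates web service).
def real_rates_service(request):
    return {"GBP/EUR": 1.27}

sv = VirtualService(real_rates_service)
sv.call("get-rates")        # recorded from the real system
sv.real = None              # real system goes down for maintenance...
print(sv.call("get-rates")) # ...but tests still get the canned answer
# {'GBP/EUR': 1.27}
```

The GUI editing the products offer then amounts to letting testers tweak the recorded values directly, rather than hand-writing stub code.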

We are looking at implementing something over the next couple of months for one of our key systems.  In a complicated SOA environment, with logic managed by different teams in different geographic locations, it's really killing their pace of change.  They are unable to test until a full stack is in place (system test), which can be days or weeks after they have made the original change.

You really want very short feedback loops between dev and test.  Ideally none at all, so the developer can do a lot of testing using automated regression tools on his/her local developer machine.

Service Virtualisation should help them accomplish this rather than the dev, deploy, test cycle that they are currently going through.  It should really increase the pace of change, and we expect some big cost and efficiency gains.