The bugbear of being Back Office

The recent hoohah over the WannaCrypt / WannaCry ransomware debacle, and the subsequent shamefaced admission from a number of institutions that they have not been maintaining and/or future-proofing their systems properly, has once again brought one of my personal ‘bafflements’ into sharp focus.

My background in the IT space has most often involved supporting the backroom admin functions. You know? The un-sexy necessities of any large-scale organisation; the systems that pay employees and suppliers, keep records, calculate tax, book leave or track the never-ending annual appraisal cycle. These are systems that frequently have to run regression-based Waterfall methodologies due to heavy customisation and monolithic architecture. The ones that look longingly at DevOps and Agile approaches, sigh melodramatically, and then pragmatically just get on with the job. Giving credit where it is due, some companies are (slowly, slowly!) moving towards modernising these unwieldy applications, as an inability to migrate this type of customised service into the Cloud highlights some pretty deep cost inefficiencies.

Maintenance of these types of older systems is easily brushed under the carpet; ‘we haven’t been hacked so far, why should we feel any urgency now?’ Or in some cases it is not brushed under the carpet and the work is scoped, but then it has to ‘wait’ for a suitable opportunity to be tested before it can be put live.

I am not taking a Systems Thinking course for nothing though – so I thought it would be an interesting thought experiment to step away from my own perspectives and look at some others.

In the private sector, budgets are shaped by the bottom line – what is going to MAKE the business money. The company has a finite amount of money to invest and it wants the biggest return it can get. In this environment, back office systems that handle internal data and files are inevitably going to be low in the pecking order when set against systems tied to the survival and image of the business in its sector. Customer-facing systems are going to get a lot of TLC because of the absolute necessity of a good customer experience. I find this a bit of a catch-22 – you need the customers to make the money, but how good will the customer experience be if all of your staff disappear because of a payroll error?

In the public sector the problem is bigger than just back office systems; the constant squeeze from government leaves everyone competing for a piece of an ever-shrinking pie. (An ever-shrinking pie that still seems to pay some pretty incredible sums of money to contractors for their IT systems as well, I might add. I have been witness to a few public sector ‘contracts’ and colour me utterly bamboozled by the procurement process!) Taking the beleaguered NHS Trusts as a prime example, choosing between replacing some antiquated systems that seem to be working okay, or paying to keep the lights on and patients moving through their appointments, appears to be a no-brainer (especially in view of the ever-tightening hospital waiting list targets). But this approach just defers the problem. And defers it. And defers it. And then something goes… ka-boom! Also – who on earth would hack a hospital… amiright?

Still – the kerfuffle will bring some much-needed attention to these darkened corners. No bad thing. However, the cynic in me asks – will we learn from it? Or after the buzz has died down, will those bad habits start creeping back in? I would be interested in hearing some other opinions and thoughts about this mess – feel free to post if you feel so inclined.

Take-away thoughts for DevOps / Agile practices

I attended the Computing DevOps Summit last Wednesday in London. It was a rather polished affair at the Grosvenor Square Marriott in Mayfair; well attended, well presented and with a broad lineup of Keynotes all roundly advocating DevOps.

It was a valuable day overall, but one speaker in particular resonated with me, as his choice of topic – ‘Unicorns and Elephants’ – is something quite familiar. The speaker, Rick Allan, Head of Delivery Capability at Zurich Insurance, spent his time waxing lyrical on the myriad challenges encountered by complex, multi-faceted organisations that have gone through significant organic growth, and the unexpected barriers this can create when adopting agile practices.

This is a scenario that I have seen in several organisations over the years – where do you even start when you have an estate that is riddled with legacy infrastructure and applications that have, over time, been ‘bolted’ onto each other, integrated with third parties and heavily customised? Automation in this landscape is a veritable minefield of problems.

My main take-aways from the event were:

  • Know what you actually have got, what it does, and what it communicates with.
  • Agree a vision with the business – implementing the changes needed to achieve that vision could have some serious cost and usability implications, so their buy-in and support are critical.
  • Pick off simpler applications first. Chipping small, tactical chunks from the monolith is going to be easier at first than trying to brute force it. Prove your method works.
  • Move away from thinking by project and start thinking by product and value stream (again – something the business really needs to get on board with, as this has budgeting implications). This helps to break down velocity-sapping silos and move away from a transaction-based culture.
  • Stand firm in the face of large, expensive programmes of change – they need to follow the rules too.
  • Implement a default tool-bench – a set of approved tools that are authorised and licensed for use. Migrate services over to them and shut off unauthorised tools behind you. Make sure your developers and engineers actually know about them, know what they are for and how to use them.
  • Use metrics to understand and prove where bottlenecks exist. This information can then be leveraged to determine a strategy to overcome the problem. Whatever the final option, the business case will need to be proven, and those metrics can help to establish that mandate (see the sketch after this list).
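
To make that last point about metrics a little more concrete, here is a minimal sketch of the sort of measurement I have in mind. It is entirely my own illustration, not anything shown at the summit, and the stage names, dates and work items are all invented. The idea is simply to average the time work items spend between consecutive delivery stages and surface the longest wait as the first candidate bottleneck:

```python
from datetime import datetime
from statistics import mean

# Hypothetical delivery stages and work items, invented purely for illustration.
# Each entry records the date a work item entered that stage.
STAGES = ["started", "code_complete", "test_complete", "environment_ready", "released"]

work_items = [
    {"started": "2017-05-01", "code_complete": "2017-05-04", "test_complete": "2017-05-08",
     "environment_ready": "2017-05-19", "released": "2017-05-22"},
    {"started": "2017-05-03", "code_complete": "2017-05-05", "test_complete": "2017-05-11",
     "environment_ready": "2017-05-24", "released": "2017-05-26"},
]

def days_between(start: str, end: str) -> int:
    """Whole days between two ISO-format dates."""
    return (datetime.strptime(end, "%Y-%m-%d") - datetime.strptime(start, "%Y-%m-%d")).days

# Average the time spent between each pair of consecutive stages.
waits = {
    f"{earlier} -> {later}": mean(days_between(item[earlier], item[later]) for item in work_items)
    for earlier, later in zip(STAGES, STAGES[1:])
}

# The longest average wait is the first candidate bottleneck to investigate
# and to build the business case around.
for transition, avg_days in sorted(waits.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{transition}: {avg_days:.1f} days on average")
```

In this made-up data the wait for an environment dwarfs everything else – and that is exactly the kind of evidence that makes a business case hard to argue with.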

In order to enable true continuous delivery, the ultimate goal needs to be consistent, converged and scalable architectures, ideally managed by smart software – and that is a long, and probably quite expensive, path for organisations with significant amounts of legacy applications and architecture. This type of change requires buy-in from the very top, and the investment, time and willpower to carry it through. Good intentions can only go so far.

What do Environments Management and a spiderweb have in common?

There is a look that you learn to identify in the early stages of taking on an Environments Management role. It’s somewhere between bashful, hopeful and slightly desperate. Occasionally it is preceded by a slightly nervous cough. And without fail it means a project has underestimated its requirements and needs you to magic something up… ideally within the next hour. Tomorrow will do.

One of the biggest challenges facing Environment Managers today is the speed at which project environment requirements change. As projects become more agile, so – the assumption goes – should the environments that support them. However, no cash-conscious IT department will run extra environments ‘just in case’ – it doesn’t matter whether the applications are hosted on-premises, in the cloud or in a hybrid of the two – this is just burning money. Nor can every type of environment be spun up at the drop of a hat – those with multiple integrations and specific connectivity requirements will take time if they are not part of a standard automated rig. At any given time an Environment Manager has to run the leanest ship possible, but remain flexible enough to deal with a certain ‘unspecified amount of changing requirements’.
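
To illustrate what I mean by a ‘standard automated rig’, here is a rough sketch of my own (all environment and integration names are invented) of how an incoming request could be checked against the set of integrations the rig can provision on demand, so anything outside it is flagged as needing lead time rather than an hour’s notice:

```python
# Integrations the standard automated rig can provision on demand.
# All names here are invented for illustration.
STANDARD_RIG = {"core_app", "reporting_db", "mock_payment_gateway"}

# A hypothetical incoming request from a project team.
requested_environment = {
    "name": "PROJ-42 integration test",
    "integrations": {"core_app", "reporting_db", "third_party_payroll", "legacy_hr_feed"},
}

# Anything the rig cannot provision automatically needs lead time and a conversation.
non_standard = requested_environment["integrations"] - STANDARD_RIG

if non_standard:
    print(f"{requested_environment['name']}: needs lead time for "
          f"{', '.join(sorted(non_standard))} (not in the standard rig).")
else:
    print(f"{requested_environment['name']}: can be spun up from the standard rig.")
```

Even something this crude makes the conversation easier: the project hears straight away which parts of its request are a quick spin-up and which are not.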

So what is the best way I have found to handle this?

I will use the analogy of a spider’s web. In order to use the web to best effect, the spider has to sit in a place where it can feel any single string vibrating at any time. The same goes for an Environment Manager. Except in our case the strings are radiating lines of communication.

Communication is probably the single biggest asset in an Environment Manager’s predictive arsenal. If you can build solid working relationships with people at every stage of a Release lifecycle – from the Business Analysts that shape the projects, to the Project Managers that will run them, the Technical resources that build them, the Testers that check them and the Release team that puts the whole package live – you will have a far better chance of hearing about or spotting new or changing requirements before they are formally asked for. Nothing ever comes completely out of the blue.

It does take time to build up your spiderweb (and you may get some odd looks at first from people who have never even heard of an Environment Manager, let alone spoken to one) – but I assure you, it’s worth it. The only downside is that after a while you will probably get a bit of a reputation with your Project teams for predicting the future.

Personal Development: The gift of time

I read an interesting article this morning in the Harvard Business Review. It is a few years old now – but I think it’s just as true today as it was when it was published. The premise is that while leaders strongly advocate personal development and learning, they do not really allow any time for their employees to actually do any.

In a results driven environment, the long-term gains of credible personal learning are overshadowed by the short-term needs of constant delivery.
