The bugbear of being Back Office

The recent hoo-ha over the WannaDecrypt / WannaCry ransomware debacle, and the subsequent shamefaced admission from a number of institutions that they have not been maintaining and/or future-proofing their systems properly, has once again brought one of my personal ‘bafflements’ into sharp focus.

My background in the IT space has most often been supporting the backroom admin functions. You know? The un-sexy necessities of any large-scale organisation; the systems that pay the employees or suppliers, keep records, calculate tax, book leave or track the never-ending annual appraisal cycle. These are systems that frequently have to run regression-based Waterfall methodologies due to heavy customisation and monolithic architecture. The ones that look longingly at DevOps and Agile approaches, sigh melodramatically, and then pragmatically just get on with the job. Giving credit where it is due, some companies are slowly (slowly!) moving towards modernising these unwieldy applications, as an inability to migrate such heavily customised services into the Cloud highlights some pretty deep cost inefficiencies.

Maintenance of these types of older systems is easily brushed under the carpet; ‘we haven’t been hacked so far, why should we feel any urgency now?’ Or, in some cases, it is not brushed under the carpet at all – it is scoped, but then has to ‘wait’ for a suitable opportunity to be tested before it can be put live.

I am not taking a Systems Thinking course for nothing, though – so I thought it would be an interesting thought experiment to step away from my own perspective and look at some others.

In the private sector, budgets are shaped by the bottom line – what is going to MAKE the business money. The company has a finite amount of money to invest and it wants to do so with the biggest return it can get. In this environment, back office systems that handle internal data and files are inevitably going to be low in the pecking order when up against anything tied to the survival and image of the business in its sector. Customer-facing systems are going to get a lot of TLC because of the absolute necessity of a good customer experience. I find this a bit of a catch-22 – you need the customers to make the money, but how good will the customer experience be if all of your staff disappear due to a payroll error?

In the public sector the problem is bigger than just back office systems; the constant squeeze from government leaves everyone competing for a piece of an ever-shrinking pie. (An ever-shrinking pie that still seems to pay some pretty incredible sums of money to contractors for their IT systems, I might add. I have been witness to a few public sector ‘contracts’ and colour me utterly bamboozled by the procurement process!) Taking the beleaguered NHS Trusts as a prime example, choosing between replacing some antiquated systems that seem to be working okay and paying to keep the lights on and patients moving through their appointments appears to be a no-brainer (especially in view of the ever-tightening hospital waiting list targets). But this approach just defers the problem. And defers it. And defers it. And then something goes… ka-boom! Also – who on earth would hack a hospital… amiright?

Still – the kerfuffle will bring some much-needed attention to these darkened corners. No bad thing. However, the cynic in me asks: will we learn from it? Or, after the buzz has died down, will those bad habits start creeping back in? I would be interested in hearing other opinions and thoughts about this mess – feel free to post if you feel so inclined.

Review of: The Phoenix Project 

Title: The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win

Authors: Gene Kim, Kevin Behr and George Spafford

It is not often that you hear of a novel being part of some ethereal ‘required reading’ list for IT professionals. This is apparently one of those rare beasts. I was dubious. I mean, who reads a NOVEL about DevOps and software?! Putting aside my cynicism, I downloaded it onto my Audible.

While obviously a somewhat caricatured scenario (well… in most cases…), this book does a remarkable job of putting the software delivery life-cycle under the spotlight while actually remaining readable, engaging and entertaining.

The story revolves around a newly promoted Operations Manager, a failing company, and its flagship ‘super-project’, all set against the familiar backdrop of a highly competitive market. Hampered by disconnected business, project and IT teams, single points of failure, a chronic blame culture, and significant technical debt, the ‘heroic’ Operations Manager and his team embark upon a journey of Agile and DevOps self-discovery under the (slightly bizarre) tutelage of an ‘IT whizz’ who seems a strange cross between ‘Master Shifu’ and a rich, aging hipster.

It turns out I enjoyed this book. I found myself identifying with the protagonist; cringing at the situations I recognised (countless!), laughing at the jokes and nodding sagely at the thought processes (and occasional absolute confusion) of the main character. Everyone who has ever worked in or alongside IT Delivery will recognise these characters; everyone knows someone a bit like Brent, Sarah or John.

If you can suspend your disbelief at the apparent ease of new process adoption in this company (only ONE team actively trying to get around the rules?… yeah, right!), the astonishing lack of panicked attempts to backtrack on the new processes when the going gets tough, and a somewhat exaggerated ‘mentor’ – this is a valuable book. It takes the idea of ‘telling a story’ to its ultimate endpoint: a full novel. Instead of a pseudo-rule-book of buzzwords tied together with dry examples of a finished product, this book takes you on a journey to show how you could get there… and, even more importantly, WHY you should.

As someone whose entire role revolves around one of those ‘bottleneck workstations’ so accurately described in the book, I found the authors managed to succinctly put their fingers right onto about 80% of my daily challenges. I also found myself giving mental high-fives when I identified initiatives that I have introduced or been part of in my own career, and rubbing my chin thoughtfully over possible future opportunities.

This book touches on most of the main challenges and drivers of modern-day software life-cycle management, alongside practical descriptions of Agile, Lean, DevOps, Continuous Delivery, Kanban, Continuous Improvement, and a whole host of other practices in between. It identifies the common misconceptions between IT and the business, and it demonstrates – albeit in a somewhat exaggerated way – the benefits to the whole business if everyone starts working as part of a cohesive system.

I thoroughly recommend this book for anyone who works as part of – or relies upon – business IT in their job. It might explain a lot, and it could have some useful ideas for your own practice.

What do Environments Management and a spiderweb have in common?

There is a look that you learn to identify in the early stages of taking on an Environments Management role. It’s somewhere between bashful, hopeful and slightly desperate. Occasionally it is preceded by a slightly nervous cough. And without fail it means a project has underestimated its requirements and needs you to magic something up… ideally within the next hour. Tomorrow will do.

One of the biggest challenges facing Environment Managers today is the speed at which project environment requirements change. As projects become more agile, so – the assumption goes – should the environments that support them. However, no cash-conscious IT department will run extra environments ‘just in case’ – it doesn’t matter whether the applications are hosted on-premises, in the cloud or in a hybrid of the two, this is just burning money. Nor can every type of environment be spun up at the drop of a hat – those with multiple integrations and specific connectivity requirements will take time if they are not part of a standard automated rig. At any given time an Environment Manager has to run the leanest ship possible while remaining flexible enough to deal with a certain ‘unspecified amount’ of changing requirements.
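For the technically curious, here is a minimal sketch in Python – every name, system and threshold below is invented purely for illustration – of the sort of information that separates the environments you can spin up on demand from the ones that need proper lead time:

```python
# A minimal sketch (hypothetical names and a hypothetical rule of thumb) of
# the details that decide whether an environment is quick to provision.
from dataclasses import dataclass, field


@dataclass
class EnvironmentSpec:
    name: str
    purpose: str                                        # e.g. "SIT", "UAT", "performance"
    components: list[str] = field(default_factory=list)    # app servers, databases, etc.
    integrations: list[str] = field(default_factory=list)  # external systems to connect or stub
    in_automated_rig: bool = False                      # part of the standard automated build?

    def quick_to_provision(self) -> bool:
        # Rough rule of thumb: automated builds with few bespoke integrations can
        # be spun up on demand; anything else needs planning and lead time.
        return self.in_automated_rig and len(self.integrations) <= 1


# A hypothetical request for a payroll test environment.
payroll_sit = EnvironmentSpec(
    name="PAYROLL-SIT-02",
    purpose="SIT",
    components=["payroll app server", "HR database copy", "reporting service"],
    integrations=["bank payments gateway", "tax authority interface", "HR SaaS feed"],
    in_automated_rig=False,
)

print(payroll_sit.quick_to_provision())  # False - expect lead time for this one
```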

So what is the best way I have found to handle this?

I will use the analogy of a spider’s web. In order to use the web to best effect, the spider has to sit in a place where it can feel any single string vibrating at any time. The same goes for an Environment Manager – except in our case the strings are radiating lines of communication.

Communication is probably the single biggest asset in an Environment Manager’s predictive arsenal. If you can build solid working relationships with people at every stage of a Release lifecycle – from the Business Analysts who shape the projects, to the Project Managers who will run them, to the technical resources who build them, the Testers who check them and the Release team that puts the whole package live – you will have a far better chance of hearing about new or changing requirements before they are formally asked for. Nothing ever comes completely out of the blue.

It does take time to build up your spiderweb (and you may get some odd looks at first from people who have never even heard of an Environment Manager, let alone spoken to one) – but I assure you, it’s worth it. The only downside is that after a while you will probably get a bit of a reputation with your Project teams for predicting the future.

Technical Debt – having a vision of what should be

Technical debt is a problem for most IT divisions that have been running for a while. The drive for quick delivery to market forces projects to run as fast as possible in order to meet business expectations – and, unless a company is very disciplined from the start, this can come at the cost of shortcuts or ‘acceptable’ defects being introduced.

On a project-by-project basis this probably isn’t a huge problem – so long as you are within risk tolerance, some projects can go live with known defects or workarounds – but when you consider that a lot of major enterprises will be running hundreds of projects in a year, and that some applications may be several years old, you can see how this adds up if no strategy is in place to handle it.
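To put some purely illustrative numbers on it: an enterprise running 200 projects a year, each going live with just two or three ‘acceptable’ defects or workarounds, quietly adds 400–600 items of debt to the pile every year – and that is before any of those shortcuts start interacting with each other.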

One of the sanest ways to combat this is to move to a methodology that bakes quality into the process; agile methodologies are very keen on this. However, for those companies transitioning to agile, this means they could be facing a mountain of technical debt before they even start.

Where do you even start with something like that?

I read a very entertaining blog by Jergen Moons which makes some solid suggestions about tangible ways to approach handling technical debt. The biggest takeaway I got from it is to start small and make consolidated, targeted improvements as part of a long-term vision that works alongside future development. I recommend having a look.

My take on the 3rd Annual DevOps Virtual Summit #dovs17

I had never attended a virtual summit before. It was an interesting experience – like a more impressive webinar – but it is probably not a format for the easily distracted. I am on GMT and the event ran on EDT, so it fell in the evening for me (children’s dinnertime + software methodologies = a mess!).

Other than the sweetcorn on the floor though, I was pleasantly surprised.

I stumbled across an invitation for this event on Twitter about a week ago – it was hosted by CA Technologies. It was free to sign up, and they had some interesting keynotes from businesses that have trodden the path of moving from Waterfall to Agile and beyond – United Airlines, CNN, GM Financial and a few others (the full list is on the CA website here).

The most interesting keynote for me was GM Financial’s, as it showed a genuine work-in-progress transformation – quite unusual for this sort of event; in my experience delegates usually only see the polished final article, not the journey to get there.

The best speaker of the event was Silvia Prickel, Managing Director at United Airlines. She brought a refreshing, no-nonsense, senior business stakeholder attitude to the table – again, a really useful insight into how differently senior managers think compared to the developers and testers on an IT project of this size.

Delegates who signed up came away with some useful whitepapers and a copy of a DevOps book that I have yet to have a really good look at (so I cannot comment on it yet).

Here is the link to the recorded sessions if you want to check them out yourself.

Would I attend another one? Yes… I probably would, so long as I could guarantee some quiet time to do so. You don’t get the networking experience that you would with an in-person event – you don’t really see or hear from other ‘attendees’ beyond the Q&A chat box – but if you can give it your undivided attention, I think you can focus more on the information being presented in the keynotes.

That and standing on sweetcorn in socks is just nasty.

Alright then. What is an IT Environment?

Leading on from my first post, the next question to clear up is ‘well… what actually IS an environment?’ To paraphrase IF4IT:

A…repeatable Configuration or set of Configurations that…act as a contained, bordered or surrounding operational context and that allow one or more Entities such as Resources or Systems to perform one or more controlled functions or activities.

…phew!… Let’s try putting that into something a bit more manageable, shall we?

Imagine a big car insurance company – pick any you want. Let’s pick a system they might hypothetically use – let’s imagine a website to sell their policies. What does a website need to run? A database, some applications, a server or two? OK. But is that ALL it needs? No. It probably needs to connect to a bank for payments, right? Do they connect to other insurers to verify No Claims Discounts? Who is doing the premium calculations? How about that extra legal cover they are offering you? What looks simple and seamless from the outside can actually be made up of tens – or even hundreds – of configurations, components and integrations. To the business, this whole entity is the Production environment for this website.*
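For the technically minded, here is a deliberately simplified sketch of how that hypothetical environment might be written down – every system name below is made up for the story:

```python
# A deliberately simplified sketch of our hypothetical insurer's Production
# environment - all system names are invented for illustration.
production_environment = {
    "name": "Motor Quote & Buy website - Production",
    "components": [
        "web front end",
        "policy application servers",
        "quote engine",
        "customer and policy database",
    ],
    "integrations": [
        "bank payment gateway",
        "no claims discount verification service",
        "premium calculation service",
        "legal cover add-on provider",
    ],
}

# Even this toy version has eight moving parts; a real one runs to tens or
# hundreds of configurations, components and integrations.
total = len(production_environment["components"]) + len(production_environment["integrations"])
print(f"{production_environment['name']}: {total} moving parts")
```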

But wait! The insurance company wants to add or change something on their website – maybe they want to offer Home Insurance? No matter how much faith they have in their technical teams, no business is just going to let them tinker about with the Production environment. What if they break it? No website = no new policies being bought = no money = someone is getting fired!

So you need to make a copy of all (or at least the relevant parts) of the Production environment’s functionality, so that the Project team tasked with adding Home Insurance to the website can construct, verify and promote controlled changes through the various stages of readiness before they are considered fit for Production. How big should those copies be? What data will they use? What other systems do they need to push or pull information to and from?

In businesses that have broad or complex IT systems, or are running significant IT change programmes, an Environment Manager (or a team of them) is needed to keep a handle on building these ‘copies’: keeping track of what they are used for, what is in them, who is using them, and what to do with them once a Project has finished.
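As a rough illustration of what ‘keeping a handle on it’ means in practice, here is a minimal sketch of the kind of register an Environment Manager might maintain – the environments, projects and dates are all invented for the example:

```python
# A minimal sketch of a hypothetical environment register - the sort of
# record that answers "what is this copy, who has it, and until when?"
from dataclasses import dataclass
from datetime import date


@dataclass
class EnvironmentRecord:
    name: str
    based_on: str          # which Production system it copies
    used_by: str           # project or team currently using it
    purpose: str           # e.g. "SIT", "UAT", "performance"
    contents: list[str]    # components and integrations included in the copy
    release_date: date     # when the owning project expects to hand it back


register = [
    EnvironmentRecord(
        name="QUOTE-UAT-01",
        based_on="Motor Quote & Buy website",
        used_by="Home Insurance project",
        purpose="UAT",
        contents=["web front end", "quote engine", "stubbed payment gateway"],
        release_date=date(2017, 9, 29),
    ),
    # ...a real register would have many more entries...
]

# One simple question the register answers: which environments come free
# (for reuse or decommissioning) once their current project wraps up?
for env in register:
    print(f"{env.name} is held by '{env.used_by}' until {env.release_date.isoformat()}")
```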

*For any technical bods out there who are shuddering in horror at my vast generalisations, I apologise. I find it easier to ‘tell a story’ when trying to explain concepts (and I would assume that you all know what a Non-Production Environment is anyway!)