Many organisations know about this thing called IT Service Management and hear consultants claiming to have come up with THE silver bullet to take away all your pain, or the new shiny, washes-whiter-than-white approach. But this article is about the things that can go wrong, and the successes, in the real world.
I would like to discuss some areas where things can go right and be worthwhile, where things can and have gone wrong (speaking from experience) and what lessons can be learnt.
The initial areas are resistance to change, implementation of event and availability management, and major incident management.
Resistance to Change
Resistance to any type of change can come about in many different ways.
1. – Cultural changes – These can range from the implementation of new team structures or shift patterns, to new management coming in, to a reduction in team sizes. Many different areas fall under cultural change, and any of them can lead to resistance. We’ll cover this in a bit more detail further on.
2. – Resistance by customers or users. Change can be seen by customers or users as just another way to make life harder for them, or to remove their favourite support technician from their grasp.
This is a key area that needs to be addressed just as early in the change process as the team members being affected. If you get this wrong at the start, it can take a lot longer to put right later.
3. – Something I have seen, but did not expect, was resistance around changes to Service Management toolsets. One of these issues was partly of my own doing; the other was at a client site before I started there.
I’m sure we have all worked somewhere where the main Service Management toolset has not been to everyone’s liking. Either it’s not intuitive, there are too many fields (or not enough), reporting is cumbersome, or the screens don’t look pretty enough…
Yet I have been in organisations where changes of toolset have brought about resistance from the most unexpected areas. Many people just don’t like you changing the tool that they love to complain about. The best way around this resistance is to engage as many people as you can.
Key users will, of course, be involved in the process of understanding what was wrong with the old tool and what you are trying to fix. These people will be the main inputs into your requirements-gathering exercise. (You are doing that, aren’t you?) However, don’t forget to ask the people who only use the tool occasionally. Their input is just as important to the success of the project as anybody else’s.
Also, don’t expect that just because people sign off on a new toolset and agree that it will deliver all of their requirements, they will be happy. We are dealing with people here. Of the holy trinity of People, Process and Tool, People are the hardest to get right. But also the most important.
The pain of this resistance to change: what can go wrong?
Poor team spirit
If this isn’t handled correctly, one person’s negative view of a change can bring down the rest of the team’s. Too often I have seen a minority change the majority’s mindset through continual complaining and negativity.
This poor team spirit can also become concentrated on the management or leadership of a team: “Why are you allowing this to happen to us?” It may have one positive aspect, pulling the team together, but if that happens in a negative way then it helps no-one; you still have resistance to the change, and that can take months to put right.
If you discover a negative aspect to somebody’s attitude, address it as quickly as possible. This does not mean that you crush them, but rather that you understand the issues and address them in the best way possible. This might require improved communication or training, or a complete re-evaluation of the approach and a change of direction.
Potentially the worst of these areas of resistance, however, is a breakdown between the teams and the users or customers.
This is generally for a couple of reasons:
Your customers won’t talk to or work with the team because of its negative attitudes or disruptive ways of working (see above), or customers will ignore new ways of working because they “liked the other way”.
If customers won’t change and they continue to do things like ring their favourite person direct, then that is harder to address.
Many years ago I worked for an organisation where support across multiple sites was being centralised through a single service desk, and the on-site support technicians were only supposed to respond to calls logged through the system. Of course, all users were told what was happening and why, but for some it made no difference. As a technician, it became difficult, because our raison d’être was to help people.

Eventually, after a few weeks of the new processes being ignored by some users, and senior IT managers complaining that we were bypassing processes to help these people, we discovered a way forwards. The first time a user approached us, we would go and help, but ask them to log the call while we were there. The next time, we would tell them we would be with them shortly and ask them to log the call, then go and help a few minutes later. The time after that, we would say the same thing, but as soon as they left to log the call, we would pre-warn the Service Desk and give them hints on likely fixes and knowledge articles. This enabled the Service Desk to fix the call while the user was on the phone, creating a good impression.

Within a few short weeks of this approach, we managed to get over 95% of “walk-ups” to go via the Service Desk first. The other 5% took months to train properly!
Good ways to prepare for this are good planning and considering all the ways that the teams might be, or feel, affected.
Now, the pleasures of these changes that people can be so resistant to.
If you get your changes right, and bring people, teams and users along on the journey, it can be so fulfilling. You have improved ways of working, which the majority of people within your teams and your user or customer base will acknowledge.
You will get respect from your customers, because they can see what you are doing and why. If they understand and respect what you are doing, it will become so much easier to keep the momentum up. You will be able to demonstrate the improvements.
Implementation of Event & Availability Management
These two are often a pair; it’s unlikely you will do availability reporting without event management. They provide visibility of what is happening, or is going to happen, to support teams and, if set up appropriately, to customers as well.
They can also assist support teams by enabling them to see what else may be affected by an incident or change.
The pain of not having effective event management is something that, I would guess, we have all been through. As a support team member (Service Desk, technician, problem manager, change manager, IT manager) you need to know what is going on. If you don’t have those tools, it can be a nightmare.
The phone rings and a user asks if something is wrong with web access. Straight away you are on the back foot. One Service Desk agent tries it and it works. Another one tries it and it doesn’t. What’s going on? Is it the site? The computer? The switches? The web monitoring application? The server? The firewall? (We all like to blame the firewall.)
I have seen and been part of teams where it has taken what felt like hours to diagnose the issue before the resolution could be considered.
You have no idea what is happening. What went wrong? The correct SME (Subject Matter Expert) is not around, because they were working late last night doing a change. The majority of the support team are investigating just to work out where to go to fix the issue.
No communication can be sent out because nobody knows the full extent.
You are cuffed and blindfolded.
As for availability management or reporting, how can you tell your customers how wonderful you are if you can’t prove it?
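Proving it is, at the end of the day, simple arithmetic; the hard part is having trustworthy downtime data, which is exactly what good event management gives you. A minimal sketch in Python, with figures made up purely for illustration:

```python
# Availability is usually reported as the percentage of agreed service time
# during which the service was actually usable. Illustrative figures only.
agreed_hours = 24 * 30        # agreed service time for a 30-day month
downtime_hours = 3.5          # total downtime recorded against incidents

availability = (agreed_hours - downtime_hours) / agreed_hours * 100
print(f"Availability: {availability:.2f}%")   # -> Availability: 99.51%
```

The formula is trivial; the credibility of the number rests entirely on whether every outage was actually captured.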
Get it right, however, and it just feels good!
Firstly, you need to know how you want to do it, and then get your tool in place. Spend the time to define a process, then spend more time getting the tool set up right, and it will be worth it. It is worth noting that it isn’t all about the big, expensive, shiny tools. There are some very good, cheap, easy-to-set-up ones which deliver just as much for an awful lot less outlay. However, as with anything, you need to understand your requirements before you buy. Free or cheap may suit you, but another organisation may find that they need a large, enterprise-wide, multi-national tool.
This will enable you to be prepared, most of the time, because your tool should be able to tell you when something untoward is starting to occur. Is it a hardware component? An event log entry? Traffic between two points of the network? If all goes well, you will be able to see it before the users do. If you get really good at it, you can even investigate and resolve BEFORE anybody even realises there is a problem. Even a very simple check can give you that early warning, as the sketch below shows.
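As an illustration only, and not a recommendation of any particular product, here is the kind of heartbeat check that sits at the core of even the cheapest event tool. The URL, interval and alert action are all placeholder assumptions:

```python
import time
import urllib.request

# Placeholder values; a real event management tool would make these configurable.
SERVICE_URL = "https://intranet.example.com/health"  # hypothetical endpoint
TIMEOUT_SECONDS = 5
CHECK_INTERVAL_SECONDS = 60

def service_is_up(url: str) -> bool:
    """Return True if the service answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as response:
            return response.status == 200
    except OSError:
        # Covers timeouts, DNS failures and connection errors.
        return False

while True:
    if not service_is_up(SERVICE_URL):
        # A real tool would raise an event here: a dashboard alert, an email,
        # or an automatically logged incident, before the users start ringing.
        print(f"ALERT: {SERVICE_URL} failed its health check")
    time.sleep(CHECK_INTERVAL_SECONDS)
```

A real deployment adds the things the next paragraph warns about: sensible thresholds, so one blip doesn’t page somebody at 3am, and ongoing maintenance so the alerts stay accurate.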
Maintaining them is another matter. They do not come without an overhead. Change Management and projects will need to consider monitoring thresholds, licensing, etc. prior to handing over to Operations, and somebody will need to maintain the tooling to ensure that the information received is accurate.
But when done right, they are fantastic.
Major Incident Management
To my mind, the three areas outside of event management that really improve the way major incidents are handled are communication, learning from the incident, and being transparent.
In IT, we are not an island. We are a core part of most organisations now, whether they and we like it or not. We have to share what is happening, what we have learnt from the incident and what is going to happen to reduce the chance of it happening again. Otherwise, how can we even hope to be trusted?
I have seen IT teams embrace the opportunity to operate, during major incidents, with heads down in a locked room, keeping information secret and thereby increasing the chance of recurrence.
The trouble with being like this is that the wider organisation starts to accept that this is the way it is. So they don’t complain formally; they just moan to each other.
And in fairness to IT, how often do you see a review of a failed marketing campaign? Or a full and frank review of a failed pay run (other than one that blames IT; whoever heard Finance say that they should have had business continuity plans in place, because they accepted that the system was not resilient when it was implemented)?
Get it right however and the pleasure is shared amongst all concerned parties.
Communicate with your customers and users. Let them know what is going on, even if that communication is just to say that you are still working on it. Set up lines of communication. Make sure that the technicians working on the fix are left alone to get on with it, but have one person who gathers updates. If you commit to telling users every 30 minutes or every hour, get an internal update 10 minutes before each one is due. Keep telling people what is happening. AND keep the Service Desk updated.
When the service is restored, carry out a full review. What went well and what didn’t? Keep to the facts. No emotions. Which infrastructure components failed? Can resilience be done better? Should the service be made resilient, given that everyone in the rest of the business was jumping up and down while it was out? Should processes inside or outside of IT be amended? Set actions. Set timeframes. Follow up on them.
Share the outcome with as much of the rest of the business as possible. If there has been a failure and you have a plan to mitigate the chances of it happening again, tell everyone. If you have a piece of work in place and it is being progressed, but the issue recurs before you can finish the remediation, fewer people are going to complain than if you had done nothing.
Use the incident as a driver for good.
Service Management Life Lessons Learnt
Here are some key lessons that I have learnt, and would share with you, on removing some of the pain from Operations and Service Management.
Don’t be afraid to bring in consultants, but find ones with real-world experience. You probably know what needs to be done, but do you have the time to think it through and implement the change while also doing your day job? Probably not.
Don't be afraid to take advice and use a mentor. Other people's thoughts are sometimes the clarity you need when the trees are hiding the woods.
Don’t forget to ask the people on the ground what is wrong or what can improve. They have good ideas; often the best ideas.
Ask a lot of questions before starting.
Don’t try to do too much too quickly. Keep going for the “low-hanging fruit”. It’s always there, and it’s always easier to do that than to try to pick all the fruit in one day. Look up Tipu by Rob England.
As a consultant, remember, you don’t know it all. You can learn from each client. Admit it to them and yourself. Also, use your peers. Accept that other people can sometimes do things better, and generally will help.
ITIL™ is a trade mark of AXELOS Limited, used under permission of AXELOS Limited. All rights reserved.