Enterprise software is in a crisis, one that is self-imposed and, frankly, has been a long time coming. Failure to fix the problem will be disastrous, and yet, from where I sit, disaster is exactly where the market is heading.
Hyperbole? I don’t think so. Vendors in the cloud need customers to renew, or said vendors will be excoriated by their investors. And the flaying is all set to begin.
The crisis is simple: the historic failure rate for on-premise software implementations – up to two-thirds of projects fail to deliver their expected value – is repeating itself in the cloud market. It’s not too surprising if you think about it. One of the key parties responsible for messing up on-premise implementations for decades – the global SIs who helped propel enterprise software to a multi-billion-dollar, volume market in the latter part of the 20th century, and in the process created a culture of failure and mediocrity that somehow everyone was okay with – is now every vendor’s “strategic partner” in charge of the burgeoning growth in cloud implementations.
And these “partners” are performing in the cloud just like they did in the on-premise world: poorly.
To be fair, it’s not just about the SIs. The major SIs, and many minor ones too, are aided and abetted by two complicit parties: The customers, who must bear some responsibility for, at a minimum, not holding the SIs’ feet to the fire for failing in their responsibility as the “adult supervision” on these projects. And the vendors, too many of whom are “okay” with watching their projects turn into slow-motion train wrecks, mostly because they’re also scared to call the SIs out and equally reluctant to push their customers into changing how they staff and manage these projects.
But, considering the global SIs are usually the ones with account control – these companies tend to do much more business with a given customer than the vendors do, and they have proven to be collectively opposed to anyone or anything that would truly hold them accountable – I’m going to focus most of this post on them.
Finding the smoking gun in the implementation failure “blame game” is an exercise that requires some real sleuthing and an always-on bullshit meter. Outside the public sector market, where freedom of information requests can lay bare the trail of tears that typify all too many projects, failure is not just an orphan: he’s blind, deaf, and dumb, and locked away where no one can find him. Considering the billions that are wasted every year, the veil of secrecy is understandable – if the world really knew not just how often enterprise software projects go south, but how preventable so many of these failures could be, heads would fly. Or explode. Or both.
What I do know is that, instead of removing critical points of failure, the cloud is upping the ante. Delivery execs keep telling me that the majority – and in at least two cases the totality – of escalations during cloud implementations come from partners. And I know that two years ago, when SuccessFactors tried to force partners to check in with the company at regular intervals during an implementation, it was completely shot down by these so-called partners. And I know that PaaS vendors like Amazon AWS are stepping up their own professional services offerings, including performing various forms of health checks on ongoing projects, precisely because too many are running into trouble. And I keep seeing SAP Mentor and critic Jarrett Pazahanick excoriate SuccessFactors SIs (under the glorious hashtag #wildwest) for their obvious lack of knowledge about implementing in the cloud, much less their lack of certified cloud resources.
Most importantly, I know that all too many project managers on both SI and vendor service provider teams are still proceeding as they did in the on-premise world, fighting against transparency and accountability with every weapon they have at their disposal.
I know this last piece of information because for the last two years I’ve been running a startup called ProQ.io. We created ProQ (as in project quality) in the wake of yet another poorly reported implementation failure in which the vendor, SAP in this case, took all the blame for a mess-up that was clearly the primary responsibility of the service provider. That service provider, in this case as in many others, was good old Deloitte, which has a rap sheet a mile long. (For fun, try searching “SAP failure Deloitte” and see how many hits you get. If you’re surprised, it’s only because you haven’t been paying attention.)
ProQ has some unique characteristics, not the least of which is its ability to scare the pants off of SIs and project managers eager to perpetuate the culture of mediocrity that permeates this market. Take Capgemini – a company with some unfortunately spectacular failures under its belt, like the $160+ million disaster recently visited upon the Scottish National Health Service. After some interest regarding ProQ from senior management, one of the execs in charge of delivery for North America put the kibosh on even considering ProQ – “not necessary” was the excuse, despite the absolute necessity of having something to mitigate an unfortunate legacy of project failure. That delivery exec’s “not necessary” was, for the record, said to me well before the Scottish NHS debacle and before a Dutch journalism team actually did a documentary on yet another spectacular Capgemini failure. Rinse and repeat.
This is a typical pattern. ProQ tends to get high marks from senior management across the board for its ability, in a simple and relatively painless way, to report out from the hidden recesses of a project how well, or poorly, the client and their service provider are working together as a team. But ProQ typically gets the thumbs down from project managers or their enablers in the field whenever they are given the option to say “yes or no” to using ProQ.
I see it this way: looking at the raw numbers about project failure, if you’ve done three projects in your career, two of them have “failed to deliver their expected value”, a euphemism we use at ProQ to open up the possibility that abject failure is rare, but mediocrity is the norm. Regardless, after your involvement in those projects that didn’t deliver, were tied up in endless delays, or went to court, were you ever held accountable? Did anyone get fired or demoted? Did the brand of the SI in question suffer? While heads have rolled in some more spectacular cases, most of the time accountability doesn’t really happen. Anyway, it’s the vendor’s name that gets dragged through the mud, not the SI’s. So what’s the purpose of transparency or accountability? In a world without consequences, why not just call it a day and move on to the next project?
I call this the enterprise software culture of mediocrity – only because I’m trying to be positive and not just call it what it most deserves to be called: a culture of failure. One that costs literally billions a year in wasted money, time, and reputations. And one that shows no sign of abating as the market moves from on-premise to cloud implementations.
Which brings us back to the renewal problem. I mean disaster-in-process.
The fact that the ongoing train wreck in the world of enterprise software implementations keeps rolling down the track is why I think the requirement to boost renewals – the only really relevant success metric in the enterprise software cloud market – is going to be really hard to fulfill. You renew – and that includes renewing for the seats you paid for but haven’t yet implemented – because you’re happy. You’re happy if, ideally, your implementation was on time and on budget, though most customers will settle for “achieving expected value,” an acceptable bottom line.
But if you’re unhappy, while switching costs make it unlikely you’ll simply throw out the software altogether, you’re going to think twice about renewing those unused modules, adding those seats that you were planning to add as the rollout expanded to other geographies or lines of business, and that shiny new cloud thing from your vendor that theoretically adds a ton of value to the existing, mired-in-mediocrity, cloud thing you’re none too happy about.
This renewal game is doubly important for vendors like Infor, Microsoft, Oracle, Salesforce.com, SAP, Workday, and every other SaaS vendor: First of all, there’s the threat to revenues from non-renewal. Unlike the on-premise perpetual license world, where vendors got paid for the full value of the contract pretty much up-front, in the cloud world the vendor needs many years of subscription payments to earn the full value of the contract – five on average. So if a customer doesn’t renew, or renews fewer seats than they initially paid for, the vendor’s revenues and profits are hugely impacted.
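The revenue exposure is easy to see with a back-of-the-envelope sketch. The dollar figures below are hypothetical; only the five-year average contract term comes from the paragraph above:

```python
# Back-of-the-envelope sketch of cloud revenue at risk from non-renewal.
# The annual subscription value is hypothetical; the five-year term
# reflects the average contract length cited above.

ANNUAL_SUBSCRIPTION = 1_000_000  # hypothetical yearly value of one contract
CONTRACT_YEARS = 5               # average years needed to earn full contract value

def revenue_earned(years_renewed: int, annual: float = ANNUAL_SUBSCRIPTION) -> float:
    """Total revenue collected if the customer keeps renewing for `years_renewed` years."""
    return annual * min(years_renewed, CONTRACT_YEARS)

full_value = revenue_earned(CONTRACT_YEARS)  # what an up-front perpetual license would have paid
churned_at_two = revenue_earned(2)           # customer walks away after year two

print(f"Full contract value:   ${full_value:,.0f}")       # $5,000,000
print(f"Collected at churn:    ${churned_at_two:,.0f}")   # $2,000,000
print(f"Left on the table:     ${full_value - churned_at_two:,.0f}")  # $3,000,000
```

In the perpetual-license world, that full $5 million would have been booked largely up front; in the subscription world, three-fifths of it simply evaporates when the customer walks after year two.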
The other problem with the renewal game is the problem of where that mediocre implementation is supposed to live. If it’s in the vendor’s cloud, they get to own the inefficiency and, often, the cost of remediation to bring the implementation up to industry standards. And if it’s running in the cloud of a vendor’s PaaS partner, while ownership of the problem may be the responsibility of the cloud provider, if too many of these lousy implementations show up at the PaaS vendor’s doorstep, as they are much more expensive to run and therefore less profitable for the PaaS vendor, the partnership will begin to sour. Crapping up the PaaS partner channel at a time like this isn’t going to make it any easier to get the job done.
Can this mess be fixed? Senior management across the industry – delivery execs, C-suiters, and the like – all understand they’ve got a problem, and many of them are pushing hard to solve it. But not hard enough. Too much control is given to the SI partner as well as the project manager on the job. And these two very powerful stakeholders generally feel compelled to scupper any attempt to have real transparency and accountability for the success of these projects. Big SIs are genuinely scared – as well they should be – that they might finally have to account for their historic inability to do a high quality job and accept responsibility when they don’t. And project managers – the ones who push back at transparency and accountability like a bull with its ass caught in an electric fence – are engaged in an understandable CYA exercise as well.
And then there’s the customer. I keep hoping they will ride to the rescue of their own projects – you’d think it would be obvious, as of course they have the most at stake. While the customer is also complicit in the culture of mediocrity and failure, and, while many are probably outgunned when it comes to going toe-to-toe with a top tier SI and vendor over the management of a complex project, it still boggles the mind that CIOs and other C-suiters aren’t up in arms about this mess.
Of course, without the right oversight, they might just be tempted to believe it when Sally Project Manager and Jim Engagement Manager tell them that the project is going “just fine.” After all, in the classic on-premise world, by the time the project has really gone to hell in the proverbial handbasket, the big bucks have largely been spent. Meanwhile, someone, usually the SI, has made a killing. Take the ongoing disaster at the municipality of Anchorage, Alaska. This city of 300,000 souls has spent $80 million – a $260 “tax” on every citizen of the city – on a failed project led by two wayward SIs. Despite the clear evidence that the SIs, and Anchorage itself, were truly at fault – SAP has been on site for two years trying to clean up the mess – SAP is left holding the bag. One can assume that as the project ballooned from its initial budget of $9 million, the bulk of the other $71 million was in services – or disservices, to be more appropriate. There’s potentially more money to be made by failing, so it would seem. Nice work if you can get it, particularly because I’m still trying to find any mention of the SIs who screwed this one up and left SAP with a monstrous mess to clean up.
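The Anchorage figures can be sanity-checked with a couple of lines of arithmetic. All the inputs come from the paragraph above; the per-capita figure works out to roughly $267, which rounds to the ~$260 “tax” cited:

```python
# Sanity check on the Anchorage project figures cited above.
spent = 80_000_000          # total spent on the project to date
initial_budget = 9_000_000  # original project budget
population = 300_000        # approximate population of Anchorage

per_capita = spent / population          # cost borne per resident
overrun = spent - initial_budget         # spend beyond the initial budget
growth_factor = spent / initial_budget   # how far the project ballooned

print(f"Per-capita cost: ${per_capita:,.0f}")   # ≈ $267 per citizen
print(f"Overrun:         ${overrun:,.0f}")      # $71,000,000 beyond budget
print(f"Growth factor:   {growth_factor:.1f}x") # nearly ninefold overall
```

The overall growth factor is almost nine; the “factor of five” referenced later in this piece describes the earlier, 2015 snapshot of the project, when roughly $50 million had been spent.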
Not to mention the damage to the SAP brand.
The Anchorage project is an on-premise project. In the cloud, this deal could have unfolded very differently. Instead of “rewarding” failure by continuing to pursue the big bang that screwed it all up, the customer would also have had the choice to start trimming things back at renewal time. And, boy, would a little haircut have been in order in Alaska: It’s hard to imagine that, had this been a cloud project instead of an on-premise project, Anchorage would have kept renewing at the full rate between the starting date in 2011 and the time in 2015 when a new mayor (elected, in part, because of the magnitude of the project’s failure) was trying to figure out how the project’s cost had grown fivefold while the system still didn’t work.
More likely, at a minimum, the threat of non-renewal would have, could have, should have forced someone in the “partnership” between vendor and SI to stop the bleeding. Or else. And the old mayor would have been facing a challenger in 2014 who might still have called out what a mess the project was, but it most likely would not have been the $50 million disaster that helped show the old mayor the door.
There are lots of reasons why a company or public sector entity wouldn’t want to renew other than impending failure, and lots of reasons why even a little mediocrity might not get in the way of a healthy renewal. But the culture of mediocrity is a genuine threat to the financial aspirations of vendors trying to sop up as much of the cloud burst now taking place in the market as possible. Winning deals used to be the only metric that counted. Now a vendor has to win a deal and then keep winning over the customer – again and again and again. Fixing the culture of mediocrity would go a long way towards making good on the vendors’ promises to their investors, and, most importantly, the vendors’ promises to their customers as well.
I know that no CIO shows up in the morning looking for an IT project to screw up, nor does anyone who works for her. Nor do any vendor’s senior executives, at least not the ones I know. And yet here we are, in 2018, still dancing the dance of mediocrity and failure. And like a children’s game of Musical Chairs, it all looks pleasant until the music stops, and then someone loses.
I think it’s time for CIOs to step up to the challenge, and stop enabling mediocrity to be the norm and the threat of non-renewal to be their only point of leverage. That means a major culture change, and the implementation of quality tools like ProQ. Their partners, the vendors, could also stand to get serious about the problem and start pursuing a culture change that helps protect both their brand and, in the age of renewals, their bottom line as well.
The SIs? I don’t expect them to come along voluntarily, particularly as the renewal problem doesn’t really concern them. But I have to imagine they wouldn’t dare say no to a CIO who demanded real transparency and accountability.
What excuse could they possibly offer?