Dreaming the Analyst Dream: How to Run a Great Analyst Event

Salesforce.com again kicked off the New Year with an Analyst Summit that was good but not great. Which got me thinking about what does make for a great analyst event, assuming of course that the goal is to impart essential information to the analyst community so that we can in turn advise our clients accurately about various vendors’ strategies. This is a goal that I think most analyst relations teams sincerely hold – whether it is shared by their executive spokespeople is another issue altogether.

The recent Salesforce.com analyst event was definitely intended to meet that lofty goal, and I think in general the execs were on board. That doesn’t mean they necessarily succeeded as well as they could have or I would have liked. While the overall sense was that the Salesforce.com juggernaut continues to move forward apace, and the analyst who recommends Salesforce.com for a broad swath of enterprise needs probably wouldn’t be wrong doing so, there was a lot of important information that wasn’t proffered that would have added clarity to a more nuanced analysis. Which would be my preferred take-away from an analyst event every time.

So in the interest of better analyst events overall, here’s my list of the main lacunae from Salesforce’s summit, presented here as much to educate other analyst teams on how to do a great job as to prod Salesforce.com to do a better job next time.

 

The Information Gap

One clear goal of any analyst event is to provide us with information we either have trouble getting hold of or are potentially misinformed about. In this regard Salesforce missed out on four key issues: field sales strategy, partner strategy, renewals, and integrated vision.

Field sales strategy: Josh’s first rule of enterprise software – all the great ideas in marketing go to the field to die – makes contrasting field execution with marketing messaging essential. Most companies struggle to execute in the field, and, if you’re not struggling to rationalize an increasingly complex portfolio with the often very narrow interests of your customers, I’d like to hear why. If you are struggling we’ll probably have heard about it anyway, so hearing from the head of sales about the challenges in the field and how they’re going to be overcome would be preferable to letting us continue with our preconceptions and misconceptions unchallenged. In this regard Salesforce.com whiffed – not a word about how things are actually going on the front lines. Too bad. There are a lot of questions about the sustainability of their sales model in light of a number of industry issues – renewals, partner problems, lack of synergy between the different product lines – that should be answered forthrightly so that we can give our clients the fullest picture possible.

Partnership strategy: If you’re not tweaking your partner strategy every year, you’re not serious about your partner strategy. So tell us about it, and in particular not how many press releases your company signed (IBM Watson seems to be a popular “partner” for every vendor to mention without any concrete success info or even criteria), but how you’re going to market with the partners, how you’re sharing responsibilities, and, most importantly, what you’re doing to keep your partners doing the right thing and prevent them from making your customers mad. An unhappy customer will of course blame you, not the partner, and the headline will always have the vendor’s name in it and rarely the partner who probably bears at least half the blame. To Salesforce’s credit, Amazon AWS was on stage to discuss partnering with Salesforce. A good start, but only a small corner of the overall partner story.

Per my recent post, what I’m hearing is that the implementation partners across the SaaS market are doing a horrid job implementing in the rapidly growing cloud market. They lack the skills and experience needed to do lots of impeccable implementations, and as such the partners are responsible for an increasing number of escalations and poorly running implementations. Is this also the case with Salesforce.com? Or am I misinformed about this one? I don’t think so, and as an industry-wide problem it might be good to hear that Salesforce.com cares about it and either they’ve been immune or they’re also having this problem and are doing something to remediate the situation.

Renewals: If there’s one thing we should be hearing a ton about, it’s renewals. The above partner issues are part of this: in the cloud-centric world we now live in, an unhappy customer is a customer that doesn’t renew. Importantly, everything about the cloud is “finished but not done,” especially the customer relationship. A SaaS vendor no longer gets to make the sale, recognize its full value, and walk away like in the old on-premise days. In the cloud, if the vendor excels at sales but the customer doesn’t renew or up the number of seats, all that sales execution is for naught. So, as a vendor, if you’re hitting renewals out of the park, tell us. And if you have a way to do a better job at it than you’ve done in the past, regardless of whether you’re fouling out or hitting well, tell us.

In this regard, Salesforce.com didn’t even try, but then again, they don’t tell the Street much about renewals other than to say how great they are or will be, depending on the quarter. It’s too easy to read too much into this vacuum, but one has to assume if there was something to boast about the not-shy folks at Salesforce.com would have regaled us with data. They should have and they didn’t.

Integrated vision: One of the true signs of market maturity is how well a company tells the story of how 1+1+1=5 or 6 or 7. M&A activity has bolstered every vendor’s portfolio, and in pretty much every case the idea was that the acquisition would have a synergistic, accretive effect on sales. Right…. The fact is that one of the biggest problems plaguing the entire enterprise software community today is the problem of telling the story of the integrated portfolio. I’ve written and railed about this issue a lot, and for the most part vendors can tell a good accretive value story to investors – and even sometimes to us analysts – but they can’t actually put the pieces together in a cohesive, synergistic fashion that field sales and partners can understand and run with.

This is hyper-important in a maturing cloud market where the C-suite is waking up to a cloud silo hangover and is looking for integration and synergies to bolster the usability of their increasingly complex cloud portfolios. Real digital transformation is usually a cross-functional, pan-enterprise undertaking, and the ability to sell an integrated vision is becoming increasingly important as companies start planning the future of their strategic software initiatives.

Salesforce.com definitely suffers from this integration dysfunction, as evidenced by their violation of Josh’s second rule of enterprise software: the biggest mistake vendors make is that they try to sell product the way they build it, not the way the customer consumes it. Salesforce.com loves to talk about its clouds – Sales, Commerce, Community, Marketing, and Service – as though this is what customers want to buy. We had most of the heads of the Salesforce.com clouds on stage at the summit, and their strategies and roadmaps looked pretty good. But how they leverage one another and really make that 1+1+1=5 argument was missing from their discussion.

I asked about this, and was told that there has been a pivot to a solutions sale strategy based on industries, and that integration was important, and there was an uptick in deals that include aspects of this pivot… and that was that. No real details to go on. And, importantly, as Salesforce.com has no way to do solutions pricing – this came out in response to a question from another analyst – Salesforce.com is probably, like most of its peers, floundering in rationalizing its acquisitions and other properties into a single, cohesive sales, technical, and marketing strategy. Or I may be reading too much into what I didn’t see at the summit. Either way more attention to the integrated vision is needed.


Who Are You Really?

While we’re on the topic of how to make for a great analyst summit, I want to add four ideas to the pile, only one of which was at play during the Salesforce.com Analyst summit.

What’s in your org chart? I would really really like to know how a company is organized. There’s often an implicit assumption that we analysts actually have an idea of who does what, and for the most part that’s false. Understanding the internal org chart and the responsibilities of senior executives and managers would be really helpful in looking beneath the surface. How serious a new initiative like IoT or ML is can often be inferred by knowing who is running it and where they sit in the organization. Knowing the answer to the question can also give some serious hints as to how well organized a vendor’s efforts are. I went to an SAP Leonardo event late last year where I was told by at least four executives that, when it came to Leonardo, they were ultimately, and individually, responsible for Leonardo’s success. I came away with the distinct impression that the effort was a bit of a mess, which I think is a largely accurate impression.

So maybe just thinking about telling a room full of analysts what your org chart looks like may make for an interesting exercise. Does this org chart make sense? Can you rationalize its existence? Does it help that sometimes-baffled and confused field salesperson or partner do a better job on your behalf?

Salesforce.com and other vendors might want to try this as an internal exercise in messaging, because I think it would help refine their sense of self and mission. It may not matter that they have five “clouds” if somewhere in the org chart this stuff comes together into a cohesive whole. Inquiring minds would like to know.

Where is your heart? This is something that Salesforce.com does particularly well, and they talk a lot about ohana and put amazing people like Tony Prophet, their Chief Equality Officer, in front of analysts. SAP has also been doing this in the area of diversity, autism awareness, and other initiatives. Many other vendors do as well. Having a heart isn’t just a nice-to-have – it personalizes a business relationship into a partnership based on shared values and not just shared business outcomes. If you subscribe to the notion that doing business is first and foremost about people – which I do – then telling analysts about this part of your company is worth the effort.

 

Shaking the Tree

My last two points are more about aligning the concept of an analyst summit with modern pedagogical thinking, which is desperately needed in an industry in which the analyst summit is typically a day-long sit down, shut up, and listen forced march. We know that doesn’t work when it comes to training and education in the “real world”, so why assume it’s a model worth repeating in the analyst world, especially when the ones doing the talking are a vendor’s most valuable talent? Why risk wasting their time?

Give us something to do: If your products are so cool, user-friendly, and leading edge, don’t just tell us, let us at them. I’ve been to a couple of analyst events where we got to build, configure, test, or otherwise play with a new tool, product, or capability, and they’re pretty much the most memorable ones I’ve been to. Most of us analysts are pretty technical – or should be – and I know we would learn more with our hands on the mouse than sitting in a room with the lights low, the HVAC system murmuring, the nth speaker droning on and on…zzzzzz.

Salesforce.com did this with the analysts at one of their Trailhead conferences, and it was, if not a hit, then at least highly memorable. Some warts appeared – it turns out that, at the time, we sometimes left the modern Lightning UX and entered the dark world of “legacy” Salesforce.com – but it helped hit home the many concepts – usability, gamification, democratization of training – that are essential parts of the Trailhead story. I wish Salesforce.com had done something similar this time: maybe play with an Alexa interfaced to Einstein, or try some advanced Service cloud technology in a simulated scenario. Gosh, this could be even fun.

Give us more Q&A: I always wonder why vendors feel like there’s a need for a moderator to kick off the Q&A before letting the analysts have at it. I assume that’s because you’re worried that you’ll slot all this time for Q&A and then have a bunch of dead air (unless you’re actually using the moderator to provide a smokescreen so that the majority of the questions are the ones you want to answer). My take on that is that a silent analyst corps is usually a bored analyst corps that has been stupefied into inaction, which in turn means that the content is too drab, repetitive, or predictable. Hopefully, you’ve solved that by following some of the ideas above. And when all else fails, ask us the questions: we love showing off, and most of us have big mouths and a surplus of opinions.

 

I don’t pretend that most firms will break out of the sit down and shut up model of analyst event, any more than I feel most firms will try to break away from having back-to-back 90-minute keynotes at their user events starting at 8 am with a 120 decibel band “enticing” attendees to their seats. But if you do make some changes to your analyst events, I promise to lead the charge in getting my fellow analysts to actually pay attention to what’s happening on stage, instead of booking flights, checking the news, counting the likes and re-Tweets on our latest snarky Tweet, and otherwise ignoring the content on the stage. That’s the worst of it: too many times the quality of the content is matched by the degree of attention paid to it. For all of our sakes, we should try a little harder on both counts.

 

Mired in Mediocrity: Renewals are the New Imperative, But Can the Enterprise Software Market Meet the Challenge?

Enterprise software is in a crisis, one that is self-imposed and, frankly, has been a long time coming. Failure to fix the problem will be disastrous, and yet, from where I sit, disaster is exactly where the market is heading.

Hyperbole? I don’t think so. Vendors in the cloud need customers to renew, or said vendors will be excoriated by their investors. And the flaying is all set to begin.

The crisis is simple: the historic failure rate for on-premise software implementations – up to 2/3 of projects fail to deliver their expected value – is repeating itself in the cloud market. It’s not too surprising if you think of it. One of the key parties responsible for messing up on-premise implementations for decades – those global SIs who helped propel enterprise software to a multi-billion-dollar volume market in the latter part of the 20th century, and in the process, created a culture of failure and mediocrity that somehow everyone was okay with – are now every vendor’s “strategic partners” in charge of the burgeoning growth in cloud implementations.

And these “partners” are performing in the cloud just like they did in the on-premise world: poorly.

To be fair, it’s not just about the SIs. The major SIs, and many minor ones too, are aided and abetted by two complicit parties: The customers, who must bear some responsibility for, at a minimum, not holding the SIs’ feet to the fire for failing their responsibility as the “adult supervision” in these projects. And the vendors, too many of whom are “okay” with watching their projects turn into slow-motion train wrecks, mostly because they’re also scared to call the SIs out and equally reluctant to push their customers into changing how they staff and manage these projects.

But, considering the global SIs are usually the ones with account control – these companies tend to do much more business with a given customer than the vendors, and they have proven to be collectively opposed to anyone or anything that would truly hold them accountable – I’m going to focus most of this post on them.

Finding the smoking gun in the implementation failure “blame game” is an exercise that requires some real sleuthing and an always-on bullshit meter. Outside the public sector market, where freedom of information requests can lay bare the trail of tears that typify all too many projects, failure is not just an orphan: he’s blind, deaf, and dumb, and locked away where no one can find him. Considering the billions that are wasted every year, the veil of secrecy is understandable – if the world really knew not just how often enterprise software projects go south, but how preventable so many of these failures could be, heads would fly. Or explode. Or both.

What I do know is that, instead of removing critical points of failure, the cloud is upping the ante. Delivery execs keep telling me that the majority – and in at least two cases the totality – of escalations during cloud implementations come from partners. And I know that two years ago, when SuccessFactors tried to force partners to check in with the company at regular intervals during an implementation, they were completely shot down by these so-called partners. And I know that PaaS vendors like Amazon AWS are stepping up their own professional services offerings, including performing various forms of health checks on on-going projects, precisely because too many are running into trouble. And I keep seeing SAP mentor and critic Jarrett Pazahanick excoriate SuccessFactors SIs (under the glorious hashtag #wildwest) for their obvious lack of knowledge about implementing in the cloud, much less their lack of certified cloud resources.

Most importantly, I know that all too many project managers on both SI and vendor service provider teams are still proceeding as they did in the on-premise world, fighting against transparency and accountability with every weapon they have at their disposal.

I know this last piece of information because for the last two years I’ve been running a startup called ProQ.io. We created ProQ (as in project quality) in the wake of yet another poorly reported implementation failure in which the vendor, SAP in this case, took all the blame for a mess-up that was clearly the primary responsibility of the service provider. That service provider, in this case as in many others, was good old Deloitte, which has a rap sheet a mile long. (For fun, try searching “SAP failure Deloitte” and see how many hits you get. If you’re surprised it’s only because you haven’t been paying attention.)

ProQ has some unique characteristics, not the least of which is its ability to scare the pants off of SIs and project managers eager to perpetuate the culture of mediocrity that permeates this market. Take Capgemini – a company with some unfortunately spectacular failures under its belt, like the $160+ million disaster recently visited upon the Scottish National Health System. After some interest regarding ProQ from senior management, one of the execs in charge of delivery for North America put the kibosh on even considering ProQ – “not necessary” was the excuse, despite the absolute necessity of having something to mitigate an unfortunate legacy of project failure. That delivery exec’s “not necessary” was, for the record, said to me well before the Scottish NHS debacle and before a Dutch journalism team actually did a documentary on yet another spectacular Capgemini failure. Rinse and repeat.

This is a typical pattern. ProQ tends to get high marks from senior management across the board for its ability, in a simple and relatively painless way, to report out from the hidden recesses of a project how well, or poorly, the client and their service provider are working together as a team. But ProQ typically gets the thumbs down from project managers or their enablers in the field whenever they are given the option to say “yes or no” to using ProQ.

I see it this way: looking at the raw numbers about project failure, if you’ve done three projects in your career, two of them have “failed to deliver their expected value”, a euphemism we use at ProQ to open up the possibility that abject failure is rare, but mediocrity is the norm. Regardless, after your involvement in those projects that didn’t deliver, were tied up in endless delays, or went to court, were you ever held accountable? Did anyone get fired or demoted? Did the brand of the SI in question suffer? While heads have rolled in some more spectacular cases, most of the time accountability doesn’t really happen. Anyway, it’s the vendor’s name that gets dragged through the mud, not the SI’s. So what’s the purpose of transparency or accountability? In a world without consequences, why not just call it a day and move on to the next project?

I call this the enterprise software culture of mediocrity – only because I’m trying to be positive and not just call it what it most deserves to be called: a culture of failure. One that costs literally billions a year in wasted money, time, and reputations. And one that shows no sign of abating as the market moves from on-premise to cloud implementations.

Which brings us back to the renewal problem. I mean disaster-in-process.

The fact that the on-going train wreck in the world of enterprise software implementations keeps rolling down the track is why I think the requirement to boost renewals – the only really relevant success metric in the enterprise software cloud market – is going to be really hard to fulfill. You renew – and that includes renewing for the seats you paid for but haven’t yet implemented – because you’re happy. You’re happy if, ideally, your implementation was on time and on budget, though most customers will settle for “achieving expected value,” an acceptable bottom line.

But if you’re unhappy, while switching costs make it unlikely you’ll simply throw out the software altogether, you’re going to think twice about renewing those unused modules, adding those seats that you were planning to add as the rollout expanded to other geographies or lines of business, and buying that shiny new cloud thing from your vendor that theoretically adds a ton of value to the existing, mired-in-mediocrity, cloud thing you’re none too happy about.

This renewal game is doubly important for vendors like Infor, Microsoft, Oracle, Salesforce.com, SAP, Workday, and every other SaaS vendor: First of all, there’s the threat to revenues from non-renewal. Unlike the on-premise perpetual license world, where vendors got paid for the full value of the contract pretty much up-front, in the cloud world the vendor needs many years of subscription payments to earn the full value of the contract – five on average. So if a customer doesn’t renew, or renews fewer seats than they initially paid for, the vendor’s revenues and profits are hugely impacted.
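The arithmetic behind that threat is worth spelling out. Here’s a minimal sketch (all dollar figures and the five-year term are illustrative assumptions, not any vendor’s actual numbers) comparing what a vendor collects under an up-front perpetual license versus a subscription that churns or gets trimmed at renewal time:

```python
# Illustrative sketch: how early non-renewal or seat cuts erode subscription
# revenue versus the old up-front perpetual license. All figures hypothetical.

PERPETUAL_LICENSE = 500_000    # old model: full contract value recognized up-front
ANNUAL_SUBSCRIPTION = 100_000  # cloud model: same nominal value, paid yearly
FULL_TERM_YEARS = 5            # years of renewals needed to match the contract value

def subscription_revenue(years_renewed: int, seat_fraction: float = 1.0) -> float:
    """Total subscription revenue if the customer renews for `years_renewed`
    years (capped at the full term), possibly at a reduced seat count."""
    return ANNUAL_SUBSCRIPTION * min(years_renewed, FULL_TERM_YEARS) * seat_fraction

full = subscription_revenue(5)          # customer stays the full term
churned = subscription_revenue(2)       # customer walks after year two
trimmed = subscription_revenue(5, 0.6)  # customer renews, but cuts 40% of seats

print(f"Full term:        {full:>10,.0f}")
print(f"Churned year 2:   {churned:>10,.0f} ({churned / PERPETUAL_LICENSE:.0%} of old-model value)")
print(f"Trimmed renewals: {trimmed:>10,.0f}")
```

Under these assumed numbers, a customer who walks after two years leaves the vendor with 40% of what the equivalent perpetual deal would have booked on day one – which is why a mediocre implementation that sours the renewal conversation hits the top line so hard.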

The other problem with the renewal game is the problem of where that mediocre implementation is supposed to live. If it’s in the vendor’s cloud, they get to own the inefficiency and, often, the cost of remediation to bring the implementation up to industry standards. And if it’s running in the cloud of a vendor’s PaaS partner, while ownership of the problem may be the responsibility of the cloud provider, if too many of these lousy implementations show up at the PaaS vendor’s doorstep, as they are much more expensive to run and therefore less profitable for the PaaS vendor, the partnership will begin to sour. Crapping up the PaaS partner channel at a time like this isn’t going to make it any easier to get the job done.

Can this mess be fixed? Senior management across the industry – delivery execs, C-suiters, and the like – all understand they’ve got a problem, and many of them are pushing hard to solve it. But not hard enough. Too much control is given to the SI partner as well as the project manager on the job. And these two very powerful stakeholders generally feel compelled to scupper any attempt to have real transparency and accountability for the success of these projects. Big SIs are genuinely scared – as well they should be – that they might finally have to account for their historic inability to do a high quality job and accept responsibility when they don’t. And project managers – the ones who push back at transparency and accountability like a bull with its ass caught in an electric fence – are in an understandable CYA exercise as well.

And then there’s the customer. I keep hoping they will ride to the rescue of their own projects – you’d think it would be obvious, as of course they have the most at stake. While the customer is also complicit in the culture of mediocrity and failure, and, while many are probably outgunned when it comes to going toe-to-toe with a top tier SI and vendor over the management of a complex project, it still boggles the mind that CIOs and other C-suiters aren’t up in arms about this mess.

Of course, without the right oversight, they might just be tempted to believe it when Sally Project Manager and Jim Engagement Manager tell them that the project is going “just fine.” After all, in the classic on-premise world, by the time the project has really gone to hell in the proverbial handbasket the big bucks have largely been spent. Meanwhile, someone, usually the SI, has made a killing. Take the ongoing disaster at the municipality of Anchorage, Alaska. This city of 300,000 souls has spent $80 million – a $260 “tax” on every citizen of the city – on a failed project led by two wayward SIs. Despite the clear evidence that the SIs, and Anchorage, were truly at fault – SAP has been on site for two years trying to clean up the mess – SAP is left holding the bag. One can assume that as the project ballooned from its initial budget of $9 million, the bulk of the other $71 million was in services – or disservices, to be more appropriate. There’s potentially more money to be made by failing, so it would seem. Nice work if you can get it, particularly because I’m still trying to find any mention of the SIs who screwed this one up and left SAP with a monstrous mess to clean up.

Not to mention the damage to the SAP brand.

The Anchorage project is an on-premise project. In the cloud, this deal could have unfolded very differently. Instead of “rewarding” failure by continuing to pursue the big bang that screwed it all up, the customer would also have the choice to start trimming things back at renewal time. And, boy would a little haircut have been in order in Alaska: It’s hard to imagine that, had this been a cloud project instead of an on-premise project, Anchorage would have kept renewing at the full rate between the starting date in 2011 and the time in 2015 when a new mayor (elected, in part, because of the magnitude of the project’s failure) was trying to figure out how the project had grown by a factor of only five and still didn’t work.

More likely, at a minimum, the threat of non-renewal would have, could have, should have forced someone in the “partnership” between vendor and SI to stop the bleeding. Or else. And the old mayor would have been facing a challenger in 2014 who might still have called out what a mess the project was, but it most likely would not have been the $50 million disaster that helped show the old mayor the door.

There are lots of reasons why a company or public sector entity wouldn’t want to renew other than impending failure, and lots of reasons why even a little mediocrity might not get in the way of a healthy renewal. But the culture of mediocrity is a genuine threat to the financial aspirations of vendors trying to sop up as much of the cloud burst now taking place in the market as possible. Winning deals used to be the only metric that counted. Now a vendor has to win a deal and then keep winning over the customer – again and again and again. Fixing the culture of mediocrity would go a long way towards making good on the vendors’ promises to their investors, and, most importantly, the vendors’ promises to their customers as well.

I know that no CIO shows up in the morning looking for an IT project to screw up, nor does anyone who works for her. Nor do any vendor’s senior executives, at least not the ones I know. And yet here we are, in 2018, still dancing the dance of mediocrity and failure. And like a children’s game of Musical Chairs, it all looks pleasant until the music stops, and then someone loses.

I think it’s time for CIOs to step up to the challenge, and stop enabling mediocrity to be the norm and the threat of non-renewal to be their only point of leverage. That means a major culture change, and the implementation of quality tools like ProQ. Their partners, the vendors, could also stand to get serious about the problem and start pursuing a culture change that helps protect both their brand and, in the age of renewals, their bottom line as well.

The SIs? I don’t expect them to come voluntarily, in particular as the renewal problem doesn’t really concern them. But I have to imagine they wouldn’t dare say no to a CIO who demanded real transparency and accountability.

What excuse could they possibly offer?


Being Really Stupid about AI: What Is Intelligence Anyway?

I’ve been enjoying the debate about when our robot masters will take over the world for quite some time. And despite the fact that really smart people like Ray Kurzweil are convinced the singularity will take place in our lifetimes, I have to disagree. Vehemently. For the simple reason that we still don’t know what intelligence actually is, neither its baseline nor its limits, regardless of whether we’re talking about humans, animals, or machines.

There’s an equally simple reason why we don’t know what human intelligence is – we’ve been looking for it in all the wrong places for centuries, using the wrong tools, the wrong measures, and the wrong assumptions. And this false quest continues, as far as I can tell, in the current frenzy about AI. We’re still acting pretty stupid when it comes to understanding the concept of intelligence. Consistently stupid, if that’s any comfort at all.

One of the best ways to understand what little we know about human intelligence is to look at how humans understand animal intelligence. The quick answer to whether we grok animal intelligence is found in the title of a very engaging book by an eminent primatologist and cognitive scientist, Frans de Waal: Are We Smart Enough to Know How Smart Animals Are?

In posing this as a question, you can tell that de Waal plans to answer it in the negative. In the process of doing so de Waal exposes a huge gap in how humans – in the form of cognitive scientists, behavioral scientists, and the like – define human intelligence. As you may imagine, many of the idiotic studies of animal intelligence cited by de Waal started with embarrassingly false assumptions about what makes humans intelligent, and then went on to “prove” that animals, based on this false metric, were infinitely inferior to humans.

A simple example was the oft-repeated claim that animals are incapable of facial recognition for the simple reason that even higher-order primates can’t tell one human face from another. Which is, of course, the wrong question to ask: why would any animal know how to recognize human faces? The reverse is certainly true. Humans suck at recognizing non-human faces: Unless you’re an experienced animal behaviorist, you’re also going to have trouble telling different members of a non-human species from one another (other than domestic animals, which live and work with us).

But if you ask – experimentally – an ape, a monkey, a crow, even a wasp, to identify the members of its pack or herd or nest or “murder of crows,” these and other highly social animals are not just extremely adept at it, they are able to use facial recognition exactly as we do, as a means to manage an individual’s position in the social hierarchy (who is she relative to who am I and what should I do or not do about that in light of the current situation) and otherwise interact with the group.

How about that “unique” exemplar of human intelligence, tool use? In the late 1950s it was assumed that only humans made tools; now we know that corvids, among many other species, are not just tool makers, they’re meta-tool makers and users. In other words, they can make a tool that manipulates another tool in order to accomplish a particular task, usually involving acquiring food. It turns out crows can do this better than monkeys, which are capable of tool use but are not so good at understanding that meta-tool use involves a sequence of actions that starts with finding the meta-tool and then using it to manipulate the original tool in order to get the object. Corvids, by this important measure of intelligence, are smarter than monkeys.

And both species, by the meta-tool use standard, are much smarter than human children, who can start using tools at 12-24 months but need several more years to master meta-tool use.

I can go on and on. Peter Wohlleben writes about intelligent trees and forests in his book, The Hidden Life of Trees. It turns out that trees of different species cooperate to share resources – water, nutrients, and even access to sunlight – amongst one another in order to maintain the collective health of the forest. This notion of altruism – yet another form of intelligence that was once the exclusive domain of humans – is mediated by a third party, a ubiquitous fungus called Rhizopogon. The filaments produced by this truffle-like fungus permeate the root structures of the forest trees and become the conduits of the nutrients and water that are shared by the trees. (There’s an article from Scientific American that explains this phenomenon here.)

Wohlleben – and the author of the article above – describe what is clearly altruistic, collaborative behavior on the part of different species of trees in order to reach common goals, like closing a hole in the forest canopy that can dry out the forest floor and lead to the invasion of parasites and plants that could endanger the existing trees. Their altruism can be measured by the differential flow of nutrients along the filament paths laid down by the fungus, a flow that scientists have been able to quantify and correlate with activity that is clearly in the category “all for one and one for all.”

Altruism is a clear sign of intelligence – it takes varying degrees of planning, foresight, self-awareness, and inference to do it right – and either the forest is able to mechanistically respond to threats or it does so through some form of intelligence. Either way, it’s another form of intelligence once deemed to be the sole purview of humans that, in this case, exists in the interplay between a fungus and a group of trees.

Even more interesting is the visualization of this network: lay this forest network (on the right) side by side with a visualization of a human social network, such as the one I selected that shows the interlocking board memberships of major US corporations (on the left), and the similarities are obvious.

(source: Corporate network: http://plutocratsandplutocracy.blogspot.com/2016/05/the-power-elite.html. Wood wide web network: https://blogs.scientificamerican.com/artful-amoeba/dying-trees-can-send-food-to-neighbors-of-different-species/)

Assuming there is intelligence driving our network of the corporate world, can we surmise that there is a form of intelligence at work in the forest? And if we do, where does the intelligence come from in the forest network, which is referred to, tongue in cheek, by researchers as the wood wide web? Is it in the trees? The fungus? Both? Neither?

Many scientists, philosophers, and ethologists are tempted to assume that this wood wide web must be mediated by a mechanism of chemical imbalances, osmosis, and other non-intelligent forces. This mechanistic view of the animal world, promulgated by Descartes in the 17th century, certainly makes it easy to ignore or dismiss Wohlleben’s premise. But, like the case of animal face recognition and meta-tool use, Occam’s razor isn’t the best tool to use when looking at the intersection of behavior and intelligence: the simplest solution turns out to be too simplistic.

Considering how literally every passing year brings us more and greater revelations about how non-human intelligence is encroaching on our sacred perception of an anthropocentric world of human masters and animal automatons, we should try to apply a little skepticism about what the advent of so-called artificial intelligence is really about. Call it automation, call it the new process efficiency, anthropomorphize it with names like Einstein, Leonardo, Claire, or Screaming Jay Hawkins, but don’t call it intelligence. Not until we understand what human intelligence really is.

This is why I take great comfort in the response by the most intelligent Grady Booch when I asked him about the singularity on stage last year. “We will never see it coming,” Booch opined, “…because we ourselves will have co-evolved.” Whatever our notion of intelligence is today, it will be different, and continue to be different, as we evolve. I kind of like that idea; it gives me hope for the human race.

So when I see headlines like this one, Computers are Getting Better at Reading than Humans, I wince. The article imputes intelligence where there is none, and impugns human intelligence in the process. One example of this machine that is supposed to be smarter than humans: it successfully scanned a Wikipedia entry on Dr. Who and then correctly identified a discrete piece of information from the article – the name of Dr. Who’s ship. In no way does this justify the breathlessly hyperbolic tone of the headline.

The reason such headlines exist is precisely because we hardly know what human intelligence really is, and so we mistake gimmicky behavior on the part of a machine as a sign of superior intelligence. Having a machine best a human at chess or the game of Go sounds impressive, and it is on a certain level. But, as de Waal reports, a highly trained chimp was able to beat a human memory champion at a memorization “game” with relative ease. Should we assume that the chimp is more intelligent than the human? It depends on the task at hand, doesn’t it?

And is winning at Go more impressive than the dog that, running at full tilt across broken ground, calculates the precise speed and trajectory of a ball, leaps at the precise moment to catch it in its mouth, and then nails a perfect four-point landing? It takes some seriously furry calculus to pull that trick off.

If you’re not impressed, watch the scene in Hidden Figures when the “human computer” Katherine Goble (a black woman played by actress Taraji P. Henson – and please remember what much of society thought then about the intelligence of non-white races) goes to the blackboard and computes the exact landing point of the Mercury capsule in front of an astonished room full of space program officials. The scene, though apocryphal, signifies the moment when human intelligence evolved to where it could model something that every spit-covered, tennis ball-loving dog does as a matter of course. Ms. Goble’s on-the-fly calculation is truly a tour de force of human intelligence, and obviously Fido can’t calculate on a chalkboard where and when and how high she needs to jump to snag the ball. But then again, she doesn’t need to – she just does it “naturally”.

So what is intelligence? Beats me. But what I do know is that it’s easy to think we know something when we don’t. Danny Kahneman, a cognitive psychologist, was awarded a Nobel prize in Economics for proving that we humans are hardly as intelligent as we think we are, and, looking around me, I’m definitely with Kahneman on that one.

What I do know is that we really need to dial back on what we think so-called AI can do. I like the conceptualization that my friend Trevor Miles, the resident deep thinker at supply chain software vendor Kinaxis, uses for AI. The goal of AI, Trevor likes to say, is to “take the robot out of the human,” as opposed to replacing humans with robots. Works for me. So let’s cut out the nonsense about superior intelligence – that’s really not what’s at play in the world of AI. Nor is it necessary to even think in those terms. If all we do is remove the robot from the human and reduce the drudgery and risk of error that comes with trying to handle too much information too quickly, it will be a huge win.

I’ll finish my rant about what is intelligence with a personal note. My youngest sister, Susannah, is developmentally disabled, what we used to call in pre-PC times “retarded.” It was assumed 50-plus years ago when she was born that people like her, just like people on the autism spectrum, the visually impaired, Helen Keller, people of color, and anyone else who is seen as the “other”, were inherently stupid and sub-human. We now know better. Much better. Susannah is able to lead a relatively rich life, certainly not the institutionalized one that we were told was our only recourse when she was born, and in the process has grown to demonstrate traits like sympathy, a sense of humor, a sense of wonder, and other forms of intelligence that people like her weren’t supposed to have.

All this is to say that when we get all a-flutter about the march of AI towards the inevitability of a singularity that will leave us as a bunch of neutered idiots – joining, no doubt, those benighted fools, idiots, cretins and retards of a previous era – facing a purposeless future, or are tempted to bow and scrape at the prospect of a machine that “reads better than a human,” we need to stop and ask ourselves the following question:

Are we smart enough to know what artificial intelligence is?

Nope.

Salesforce.com and Innovation – Are Trailhead and Einstein Enough?

You know you’re on to something as a vendor when people show up to a keynote and give your speaker a raucous standing ovation when she walks on to the stage. It’s even more significant when you’re ramping up a populist developer program and your audience of developers acts like they’d happily march off a cliff, if only their leader would tell them to do so.

This was the reaction when Sarah Franklin, who heads developer relations at Salesforce.com and is the GM of the company’s pioneering Trailhead developer engagement program, hit the stage at Dreamforce for the first-ever Trailhead keynote. Her applause was well-earned – Trailhead has emerged as the most energetic and engaged community of developers in the enterprise software space, especially compared with the communities of Salesforce.com’s enterprise platform competitors, the likes of Infor, Microsoft, Oracle, and SAP, among others.

But being first in energy, enthusiasm, and even hyperbole – Salesforce.com loves to brag about Trailhead’s near-trillion-dollar potential economic impact and its ability, potentially, to create 3.3 million jobs by 2020 – doesn’t mean Salesforce.com has its innovation strategy problems licked. On the contrary. What all this enthusiasm and interest in Trailhead has exposed is a fundamental weakness in the overall Salesforce.com platform/ecosystem strategy that needs to be fixed. Or Trailhead and its Trailblazers will be relegated to enabling a mere slice of the vast innovation potential that exists in the enterprise.

The weakness? A too-narrow focus on CRM for Trailhead and the company’s foundational platform and development technologies that limits who will be using Salesforce.com’s innovation technology and what that technology can be applied to. It’s a weakness that is, so far, too deeply entrenched in the DNA of the company – starting with its stock ticker symbol, CRM – to be easily remedied. But, without a remedy, Salesforce and its Einstein and Trailhead initiatives will fail to reach their potential, and that won’t be good for the company and its partners. Customers, on the other hand, probably won’t care – which is all the more reason the problem needs to be solved soon.

The problem of who develops innovation is part of the industry-wide shift to using technologies such as AI, ML, and IoT – though these three technologies are really proxies for all the coming net-new innovation in the enterprise. The “who develops” problem is about the fact that innovation is being led, or at least partially led, by experts in lines of business who are being tapped to define and help develop the next transformational business process or app. These experts are gravitating to a combination of design thinking workshops and citizen-developer tools as a way of embodying innovation in new apps: Pretty much everything I see that is transformational is coming from this grass-roots effort inside the enterprise. IT is no longer the default starting place for innovation, though IT is generally taking a seat at the table when it’s time to do some of the plumbing and inside wiring that is needed to move from concept to working app.

From a vendor standpoint, the formula for success in this new paradigm of innovation starts with having an IaaS/PaaS platform and some great developer tools, and Salesforce.com has this down pat. You also need a decent and always growing innovation platform, and Einstein is arguably as good as or better than most. And you need access to existing data and business processes that, at a minimum, can be used as a starting point for building new killer apps and processes. Derivative and additive are perhaps the best ways to describe many of the newly emergent apps coming out of digital transformation efforts: they’re built on top of a combination of existing and newly available data and processes. Very little transformation starts from a truly blank slate.

And this is where being the best CRM platform in the industry – a point co-founder Parker Harris insists is one of the polestars of the company – and having the best engagement model for CRM developers start to fall short.

The seeds of digital transformation have to come from across the enterprise, and the resulting apps will span the enterprise as well. One of the most basic starting points for digital transformation is the breaking down of functional silos and the creation of cross-enterprise capabilities, and that means no single line of business will be in the lead all the time. For CRM-related transformations, those Salesforce admins who make up the bulk of the Trailhead membership could be the ones taking point, but they won’t necessarily be able to digitally transform the warehouse, the shop floor, finance, logistics, and other domains. That transformation will need input and support from experts in those LOBs, not Salesforce.com’s CRM admins.

To be fair, not all Trailhead developers are Salesforce admins, and not all developers look to Trailhead for guidance and inspiration. Heroku, a Salesforce platform, is used by a lot of startups and developer groups for building innovative apps that have nothing to do with CRM, and the company is increasingly opening up Trailhead and other developer resources to embrace the professional developer class.

Nonetheless, professional developer tools aren’t enough of a solution to the limits of a CRM-only focus. You need citizen developer tools in the hands of LOB experts – or at least wielded by teams that include these LOB experts – to truly realize the potential of home-grown digital innovation. And for those apps that potentially span multiple LOBs, even if the transformation is skewed heavily towards CRM, a CRM platform isn’t necessarily going to be the go-to platform that bridges those silos: the experts in those silos will have their own most-favored platform – which won’t be a CRM system – and they’ll want to use their own platform tools to build their innovative apps.

Remember that account control is a myth across the enterprise software market. Virtually all medium to large-size enterprises have multiple enterprise software systems – the overlap between Salesforce.com and Oracle or SAP is huge. And they increasingly have multiple platforms as well, including Azure, AWS, Google Cloud Platform, and others that are also competing for the hearts and minds of developers. The result is that winning – as in getting a customer to build digital transformation apps on a given platform – isn’t going to be about which vendor has the best technology stack. It’s about how many developers you can bring to a given platform vendor’s digital transformation party.

To be fair, the problem of enticing and connecting with future digital transformation developers is shared equally by Salesforce.com’s platform competitors. SAP has a huge presence in many LOBs in many companies, but there are plenty of LOBs that don’t use and may not even like SAP. Same with Microsoft: tons of presence all over the enterprise, but not every LOB is ready to use Azure as its dev platform. As proof of how no one has a lock on innovation, I have seen the same elevator company logo on presentations about distinct digital transformation apps from both SAP and Microsoft. Different parts of the company follow different polestars and therefore use different technologies to advance them towards their innovation goals. That’s going to be what the evolution of digital transformation will look like for the next few years, if not forever.

But at least SAP and Microsoft (and Infor) are present in multiple LOBs. Salesforce.com is another case altogether: I think it’s safe to say you won’t find a Salesforce user, much less a Salesforce admin-cum-developer, hanging out in non-sales and service LOBs ready to tackle digital innovation projects. They may be at the table – and probably should be – when the transformation at hand touches CRM, services, marketing, or any of the target LOBs that Parker’s “best CRM platform” encompasses. But will they be able to direct the development effort towards Einstein and Lightning, instead of SAP’s Leonardo and Fiori, or Microsoft’s PowerApps and Azure, or IBM’s Watson or Infor’s Coleman for an app that isn’t primarily focused on enhancing a core CRM function? It’s not likely if they’re using a CRM platform and CRM-focused tools.

This scenario is why I say that the customer won’t care one way or the other. They’ll get their apps built no matter what, using what someone in the target LOB thinks is the best platform for the job. Only their vendor will care… And the IT department, which has to make sure the new app makes technical sense and may try to influence this choice, though its influence will be limited.

What’s the solution to Salesforce’s innovation problem? That’s going to be tricky – developing a presence in the LOBs not touched by Salesforce.com won’t be easy. There are some partners that can help – FinancialForce is a good example of an LOB-focused product set that can help Trailhead and Einstein reach other LOBs, like finance and professional services. But that probably won’t be enough to make a huge difference without some air cover from the mother ship.

Salesforce.com could definitely take a page out of SAP’s Leonardo book and showcase as wide a range of examples of innovation as possible with Einstein and other tools, emphasizing, wherever possible, the applicability of the Salesforce approach outside its core LOBs. I was recently at SAP’s latest Leonardo Live event, and it was the first time Leonardo was presented by showcasing a number of truly amazing new apps, instead of just going on about the theory of what Leonardo could do. Assuming Salesforce can prove its mettle in non-CRM lines of business, this would be a credible first step towards a broader developer base.

The company could also buy its way into other LOBs, though the existing candidates that could make a difference are becoming increasingly hard to find. Or it could make a more concerted push to build use cases for its platform outside CRM, and try to show by example why a new warehouse management system should be built on a Salesforce platform, instead of something else. Maybe there’s even a way to put more CRM into the warehouse that would make Salesforce.com’s tools the logical choice. Maybe.

Of course Salesforce could just hunker down and focus on being the very best CRM platform provider – and there’s a wealth of opportunity out there in CRM-land. But I’m doubtful the company’s leadership would be content stopping there, and it’s not really a good idea to cast so narrow a shadow. The universe of potential transformative apps and processes that touch CRM is huge, but it would only be big enough if Salesforce.com could expect to capture a decent percentage of that potential. My concern is that those apps won’t necessarily be seen as CRM apps, any more than a next-gen asset maintenance app will necessarily be seen as an ERP app. And Salesforce.com’s big developer bet will hit a wall it can’t get through.

It would be a shame to see the best developer engagement program in the industry hit this kind of wall, but without some effort to be more than just the best CRM platform in the industry, I believe it will. Sarah Franklin will probably still get the applause, the job creation stats for Trailhead will probably still be impressive, and the examples of how CRM can be extended using AI, ML, and IoT will grow at a decent pace. But if Salesforce.com’s platform play is going to be the vehicle for expanding outside of the company’s CRM base, and growing net new revenue, and otherwise challenging its big competitors in this new digital transformation battlefield, then something will have to change. CRM as an acronym doesn’t really tell the story of where digital transformation has to go. And having the best and most engaging CRM developer story simply won’t be enough. Salesforce.com, Trailhead, Einstein, and the rest will have to figure out a way to do more. Or settle for less.

The Enterprise Software Synergy Effect, Part II: How Acquisitions Fail To Realize Their Potential

In last week’s post I began a tirade on why the book I want to publish when it’s time to retire, Josh’s Extremely Thin Book of Successful Acquisitions, would really be a very thin book. The subplot? It’s about vendors not being able to leverage the synergies in their portfolios to make acquisitions synergistic. The problem – embodied in my favorite aphorism “all the great ideas in marketing go to the field to die” – is that upselling and cross-selling newly acquired assets is hard, and it’s not all the vendor’s fault.


A big part of the problem with upselling and cross-selling acquired software alongside existing assets is getting the right decision-makers in the room when the pitch is made: many prospective customers are too siloed, or lack the vision necessary, to buy into these visions of digital transformation, especially the business network vision. It’s hard to find that VP of business networks. And the CFO and CHRO aren’t always aligned in the right way to buy into, literally, a strategic, synergistic vision of 1+1=3 in their own organizations.

Customers are also often internally siloed by vendor: the SAP crowd inside the typical heterogeneous customer (which is the majority of upper midmarket and large enterprises: single vendor account control is a myth) doesn’t necessarily hang with the Oracle crowd, the ones running older Baan manufacturing systems don’t talk to the people using Microsoft Dynamics, Salesforce admins don’t necessarily talk to the procurement people, and on and on.

So alignment and pitching to the right people is as essential as it is elusive. But this isn’t a chicken and egg paradox, each side waiting for its yin to yang. The vendor is both chicken and egg, and before those things can change, the vendor’s vision needs to be marketed, shouted from the rooftops, and otherwise evangelized. Companies in the global economy need to start thinking about business networks, and companies looking to transform need to think strategically about new ways to manage their workforce. These seeds desperately need planting, and if vendors don’t plant the seeds, nothing will grow.

My beef is that pretty much all serial M&A vendors fall short in planting these seeds, despite the fertile soil at their disposal. That fertile soil starts with the keynote stage, a place where synergistic visions should be hammered on at every opportunity.

By the way, just because I singled out SAP in Part I doesn’t mean this is just an SAP problem: Microsoft has been guilty of the synergy problem in the past. To its credit, the recent Ignite/Envision conference featured some decent synergistic messaging in Satya Nadella’s keynote about what LinkedIn can do for CRM and HR functions and how the Dynamics “suite” (which, other than Dynamics’s CRM functionality, came to Microsoft via acquisitions) can power process-driven, modular composite apps.

Oracle suffers mightily from the synergy problem – the lack of integration of NetSuite is the latest case in point – and IBM and HP have basically made the synergy problem a core competence. Only Infor seems to be relatively immune: its recent acquisition of GT Nexus, and the synergy of having a global logistics network tied to Infor’s B2B strategy, was front and center at the company’s user event in July. Of course, CEO Charles Phillips’ tenure at Oracle – and his involvement in the Fusion fiasco – has had a lot to do with his determination to do integrated, synergistic software the right way. Nevertheless, Infor has to turn a well-articulated strategy into real revenue, despite the fact that elements of the company’s cloud strategy are still works in progress. The execution side of Infor’s synergistic strategy is something I’ll be watching closely in the coming year.

This is the problem with the synergy effect: it looks good on paper, and in presentations to analysts. But if the vendor’s reflexes aren’t tuned to delivering a message about how 1+1=3 or more, then the acquisition may plug a revenue hole, but it won’t realize anywhere near its potential.

This means it’s time to evoke another one of my favorite aphorisms: the biggest mistake enterprise software vendors make is that they try to sell software the way they build it, not the way the customer consumes it. The synergy problem is a version of this: in the effort to maintain the acquired brand and its customers and sales organization, acquiring companies tend to go overboard in maintaining silos instead of stressing synergies. The problem becomes baked into a company’s strategic messaging, such that during big annual customer events like SAP SAPPHIRE, or Oracle OpenWorld, the frenzy generated by each internal product or strategy stakeholder fighting tooth and nail to get their two slides into the CEO’s presentation practically guarantees that the keynotes – and therefore the primary messages – become siloed.

When the product strategy being pitched is the result of a team of rivals duking it out for keynote real estate, any hope of telling a real story about how to consume all the different parts of the product portfolio in a comprehensive, synergistic way gets lost in the loud music and the overly enthusiastic “I’m so excited to be here” exclamations from the stage.

Without enough seeds, and enough air cover at the top about synergy, field execution becomes not just a bottleneck, but another example of the all the great ideas in marketing go to the field to die problem. Even if incentive programs are set up right, how can sales execs sell to a broader audience if there isn’t one, and their bosses aren’t trying hard enough to cultivate one? And even when the air cover is there, many of these individuals know how to sell either to IT or to the line of business, but not both. The result is a million war stories about the one that got away – lost because a competitor could tell a more compelling, if limited, story, or because the losing party’s more compelling story didn’t have the audience of influencers it needed to make the sale.

It’s tempting to say that the problem may be too entrenched to resolve, which may be why it remains unresolved within so many companies. But I’m an optimist: I think these messages can and should be told at every opportunity. How difficult can it be? Apparently very. Sometimes I wonder if the other career-ending book I should write is Josh’s Extremely Thin Book of Successful Keynotes, but I’m willing to wait a few years on that one. It can’t be that hard, can it?


The Enterprise Software Synergy Effect: How Acquisitions Fail To Realize Their Potential (Part I)

The problem with acquisitions is that they’re always meant to add revenue and drive synergies with existing lines of business, but all too often the acquisition falls short of its original goals. Some turn out to be bad deals, or even worse: ask HP about Autonomy (or Compaq, or Palm, or EDS, for that matter). Or Microsoft about Nokia. Or SAP about Sybase. Or Oracle, which recently, and finally, killed off Sun, closing a chapter on one of its biggest, and dumbest, acquisitions.

I always joke that when I’m ready to retire I’ll first write a book entitled Josh’s Extremely Thin Book of Successful Acquisitions. It’s going to be really thin and it’s going to piss off a lot of people. In truth, I’ve been writing that book my whole career, as I have watched too many deals enrich investors and shareholders (usually a select few) and no one else – not the employees, not the partners, and certainly not the customers. And that’s before any attention is paid to whether the acquisition is accretive, as in being worth more than its cost, as well as synergistic, as in acting as a revenue multiplier when sold in conjunction with other software and services.

And while there are definitely acquisitions that are accretive, or at least not too decremental, by all objective measures most acquisitions, even the ones that seem to go well, are plagued by the synergy problem. Sure, apply a little accounting legerdemain and the acquisition looks like it’s driving a healthy revenue stream to the bottom line. But the reality of what’s happening in the field, where all the great ideas in marketing go to die (the title of another career-ending book I want to write), isn’t all that healthy. The fact is that most vendors struggle to achieve the synergistic potential of their strategic acquisitions, especially the potential that was in the marketecture slides when the deal was announced.

Let me pick on SAP for a minute, though they are hardly alone in this lack of synergistic success. SAP has made some pretty big cloud bets recently, and two of them, unfortunately for SAP, its customers, and partners, are exemplars of the synergy problem. True, the two acquisitions have been successful by most measures, but their synergistic value – that’s a different story.

SAP closed the Ariba and SuccessFactors acquisitions in 2012, and the synergistic potential of both was huge. Ariba was one of the biggest of the indirect procurement vendors, but, more importantly, it was the heir to a dotcom-era vision of the global business network (remember the term net markets?) that would have leveraged SAP’s enormous customer base and built an interconnected commerce market spanning the globe.

The Ariba acquisition, on paper, meant that SAP could take a procurement network with thousands of suppliers, many of them already SAP customers or active suppliers to SAP customers, and bundle the whole lot into an online, interactive, global commerce network where everyone could source, buy, supply, ship, track and trace, and otherwise move B2B commerce from its relatively dumb 20th century EDI origins into the 21st century’s vision of a connected global economy.

SuccessFactors was an even more straightforward opportunity: SuccessFactors HR + SAP Finance = competitive wins and major synergistic value. The acquisition was and remains a big deal for SAP: SuccessFactors was a pioneering HRMS SaaS company that not only injected some vital cloud DNA into a moribund on-premise culture, but also provided a perfect complement to the finance functionality built into SAP’s core ERP product line. And as that ERP line moved forward into the cloud via S/4HANA – noch besser (even better).

A half-decade later, and the contribution of these two acquisitions to SAP’s revenues is indisputable – though there are signs that SAP management thinks they could do better. But has either brand really stepped up and optimized their synergy with the rest of the company’s products? The answer is no.

And that's gotta hurt. For SAP, like every on-premise vendor transitioning to the cloud, any unrealized potential for cloud revenue growth is like a slow-acting poison. Companies like SAP – companies that started on premise and moved to cloud – have an unhealthy reliance on maintenance revenue from their on-premise products, and the cloud has to become the place where revenue growth can be about more than getting customers to upgrade their on-premise systems. If these transitioning vendors can't move their customers to the next shiny new thing – in most cases a synergistic collection of newly acquired assets and pre-existing or newly developed in-house products – the poison eventually erodes the confidence of investors and customers, and the next IBM or HP is born: a pile of disconnected, disjointed assets that look connected and synergistic on a slide or two, but aren't successfully sold that way, to the detriment of the vendors' revenues and market clout.

That’s why it’s so important for these acquisitions to be truly synergistic: the HR and talent functions in a company should be connected to finance, and the rest of what we still call ERP, as a prerequisite to any effective digital transformation – and the more customers do this the better it is for all, customers and SAP alike. As to the vision of a global business network: making Ariba the hub for the myriad transactions and data that constitute the backend of B2B commerce would represent another huge digital transformation potential for customers, and a huge revenue uptick for SAP.

If only…

The reason why this kind of synergy isn't happening, both for SAP and others, is complicated. This is the problem with the synergy effect: it looks good on paper, and in presentations to analysts. But if the vendor's reflexes aren't tuned to delivering a message about how 1+1=3 or more, then the acquisition may plug a revenue hole, but it won't be able to realize anywhere near its potential.

I’ll explore the reasons why the synergy effect isn’t happening for major enterprise software vendors, and where companies can start looking for solutions, in next week’s post.

 

Artificial Intelligence and Artificial Expectations: Enterprise Software Enters the AI/ML/IoT Morass

This is the year of hyping artificial intelligence, machine learning, and the internet of things (IoT). Any vendor with any vision, which is everyone, is blanketing customers and partners with pronouncements and keynotes that highlight an increasingly large roster of products, platforms, and technologies loosely organized under the AI/ML/IoT rubric. The result is that these acronyms and the products they represent are everywhere, singing and dancing their way to our hearts.

But not our wallets. At least not yet.

While it seems as though the primary issue at hand is how to link AI/ML/IoT to the digital transformation wave that has gripped the market, the bigger question centers on whether the revenue predictions these technologies are being associated with will ever even remotely come true. Some of these predictions seem a little hyperbolic: I've seen revenue predictions ranging from $20 billion to almost $40 billion over the next eight years or so, and more than one enterprise software vendor CEO has told me and his customers that these technologies will account for the lion's share of revenues in the near future.

The likelihood of this happening is small, at least from where I sit, and the answer to the $20 – $40 billion question lies somewhere between no way and kind of/sort of. Every time I hear about billions of dollars of sales coming down the pike for these three technologies I start wondering how those numbers will ever materialize without some highly creative budgetary gerrymandering that shifts existing spending on things like analytics, operations, and app development into the AI/ML/IoT category. Yes, lots could be spent on AI/ML/IoT, but will that really be net new spending, imparting net new growth, or will it be another revenue shell game, hopefully making investors happy but not really yielding massive net growth?

The distinction is important, because more and more big enterprise software companies, even those that are cloud natives, are living off the fumes generated by what is effectively maintenance or renewal revenue: an annuity revenue stream based on maintaining the existing, rather than moving forward to the net new. That simply cannot go on forever, particularly as core enterprise software functionality (such as ERP, HRMS, CRM, etc.) commoditizes – what we like to call these days "fit to standard" – and starts heading to the cloud. In the upper atmosphere those fumes are just going to get thinner and thinner. And in their place, if the vendors are to keep their investors happy, some new, bright shiny thing has to show up to generate billions in net new revenues from thousands of net new customers.

(As an aside, the maintenance stream is so powerful that it papers over lots of transgressions, omissions, and just plain sloppiness: it often seems that it really doesn't matter whether a deal is a good deal, or an implementation is a good implementation, or a customer is even a happy customer, as long as it produces a steady annuity that means, effectively, that every four or five years the vendor brings in 100 percent of the original deal's value – at a huge margin. That's where the real profitability in enterprise software – for those vendors that need to show profits – is today, and will be for a long time.)
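To make the annuity arithmetic above concrete, here's a minimal sketch. It assumes a hypothetical 22 percent annual maintenance rate on the original license value – a commonly cited on-premise figure, not one tied to any specific vendor or contract:

```python
# Rough sketch of the maintenance annuity effect described above.
# The 22% annual rate is a hypothetical, commonly cited on-premise
# figure; actual rates vary by vendor and contract.
license_value = 1_000_000          # original deal size, in dollars
maintenance_rate = 0.22            # annual maintenance as a share of license value

cumulative = 0.0
years = 0
# Count how many years of maintenance it takes to repay the full deal value.
while cumulative < license_value:
    cumulative += license_value * maintenance_rate
    years += 1

print(years)        # -> 5 years to re-earn 100% of the original deal
print(cumulative)   # -> 1100000.0 in cumulative maintenance revenue
```

At 22 percent a year, maintenance alone crosses the full original deal value in the fifth year, which is the "every four or five years" annuity the aside describes – and it arrives at a far higher margin than the original license sale.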

So the shiny new things called AI, ML, and IoT – with snappy brand names like Einstein, Watson, Leonardo, Coleman, and others – are the latest attempt to find an innovation revenue stream that can rival what core enterprise software was able to deliver for the last few decades.

So far, I’m not sure this is the panacea the industry has been looking for.

Let's start by making one thing very clear – AI, ML, and IoT have been around for years, decades actually, and are themselves neither new nor any easier to actually put to work today than they were when I started my tech career in the 1980s (more on that in a minute, and I don't mean how old I must be).

What's new is the raw processing power available, firehose-style and in the cloud, from the likes of Azure, AWS, Google Cloud, and others: an absolute necessity considering the underlying need to consume and process the enormous analytical models that underlie AI/ML/IoT functionality. Also new are the quantities of data available to be applied to AI, ML, and IoT: large datasets are needed in order to use complex statistical algorithms with any hope of statistical validity, and the sensor revolution, the growth of consumer internet data, and the increasing footprint of technology in all aspects of our personal and business lives are yielding a rich palette of new data sources for use in AI/ML/IoT.

But… the issue of knowing what to do with these technologies, and doing the right/valid things with them, is still a massive challenge. The proofs of concept are piling up, and some of them are pretty impressive. Microsoft is doing really cool things helping elevator company ThyssenKrupp with its maintenance operations. And SAP is using components of Leonardo – among other technologies – to help its elevator customer, Schindler, transform their installation process.

The Schindler example is a good one: SAP’s Data Networks group worked closely with Schindler to build the Live Install app that was showcased at last spring’s SAPPHIRE user conference. That work was highly consultative in nature, and, while also highly successful, isn’t necessarily scalable to other companies (such as, one could assume, snarkily, ThyssenKrupp, though they are also an SAP customer): building an app like Live Install, with all the net new digitized processes behind it (including modeling and virtual reality visualization of what the final install will look like) can’t be done out of the box. At least not yet.

This isn’t a criticism of Data Networks, on the contrary: their mandate is to pioneer these kinds of creative use cases that are based on data already available to a customer, and Schindler is a perfect example of this. It’s just that while one can assume SAP made a profit on the project overall, and while it’s clear that there’s a tremendous amount of learning to be had by an undertaking like Live Install, projects like Live Install won’t necessarily yield standardized products that can be included as a line item in a customer contract any time soon.

And that’s because when you combine the knowledge and understanding that customers have about potentially transformative processes or apps (which is limited) with the emerging status of these technologies (which are very nascent), you end up with something that by definition has to be very consulting heavy and relatively light on the packaged, repeatable software side.

It’s the nascent status of these three technologies that poses the greatest threat to vendors looking for something to lead them to the next wave of big projects and big paydays. With pretty much every branded entry (Leonardo, Watson, Einstein, among others) existing primarily as a set of APIs to be used by developers to build highly customized apps, the question of which AI/ML/IoT “product” set to use all too often boils down to a question of which vendor the developer knows best.

And who are these developers? In general, they could be anyone: so-called citizen developers, partners, and hard-core internal coders, among others. While these developers are often very disparate in their skill sets, deep line-of-business expertise is becoming de rigueur for using these technologies successfully. This expertise is fundamental to the opportunity at hand: killer apps in the AI/ML/IoT market space are by definition very LOB-focused, which is the opposite of the old-school, IT-focused developer audience of yesteryear. IT certainly gets involved, hopefully on a regular basis, but any company engaging in a design-thinking workshop about coming up with a cool, transformative AI/ML/IoT app is going to be leaning very heavily on LOB staff to come up with the ideas, validate them, and, increasingly, roll up their sleeves and help build a prototype. IT may step in to make sure the backoffice integration is done right, but I expect the LOB to take the lead on a majority of these projects.

This is the meta-transformation that these technologies are bringing to the enterprise: new skills are needed to figure out how to leverage AI/ML/IoT. These are skills that have been there all along, but until now they haven’t been in the room when new technology adoption is being formalized, because the people with those skills traditionally haven’t been involved in new apps development at the initial stages of the process.

Nor are they in the room when an incumbent enterprise software vendor's shiny new technology tools are being entered, usually by their proponents in the IT department, into the "build my transformational app" sweepstakes. These vendors have always struggled to break out of their IT focus and work within the LOB organizations, even after they acquire LOB-specific vendors, and their field sales staffs tend to have a surfeit of IT connections and a dearth of LOB connections. This leaves the vendors, via their field sales staff, trying to sell tomorrow's message to last year's audience. Not really the best way to go to market with a new strategy intended to be the next ginormous thing.

Which brings us to the real problem for the enterprise software vendors looking to break new ground in AI/ML/IoT: if you're an old guard ERP vendor, chances are you're either not well known to, or highly unpopular with, the LOBs. They either never used your software, used it and hated it, or have just heard about how badly traditional enterprise software vendor implementations have gone, and want none of it. So when it comes time to build a cool new transformative app, the reflexive move in the LOB is not necessarily to look at the old guard vendors' new AI/ML/IoT tools – if the LOB even gets to hear about them. It's much easier to start by considering the tools or platforms that are already in the LOB. If those come with a cool new desktop experience or mobile app that LOB users are familiar with, when it comes time to look into building AI, ML, or IoT apps, the reflexive move will be to the LOB vendor, not the one that the IT folks like.

Newbie cloud-native companies have this problem in reverse: while they are beloved by their users, who usually occupy a specific LOB (sales and service, HRMS, etc.), the rest of the company’s users aren’t necessarily at all familiar with the cloud native company. Nonetheless, when a new app needs to be built that’s exclusively within the domain of the LOB, chances are the LOB cloud vendor’s tool will be used – Salesforce.com’s Trailhead developer engagement platform is a perfect example of this: the last Trailhead conference I went to in June was replete with Salesforce admins and other LOB users avidly upgrading their skillsets in AI/ML/IoT and mobbing the presentations and demo stations with an eagerness that still makes Trailhead the best new developer outreach program in the industry.

But even Trailhead or other LOB vendor offerings have distinct limits. Could a LOB vendor expect the asset maintenance folks at a company like ThyssenKrupp to choose their LOB-focused tool when they've had no exposure to it? Not likely: they'll go with what they know – it's human nature to default to the known quantity whenever possible. Indeed, in the Salesforce.com world, the fact that Salesforce has repeatedly said it's going to provide the best CRM cloud in the business would tend to discourage developers (and ISVs) from going with Salesforce for something completely outside the CRM domain.

It's important to note that against this backdrop of a commoditizing tools and platform approach by most vendors, Infor has taken a different tack, and I'm curious to see how this works out. Their plan is to productize their AI/ML/IoT dreams and go to market with apps, not tools. This can't be done without working very closely with customers as well, and that of course means that Infor will have to finesse the problem of who owns the IP that goes into the finished product. That won't necessarily be easy. But it does represent a way in which the results of Infor's early forays into AI/ML/IoT could yield a repeatable, scalable products business instead of a riskier consulting-driven business.

Regardless of the approach, the bottom line is that it won't be easy to convert an installed base that is familiar with an enterprise software backoffice product into advocates for massive, enterprise-wide AI/ML/IoT projects based on that backoffice vendor's toolset. The IT folks who know enterprise software aren't necessarily taking the lead on these new projects; that responsibility more and more resides in the LOBs, many of which are disconnected, disaffected, and/or estranged from the IT side of the business (those of you who have ever tried to bridge the divide between IT and, for example, shop floor operations, or HR and ERP, know what I'm talking about).

Neither will it be easy to make LOB prototypes or even first-generation production apps the harbingers of massive, enterprise-wide sales: the LOB influencers who can get approval for a POC or even that killer LOB app don't necessarily have the clout to enforce an enterprise-wide AI/ML/IoT tool or platform standard on the company. Their counterparts in other LOBs will likely have their own tools in mind. And so the morass continues.

What may be more common is that as the POC morphs into a production app it will continue to use the same toolset/platform that it started with, providing an upsell/cross-sell path for the lucky vendor with an inside track in the LOB. Which is why it is incredibly important for vendors to be in on these early deals as much as possible, in order to plant the seeds for the evolution of these POCs into full-fledged production systems. But that doesn't mean a single corporate AI/ML/IoT standard will emerge: large enterprises are incredibly heterogeneous, and much of that heterogeneity is due to the fact that the LOBs have had the leeway to pick what they see as best-of-breed apps. I see no reason why the LOBs won't continue to exercise this independence. And leave it to IT to clean up the mess.

Hence my skepticism about net new revenues in the tens of billions of dollars any time soon. There will certainly be some decent revenue from early POCs as they convert to production apps, and hopefully examples like those at Schindler and ThyssenKrupp will yield upsell opportunities for their respective vendors. But to date I don't see a path from there to massive enterprise-wide deployments worth hundreds of millions of dollars – not on the scale needed to eventually supplant the aging systems now driving all that maintenance revenue.

Which is why I call this a morass: AI/ML/IoT are clearly among the shiniest, newest things around, and as these technologies demo well and make for compelling case studies, it's easy and fun to showcase the early customer wins. But it's going to be a long time before these technologies become major factors in their respective companies' revenue streams.

The most hopeful scenario, which indeed is beginning to play out, is that every vendor – cloud native and traditional backoffice – is poised to reap enormous benefits from what I call the transition to transform opportunity. Companies running older versions of their enterprise software – and that’s usually a majority of any vendor’s customer base – will at a minimum need to move to a new backoffice platform as a means to get the ball rolling on digital transformation and the application of AI, ML, and IoT.

Those transitions could be lucrative – more will be reimplementations than upgrades, in my opinion – and there will be net new customers coming on board as well. But transition projects are only buying time, not the future. The future isn't in the backoffice, that much we know. Where it lies from a revenue standpoint for vendors, and what is going to induce customers to engage in the next generation of massive, high-priced projects, remains to be seen. AI/ML/IoT will have to play a role, but those technologies alone won't be enough. Hype can only take a market so far.


Infor Drinks Koch By the Barrel While Microsoft Dynamics Sips A Thin Gruel

Apparently my blog post last month accusing Microsoft of neglecting its Dynamics product line struck a nerve. The gist of the post was that Dynamics was falling into irrelevance as Microsoft seemed to focus on bigger and better things. The evidence has been pretty definitive – no Dynamics-specific user conferences, analyst events, or, from what I could see, senior executive interest – and the results have been as expected: out of sight, out of mind. Judging from the feedback I got from all over the enterprise software market, that perception is pretty widespread among customers, partners, and my fellow analysts.

In the meantime, I was invited to New York to attend Infor's annual user conference, Inforum. This is the showcase for the largest enterprise software company no one has heard of, but that's all about to change. Highlighting the analyst preview day was a presentation from Infor's newest investor and industrial partner, Koch Industries. Koch, in case you only know the name because of the Koch brothers' political activism (I'm not being partisan, they're pretty happy to highlight this on the company's home page), happens to be one of the largest private industrial companies in the country and is now poised to become a laboratory for how far Infor CEO Charles Phillips can take his vision, his ambitions, and his company.

You couldn’t find two more contrasting approaches to enterprise software than Infor and Microsoft.

Microsoft responded to my blog post with, to their credit, access to the two senior execs in charge of keeping the lights on at Dynamics, and, from what I understand, lighting a path from the rest of the company back to the core ERP, CRM, and other assets that make up the Dynamics family. What I heard was interesting, and in some ways offered a solid rebuttal to my characterization of Dynamics as the unwanted and unloved stepchild of the Microsoft cloud juggernaut.

But nothing Microsoft said or hinted at or offered under NDA can compare to what Phillips was willing to share with a couple dozen analysts as part of one of the best analyst programs in the industry.

Let's start with the Koch story. Koch's CFO, Steve Feilmeier, took the stage in front of the analysts at New York's Javits Center and basically hit a bases-loaded home run into the Hudson River. To set the stage, Feilmeier hammered home the sheer scale of what partnering with Koch Industries means: Koch is a $100 billion-plus behemoth that spans the oil, gas, paper and pulp, chemicals, and plastics industries, among its many lines of business. It has 130,000 employees spread across 60 countries, and would be high up on the Fortune 500 if it were a public company.

Here’s what was so impressive about Feilmeier’s talk, which he basically repeated on the main stage the next day as the conference kicked off: Koch isn’t just a passive investor, it wants to standardize on Infor’s product line across the company. That includes using Infor’s HRMS for managing a workforce slated to grow to 200,000, dropping Infor asset management software into 300 of its manufacturing plants, deploying Infor’s GT Nexus global logistics network across the company, and dumping Oracle Financials in favor of Infor’s Cloudsuite Financials. And that’s just for starters.

What this does for Infor pretty much makes the $2 billion Koch spent for a 49 percent share of Infor the least significant part of the relationship. While the investment is nothing to sneeze at, the solid endorsement of Infor’s strategy by Koch’s senior management, and the promised scale of the Infor deployment at Koch, gives Infor something that Microsoft Dynamics can’t even begin to touch, and rivals like SAP always struggle to come up with: a respected, multi-industry global company that is deploying a broad set of Infor’s products and is willing and able to become a showcase for those deployments.

The willingness – at least so far – of Koch to get on stage with both analysts and customers is not to be discounted. Every vendor is struggling to find customers who are not only willing to be spokespeople for their vendors, but can also talk publicly about deploying a comprehensive set of products – that whole is greater than the sum of the parts story –  from their vendor. And that turns out to be incredibly hard and getting harder all the time.

To be fair, this is a problem that bedevils all enterprise software vendors today. It's one thing to boast that there are currently 6300 S/4HANA customers, as SAP is now stating to the market. It's entirely another thing to have a critical mass of customers deploying and speaking out about how well those deployments are going. Same with Dynamics – though their cone of silence is as much about the internal lack of momentum I mentioned as it is about an industry-wide dearth of referenceable customers. Regardless, there's no better or more essential proof point than customers who will stand up, take questions, and provide real answers to the world.

Neither Phillips nor Feilmeier made any commitment to transparency and access as the Koch relationship unfolds, but it's pretty clear the challenge is on the table. It would definitely be in Koch's interests as an investor to play this role: the credibility they lend to Infor's strategy and products will light up Infor's sales calls all over the world, which of course will boost the value of Koch's investment, which just makes good business sense, etc. etc. As long as there aren't too many problems executing these very ambitious plans, I can't think of a reason why Koch wouldn't want to play that role.

Meanwhile, back in Dynamics-land, the paint is a little less fresh and the colors less bright. There are definitely some cool things happening: Microsoft has plans to use the LinkedIn acquisition as the strategic cornerstone of a talent management/HR play that could be a strong player in the market. (But you knew that.) There are continuing efforts to leverage the breadth of functionality in Azure as both an IaaS play and a developer platform play: PowerApps, Flow, and associated tools, including a well-designed data store that can be used to support rapid, "citizen developer" apps development. (Maybe you haven't heard that one yet.) And there are plans to push forward with a re-org of sales and go-to-market in order to drive more innovative thinking into these engagements, with Microsoft's internal consulting arm taking the lead. (Basically, what every other vendor is doing in the age of digital transformation.)

And… that's all I can say. Or will say. Because I still don't really know what's going to happen to Dynamics: a 30-minute call doesn't really begin to tell the story, whatever that story is supposed to be. To their credit, Microsoft quickly put their execs on the phone with me after my post came out, and that call genuinely gave me the impression that Dynamics is not "missing in action", as my post claimed. But it's clear Dynamics is still not sure how it wants its story told. Or what that story is about, or how it positions Dynamics competitively in the market, or what Dynamics' role inside Microsoft is slated to be in the next few years.

In several follow-up calls and emails after my micro-mini-briefing, Microsoft made it clear that they’re not just struggling with their messaging, but whether they really want to engage in the marketplace of ideas the way their competitors do. Their line of questioning said it all: Was I sure what was under NDA and what wasn’t? Would I submit my post for review? Could we discuss this again please? I spent more time interacting about what I was going to do with the relatively sparse information they shared than I spent on the phone with their execs in the first place. Really.

Contrast that with Infor: The follow-up from Infor after a day-long session, which included a fair amount of NDA info, was a simple “thanks for attending.” Trust, Infor understands, is part of the engagement model. The contrast is so deep that, while I could probably say some more non-NDA things about Microsoft’s plans for Dynamics, I’m going to pass. I think it’s best to leave the details of what Microsoft Dynamics is up to for a time when the company is a little more comfortable playing in the marketplace of ideas.

Infor poses no such challenge. In fact, the real challenge with Infor is to distill my 25 pages of notes from the analyst day into a short, coherent analysis. Should I focus on their cloud story – almost 8500 customers, 71 million users, significant momentum in cloud-based revenue and new customer wins? Or their emerging global SI partner story, with the likes of Capgemini, Deloitte, Accenture, and Grant Thornton showing by their mere presence in the Infor market that the big deal flow is definitely starting to happen? Or the continuing growth and maturity of their XI IaaS platform, now in use by 7000 customers? Or their growing IoT strategy, which seems tailor-made to show up in those 300 Koch plants as part of Koch’s asset management strategy?

How about their continuing focus on deep micro-vertical industry functionality – which was highlighted by an almost criminally dense set of slides showing dozens of capabilities per industry? Or the momentum behind their GT Nexus business network? (Though it has to be said that GT Nexus’ focus on blending financing with the business networks – a key part of the strategy when the acquisition was announced – isn’t getting much traction.)

Or should I talk about Infor’s Coleman AI/ML strategy and its well-considered focus on building industry specific solutions instead of going to market with a general purpose platform that, frankly, would just make them the latest entry in the race to the commodity bottom of the AI platform market?

I assume you get the point, and hopefully Microsoft does. But let me spell it out succinctly in case it's not obvious: enterprise software in the cloud is a white-hot market opportunity, made even more white-hot by Oracle's NetSuite acquisition, which both pissed off a lot of customers who don't relish working with Oracle and galvanized companies like Infor and SAP to step up their enterprise-in-the-cloud game, particularly but not exclusively in the mid-market where NetSuite once thrived.

Infor clearly gets it: the nice thing about being a private equity-owned company is that there's more breathing room for strategies to mature, and this year's Inforum was a perfect example of what happens when a comprehensive cloud strategy finally comes into its own. Public companies like Microsoft are to be excused for succumbing to the quarterly cadence of Wall Street and focusing on the big wins to the detriment of the small. Grow like a cloud company, be profitable like an on-premise company: it's a diktat that Wall Street continues to enforce, to the detriment of everyone.

But that's a lousy strategy in the long run for Microsoft if it means neglecting the long tail of innovation in favor of the fat quarterly profits that the rest of Microsoft's cloud business is turning in. While there is no doubt that Microsoft Azure and Office 365 are amazingly successful, and getting more successful all the time, there's more to the market than the kind of cloud platform and office productivity plays that Microsoft is largely focused on.

The battle for the future of enterprise software – which is both the ultimate proving ground for the cloud, and the ultimate delivery point for differentiated value – is being fought in the lines of business, not the IT shop, which looks at platforms and desktop productivity products as largely commodity technologies. Winning in enterprise software is a matter of also getting those LOBs to use a given vendor’s tools and technologies to transform their businesses – almost regardless of what platform or platforms have been chosen by IT. And working with the LOB means looking for the small wins, not necessarily the biggest and most Wall Street-friendly deals.

The IoT/ML/AI world is a perfect example of this: Every cogent analysis shows that most of the IoT/AI/ML market momentum is based on proofs-of-concept, not enterprise deals. But it’s those POCs that are destined to become the future enterprise-wide deployments every vendor is banking on, and those IoT/AI/ML pioneers – the “citizen developers” or visionary LOB managers – are the midwives of this strategy. Which means that, as the market matures, each of those enterprise-wide deployments will have an enormous ancillary sales impact, requiring more apps, more cloud, more analytics, more integration, more data, and more and more and more.

Infor gets this, which is why they have spent so much time planting seeds that they are just now beginning to harvest. One day, Phillips knows too well, they’ll have to transition from a company that lives off a fat maintenance revenue stream into one that lives off of a fat innovation-based revenue stream. The myriad bets Infor has made, whether it’s by acquiring GT Nexus, or focusing Coleman on product, not just platform, are clearly focused on that day coming sooner rather than later.

That’s because – back to the marketplace of ideas that Microsoft is so far eschewing – it’s pretty clear that the pioneers in the LOB are going to go with what they know and what’s in front of them. And they’ll most likely stick with existing providers – assuming those providers have the innovative tools and products the LOB is looking for – as the LOB moves its POCs up the food chain to become enterprise-wide deployments. And it’s these enterprise-wide deployments that in turn mean enterprise-class revenue streams for their vendors.

So if you’re a vendor who is absent when the market is doing advanced show and tell about the future of enterprise software and how your company can transform the enterprise, you’re not just missing out on the early adopters, you’re potentially missing out on the really big prizes as well.

Infor clearly gets it, and it will be fun to watch all this vision unfold inside the domains of Koch Industries and the other companies a name like Koch can attract. Get out the popcorn, this movie is going to get real interesting real soon.

Et tu, Microsoft?

 

Microsoft Dynamics Who? Microsoft Pioneers a New Category: MIA Software

Microsoft is emerging as a potent force in the enterprise software market, propelled by Azure and the success of Office 365. The former provides a comprehensive cloud platform and set of services that are, as a platform for enterprise software, second to none. The latter provides an amazing productivity platform and set of services that spans a broad swath of the day-to-day requirements of today’s business user. And, for what it does, it is also second to none.

That’s the stuff Microsoft likes to shout from the rooftops, and deservedly so. What they’re strangely quiet about is the fact that they also have market leading ERP and CRM software products that are right up there with the best of the best. At a moment in the market when public cloud offerings for enterprise software are hot, and the original offerings from leaders like Netsuite and Salesforce.com are looking a little long in the tooth, there’s a deafening silence around Microsoft’s Dynamics product line – the mellifluously named Dynamics 365 for Operations, Dynamics 365 for Sales, and the other Dynamics products that have a legitimate shot at market leadership.

The omission isn’t just curious, it’s also tragic. Microsoft’s failure to promote a solid set of enterprise software products is a disservice to its customers and to the larger enterprise software market, which only gets better by increasing the choices customers can make.

Why is Microsoft so quiet about what should be a major set of assets in the highly competitive and fast-growing cloud ERP market? There’s a clue to be had in the checkered history of Microsoft’s enterprise software aspirations, starting with the acquisition of Great Plains in 2001. Since that first acquisition, and the subsequent acquisitions of Navision and Axapta, Microsoft’s senior execs kept looking at the paltry revenue and margins that these products could command relative to Windows and Office and for a long time basically shunned what later became the Dynamics product line.

That benign neglect was severe enough that there were many times when Dynamics execs confided in me that they might be better off being spun out and run as an independent company or sold to a larger enterprise software vendor. This second-class status was the norm pretty much until Kirill Tatarinov was promoted in 2007 to run the show, whereupon Dynamics began to come into its own. Even through a couple of reorgs that threatened to repeat the past pattern of neglect, Kirill was able to keep the Dynamics flag flying.

But Kirill left two years ago as part of another reorg that put Dynamics’ fate in the hands of EVP Scott Guthrie, and that’s basically when the silent treatment began.

I don’t blame Guthrie for what looks like that same old indifference; he clearly has other and more lucrative fish to fry, such as Azure and Office 365, the latter of which now has 100 million enterprise users (which raises the question of what very large percentage of the global economy runs in part on the Office 365 family – I’m sure the answer would give Microsoft some serious bragging rights).

More importantly, however, I’m not so sure Guthrie really gets enterprise software in all its complexity and glory. Or maybe he just doesn’t get why Dynamics is that important to Microsoft. Perhaps, just like in the olden days after the Great Plains acquisition, Guthrie’s indifference may be a rehash of an historical perception at Microsoft regarding Dynamics’ limited value to the company.

Regardless, any and all scenarios that marginalize Dynamics are basically a damn shame. And it’s a shame that Dynamics has been placed in the cone of silence – no more conferences, no more analyst events, no more regular briefings – and not just because I’m an admitted enterprise software bigot who likes a dynamic (pun intended) market full of great products pushing the envelope on behalf of customers. It’s a shame simply because I think Microsoft and Guthrie are shooting themselves in the foot in the middle of a very fast and competitive race for a key enterprise software platform prize.

The prize? Leadership in the next generation public ERP cloud, the one that puts classic backoffice ERP into the cloud, straps it to a comprehensive platform full of innovation services (as in the ubiquitous trio of IoT, AI and ML, as well as microservices, etc. etc.), and leads the global economy to the promised age of digital transformation. That one.

But instead of putting Dynamics into the sweepstakes, Microsoft keeps shooting Dynamics in the foot. The latest example of friendly fire came at the Microsoft Build developers conference last month. I’ve been writing a lot about how developer outreach is the new imperative for enterprise software vendors who want to play in the cloud platform business, and I went to Build to take the measure of Microsoft’s latest efforts in this regard. The basic issue is that platform providers need developers who reflexively use their tools, services, and platforms to build those cool new apps that will transform the business world. And historically, Microsoft has done this better than most: witness the fact that Build is one of the larger – if not the largest – pure developer conferences in the industry, and Microsoft’s legions of developers are the envy of the enterprise software market.

The focus at Build on what Microsoft touts as the “intelligent cloud” and the “intelligent edge” did a great job of solidifying what I see as a true leadership position in enterprise cloud platforms. This is particularly true with respect to Amazon and Google, but also more specifically with respect to companies like SAP and Salesforce.com, which aspire to be cloud platforms but are basically leading with their apps, not their platforms.

As such, Build was a showcase for all the cool stuff you’d expect as well as a lot of cool stuff that you might not expect, such as what happens when Microsoft pairs the graph API from Office with its newly acquired LinkedIn APIs, or how well Cortana can do live simultaneous human language translation, or Hololens, which is truly in a category of coolness all by itself.

But while Microsoft did a great job of showing off these and other components of its developer toybox/toolkit, by the end of the analyst briefing sessions the day before the official start of Build I realized that, at least when it came to talking to the analysts, Dynamics wasn’t part of the developer story. The irony of this is a little mind-boggling when you think that every enterprise software vendor other than Microsoft would give up a body part or two to be able to address the Build audience and show off what their enterprise products and toolkits could do when married to Microsoft Azure APIs, the Office/LinkedIn graph, Cortana’s natural language processing, Hololens, and pretty much everything else under the hood in Redmond.

In other words, while companies like SAP and Salesforce.com can only dream of the day when they have a developer audience of the size and scope that Microsoft can command, Microsoft is squandering a huge opportunity to use its developer network to do with Dynamics what Salesforce and SAP wish they could do with their cloud enterprise software offerings.

Like I said, I’m not sure how well the cloud enterprise software opportunity is even understood at Microsoft. As an example, I asked Scott Guthrie during an analyst Q&A session what role he saw Azure and its Lifecycle Services ALM tool playing in a multi-platform, multi-cloud app world. This wasn’t meant as a gotcha, it was actually more of a friendly lob: helping customers navigate cloud “sprawl” is a top of mind issue in enterprise software, and if you ask Amazon’s SAP team this question – which I did recently – they have a lot to say about their plans to support the integration and orchestration of different vendor apps across their cloud. After all, Amazon runs SAP and Salesforce and Workday and pretty much any vendor app that needs a public cloud provider.

Just imagine if you built a custom process that spanned two different cloud properties – a CRM and an ERP product, for instance – and your cloud provider managed the full lifecycle of the process, helped manage the integration and orchestration, and made sure that the process was immunized against the different upgrade schedules of the cloud properties. And that vendor had some great IoT APIs, and serious support for predictive modeling and data visualization. Wouldn’t that be nice?

Unfortunately, Guthrie’s answer came off like a punt that sailed out of bounds. The way he launched into something about running Linux VMs and Cloud Foundry on Azure made me wonder if I hadn’t articulated the question well enough. Maybe, or maybe not. To the right audience, the question wouldn’t be halfway out of my mouth before it was being finished for me: If I was in a room full of enterprise software customers of any decent size, or the enterprise software vendors themselves, this question would have been instantly recognized as the big question in the enterprise as the move to the cloud broadens and customers are forced to confront the extreme heterogeneity in their cloud portfolios.

(Ironically, or maybe not, Kirill Tatarinov, now the CEO of Citrix, and his exec team spent a great deal of time discussing this very issue at Citrix’s Synergy user conference last month. They made the most of the term “cloud sprawl”, and their solutions to the problem, centered around their Secure Digital Workspace and Software-Defined Perimeter, were spot on. The Synergy crowd ate it up.)

So where does this “deafening silence” leave Dynamics? Right now they’re deep in the damn shame category of technology marketing. I had the opportunity to meet with their CRM head, Jujhar Singh, at a CRM conference in April. In presentations to a small group of analysts, and then later at dinner, the message that Dynamics CRM had made some extraordinarily huge functional leaps came in loud and clear. The upshot of the 30 minutes we analysts had with Jujhar was that 30 minutes wasn’t enough time to begin to touch on all the cool new stuff, and the consensus around the room was that the cone of silence was a huge lost opportunity for Dynamics.

This is true across the product line: at this point it’s becoming hard to recommend or even pass judgement on such a stealthy product line, and from where I stand this is keeping Dynamics out of a deal flow that by rights it should be deep in the mix of. Whether it’s a question of showing off the next gen enterprise software process flows that can be enabled by combining Dynamics, Office, Azure, and the rest of the stack, or simply competing head to head in the public cloud market against the likes of SAP, Oracle, Salesforce.com, or FinancialForce, Dynamics is increasingly showing its unfortunate leadership in a special category of enterprise software that can best be described as MIA, for missing in action.

There was a time when Dynamics was recognized internally for its ability to be the staging ground for the rest of Microsoft’s technology: if you wanted to see what the full complement of Microsoft’s desktop, cloud, infrastructure and systems software could do, all you had to do was find a decent-sized Dynamics ERP customer. It gave Dynamics a lot of internal cred, and that cred allowed Dynamics to have a seat at the Microsoft table as a legitimate division worthy of some mention.

Apparently that cred is gone, and with no senior exec actively advocating for Dynamics, a market leading product set is left to languish. There’s not a lot of precedent for success with this model of MIA marketing, and, for the sake of the customers and the Microsoft employees who still bleed for Dynamics, the company should get back on the road and start talking about Dynamics again. Or sell it before missing in action morphs into irrelevance or, worse, DOA. As in dead on arrival.

The Fog of Innovation Marketing: SAP Obscures S/4 HANA’s True Competitive Advantage

If you walked away from SAP’s recent SAPPHIRE event scratching your head about which version of S/4 HANA your company should deploy, you’re not alone. There seems to be a fair amount of confusion about the differences between S/4 HANA On-premise/Private Cloud and S/4 HANA Public Cloud. And that confusion threatens to derail the growing momentum around the company’s flagship cloud products.

The problem is that SAP is trying to get S/4 HANA Cloud to punch above its weight class by claiming it can meet the needs of a large enterprise, and in the process the company is setting the stage for some serious customer confusion about which version of S/4 HANA is the right one for the job. The irony is that, in sowing this confusion, SAP fails to see that what it is trying to hide by overselling S/4 HANA Cloud is precisely what imbues the overall S/4 HANA product line with the exact attributes that customers need.

Unfortunately, SAP only has itself to blame for the confusion. The official messaging, to be perfectly honest, seems designed to obfuscate rather than enlighten. I had to go three rounds with SAP to get the story straight, and at times it felt like I was deposing a reluctant witness, rather than having a forthright conversation about what will always be a complicated decision for SAP’s customers.

Here’s the gist of the problem. SAP’s official storyline is that S/4 HANA Cloud is as well-suited to run a large, global enterprise as the on-premise and private cloud versions. This is due to the simple fact, SAP officially maintains, that the on-premise and private cloud editions of S/4 HANA are built off the same code line as S/4 HANA Public Cloud, which means that a customer can choose either one for their upgrade or migration because they are “functionally equivalent.”

Those words, however many times SAP executives repeat them – and they were repeated to me more than once – won’t be true for quite a while, if ever. It’s kind of like saying that a Honda Fit is functionally equivalent to a McLaren-Honda Formula 1 race car. Both have tires and engines and transmissions, etc., and both can transport you from here to there. But if you’re going to compete on a Grand Prix track, you might want to leave the Honda Fit at home. And if you’re trying to take your kids to the water park or need to hop to the grocery store for a gallon of milk, I’m pretty sure a carbon-fibre, 220 mph Formula 1 race car wouldn’t be the right choice.

Similarly, if you want to run a global company using a standardized set of business processes, S/4 HANA Cloud is your Honda Fit. But if you want to do something more – run an industry solution like Retail, or a fully-functional warehouse management system, or run the fully functional versions of GRC or GTS, for instance – you’re going to need that race car, and that means running S/4 HANA on-premise and either managing it yourself or having a service provider run it for you in a private cloud.

Notice the deliberate use of the words “fully functional.” Read this carefully: the public and private cloud editions are different, the scope of what they can offer is different, and there are very different deployment use cases depending on what your business goals are. The two products’ roadmaps promise a good deal of convergence in the coming years, and the similarities at some point may outweigh the differences. But it’s highly unlikely they will ever be functionally equivalent: the market demand for some industry solutions may never be big enough to move that part of the code line to the public cloud. We shall see.

What I’m baffled about is the thought that SAP thinks there’s something wrong with this kind of full disclosure. From where I sit this is a huge strategic advantage, particularly because of S/4 HANA’s secret weapon: code equivalence. Both versions do spring from the same code line, which means that any of the “fit to standard” functionality that a customer deploys today in S/4 HANA Private Cloud can be moved, pretty much seamlessly, to S/4 HANA Public Cloud any time the customer wishes. Of course, the bells and whistles mentioned above, like GRC or EWM – “bells and whistles” said facetiously: these are hugely important pieces of functionality for some of SAP’s biggest customers – might have to stay in private cloud mode. But that’s what SAP Cloud Platform is for, right?

This ability to keep the strategically advantageous parts of the S/4 product line in a private cloud and move the rest of the company to a public cloud version while running a single code line is something only one other company can do. Who? Well, it’s certainly not Oracle, which possesses one of the best examples of cloud code sprawl in the industry. And the rest of the major competitors are cloud-only companies, with no private cloud options available.

Only Microsoft can do something similar, to the extent that the Azure Stack offering allows a company to run a complete version of Azure in a private cloud, meaning that its cloud ERP product, Dynamics 365 for Operations, can also run in said private cloud. But I’m not sure there would be a business case for splitting a company into pure cloud and Azure Stack versions of Dynamics 365 – certainly nothing similar to the mixed S/4 HANA scenario above.

A further advantage of the SAP approach is that a single code line means process equivalence. This means, for instance, that a new “fit to standard” business process in S/4 HANA, version whatever, could be stress-tested and perfected in one location – say, in your offshore subsidiary – and then moved around the world to your other locations in a way that would vastly simplify the change management issues inherent in any new process change. Are those other entities running S/4 HANA on-prem or in the cloud? Who cares? If it’s available as “fit to standard,” a configuration identical to the one you’ve created for one line of business can be deployed anywhere pretty much as-is.

In other words, SAP customers can separate the strategic decision to go with a migration or upgrade to the S/4 HANA family from the deployment question of cloud or on-premise, at least for a significant percent of their needs. Start with the real questions about managing the technical and business changes needed to thrive in the coming years, look carefully at what customizations need to move to the new platform, and then figure out what has to run where and how: Cloud vs on-premise, migration versus upgrade, cohabitation of ECC and S/4 HANA vs. HEC, etc. etc. If you’re like most large companies – and many smaller ones – you’re probably going to have to mix it up. Or maybe not. These are hard questions to answer, but they’re much easier to deal with once the more salient business and customization issues have been sorted out.

Ironically, underneath all this functional equivalence messaging, SAP pretty much has these goals in mind: leading customers down a decision path that starts with “what do you want to accomplish” and leaving out the issue of which version can meet their functional needs until the time for such a decision is ripe. To help out, SAP has some pretty robust tools, such as the S/4 HANA Readiness Check and the Transformation Navigator, both of which can help customers make careful choices dictated by their particular IT and business realities rather than blindly follow some overreaching marketeering that is clearly intended to push SAP’s cloud market status as much as possible, and to hell with the consequences.

If only SAP would stop pretending that all roads lead to S/4 HANA Cloud. SAP should instead embrace the strength and depth of its offerings and stop claiming that this diverse set of offerings is somehow functionally equivalent, before it gets into hot water with its customers. Pretty much every vendor/customer dispute can be boiled down to a couple of simple problems, one of the most important of which is mismanaged expectations. Overselling, underdelivering, obfuscation, confusion – these are the paths to customer dissatisfaction and competitive disadvantage. In this case, the functional equivalence concept is made all the more useless by the fact that what SAP is trying to hide – a product line, based on a single code line, as diverse as the customers it’s trying to serve – just happens to be its biggest strength.

Just tell the truth, the whole truth, and nothing but the truth. It’s really that simple.