This is the year of hyping artificial intelligence, machine learning, and the internet of things (IoT). Any vendor with any vision, which is everyone, is blanketing customers and partners with pronouncements and keynotes that highlight an increasingly large roster of products, platforms, and technologies loosely organized under the AI/ML/IoT rubric. The result is that these acronyms and the products they represent are everywhere, singing and dancing their way into our hearts.
But not our wallets. At least not yet.
While it seems as though the primary issue at hand is how to link AI/ML/IoT to the digital transformation wave that has gripped the market, the bigger question centers on whether the revenue predictions being attached to these technologies will ever even remotely come true. Some of these predictions seem a little hyperbolic: I’ve seen revenue forecasts ranging from $20 billion to almost $40 billion over the next eight years or so, and more than one enterprise software vendor CEO has told me and his customers that these technologies will account for the lion’s share of revenues in the near future.
The likelihood of this happening is small, at least from where I sit, and the answer to the $20 – $40 billion question lies somewhere between no way and kind of/sort of. Every time I hear about billions of dollars of sales coming down the pike for these three technologies, I start wondering how those numbers will ever materialize without some highly creative budgetary gerrymandering that shifts existing spending on things like analytics, operations, and app development into the AI/ML/IoT category. Yes, lots could be spent on AI/ML/IoT, but will that really be net new spending, imparting net new growth, or will it be another revenue shell game, hopefully making investors happy but not really yielding massive net growth?
The distinction is important, because more and more big enterprise software companies, even those that are cloud natives, are living off the fumes generated by what is effectively maintenance or renewal revenue: an annuity revenue stream based on maintaining the existing, rather than moving forward to the net new. That simply cannot go on forever, particularly as core enterprise software functionality (such as ERP, HRMS, CRM, etc.) commoditizes – what we like to call these days “fit to standard” – and starts heading to the cloud. In the upper atmosphere those fumes are just going to get thinner and thinner. And in their place, if the vendors are to keep their investors happy, some new, bright shiny thing has to show up to generate billions in net new revenues from thousands of net new customers.
(As an aside, the maintenance stream is so powerful that it papers over lots of transgressions, omissions, and just plain sloppiness: it often seems that it really doesn’t matter whether a deal is a good deal, or an implementation is a good implementation, or a customer is even a happy customer, as long as it produces a steady annuity that means, effectively, that every four or five years the vendor brings in 100 percent of the original deal’s value – at a huge margin. That’s where the real profitability – for those vendors that need to show profits – in enterprise software is today, and will be for a long time.)
So the shiny new things called AI, ML, and IoT – with snappy brand names like Einstein, Watson, Leonardo, Coleman, and others – are the latest attempt to find an innovation revenue stream that can rival what core enterprise software was able to deliver for the last few decades.
So far, I’m not sure this is the panacea the industry has been looking for.
Let’s start by making one thing very clear – AI, ML, and IoT have been around for years, decades actually, and are themselves neither new nor any easier to actually put to work today than they were when I started my tech career in the 1980s (more on that in a minute, and I don’t mean how old that makes me).
What’s new is the raw processing power available, firehose-style and in the cloud, from the likes of Azure, AWS, Google Cloud, and others: an absolute necessity considering the underlying need to consume and process the enormous analytical models that underlie AI/ML/IoT functionality. Also new are the quantities of data available to be applied to AI, ML, and IoT: large datasets are needed in order to use complex statistical algorithms with any hope of statistical validity, and the sensor revolution, the growth of consumer internet data, and the increasing footprint of technology in all aspects of our personal and business lives are yielding a rich palette of new data sources for use in AI/ML/IoT.
But the issue of knowing what to do with these technologies, and doing the right/valid things with them, is still a massive challenge. The proofs of concept are piling up, and some of them are pretty impressive. Microsoft is doing really cool things helping elevator company ThyssenKrupp with its elevator maintenance. And SAP is using components of Leonardo – among other technologies – to help its elevator customer, Schindler, transform their installation process.
The Schindler example is a good one: SAP’s Data Networks group worked closely with Schindler to build the Live Install app that was showcased at last spring’s SAPPHIRE user conference. That work was highly consultative in nature, and, while also highly successful, isn’t necessarily scalable to other companies (such as, one could assume, snarkily, ThyssenKrupp, though they are also an SAP customer): building an app like Live Install, with all the net new digitized processes behind it (including modeling and virtual reality visualization of what the final install will look like) can’t be done out of the box. At least not yet.
This isn’t a criticism of Data Networks; on the contrary, their mandate is to pioneer these kinds of creative use cases that are based on data already available to a customer, and Schindler is a perfect example of this. It’s just that while one can assume SAP made a profit on the project overall, and while it’s clear that there’s a tremendous amount of learning to be had by an undertaking like Live Install, projects like Live Install won’t necessarily yield standardized products that can be included as a line item in a customer contract any time soon.
And that’s because when you combine the knowledge and understanding that customers have about potentially transformative processes or apps (which is limited) with the emerging status of these technologies (which are very nascent), you end up with something that by definition has to be very consulting heavy and relatively light on the packaged, repeatable software side.
It’s the nascent status of these three technologies that poses the greatest threat to vendors looking for something to lead them to the next wave of big projects and big paydays. With pretty much every branded entry (Leonardo, Watson, Einstein, among others) existing primarily as a set of APIs to be used by developers to build highly customized apps, the question of which AI/ML/IoT “product” set to use all too often boils down to a question of which vendor the developer knows best.
And who are these developers? In general, they could be anyone: so-called citizen developers, partners, and hard-core internal coders, among others. While their skill sets are often very disparate, deep line-of-business expertise is becoming de rigueur for using these technologies successfully. This expertise is fundamental to the opportunity at hand: killer apps in the AI/ML/IoT market space are by definition very LOB-focused, which is the opposite of the old-school, IT-focused developer audience of yesteryear. IT certainly gets involved, hopefully on a regular basis, but any company engaging in a design-thinking workshop about coming up with a cool, transformative AI/ML/IoT app is going to be leaning very heavily on LOB staff to come up with the ideas, validate them, and, increasingly, roll up their sleeves and help build a prototype. IT may step in to make sure the backoffice integration is done right, but I expect the LOB to take the lead on a majority of these projects.
This is the meta-transformation that these technologies are bringing to the enterprise: new skills are needed to figure out how to leverage AI/ML/IoT. These are skills that have been there all along, but until now they haven’t been in the room when new technology adoption is being formalized, because the people with those skills traditionally haven’t been involved in new apps development at the initial stages of the process.
Nor are they in the room when an incumbent enterprise software vendor’s shiny new technology tools are being entered, usually by their proponents in the IT department, into the “build my transformational app” sweepstakes. These vendors have always struggled to break out of their IT focus and work within the LOB organizations, even after they acquire LOB-specific vendors, and their field sales staffs tend to have a surfeit of IT connections and a dearth of LOB connections. This leaves the vendors, via their field sales staff, trying to sell tomorrow’s message to last year’s audience. Not really the best way to go to market with a new strategy intended to be the next ginormous thing.
Which brings us to the real problem for the enterprise software vendors looking to break new ground in AI/ML/IoT: if you’re an old guard ERP vendor, chances are you’re either not well known to, or highly unpopular with, the LOBs. They either never used your software, used it and hated it, or have just heard about how crappy traditional enterprise software vendor implementations have gone, and want none of it. So when it comes time to build a cool new transformative app, the reflexive move in the LOB is not necessarily to look at the old guard vendors’ new AI/ML/IoT tools. If they even get to hear about them. It’s much easier to start by considering the tools or platforms that are already in the LOB first. If they come with a cool new desktop experience or mobile app that LOB users are familiar with, when it comes time to look into building AI, ML, or IoT apps, the reflexive move will be to the LOB vendor, not the one that the IT folks like.
Newbie cloud-native companies have this problem in reverse: while they are beloved by their users, who usually occupy a specific LOB (sales and service, HRMS, etc.), the rest of the company’s users aren’t necessarily at all familiar with the cloud native company. Nonetheless, when a new app needs to be built that’s exclusively within the domain of the LOB, chances are the LOB cloud vendor’s tool will be used – Salesforce.com’s Trailhead developer engagement platform is a perfect example of this: the last Trailhead conference I went to in June was replete with Salesforce admins and other LOB users avidly upgrading their skillsets in AI/ML/IoT and mobbing the presentations and demo stations with an eagerness that still makes Trailhead the best new developer outreach program in the industry.
But even Trailhead or other LOB vendor offerings have distinct limits. Could a LOB vendor expect the asset maintenance folks at a company like ThyssenKrupp to choose their LOB-focused tool when they’ve had no exposure to it? Not likely: they’ll go with what they know and are familiar with – it’s human nature to default to the known quantity whenever possible. Indeed, in the Salesforce.com world, the fact that Salesforce has repeatedly said it’s going to provide the best CRM cloud in the business would tend to shut out developers (and ISVs) from going with Salesforce for something completely outside the CRM domain.
It’s important to note that against this backdrop of a commoditizing tools and platform approach by most vendors, Infor has taken a different tack, and I’m curious to see how this works out. Their approach is to productize their AI/ML/IoT dreams, and go to market essentially with an apps, not tools, approach. This can’t be done without working very closely with customers as well, and that of course means that Infor will have to finesse the problem of who owns the IP that goes into the finished product. That won’t necessarily be easy. But it does represent a way in which the results of Infor’s early forays into AI/ML/IoT could yield a repeatable, scalable products business instead of a riskier consulting-driven business.
Regardless of the approach, the bottom line is that it won’t be easy to convert an installed base that is familiar with an enterprise software backoffice product into advocates for massive, enterprise-wide AI/ML/IoT projects based on that backoffice vendor’s toolset. The IT folks who know enterprise software aren’t necessarily taking the lead on these new projects; that responsibility is more and more residing in the LOBs, many of which are disconnected, disaffected, and/or estranged from the IT side of the business (those of you who have ever tried to bridge the divide between IT and, for example, shop floor operations, or HR and ERP, know what I’m talking about).
Neither will it be easy to make LOB prototypes or even first-generation production apps the harbingers of massive, enterprise-wide sales: the LOB influencers who can get approval for a POC or even that killer LOB app don’t necessarily have the clout to enforce an enterprise-wide AI/ML/IoT tool or platform standard on the company. Their counterparts in other LOBs will likely have their own tools in mind. And so the morass continues.
What may be more common is that as the POC morphs into a production app it will continue to use the same toolset/platform that it started with, providing an upsell/cross-sell path for the lucky vendor with an inside track in the LOB. Which is why it is incredibly important to be in on these early deals as much as possible in order to plant the seeds for the evolution of these POCs into full-fledged production systems. But that doesn’t mean a single corporate AI/ML/IoT standard will emerge: large enterprises are incredibly heterogeneous, and much of that heterogeneity is due to the fact that the LOBs have had the leeway to pick what they see as best-of-breed apps. I see no reason why the LOBs won’t continue to exercise this independence. And leave it to IT to clean up the mess.
Hence my skepticism about net new revenues in the tens of billions of dollars any time soon. There will certainly be some decent revenue from early POCs as they convert to production apps, and hopefully examples like those at Schindler and ThyssenKrupp will yield upsell opportunities for their respective vendors. But to date I don’t necessarily see a path from there to massive enterprise-wide deployments worth hundreds of millions of dollars. Not of the scale to eventually supplant the aging systems now driving all that maintenance revenue.
Which is why I call this a morass: AI/ML/IoT are clearly among the shiniest, newest things around, and as these technologies demo well and make for compelling case studies, it’s easy and fun to showcase the early customer wins. But it’s going to be a long time before these technologies become major factors in their respective companies’ revenue streams.
The most hopeful scenario, which indeed is beginning to play out, is that every vendor – cloud native and traditional backoffice – is poised to reap enormous benefits from what I call the transition to transform opportunity. Companies running older versions of their enterprise software – and that’s usually a majority of any vendor’s customer base – will at a minimum need to move to a new backoffice platform as a means to get the ball rolling on digital transformation and the application of AI, ML, and IoT.
Those transitions could be lucrative – more will be reimplementations than upgrades, in my opinion – and there will be net new customers coming on board as well. But transition projects are only buying time, not the future. The future isn’t in the backoffice, that much we know. Where it lies from a revenue standpoint for vendors, and what is going to induce customers to engage in the next generation of massive, high-priced projects, remains to be seen. AI/ML/IoT will have to play a role, but those technologies alone won’t be enough. Hype can only take a market so far.