Bill McDermott’s First 100 Days: One CEO, One SAP

With so much digital ink spilled, much of it hyperbolically, over the management changes at SAP, and with the prospect of more to come, I’ve decided to weigh in on where SAP is today and where it’s heading. If you don’t want to read the whole post, here’s the take home message: plus ça change, plus c’est la même chose.

Let’s start at the top: Management changes are a normal part of business. With former co-CEO and now solo CEO Bill McDermott in charge, it was inevitable that some execs would have to go. What’s important to remember about SAP is that its two-tier structure – an executive board and a supervisory board – means that there’s a pretty deep bench that mitigates the loss of any individual, even one as highly placed as Vishal Sikka. Take a deep breath everyone: SAP will soldier on quite well without Vishal.

Expect a focus on finances. As solo CEO at a time of continuing global economic and geopolitical chaos, McDermott has a lot on his plate, but battening down the financial hatches is clearly at the top of the list. So job number one is tightening up on the excess personnel and inefficient processes that have accumulated over the years. Hence some layoffs last week, and probably more on the way – all of which will clear the way for McDermott to hire people where he needs them, not where they happen to have landed following the last acquisition. And you can be sure that Bill wants to look the Street in the eye following his first quarter as solo CEO and prove he’s running a tight ship and improving margins.

Consolidating power also means making sure acquisitions are really acquisitions. The major acquisitions of late – Sybase, SuccessFactors, and Ariba – have had the tendency to act like autonomous entities that are legally part of SAP but either pretend they’re not or pretend SAP was the one that was acquired. (This is an old problem, dating back to the Business Objects acquisition.) I expect to see more layoffs and management changes to reflect the need for McDermott to make sure that, as solo CEO, he’s running a single, unified company.

And you should expect to see more evidence of a one company, one CEO message at SAPPHIRE as well. There are too many examples – HP and Microsoft to name the biggest – of companies that have suffered from an inability to organize around a single brand and go-to-market strategy. I don’t think Bill wants to start his solo CEO career repeating those kinds of mistakes.

HANA, Fiori, Learning Hub, cloud: the innovator’s dilemma at SAP is not “where’s my innovation?” but “how can I execute on all my innovation?” This is a good problem, a very good problem, to have. Executing on too much innovation beats executing without innovation any day (just ask Oracle). It’s a further reason to acknowledge the legacy of Vishal without bemoaning his departure – unless you thought of Vishal as a detail-oriented execution guy, which means you didn’t really know him.

SAP’s timing on cloud is spot-on. For once, SAP has arrived at a new innovation at just the right time. Standalone cloud for the sake of saving capital is giving way to hybrid clouds that need deeper back-office connectivity in support of deeper business processes than the first wave of cloud providers can offer. Nothing has been sewn up for SAP, but its hybrid cloud messaging is resonating with the masses of customers who now, finally, appreciate the inevitability of cloud.

HANA’s success will continue to evolve. It’s clear that HANA didn’t have a particularly great quarter in Q1 – otherwise we would still be hearing about it six weeks after the quarter closed. That’s fine – Vishal’s main problem/attribute is that his reach exceeded his grasp (and good for Vishal… “or what’s a heaven for”, to finish Browning’s famous line), and that goes in particular for his plans for HANA. But no matter, HANA is set firmly in the mindset of the SAP customer base and understood for what it is. It’ll take time to free the collective budgets of the SAP customer base to create a massive uptake for HANA, but if SAP and McDermott can be patient they will be rewarded.

So, from a customer standpoint, I don’t see a whole lot of impact from Vishal’s departure. Product strategy isn’t changing, and in fact, based on what I’ve seen in my SAPPHIRE pre-briefings, things are going to get better for customers in some important ways. For those who bemoan the loss of Vishal’s frankness and openness – and some customers I respect have made it clear those attributes will be missed – I offer the hope that Hasso Plattner sets the cultural norm in that regard, and a frank and open replacement for Vishal will emerge from the seeming void of today. Every good leader needs a voice of truth – or at least apparent truth – to allow the market to vent some steam. I’m sure McDermott knows he has everything to gain by having someone in a senior position building that kind of trust and credibility. And if he doesn’t know it yet, he’ll hear from me and lots of other people about why he needs a point man or woman for this vital role.

For employees, this was more than just a warning shot of “more to come”. Of course there’s more to come – standing still is no longer an option for SAP or any company competing in the global economy today. If you want a stable, boring, predictable life, you need to rethink your decision to work for SAP – or in the entire tech market, for that matter. I know that’s harsh, and something the German works councils don’t like to hear. And I grant them the point. But change – dynamic, and at times gut-wrenching change – is what our industry is predicated on. So, until the revolution comes to upend the status quo, we’re all going to have our guts wrenched on a regular basis.

And this change was predictable – there’s been tons of speculation over the years that McDermott has been grooming himself for a run at the governorship of Pennsylvania, or some such political post. I now realize he’s been grooming himself for a run at the top spot at SAP – and now that he’s arrived, like all new top execs, he’s going to make his mark. So expect more change, it’s part of the process of ushering in new management, and part of the process of surviving in the vicious world of tech.

But also expect some stability in what needs to be a sensible, stable, predictable execution of the myriad new products and strategies that have been cooking in the SAP kitchen for the past decade. It won’t be easy, it won’t even necessarily be pretty, but the real drama at SAP isn’t in the events of the past two weeks. It’s in the events to come, the new products to be launched, the competitors to beat or be beaten by, the missteps to correct. That’s going to provide enough drama to make the recent past seem boring and uneventful. Which is exactly as it should be.

Security, Privacy, Big Data, and Informatica: Making Data Safe at the Point of Use

It’s hard to find a set of topics more relevant to the interplay of technology and society than security and privacy. From Glenn Greenwald’s new book on NSA leaker Edward Snowden to the recent finding of a European Union court that Google has to drastically alter the persistence of user data in its services, the societal fallout from the Internet as it enters its Big Data phase is everywhere.

So it was with no small amount of interest that I sat through the first day of Informatica’s user conference last week, listening to how this formerly somewhat boring, and still very nerdy, data integration company is transforming itself into a front-line player in what has become an all-out war to protect the privacy and security of our companies and persons.

The position of Informatica is simple: for optimal usability and control, manage data at the point of use, not at the point of origin. Companies still get to run their back-end data centers using all those legacy tools and skills the IT department cherishes, but when it comes to managing the multi-petabyte world of wildly disparate data from every conceivable (and a few inconceivable) sources, trying to manage, massage, transform, protect, reject, and otherwise deal with data at the source is a Sisyphean task best left to the realm of mythology.

Of course, it’s still easy to walk out of a presentation like Agile Data Integration for Big Data Analytics at GE Aviation and miss this not-so-hidden message: GE Aviation tried doing data transformation at the source for the dozens of engine types and thousands of engines it monitors, and realized after pushing that boulder up the hill that it was better to do the transformation as the data were being loaded into a “data lake” for analysis. Faster delivery, more agility, and better results were the key takeaways from GE Aviation’s efforts.

As the conference wore on, with more customer stories and announcements of new capabilities like Project Springbok, the Intelligent Data Platform, and Secure@Source, it became clear that Informatica’s brand is poised to become synonymous with something far removed from the collection of three-letter acronyms – MDM, TDM, ILM, DQ, and others – that characterizes much of Informatica’s messaging today.

The big picture problem that Informatica solves is a not-so-hidden side of the Big Data gold rush now under way. As data grows exponentially in quantity and sources, the ability of companies to manage those data diminishes proportionally. Indeed, what constitutes “managing” data itself is changing at an unmanageable clip.

In the new world of Big Data, data quality has to be managed along five main parameters: is it the right data for the job, is it the right amount of data, is it in the right format to be useful, is its access and use being controlled appropriately, and is it being analyzed and deployed appropriately?

These big, broad parameters in turn raise a whole set of questions about data and its uses: data has to be safe and secure, it has to be reliable and timely, it has to be blended and transformed in order to be useful, it has to be moved in and out of the right kind of databases, it has to be analyzed, archived, tested for quality, made as accessible as necessary, and hidden from unauthorized use. Data has to journey from an almost infinite number of potential sources and formats to an equally infinite number of targets, pass through increasingly rigorous regulatory regimes and controls, and emerge safe, useful, reliable, and defensible.
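To make those parameters concrete, here’s a minimal sketch of what a point-of-use check might look like in practice. The function and rules are entirely hypothetical, invented for illustration – they don’t reflect any vendor’s actual product or API:

```python
# Hypothetical point-of-use data checks; the function, schema, and masking
# rule here are illustrative, not any vendor's actual implementation.

def check_record(record, schema, viewer_role):
    """Validate one record against a schema and mask sensitive fields
    for viewers who aren't authorized to see them."""
    issues = []
    # Right data, right format: required fields present and correctly typed
    for field, ftype in schema.items():
        if field not in record:
            issues.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            issues.append(f"bad type for field: {field}")
    # Controlled access: mask the sensitive value for unauthorized viewers,
    # leaving only the last four digits visible
    cleaned = dict(record)
    if viewer_role != "auditor" and "ssn" in cleaned:
        cleaned["ssn"] = "***-**-" + str(cleaned["ssn"])[-4:]
    return cleaned, issues

record = {"customer": "Acme", "ssn": "123-45-6789", "amount": 250.0}
schema = {"customer": str, "ssn": str, "amount": float}
cleaned, issues = check_record(record, schema, viewer_role="analyst")
# cleaned["ssn"] is now "***-**-6789"; issues is empty for a valid record
```

The point of putting this logic at the point of use, rather than at the source, is that the same record can be checked and masked differently for each consumer and role, without the source system knowing anything about who is asking.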

Our data warehouse legacy treats data like water, and models data management on the central utility model that delivers potable water to our communities: centralize all the sources of water into a single treatment plant, treat the water according to the most rigorous drinking-water standard, and send it out to our homes and businesses. There it moves through a single set of pipes to the sinks, tubs, dishwashers, scrubbers, irrigation systems, and the like, where it is used once and sent on down the drain.

But data isn’t like water in so many ways. Primarily, big data comes from many sources in many different formats, and desperately requires an enormous quantity of work before it can be useful. And being useful means something different depending on which data is to be used in which way. Time series data is useful for spotting anomalies, sentiment data has a lot of noise that needs to be filtered, customer data is fraught with errors and duplicates, sensor data is voluminous in the extreme, and financial and health-related data are highly regulated and controlled. And if you want to develop new apps and services, you’ll need to figure out how to get your hands on a data set for testing purposes that accurately reflects the real data you’ll eventually want to use – without actually using real data that might have confidential or regulated information in it.

Trying to deal with these issues as data emerges from its myriad sources isn’t just hard, at times it’s impossible. All too often the data a company uses for mission critical processes like planning and forecasting comes from a third party – a retailer’s POS data or a supply chain partner’s inventory data – over which the user has no control. All the more reason why Informatica’s notion of dealing with data at the point of use makes the most sense.

So where does Informatica go from here? Judging by my conversations with its customers, there’s huge market demand, though much of it is not necessarily understood in precisely the terms that Informatica is now addressing. Point-of-use data issues abound in the enterprise; the trick for Informatica is to see that its brand is identified as the solution to the problem at all levels of the enterprise.

Right now there are lots of ways in which these problems are solved that don’t involve Informatica. I was just at Anaplan’s user conference listening to yet another example of a customer using Anaplan’s planning tool to do basic master data management at the point of use, by training business users to spot data anomalies in the analytics they run against their data. Using Anaplan to do this isn’t a bad idea – users of other planning engines like Kinaxis do the same thing – but Informatica can and should make the case that planning is planning and data management is data management.

Doing this level of analysis at the point of use is – back to the water analogy – akin to testing your water for contamination right before you start to cook. Wouldn’t you rather just start the whole cooking process knowing the water was safe in the first place?

Moving Informatica from its secure niche as the “data integration” company to something a little more innovative and forward-looking will take some nerve: it’s not clear that Informatica’s investors get it, but then again the investor community tends to like the status quo as long as it delivers quarterly numbers, even if the long-term prospects are dimming (cf. Hewlett-Packard).

This may be a time for a little leadership, not followership, when it comes to the question of where Informatica has to go next. The customers are ready for this new vision, and the market is too. With so many different vendors vying for the opportunity to solve these problems, the time for Informatica to strike is now. This is one Big Data opportunity that won’t wait.

The Windows Phone Dilemma: Are Crapps All that Matter, or Can Dynamics Help Save Microsoft’s $7 billion Nokia bet?

It’s almost ironic that using a Windows 8 phone is actually a major geek credential – albeit geek more in the mold of driving a DeLorean than tooling around in a state-of-the-art Tesla. But as a Windows 8 phone user for the past five months, I can say that no one notices these days when you have a new iPhone, while running around with a Windows 8 phone certainly draws comment, a good deal of it polite, if not positive. Perceptions aside, my Nokia 928 and its software are a pretty damn good tool for work – albeit not a perfect one – and as a business tool it’s pretty much on par with my old iPhone. And, if you put the larger form factor of the Nokia product line into the equation, the extra real estate makes a huge difference in my day-to-day work and consumer life.

The work side of Windows Phone has some important things working in its favor, but the play side is a different story. In so many important ways, from a dearth of apps to stupid little things like the fact that Windows phones can’t show Amazon Prime movies, Microsoft’s Windows 8 phones just don’t compare to iOS or Android phones. The myriad lacunae in the consumer side make the numbers so dramatically stacked against Windows 8 phone that many pundits are wondering how Microsoft can ever make good on its phone strategy – a strategy that, independent of the $7 billion Ballmer-bucks it took to acquire Nokia, is essential to the ultimate success of the post-Ballmer Microsoft.

The answer isn’t simple, but it’s also clear that nothing simple is going to succeed in overcoming the lead currently enjoyed by Apple and Google (and Samsung, while we’re at it). But I think there’s a direct, albeit complicated, way out of this conundrum: instead of a frontal assault in the consumer market against two very large and dominant players, Microsoft should use a flanking maneuver against the Apple/Google/Samsung axis by establishing a solid beachhead in the enterprise and driving its success from there to the consumer market. Granted, this goes against the received wisdom about consumer/enterprise convergence – but that wisdom dates back only to 2010 or so, give or take a few months. Not exactly the best of historical models to follow.

Making good on reversing the tide running against Windows Phone is where Dynamics comes in. Leading the charge in the enterprise is definitely a role that Dynamics is already playing for Microsoft, a role that was in evidence at last week’s Dynamics Convergence conference, and one that has a lot of important things going for it.

Bucking the current consumer/enterprise convergence story shouldn’t be as heretical as it sounds. The phone market, particularly the smart phone market, is still a very nascent market stuck in its teething phase, and we have no way of knowing whether the enterprise/consumer influence model can be bi-directional or is forever jammed in a forward gear. What we do know is that the history of the mobile market in general is one of rapid turnover and the sudden death – and humiliation – of companies that scant months earlier were sitting exactly where Apple and Google are today. Do the names Blackberry and Palm ring a bell? Got a Symbian phone, anyone? Current market position – regardless of enterprise/consumer tidal flows – is no indicator of future position. In a market in which the future reimagines itself quarterly, that’s saying a lot.

If the flow of influence can go from the enterprise to the consumer, then Microsoft Dynamics can be a major part of the current, a fact that was on broad display at Convergence. Perhaps the best and most interesting example was the demo by Delta Airlines of the flight attendant point-of-sale systems the airline has deployed based on a Windows 8 phone platform. Every time you buy a drink or snack on Delta, your card is swiped on a Windows 8 phone and the transaction is connected via on-board Wi-Fi to a Dynamics AX system that manages credit card authorization, inventory and billing. Delta can also upsell unused premium seats after take-off using the phones, and has plans to eventually sell Broadway tickets and help consumers buy other goods and services on board as well.

This direct ERP-to-phone POS connection is the kind of enterprise chops that Microsoft has in spades. Microsoft has been a POS vendor since the early days of Windows, back when the only Google on the planet was spelled googol and represented a big number with a hundred zeroes trailing behind it. Microsoft’s ability to offer a single platform for enterprise apps that span the mobile, desktop, and back-office worlds is second to none, even if the ultimate goal – a single code base for all three deployment models – is still a year away. No other phone vendor can support the holy enterprise trinity of mobile, desktop, and back-office as well as Microsoft. The fact that its enterprise back office can be delivered in the cloud via Azure is an added bonus.

Is this enough to propel Windows Phone to greater prominence? Probably not by itself. Microsoft’s Windows Phone strategy is one of the more dramatic legacies of the pre-One Microsoft era. Separate code bases, separate strategic directions, obvious technical gaps (syncing my Windows 8 phone to Office 365 doesn’t work as seamlessly as it did with my iPhone), and that massive lack of apps are clearly more than a little Dynamics mojo can overcome.

But these and other gaps are being closed rapidly. The one that may be the hardest is the apps gap – there really is no way to compare what’s available on iOS and Android to the slim pickings available on Windows 8 phone – unless you look at the Microsoft platform as an enterprise-first platform.

In that guise, it’s easy to see that much of what iOS is famous for – and I’m just judging from what pops up in the Apple Store and in online “top iOS apps” lists – are the kind of apps that, from an enterprise perspective, are better called crapps: apps that are more likely to help you pass or waste time than do something productive.

Nothing wrong with wasting time, unless you’re at work. Crapps at work are the bane of productivity, and half of what I don’t like about BYOD comes from the ability of smart phones to suck productive time from our work lives. (The other half comes from the obvious and as yet unsettled security problems that come with BYOD policies, particularly for the largely unsecured Android platform.)

This makes the apps gap a bit of a virtue for Microsoft, and when you add a superior sense of security and an at least theoretical adherence to a much more rigorous privacy model than either Google or Apple subscribes to, Windows Phone starts to make sense in the enterprise. Add to that the mobile-ready enterprise apps and cloud functionality that Dynamics brings to the party, and suddenly Windows Phone starts to look like an enterprise leader in the making.

An interesting detail in the Delta deployment is that the off-the-shelf Nokias the airline is using have no cellphone contract – they’re Wi-Fi-only devices. That not only saves money, but also places a healthy restriction on their use, which makes a ton of sense for a device that’s transmitting thousands of credit card authorizations on the 5,000 flights Delta operates each day. Restricting consumer-like usage patterns in an enterprise device makes sense: in the post-Target data breach era, not having funky personal apps that can leak data on a POS device is hugely important.

The good news for Microsoft is that the smartphone market in the enterprise is still up for grabs: while the iPhone is all over the place in the enterprise, its role as a strategic platform is less well-established (ubiquitous doesn’t equal strategic, it’s important to note). This is particularly true in the enterprise, a place where Microsoft’s biggest smartphone rivals – Google and Apple on the OS side, Samsung and Apple on the hardware side – have traditionally lagged Microsoft.

How much do these rivals lag in the enterprise? My favorite example comes from the embedded OS market. A little-known fact is that most hand-held enterprise devices – RFID readers, warehouse scanners, and the like – run an ancient Microsoft OS called Windows CE. As CE aged gracelessly and Windows 8 loomed on the horizon, embedded developers clamored for a new OS from Microsoft, largely to no avail. Years literally passed before Microsoft released a Windows Embedded 8 preview in November 2012, followed by CE’s successor, now called Windows Embedded Compact 2013.

During the gap years, if Apple or Google had wanted to make their own flanking inroads into the enterprise, a concerted effort toward an embedded OS would have been relatively easy. Microsoft pissed off a huge number of developers, ISVs, and VARs with its devil-may-care, we’ll-upgrade-CE-when-we-get-around-to-it attitude, and pretty much everyone I talked to during that time would have been happy to consider an Apple or Google alternative. The fact that neither company took the bait shows how limited their enterprise vision really is.

Will a flanking maneuver to the enterprise pay off for Microsoft? I’m certain it will, but the bigger question of whether it will be enough to drive Windows Phone into the consumer space remains hard to answer. It’s important to remember, though, that the phone market refreshes much more quickly than the PC market ever did (I’m writing this on a laptop – still a top-of-the-line touchscreen, hybrid-tablet, Windows 8.1 machine – that I bought three smart phones ago, and I’m pretty confident I’ll upgrade my current phone before I upgrade my current laptop), and all those iPhones and Samsung Android phones in the enterprise today will be ready for an upgrade in a year at most.

Windows Phone, meanwhile, will soon overlap with Windows tablets and desktop machines, and the ability to build apps that span a phone, tablet, desktop, and back-office use case will be uniquely Microsoft’s. Those apps will be unlike anything that can be done with iOS or Android. As these first multi-platform apps and business processes roll out, their functionality, security, and cross-platform usability may help set a standard for consumer apps that Microsoft’s rivals will be hard-pressed to compete with.

As with everything new and different, the answer will boil down to execution – how well Satya Nadella can embrace the vision of a unified platform, how well Microsoft’s infamously siloed operations can continue the One Microsoft momentum and deliver the goods, and how well its partners will continue to build the last mile or yard of functionality and deliver it to the customer – enterprise or consumer.

But the seeds of change are planted, and Dynamics clearly has a role to play that may be more important than just securing the enterprise. It may be an exaggeration to say one day that the battle for the future of Microsoft was won on the playing field of Dynamics. But five years from now that might be more true than anyone will admit today.


Testing, Training, Succeeding….

There are many nerdy little corners of enterprise software that don’t get the big buzz effect of overly hyped concepts like “social” and “mobile”, but never let that inattention lure you into complacency. There are many factors that lead to project success or failure, and some of the nerdier ones are in fact much more relevant to overall enterprise success than giving away iPads to your salespeople or trying to foist some collaboration software on an un-collaborative workforce.

One of those nerdy corners is applications testing. While it’s a rare exec who sits bolt upright in a cold sweat at 3 AM over the sudden realization that great testing tools are precisely what has been lacking all this time, that’s only because the rest are sleeping the sleep of the ignorant. Project failure is really death by a thousand cuts, and one of the issues that cuts deeply is the problem with software testing.

Actually, there are myriad problems associated with software testing – many of which remind me of the problems associated with training end users (more on that later). The primary issue is that testing is often a low-priority “to-do” that is executed using last century’s tools and last week’s college graduates. And, despite the growing set of regulations requiring that the testing of sensitive software environments be done safely, many companies skimp on testing – like they skimp on training – and then wonder why things aren’t going the way they had hoped.

So, nerdy though it may be, when Informatica asked me to sit in on a briefing about their latest Test Data Management and Dynamic Data Masking announcement, I took the meeting with no small amount of interest. What I heard and saw was a set of enhancements that seek to bring testing best practices up a notch or two by making sure that new applications are tested using data that’s as “real” as it can get without using production data, and by ensuring that data in production environments is hidden from the view of unauthorized users.

Why bother? What could go wrong, particularly when you’re using a dedicated test instance of your software? My favorite test-data near-disaster story involved a famous process manufacturing company upgrading its SAP environment using an outside systems integrator. Midway through the upgrade, someone realized that, while the project was using that dedicated test instance, the data set they were using contained real data, including the highly proprietary recipes of this manufacturer’s customers, any of which would have been willing and able to sue the manufacturer into receivership had the recipes somehow been copied from the system.

Not that a third-party contractor – replete with all the appropriate permissions and clearances – working in some obscure corner of a big IT shop would ever think of stealing secure information and using it in an illicit manner. That never happens, does it?

Needless to say, the proverbial sh*t hit the fan – luckily before the recipes hit the internet.

Now imagine you’re a hospital upgrading or migrating your patient management system – wanna guess what would happen if a regulator found out anyone outside of a medical provider had access to patient data? Or you’re a retail chain belatedly upgrading the security sub-system in your point-of-sale terminals – how wise would it be to use real customer credit card data to test the upgrade? Not wise at all.

These scenarios shift to a different level of complexity when trying to test a net-new application, for which there is usually no existing data set to use as a template for a test data set. In this case creating a test data set involves ensuring that the data are as close to the real deal as possible, so that when real data are used in the production environment there is a credible reason to believe that all functional and safety issues have been taken into consideration. This is harder than it sounds: creating a credible dummy data set isn’t for dummies.

What Informatica has done is automate test data creation and production data masking, processes that have traditionally been time-consuming and fraught with potential danger. They’ve also linked this to their flagship PowerCenter product in order to offer test data and data masking as a service, in the cloud (for Test Data Management, which also runs on-premises) and on-premises (for Dynamic Data Masking). It’s nerdy, but the impact is two-fold. First, by lowering the cost and complexity of generating and managing test data and data masking, this little nerdy corner of the IT world just got a lot less costly and easier to manage. And second, those lower barriers make it easier for Informatica’s customers to overcome institutional inertia and cost-justify their testing spend. The lower the barriers, the greater the likelihood that the need for quality testing will be recognized and acted on.
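To see why automating this is worth paying for, here’s a rough sketch of the two underlying techniques: deterministic masking of production values for test use, and rule-based synthesis of dummy records for a net-new application with no production data to draw on. The function names and rules are hypothetical illustrations, not Informatica’s actual implementation:

```python
import hashlib
import random

# Hypothetical sketch of test-data techniques; names and rules are
# illustrative, not any vendor's actual API.

def mask_value(value, salt="test-env"):
    """Deterministically mask a sensitive value: the same input always
    yields the same token, so referential integrity across tables
    survives, but the original can't be read back out of the hash."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return "CUST-" + digest[:8]

def synthesize_order(rng):
    """Generate a plausible dummy record for a net-new app where no
    real data set exists to serve as a template."""
    return {
        "customer_id": mask_value(f"synthetic-{rng.randint(1, 9999)}"),
        "quantity": rng.randint(1, 100),
        "unit_price": round(rng.uniform(1.0, 500.0), 2),
    }

# The same production value always masks to the same token...
assert mask_value("ACME Corp") == mask_value("ACME Corp")
# ...while different values get different tokens.
assert mask_value("ACME Corp") != mask_value("Globex")

rng = random.Random(42)  # seeded, so test runs are reproducible
order = synthesize_order(rng)
```

Doing this by hand, table by table, is exactly the time sink described above; the value of a product in this space lies in applying rules like these consistently across an entire schema.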

Then there are the links between testing and training, particularly when it comes to net-new application development. One of the implementation and go-live best practices I’ve been recommending for years is to train as much as possible, early and often. And I don’t mean big fat binders full of generic functional descriptions and 8-hour, mind-numbing, core-dump classroom training sessions. That’s a proven way to waste money and foment mediocrity.

What I’m talking about is training on real systems, using live data, that are as close to the actual production system as possible. Ideally, this is done in support of agile development, so that the process owners are testing the development system and using their experiences both to inform the final development and to get started on building the test scripts and end-user training system.

Not a lot of customers do this – or at least not enough, judging by how many implementation failures can be traced to lousy training – and not a lot of vendors or implementers promote these concepts. SAP recently released a new capability called Live Access that allows customers to train their end users on a real, production-quality system using a simulated data set. This radically raises the value of training and aligns the formerly boring world of end-user training with the well-established body of knowledge on the value of experience-based, hands-on training.

Informatica’s tools enable this too – the rules for test data set generation, and for otherwise ensuring that the wrong people don’t see the right data, also serve the purpose of building training data sets that are as real as possible without being too real for safety. While it’s going to take a lot more to shift the majority of enterprises from the “training as an afterthought” camp to the “training as a strategic function” camp, the availability of these products removes one of the more important barriers to achieving this worthwhile goal.

A final word on test data and data masking. One of the best reasons to own a tool that facilitates these functions is that, if you don't have the capability, it's a guarantee that someone is currently wasting a lot of time and money doing it the hard way. It may be your own internal IT department, or it may be your systems integrator, inflating the budget with more of those recent college graduates you didn't realize you were paying to train on the job.

Regardless, worrying about this nerdy corner of the enterprise software market doesn't have to require a major leap of strategy. It should be enough to know that there are some pretty nice 21st-century tools that can save you a significant amount of time and money that you probably would rather spend on something a little less nerdy, like some of that cool social or mobile software you've been itching to get your hands on.

Wouldn’t you?

Satya Nadella, CEO: Good News for Microsoft Dynamics, Bad News for the Competition

The ascendancy of Satya Nadella to the top spot at Microsoft is welcome news to a wide range of internal and external stakeholders. He's an insider who knows not just where the skeletons are buried but, more importantly, where the gemstones are buried too.

One of those gemstones is Microsoft Dynamics, and as a result of the Nadella Era you can expect to see much more traction for Dynamics both inside and outside Microsoft. While I don’t expect that Dynamics will overtake Office or Windows any time soon in revenues or influence, with Nadella at the helm the value of Dynamics inside and outside Microsoft is destined to grow significantly.

This ascendancy of Dynamics starts with the fact that Nadella used to run Dynamics – which is a pretty good start. In conversations last fall, as the board was avidly searching for Ballmer's successor, Nadella made it clear that he gets why Dynamics has evolved to hold a strategic position inside Microsoft and the market at large. And that understanding bodes well for the future prospects of Dynamics, and perhaps less well for companies trying to compete with Microsoft.

Nadella’s comments addressed a number of key issues, but the most important was his reinforcement of the notion that Dynamics has a role in helping realize the value of new and emerging Microsoft infrastructure assets, particularly cloud assets like Office365 and Azure.  This is less mundane than it might seem at first blush: As social, mobile, and cloud converge – to use Nadella’s term, or commoditize to use mine  – business process, workflow, and supporting data models become increasingly strategic assets for any vendor.

In fact, this creeping commoditization of some of Microsoft's – and the rest of the industry's – core assets is what really should be front and center on Nadella's to-do list for the next year or two or five or 20. It's easy, if you take Microsoft apart piece by piece, to see that big parts of what constitutes its core products are individually under attack by some very worthy opponents. Google is pushing on Microsoft's traditional hegemony on the desktop with Chrome, and it owns search (sorry, Bing); Android is #2 in phones and tablets (sorry, Nokia and Surface); and Google Docs is trying its hardest to usurp Office. Apple, of course, owns cell phones and tablets. SQL Server is under total assault from multiple quarters: a growing invasion of NoSQL DBs and high-end in-memory DBs like SAP HANA is making SQL Server look a little old and tired.

The list goes on: Lync and Skype – well, actually, they are their own worst enemies (note to Microsoft's third-party tech support partners, particularly Infosys and Accenture: if you're going to use Lync to debug customer problems, teach your employees how to debug Lync too). Xbox is sitting in the middle of a console gaming market that is being eaten alive by mobile gaming. SharePoint is under attack by upstarts like Box and oldstarts like Open Text. Azure is more or less just sitting there, letting everyone else's cloud strategy grab all the glory. Dynamics is heading right into SAP territory, with Infor fast on its heels. And on and on.

Looking at this as a list of individual parts, this multi-front assault is one Microsoft can only win by taking the commodity road and competing on the basis of volume and price. It's a war of attrition that I wouldn't want to be part of, but if that were the road Microsoft were to take, then the single piece of advice I would give to a Microsoft competitor would be to divide and conquer. One on one, in a standalone bake-off, most of the Microsoft products are beatable on feature/functionality, price, or reputation.

But if Microsoft, and Nadella, can keep moving forward with Ballmer's One Microsoft strategy, rise above the commodity scrum, and work on selling the strategic synergy between the myriad parts of the Microsoft product mix, then it's going to be a vastly different game. Well executed, marketing and selling the sum of the parts would be more than enough to lift Microsoft's core products out of this commoditizing tsunami and onto some high-value high ground.

The high ground is more than just a question of bundling, though under One Microsoft the company is finally able to actually bundle software, services, and hardware from its many divisions into a single, synergistic offer that can be sold, at the high end of the market, by a growing enterprise direct sales force and in the mid-market by a growing army of “sum of the parts” partners.

There’s an important asset, beyond bundling, that One Microsoft can bring to the table (cue Dynamics), and that’s the ability to enable customers and partners to define and implement strategic business processes that can run on top of this broad, and otherwise seemingly disparate, Microsoft product set. While there may be some good reasons not to have a single vendor too dominant in any particular enterprise domain, the ability of Microsoft and its partners to deliver solutions that innovate core business processes and are optimized across Microsoft’s own hardware, software, and services would be an extremely compelling opportunity for many customers. Not to mention ISVs looking to build end-to-end processes based on a single development environment and a unified hardware/software/services platform.

The fact that Microsoft is the only major vendor to effectively span both the consumer and business worlds means that it can provide a degree of seamlessness between these two worlds – bear in mind that at the end point of every business is a customer – that can make B2C processes that much more efficient. Those more efficient B2C processes could in turn drive greater levels of efficiency in the B2B processes that support them: if, for example, a company has a better connection to its customers and a better sense of its demand (B2C), then it can use that insight to better manage its supply chain (B2B). This relatively straightforward example has analogs all over the enterprise.
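To put a rough number on that example, consider the textbook safety-stock formula, in which the inventory buffer a company must carry scales linearly with demand-forecast error: halve the error (the payoff of a tighter B2C connection) and you halve the buffer. The figures and service level below are invented purely for illustration.

```python
import math

def safety_stock(demand_std_dev, lead_time_days, z=1.65):
    """Classic safety-stock formula: buffer = z * sigma_demand * sqrt(lead time).

    z = 1.65 corresponds to roughly a 95% service level.
    """
    return z * demand_std_dev * math.sqrt(lead_time_days)

# Weak demand signal: daily forecast error of 400 units, 9-day lead time
coarse = safety_stock(400, lead_time_days=9)
# A better connection to customers halves the forecast error
sharp = safety_stock(200, lead_time_days=9)

print(round(coarse))  # 1980 units of buffer with the noisy forecast
print(round(sharp))   # 990 units – halving forecast error halves the buffer
```

The point isn't the specific formula – it's that B2C-side visibility shows up directly as B2B-side working-capital savings.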

This potential, as I have written before, is best seen in the context of the high-level business processes that Dynamics can offer in the market. Having a Dynamics business process at the apex of the Microsoft value proposition is the best way to showcase the value of the synergy between the disparate pieces of the Microsoft product story.

The good news for Microsoft is that Nadella has been part of the execution of this strategy already: Ballmer started pulling his direct reports together into weekly meetings last year in order to drive this synergy forward, and it’s clear that Nadella, as of late last year, was a believer. If he just stays the course, this strategy will make a huge impact on Microsoft’s market position. And if he refines some of the pieces of the strategy – giving Azure more focus, fixing Lync and Skype, converging Windows Phone and the rest of Windows, continuing to drive the message of desktop/tablet synergy, to name a few – and pushes the role of Dynamics as showcase and raison d’etre for One Microsoft, he’ll be well on his way to defining the Nadella Era as one of growth.

The bad news for Microsoft’s competitors is that if Nadella can pull this off it’s going to be increasingly hard to disparage Microsoft and its products as “late to the market”, “low-cost commodities”, or “not enterprise-class.”  What will matter more than anything is that innovative business processes are enabled in a cost-effective and seamless way, tying the enterprise together from the phone and desktop to the back office and cloud. It’s still a vision, not a reality, but it’s one that Nadella can readily set his sights on with a serious chance of success.

There’s no shortage of work to do, and it’s going to take time to see if this grand vision can be realized. But the nice thing about an insider like Satya is that he won’t spend the next six months trying to get his head around this behemoth of a company, having spent the last 22 years laboring towards that goal. Instead he gets to focus on the real task at hand: making sure those who thought this company is “too big to succeed” are wrong. That, at a minimum, will make this an amazing turnaround to watch unfold.


The New Year in SAP-land: Selling Customer Success (Part II)

Where we last left off, I was admonishing the SAP field to greater glory around the theme of customer success. Here’s the rest of my letter to the SAP field:

Sell Value, Not Platforms: Now I'm probably going to get yelled at again by some of your execs, but trust me, there are going to be very few customers who see you walking in the door trying to sell a massive platform reboot and sob with relief at the prospect. First of all, you and every other sales exec in the waiting room are trying to sell a "platform": a development platform, commerce platform, mobile platform, decision-support platform, human-machine interaction platform, content management platform, or a platform of platforms. It's getting hard to walk out of the CIO's office without a platform pitch smacking someone in the head. Okay, HANA would make a HEC of a platform, but you've got to do more than just sell a platform strategy in order to rise above the noise. And, as you can't sell your company's platforms (and yes, to the great confusion of the market, there is more than one platform in SAP-land) as platforms for their own sake, you'd better sell them in terms of their value.

Of course, what defines the value of a new platform is different for every company, and that means you’d better know how to sell the difference between a platform that can rationalize a massive, heterogeneous technology infrastructure, a platform that can support a new, hybrid cloud/on prem deployment scenario, a platform that can support new strategic business initiatives, and a platform that can better align a company with its partners and customers. Even if in all cases it’s the same platform. In fact, it turns out SAP has platforms for all those use cases – but selling the use case first is what will help differentiate your platform pitch from everyone else’s.

Sell SAP’s Ecosystem: SAP has one of the best networks of ISV partners in the industry, and those partnerships do a lot to extend the value – and success – of SAP in the market. But unless there is a hard and fast reseller agreement in place, and a direct monetary incentive to sell a partner product, field sales traditionally hasn’t been too eager to pull out the stops and get a partner product into the deal. The partners are an amazing asset that need to be better utilized, especially the ones that aren’t sold on SAP paper. These are often companies that are either innovating on your technology way ahead of the curve or offering ways to implement faster and get better results, all of which translates to greater customer success. Either way, you need to broaden your horizons and get more of these partner products involved in the deal, or risk losing to the vendor that does.

Cross-Sell or Die: Was that strong enough? Want stronger? Do you want to become like HP, or Microsoft (until recently, anyway)? I can’t tell you how many times I’ve talked to people across your company who don’t know about key strategic products from other divisions and business units, and can only sell what they’re used to selling. Companies that grow by the combination of acquisition and organic innovation can’t succeed if selling is hampered by silos. I’ve seen the death by silo movie a hundred times, and it’s always an unhappy ending. Get your heads out of the sand and sell your company’s products, and not just the ones you can sell in your sleep.

Sell How the Customer Consumes Software, Not How SAP Builds It: Very few customers today buy ERP, whatever that is. Few are buying ultra-fast databases for the sake of having a fast database, and no one wants a purely horizontal product when there's one that is tailor-made for a specific industry or geography. In other words, resist the temptation to pitch the customer the usual three-letter acronyms and cooler-than-cool technology. Customers want industry best practices, process excellence, great interactions and user experiences. They are buying verb phrases – sell more, service better, innovate faster, get closer, be more, spend less – not noun phrases like big data, real-time interaction, hybrid cloud, and mobile analytics. They want their processes to be best in class for their industry, geography, customer and user base. They want you to talk their language, not talk at them in tech jargon and TLAs (three-letter acronyms, for those of you who are TLA-challenged). You don't need to dumb it down to have this conversation – in fact, you might just find yourself having a much more strategic discussion if you leave the TLAs at the door.

Sell to the Present, Not Just the Future: It’s easy to be tempted by the allure of the new, but most of your customers won’t go fully cloud, fully HANA, and fully mobile any time soon. What they will do is implement some of the new alongside the old, and they are expecting you to sell them a large degree of comfort around the fact that the old won’t become the neglected along the road to the new. It’s important to emphasize this – otherwise you’re selling revolution, and that’s not going to get you or your customers where you want to be in either the short or long term. There’s a lot to be said for offering hybrid strategies to your customers that support the old and the new, cloud and on-prem, mobile workers and desktop workers. It will make you a lot of friends in an installed base that has been watching the innovations with a combination of excitement and trepidation.

Let me close this overly long missive with the following: this may be the most competitive year you've ever had to sell in. Your competitors are getting smarter (okay, most of them anyway) and have been stepping up the innovation side of their businesses as well (again, most, but not all). And the SAP customer base has a peculiarity that is, I think, unique to SAP: they're decidedly more independent, and more prone to looking outside SAP, than their counterparts in the Oracle market. They're under constant sales assault by companies like IBM and Microsoft, which both partner closely and compete aggressively with SAP. And the customers have armies of global SIs camped at their doors, all of which command a seat at the table for any major technology change, and few of which seem averse to throwing SAP under the bus when things don't go as planned.

In short, it's going to be an impossibly complicated and difficult year in which to sell some of the most innovative and comprehensive software and services in the industry against some of the ablest and most aggressive competitors in the business. And, of course, just remember that if at first you don't succeed… you're toast.

So, Happy FKOM, and a very Happy New Year.

The New Year in SAP-land: Selling Customer Success (Part I)

I’ve realized recently that despite everything we analysts do and say about enterprise software company strategies, new products and technologies, trends, and all the other coins of the analyst realm, what matters most is how the sales force sells. If the field sales force can’t get in front of the right influencer at the right time with the right mix of product and strategy, then every analysis, recommendation, critique and consulting gig geared towards fine tuning go-to-market strategies quickly goes to hell in a hand basket. The bottom line is no go-to-market strategy is so perfect that it can’t die an ignominious death in the field.

It’s clear that the entire enterprise software market is at an inflection point when it comes to its customers. Depending on making quota by leveraging the good will from years of customer/vendor “partnership” isn’t going to cut it. While many deals are still done based on long-established relationships, the growing number of influencers and the complexity of the interoperability of new products and business processes means that counting on the old “nobody got fired for buying (insert your company here)” strategy is a quick road to the loss column.

These thoughts come to mind as the new year begins and the ritual of field kickoff meetings follows suit. Up in a week is SAP's Field Kick-off Meeting (FKOM), and it's a given that SAP's massive sales force will have one of the biggest challenges in the industry. Not just because of the innovations that SAP is bringing to the market – which are myriad and overwhelming for those of us who try to follow them, much less for a sales exec trying to translate them into something a customer is willing to pay for – but because these innovations are opening up new competitive challenges as they attempt to open up new markets for SAP and its partners.

Below is a letter to SAP's FKOM attendees on the eve of the event, highlighting the things I think might make a difference in the coming year. It's in two parts – I'm posting part I today, and will follow with part II in two more days. While it's addressed to SAP, I believe the suggestions below apply to every enterprise software salesforce in the business. Enterprise software is changing, mostly for the better, and sales execution needs to change with it, or else.

Dear SAP Field,

This is the year in which the pressure is all on you – once again. SAP is coming to market with an unprecedented collection of products, strategies, services, and technology, and all these initiatives will either live or die in your hands. You’re going to be expected to be smarter, more strategic, better able to sell solutions, more attuned to the business user, and willing to make your life – and bonuses – more complex by getting SAP customers to adopt a growing portfolio of subscription-priced cloud services. 

You’re supposed to be better at vertical selling, more attuned to bringing partner products into the mix, and ready to prove the value of massive improvements in software delivery, life-cycle management, training, and development.  And you’ll need to know when to offer HANA Enterprise Cloud, HANA Cloud Platform, ARIBA’s business networks, Jam’s social collaboration, new user experiences like Fiori, specialized applications like Commodity Management as well as new initiatives like Smart Business.

You’re also going to have to know when to bring in SAP Services, when to suggest a Design Thinking workshop, when to cede the high ground to your big SI partners, when to send a deal downstream to a mid-market partner. You’ll need to know what the opportunities are in mobile, how to differentiate between a Sybase ASE, IQ, and SAP HANA sale. And while you do so you’ll need to be fending off the strategic incursions of Oracle, Workday,, Microsoft Dynamics, Infor, IBM, and a never-ending armada of erstwhile competitors looking to invade your shores and take away your hard-earned market share.

If you think this is hard – then you're halfway there. And if you think this is impossible, then you're actually showing signs of sanity: rest assured, selling enterprise software was never a sane, rational process even in the best of times. But more importantly, with SAP's senior management exhorting you to glory – and an ever-higher sales quota – you're probably wondering how you'll ever pull any of this off.

So, with FKOM looming and a new race to Q4 starting up, permit me to offer some suggestions gleaned from my own attempts at rationalizing what SAP is doing and aligning it with what customers are looking for. You may not like what I have to offer, and it’s possible that your bosses might not like it either: I’m a believer that targeting your quota first and your customers’ long term interests second may be good for the quarter but bad for everything else. Regardless, I think we can agree that selling this vast array of products and services as checklist items on a price list isn’t going to get you, or SAP, where you want to be by this time next year.

Sell Success, Not Licenses: If you do one thing to change how you interact with customers, it should be around selling success. Overselling and under-delivering are the unfortunate legacies of the enterprise software market, and all too many deals gone bad are effectively the result of a salesperson promising whatever it took to get the deal done. It's true that not every deal that goes bad is oversold – there may have been some overselling in the Avon deal, but fundamentally that deal went south because the global SI on the project ran it into the ground. But if you're only in a sale to make your quota, in my opinion you're doing everyone a disservice.

This of course means that your bosses have to give you the air cover to walk away from a deal or from plumping it up unnecessarily. If you're going to be a good guy or gal, and do what's right for the customer to the possible detriment of your quota, you should be able to do so without being stigmatized for not overselling. It ain't going to be easy – ideally this new attitude would move all the way up the food chain to the board, and would be something that the investors who demand killer quarters regardless of who dies in the attempt would adopt as well. But regardless of who's on board, selling success should be your job number one.

Sell Your Own Implementation Services, Not Your Partners': Okay, a lot of people, particularly your global SI partners, are going to hate me for saying this. Deloitte won't like it for sure; they're still running for cover in the wake of the Avon fiasco (Ding dong, Deloitte – Josh calling. Anyone home?). But the fact remains that SAP Services may be the best choice not only for implementing the latest and greatest – all that HANA you're supposed to sell, for example – but also for ensuring customer success. SAP Services is getting pretty good at driving innovation based on SAP's latest products, the RDS offerings are singularly successful at managing implementation costs and timetables, and they definitely care a lot more about preserving SAP's brand reputation than some global SIs we've already discussed.

SAP’s Global SI partners have had too many incentives over the years to oversell the services side of a deal and jack the project cost and timetable way up, to the detriment of customer success. This is part of their DNA, and while some of them, particularly the boutiques, are getting the customer success religion, it’s hard not to be a global SI and still dream the oversell dream. So cut them out as much as possible.

Sell the Value of Training and Education: Top notch training and education is probably the single best guarantee of delivering long-term value to a customer, and SAP has some impressive, and impressively under-appreciated, education assets. This is an uphill battle worth trying to win with a customer base that has traditionally seen training and education as a waste of time and money. The ability of SAP to deliver training online, in-context, and tailored to a customer’s particular implementation is more than just nice to have – it’s really the only way to ensure that a shifting, mobile workforce will be able to optimize their use of the increasingly complex set of processes and technologies that SAP has to offer.

If you let your customers sign a deal without having some training and education in the mix from the get-go, you're creating the first and most important precondition for project failure: lack of training. Another reason why you should be selling SAP's training is that those global SIs I've been picking on also have a sorry history of either not selling training and education or offering a mediocre product to those customers that opt in. SAP has something to offer that is top notch, and as concepts like strategic workforce planning and talent management become broadly accepted best practices, training and education services will be essential for making sure the right people are on the job at the right time.

Sell Lifecycle Management: As anyone on the marketing side of Solution Manager will tell you, I'm more in the loyal opposition than the fanboy camp when it comes to Sol Man. This product set has been a mess for customers to understand and implement, and the wealth of partner products that perform different parts of the Sol Man functionality attests to the need for faster and easier ways to do some of the things Sol Man does. That being said, customer success can best be guaranteed by enabling a comprehensive lifecycle management function, and Sol Man has basically no equivalent in the market today. As SAP moves everything to the cloud, Sol Man's benefits will become even easier to access – and in fact, for pure cloud customers, much of what Sol Man does will be completely transparent. But while you've got your customers' attention regarding success, sell them a little insurance too. If I were the SAP Board, I'd give your customers a discount on their maintenance if they fully implement Sol Man (ditto on that offer for training and education): nothing will help ensure success more than a fully implemented ALM strategy, whether it's based on Sol Man alone or some combination of Sol Man and partner products.

Sell Interaction and Process Excellence, Not Mobile and Cloud: I know you're going to have customers who say that they need a cloud or mobile strategy, and it's going to be tempting to try to give them one by selling them some cloud or mobile software. But my experience is that the customers that ask for products in this manner today are either way behind the market or missing the opportunity to have a more inclusive, internal dialogue with their stakeholders about what really should matter to the customer: supporting world-class stakeholder interaction and process excellence.

This isn’t about more mobile and more cloud: Moving the needle on your customers’ business processes should be a technology-agnostic quest that starts at the process and interaction level. Technology platform choices like cloud and its many sub-categories should be made in support of the process improvement goals, not as a pre-condition to process improvement. And supporting mobile today is like supporting alternating current – try to re-think a core process that doesn’t have a mobile component. So don’t lead with the obvious, lead with what really changes the game for the customer.

(End Part I. Part II to follow on Thurs.)

With Friends Like These…. Uncovering Responsibility in Avon’s Rollout Failure

“Victory has many fathers. Defeat is an orphan”. 

                                          President John Fitzgerald Kennedy, on the Bay of Pigs Fiasco

It’s great line, and one that popular culture has changed to success has many fathers and failure is an orphan. JFK’s line is a stirring example of a leader taking ultimate responsibility for what happened under his watch. But the truth, which any student of the art of failure knows well, is that it takes as many people to kill an undertaking as it does to make it succeed.

So when I see headlines like InformationWeek’s “Avon Pulls Plug on $125 Million SAP Project”, my first reaction is to cringe at the obvious lack of knowledge about complex software implementations that went into that headline. Implementation projects always have many fathers and mothers, so to blame the software vendor for the failure is tantamount to blaming the grandparents for the misdeeds of their grandchild. They provided the DNA, no doubt, but someone else actually raised the little devil and loosed him on the world.

In the spirit of a truthier form of the truth, the headline should have read “Avon Pulls Plug on $125 Million Deloitte/IBM/SAP/Avon Project,” which properly distributes the blame for the failure, in probable order of culpability. Subsequent reporting on the story by IW has unearthed IBM’s role as the provider of the UI for the project, and some sleuthing on my part has dug up Deloitte’s role.

While Deloitte has declined to comment on their primary role in yet another enterprise software failure, and SAP and Avon haven’t publicly commented on who the SI was, I’m pretty confident my sources are right about the fact that Deloitte was the primary SI on the job.

As the primary SI, the headline could more succinctly read “Avon Pulls Plug on $125 Million Deloitte Project.” Of course, that might require Deloitte to take the high road and assume ultimate responsibility as the contractor in chief, in line with the spirit of JFK’s mea culpa as the Commander in Chief of a failed operation with hundreds of other “parents”. Not a likely scenario: as a snarky aside, one could be tempted to add the word “again” or “another” when talking about Deloitte and implementation failure in the same sentence. Deloitte, particularly when it comes to failures involving SAP software, is apparently a serial offender. More on this in a moment.

The reason I’m being so hard on Deloitte (and IW) is that it’s very clear that enterprise software projects have at least four “parents”, and in most cases each is contributing an essential component of the project’s DNA and each bears considerable responsibility for how well the project goes. The first three are readily identified: the software vendor, the SI, and the hardware vendor. Usually there are multiple vendors for each of these categories, in this case IBM Websphere built the front end UI and connectivity components, while SAP provided the ERP back office functionality. There are also several SIs involved in most large projects, though there is only one prime contractor among them.

The fourth parent is a little trickier to call out, as none of the other three really wants to be accused of blaming the victim. But in the spirit of the truth, if you're going to assign responsibility for project failure, and of course project success, you've got to mention the customer as well. Sad but true – customers bear significant responsibility for project failure. Whether it's Avon's IT staff or the U.S. Dept. of Health and Human Services, a forensic analysis of project failure always finds fingerprints on the murder weapon that belong to the customer.

But there’s responsibility and then there’s ultimate responsibility, and with so many big vendors on contract to make this Avon project work – companies that collectively have thousands of successful implementations under their belts – it’s clear the real parental responsibility for this disaster comes from one of the vendor categories, not the customer side.

While not knowing the intimate details of the contract in question, it's safe to assume that Deloitte was the primus inter pares in the deal. In a project with $125 million at stake, at a public company that pulls in $10 billion in revenues each year, and in the midst of a turnaround architected by a new CEO, most companies would engage an established global SI like Deloitte as the prime contractor. A global SI like Deloitte is precisely the company you want, apparently, to provide air cover when you go to your board for a $100 million project. Boards tend to think the Deloittes of the world won't screw up this badly, and are willing to pay for the insurance policy that comes with using a major SI brand.

With board-level approval in hand, these global SI relationships typically extend deep into the C-suite and tend to suck up most of the oxygen in the room – and the bulk of the project’s cost – especially relative to the revenues being paid to the enterprise software vendor. Indeed, in many of these large accounts, a company like Deloitte is easily earning ten times more revenues than any individual software vendor.

With that revenue differential comes a huge differential in influence, and therefore responsibility.

The prime contractor is the one that is supposed to manage the complexities of the project, mediate the different project needs and overcome the obstacles, bring all the different hardware and software products into harmonious union, and guarantee that when the project goes live, it meets certain basic criteria. FYI, a $100 million-plus write-down is not one of them.

If Deloitte was truly the prime, and there’s a lot of circumstantial evidence that it was, then the act of going live with a project that underperformed so dramatically – according to all reports – was Deloitte’s call. SAP provided the backend, and probably had some inkling that things might not be as copacetic as they should have been. IBM, which built the user experience with WebSphere, would almost certainly have been aware that orders, inventory visibility, and a host of other problems were mucking things up from the get-go. But the only player on the field with a full view of the action – or lack thereof – was Deloitte.

And if they were made of the same stuff as a JFK, they would have fallen on their sword with honor, instead of, as of this writing, trying to hide their parentage by not commenting.

This story of culpability in enterprise software is as old as the market segment, and the ability of the global SIs to make everyone but themselves look bad when the project flames out is impressive to the point of absurdity. What is also absurd is the pass they get from customers for this lack of culpability – but of course if project failures are only reported as software failures, how would the customers know what role the SIs play?

One of the bright shining hopes in the growth of cloud computing is the fact that the ability of big SIs to wreak havoc on customers’ implementations, to the detriment of their vendor “partners”, will wane as large swaths of what was once the SI’s bailiwick become subsumed in a standard cloud implementation model. Of course, that wouldn’t have saved Avon in this case. But with less of the implementation cost under the SI’s control, and less of the implementation success left up to a busload of junior programmers learning on the job, project failure may diminish.

Regardless, a little transparency is long overdue in the SI market. What really is Deloitte’s track record in enterprise software? Avon, the County of Marin, the Los Angeles Unified School District, Levi Strauss, and the States of California and Florida – those are a few of the Deloitte failures that we know of. But it’s not easy to know about either failures or successes. Could it be that the risk of failure a la Deloitte outweighs the brand value of having Deloitte on your side? Shouldn’t a customer be able to judge the relative value of an SI based on some objective data about their ability to deliver on time and on budget?

And could a software vendor like SAP or IBM – or any other vendor dependent on global SIs as a channel – insulate itself from the negative drag that SI-driven project failure creates by insisting on some modicum of accountability from its SI partners, instead of being hung out to dry, as is the standard operating procedure of the SIs today?

Okay, when pigs fly, you say? One can dream, can’t one?

Finally, where does this leave our friends at IW? As an ex-journalist, I understand the maxim “if it bleeds it leads” as well as any, and I guess that SAP was an easy target (though where was IBM in the original report?). But this story, IMO, could have been even bloodier if the true story had been told. With just a little digging a reporter could have figured out that Deloitte, not SAP, was the primary culprit, and a headline along the lines of “Global SI Giant Deloitte Leaves Avon Calling… for a $125 Million Write-Down” might have gotten more page hits. Maybe not, but at least it would have been more accurate than the headline they used. Better luck next time…


One OpenText: E Pluribus Unum, Enterprise Software Style

It’s becoming the latest trend in enterprise software company evolution. After years of merger and acquisition, in which dozens of products and thousands of customers were dumped helter-skelter into a single corporate bucket, yet another agglomeration of disparate products, services, and technologies is trying to rationalize its offerings.

This time the company is OpenText, and the rationalized product set is code-named Red Oxygen, a collection of five suites of functionality that span search, analytics, archiving, publishing and presentment, social collaboration, process management, integration, content management, and a development and deployment platform. Though the product set is vast, the goal is simple: rationalize a massive collection of products, strategies, and markets acquired over the company’s 30-plus years of existence.

In pulling a disparate set of over 100 products under a single umbrella, OpenText is showcasing the ambitions of its CEO, Mark Barrenechea, to avoid being road-kill in the too big to succeed club. In this regard, OpenText is following in the footsteps of companies like Microsoft and Infor, both of which have woken up from a binge of acquisitions and poorly integrated business and product strategies and realized that leveraging the sum of the parts requires more than just a single corporate logo.

From the looks of what OpenText presented at its recent Enterprise World conference, Red Oxygen is just what the company needs – a tangible strategy, and a new platform, for helping customers innovate around the increasingly important domain of content, big data, and business process. At face value this makes a lot of sense – content is at the heart of pretty much every major business transformation I’ve witnessed or worked on, and the interrelationship between content and business process is fundamentally what has to be improved in order for processes to improve.

While OpenText has a lot of ground to cover between its Red Oxygen strategy and execution, the foundation has clearly been laid for re-imagining a compelling reason for enterprises to work with a unified OpenText.  

There are three reasons why this strategic rationalization is essential for a company, like OpenText, with dozens of acquisitions under its belt: existing customers need it, prospective customers need it, and an often overwhelmed and outgunned sales force needs it.

For existing customers, a unified product strategy – as opposed to the smorgasbord of products that in many companies results from an often investor-friendly and customer-hostile acquisition strategy – gives customers a strategic vision and roadmap for the products they’ve invested in. Many companies became customers of OpenText through acquisition, and they need to know that the single product they bought can lead them to a promised land of other products, and innovation and value, etc. etc.

Of course, the trick is to make sure that the inevitable bad news that some products will be orphaned or simply rationalized out of existence is countered with some really good news about the new strategy and products that will replace them.  Problematic for some customers, but essential for the vendor going forward.  

For prospects, the rationalized strategy is usually about sloughing off the lingering market-dinosaur status that often is the result of an investor-friendly acquisition strategy that’s more about creating a fat maintenance revenue stream than product innovation.  This is clearly part of the rationale behind Red Oxygen, as it was the rationale behind Infor’s Infor 10x strategy and the Fusion software and middleware strategy that Oracle – the king of investor-driven acquirers – tried and failed to use to rationalize its software binge.

(Over in Redmond, Microsoft has been accused of looking like it is on the road to extinction, but not by running up a massive maintenance stream based on a hodge-podge of older products. Though you could argue that its desktop and office productivity monopolies were definitely making Microsoft as fat and happy as any maintenance-revenue rollup, what was really happening was that the company had let its different operating units function so autonomously that they were indifferent to the need to cross-sell Microsoft products. While the cause was different the effect was similar: existing and prospective customers may have bought the products, but they weren’t being sold a pan-Microsoft strategy, and a massive up-sell and cross-sell opportunity was squandered. In theory, this is what the company’s One Microsoft is intended to solve.)

Like Microsoft, Infor pre-Infor 10x, and like Oracle today despite Fusion, OpenText needed a shot in the arm that could leapfrog its reputation as an old guard market laggard doomed to obsolescence in the face of an aggressive new set of competitors. Reputations like these are often unfairly earned – there is a mountain of evidence that customers have continued to innovate spectacularly using OpenText’s existing portfolio, especially but not exclusively in the SAP market, for which OpenText is SAP’s top OEM partner. But there’s nothing like having a loud-mouthed startup, such as Box, trying to steal your market and paint you into a strategic corner to force a little strategic change.

Finally, there’s the problem of sales execution – that great black hole where all good marketing ideas go to die. Companies that grow through rapid acquisition usually find themselves at a point where there are just too many wildly different and often overlapping products in the portfolio to sell. In frustration the field sales team simply rolls up its sleeves and goes tactical, selling only the point solutions they’re most familiar with and foregoing any attempt to sell a strategic product set or vision. This works well enough as long as the customers don’t want vision and the competition isn’t doing a halfway decent job of selling a vision of their own. But once the competitors start looking visionary, and the customers start thinking that a little vision might be good for them as well, then a field sales force that can only sell tactically is going to drag its company down the road of mediocrity.

This triple threat is clearly what Barrenechea is trying to avoid with Red Oxygen, and judging from the reaction at Enterprise World there’s a decent chance he will succeed. While there were definitely customers I spoke with who voiced concerns about whether their favorite product or capability might be lost in the shuffle, there was clearly a sense of relief that OpenText was starting to show some moxie. How well moxie translates to market and mind share will take some time to discern.

There’s one important caveat for OpenText and its fellow acquirers-cum-strategic visionaries. The pan-enterprise story is a good one, and a necessary one for vendor and enterprise alike. But it’s hard to sell at a time when more and more influence is being divested to the line of business buyer, who unfortunately doesn’t care as much as he or she should about how LOB buying decisions fit into a broader corporate strategy. This problem only gets worse when you add a platform to the mix, as Barrenechea has done with Red Oxygen. Take your vitamins and eat your vegetables … always easier said than done.

In the end, OpenText really doesn’t have a choice – no more than any other serial acquirer in today’s fast-moving technology market has a choice. And as for the ambitiousness of its plan – it’s definitely better to err on the side of too much vision than not enough. The vision is a good one; now we’ll have to see if the customers agree.

Too Big to Fail? How about Too Big to Succeed?

Achieving economies of scale is one of the axioms of modern business, a driver for mergers and acquisitions across all industries. This relatively simple concept helped drive a huge swath of industrial companies to impressive degrees of success: in many, many cases, buying at massive scale, building at massive scale, and delivering at massive scale has helped drive up revenues and profits, improve market share and share price, and improve share of wallet in accounts large and small.

But does this drive to bulk up always work as well as its proponents – many of whom are all too often conflicted by the huge fees they and their companies stand to gain in the M&A business – would have us believe? And what about the customers supposedly at the center of these transactions – is there anything in it for them?

I think the answer, particularly in our tech-driven service economy, is increasingly no. We’ve seen a side of this in the “too big to fail” phenomenon that was the main sideshow to the current recession: financial institutions whose very size made them too big to just cast off into the depths of an economic Tartarus. And so as a society and global economy we bailed out these miscreants and clucked our tongues at the notion that we had no choice, that they were just too big to fail.

But doesn’t the fact that they needed bailing out really highlight the notion that these institutions were indeed too big to succeed? Too big to move quickly, too big to think smart, too big to act responsibly, too big to provide great customer service, and too big to get out of their own way?

I’ve reflected on this problem of too big to succeed as I have reviewed the trials and tribulations of a number of tech companies that have become or are at risk of joining this misbegotten club. These are companies that have been buying up customers, products, and market share with increasing frequency and avariciousness. In many cases they have made their owners wealthy and their shareholders happy, but all too often they have failed to deliver on the promises that were meant to justify a growth-at-all-costs strategy. What looks good on paper – bigger is better – is looking more and more to be as healthy for tech companies as an IV drip of anabolic steroids is for an athlete. Good for this season’s batting average, but increasingly bad for the long term.

I spent some time doing strategy work for Hewlett-Packard during the brief reign of Leo Apotheker, and saw firsthand the effect of too big to succeed. My favorite among many examples was the next generation router that I was shown precisely because it couldn’t be brought to market by HP as it was organized in the Mark Hurd era: the different product, manufacturing, sales, and marketing groups that would be needed to work harmoniously together to bring this prototype to market simply had no mechanisms or processes that would allow them to actually do so, regardless of the potential for value.

This lacuna was in force across the company: well-established product lines, like printers, servers, and PCs, all had their own sales forces and their own lead gen activities. There was simply no way for a customer to strike a pan-HP deal, or for HP to combine its sales efforts and leverage all its product lines in the kind of synergistic sale that, if done right, could have significantly improved a half-dozen financial and customer satisfaction KPIs.

In other words, while at the time HP was a $50 stock beloved by its investors, riding on a succession of big ticket acquisitions, inside the company was a rotting core that would eventually begin to ooze out through the edges in bad quarter after bad quarter. The company that a succession of CEOs, from Carly Fiorina to Mark Hurd, had bulked up by buying the likes of Compaq (which brought DEC along with it), EDS, Vertica, and Palm (and Autonomy, during Apotheker’s time) had become too big to succeed. And, as far as I can tell, it remains so today.

Some of my impressions on the too big to succeed concept have been gleaned from my experience as a consumer. AT&T is a good case in point. They sell a wide range of services that they simply cannot seem to coordinate effectively. My experience last summer moving my home and office was a case study in too big to succeed: AT&T sales and marketing pumped the hell out of their Uverse services, but the technical side of the company couldn’t fulfill the promise. Instead of turning on new services they shut off existing services, they dispatched technicians without a clue what they were supposed to be doing on site, and those poor schmucks frequently made things worse: within the space of two visits from AT&T’s technicians, my office mate’s internet account was taken down, the telephone service to my new landlady was fried, my office internet and voice couldn’t be connected, and our home phone number rang into the great digital void, disconnected from our new home. For the better part of a week.

More importantly, all of the above took place after I contacted AT&T’s senior executive vice president for Home Solutions about my problems executing our move, and was assigned not one but two special purpose managers. These two gentlemen are part of a full-time team in the office of the President, created to sort out the intractable problems of the lucky few who know how to escape the labyrinth of the AT&T disservice center and find their way to the exec who is actually responsible for AT&T’s customer sat.

But finding my two new BFFs was more of a Pyrrhic victory than a genuine triumph: Even with one of these top guns helping out, and identifying himself to his colleagues as a way to make sure they understood the importance of satisfying this customer, AT&T simply couldn’t execute one of its most basic customer-centric business processes: move and upgrade service to an existing account. Too bleeping big to succeed indeed. Two months later, they’re still getting it wrong.

I’ve had similar experiences with Chase, egregious enough to cancel my Chase card and vow never to do business with Jamie Dimon again. I think some of the post-hoc analysis about Chase’s role in the financial crisis would put them squarely in the too big to succeed category. There’s also United Airlines and Sprint to add to the list, and more where that came from.

Not every company is willing to march, lemming-like, off the cliff without a fight. I think it’s safe to say that Steve Ballmer’s reorg at Microsoft is predicated precisely on the desire not to become road kill on the too-big-to-succeed highway. I could literally write a book (or at least a post) about how many times I’ve personally witnessed one part of Microsoft proceeding in willful ignorance of an opportunity in another part of Microsoft to sell a better product, fill a strategic hole in an existing product, provide strategic justification for why two products should go to market together, or provide competitive cover for another product against a larger, outside competitor.

One Microsoft is Ballmer’s answer to that mess, but the strategy is still in its infancy, Microsoft is still standing on the precipice of too big to succeed, and the company just got bigger by executing a $7.2 billion deal for Nokia’s phone business. But there are solid signs that Microsoft is trying to buck its destiny, and, from what I can see and hear, it might just succeed.

GE is another company that has impressed me with its foresight. One of the major reasons the company has built GE Software and is pushing a new platform for the industrial internet is to rationalize its different and often siloed industrial lines of business and make sure that GE and its customers can leverage the massive opportunity represented by sensor-based data analysis and operations optimization. It’s impressive that one of the most successful industrial companies in the world realized that it needed to make sure that the structure and business model that got it to where it is today didn’t stop it from succeeding in the next big opportunity.

Other companies could do with a similar hard look in the mirror. Oracle is definitely one of them: anyone visiting Oracle OpenWorld this year would have seen a company teetering under the sheer bulk of the steroid-drip of acquisitions it has made over the years. The Sun acquisition has proven to be too high a price to pay to keep Java away from IBM, and the engineered systems strategy is clearly an attempt to justify a hardware strategy whose time has come and gone in the age of cloud computing and low-cost, high-performance, standard hardware.

In this light the company’s enterprise software strategy is the main victim of Oracle’s too-big-to-succeed tendencies: instead of leading the innovation charge, enterprise software at Oracle is more of an enabler of the company’s doomed hardware strategy than a crucible of new ideas and new capabilities. And as long as Oracle’s enterprise software is forced to march to the company’s hardware and database drum, success in enterprise software will be harder and harder to guarantee.

I have to add SAP to the mix as well as a potential member of the club. While not as acquisitive as Oracle or Microsoft, SAP has also bulked up its product and customer base, and there’s a genuine risk that, at least when it comes to the SAP field sales team, it has become increasingly difficult for anyone outside the top echelons of the company to articulate the full value of SAP’s vast portfolio to customers and prospects. What is saving SAP for now is a mono-maniacal focus on HANA, mostly because HANA is a concept and product line that can be relatively easily distilled for the field to sell. Just don’t ask them to explain the full breadth of what SAP can bring to the table; there’s too much in the kit for them to handle. This should serve as an early warning sign to SAP management that the company is at risk of joining the too big to succeed club.


I think it’s time to admit that economies of scale work well in industrial companies with industrial processes, but sheer size is no advantage in a service economy. In a service economy, or any economy that depends on getting people to come together in order to provide innovative services to customers, doing more with less – the mantra of the economies of scale mavens – simply doesn’t work.

AT&T thinks it can grow its service offerings without significantly altering its service delivery and support, and it pays the price for having products and services its field technicians and help desk personnel don’t understand and can’t support. Chase – and let’s throw Citi under the same bus as well – thinks that it can continue to fail in coordinating its services across its multiple product lines, and provide completely sub-standard customer service along the way. As long as it makes its numbers, or at least appears to.

Oracle thinks that it just needs to keep offering more products while collecting huge maintenance revenues, regardless of whether the products work together or are providing real innovation to its customers. And HP – I simply don’t know what Meg Whitman thinks she’s going to do, but it’s clear that recapturing the mantle of innovation and leadership in Silicon Valley isn’t going to happen given the current strategy.

Of course, success and failure are relatively ambiguous terms, so it’s easy to say that AT&T, HP, Oracle, Chase, and Citi are all successful companies, depending on how you measure them. But if the measure of success is the ability to deliver value, innovation and great customer service – simultaneously and at scale – then I would argue that the companies above are members in good standing of the too big to succeed club. And they’re not the only ones.

Can this problem be solved? I think, I hope, that the Darwinian forces that are embodied in another great market maxim – the customer is always right – will help tilt M&A fever in the right direction: towards genuine customer value, not just shareholder value or the lip service that spawns such laughable corporate slogans as We’re a financially strong company with a proven commitment to our customers, community and economy (Chase), Bringing it all together for our customers (AT&T), or Oracle’s Less complexity, more innovation – which basically reads like the headline of an analyst report on the requirements for getting Oracle out of the morass it has created in the race to join the club.

(There’s also Citi’s rather baffling slogan: Informed by the past and inspired by the future. You almost wonder if they’re trying to be contrite or have simply failed to see the irony in calling out the need to be informed by their checkered past. Or maybe they used the same branding company that came up with Oracle’s slogan.)

In the end, the real question we as customers, consumers, and tech executives should ask ourselves is whether value, innovation and great customer service are a realistic goal for a merger or acquisition, or if these terms are used as a smoke screen to mask a financial transaction that benefits everyone but the customer.

If my wish that Darwinian karma will eventually wreak revenge on the over-acquisitive comes true, then perhaps that eventual karmic justice will become part of the equation in evaluating the long-term prospects of a merger or acquisition. Because if the result of customer neglect is the destruction of equity through customer flight, then maybe, just maybe, one day more attention will be paid to what’s in it for the customers. And then posts like this will seem to be a quaint example of a far-gone past, like an Upton Sinclair exposé or a Thomas Nast cartoon.

You gotta dream…