There are many nerdy little corners of enterprise software that don’t get the big buzz effect of overly hyped concepts like “social” and “mobile”, but never let inattention lure you into complacency. There are many factors that lead to project success or failure, and some of the more nerdy are in fact much more relevant to overall enterprise success than giving away iPads to your salespeople or trying to foist some collaboration software on an un-collaborative workforce.
One of those nerdy corners is applications testing. While it’s a rare exec who sits bolt upright in a cold sweat at 3 AM over the sudden realization that great testing tools are precisely what has been lacking all this time, that’s only because the rest are sleeping the sleep of the ignorant. Project failure is really death by a thousand cuts, and one of the issues that cuts deeply is the problem with software testing.
Actually, there are myriad problems associated with software testing – many of which remind me of the problems associated with training end users (more on that later). The primary issue is that testing is often a low-priority “to-do” that is executed using last century’s tools and last week’s college graduates. And, despite the growing set of regulations requiring that the testing of sensitive software environments be done safely, many companies skimp on testing – like they skimp on training – and then wonder why things aren’t going the way they had hoped.
So, nerdy though it may be, when Informatica asked me to sit in on a briefing about their latest Test Data Management and Dynamic Data Masking announcement, I took the meeting with no small amount of interest. What I heard and saw was a set of enhancements that seek to bring testing best practices up a notch or two by making sure that new applications are tested using data that’s as “real” as it can get without using production data, and by ensuring that data in production environments is hidden from the view of unauthorized users.
Why bother? What could go wrong, particularly when you’re using a dedicated test instance of your software? My favorite test data near-disaster story involved a famous process manufacturing company upgrading its SAP environment using an outside systems integrator. Midway through the upgrade, someone realized that, while the project was using that dedicated test instance, the data set they were using contained real data, including the highly proprietary recipes of this manufacturer’s customers, any of which would have been willing and able to sue the manufacturer into receivership if the recipes had somehow been copied from the system.
Not that a third-party contractor – replete with all the appropriate permissions and clearances – working in some obscure corner of a big IT shop would ever think of stealing secure information and using it in an illicit manner. That never happens, does it?
Needless to say, the proverbial sh*t hit the fan – luckily before the recipes hit the internet.
Now imagine you’re a hospital upgrading or migrating your patient management system – wanna guess what would happen if a regulator found out anyone outside of a medical provider had access to patient data? Or you’re a retail chain store upgrading – belatedly – the security sub-system in your point of sale terminals – how wise would it be to use real customer credit card data to test the upgrade? Not a chance.
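The remedy in all of these scenarios is static data masking: transform or strip the sensitive fields before the data ever reaches a test instance. Here is a minimal sketch in Python of the idea – the field names and masking rules are hypothetical illustrations, not any vendor’s implementation:

```python
import hashlib

def mask_record(record):
    """Return a copy of a customer record that is safe for a test instance.

    The fields here are hypothetical; real masking rules depend on your
    schema and on the regulations that apply to it.
    """
    masked = dict(record)
    # Keep only the last four digits of the card number -- enough to test
    # display and validation logic without exposing the real number.
    masked["card_number"] = "*" * 12 + record["card_number"][-4:]
    # Replace the name with a stable pseudonym, so the same person maps
    # to the same fake name everywhere in the test data set.
    digest = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    masked["name"] = f"Customer-{digest}"
    return masked

original = {"name": "Jane Doe", "card_number": "4111111111111111"}
masked = mask_record(original)  # original record is left untouched
```

A production-grade tool would drive this from declarative rules per column and use a keyed hash so pseudonyms can’t be reversed by dictionary attack; the sketch only shows the shape of the transformation.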
These scenarios shift to a different level of complexity when trying to test a net-new application, for which there is usually no existing data set to use as a template for a test data set. In this case creating a test data set involves ensuring that the data are as close to the real deal as possible, so that when real data are used in the production environment there is a credible reason to believe that all functional and safety issues have been taken into consideration. This is harder than it sounds: creating a credible dummy data set isn’t for dummies.
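For a net-new application the test set has to be manufactured from scratch. A toy sketch of what that involves, using only the Python standard library – the schema, value ranges, and distributions are illustrative assumptions, and a credible real set would also cover edge cases such as empty fields, maximum lengths, and unusual characters:

```python
import random
import string

def synth_orders(n, seed=42):
    """Generate n plausible-but-fake order records for a net-new app.

    The field names and distributions are illustrative; a real test set
    would mirror the expected production schema and value ranges.
    """
    rng = random.Random(seed)  # seeded so every test run sees the same data
    regions = ["NA", "EMEA", "APAC"]
    rows = []
    for i in range(n):
        rows.append({
            "order_id": f"ORD-{i:06d}",
            "customer": "CUST-" + "".join(rng.choices(string.ascii_uppercase, k=5)),
            "region": rng.choice(regions),
            "amount": round(rng.uniform(5.0, 5000.0), 2),
        })
    return rows

sample = synth_orders(100)
```

Seeding the generator matters more than it looks: reproducible test data means a failing test can be re-run against the exact same records that broke it.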
What Informatica has done is automate the test data creation and production data masking processes that have traditionally been time-consuming and fraught with potential danger. They’ve also linked this to their flagship PowerCenter product in order to offer test data management and data masking as a service: Test Data Management runs in the cloud as well as on-premises, while Dynamic Data Masking runs on-premises. It’s nerdy, but the impact is two-fold. First, by lowering the cost and complexity of generating test data and masking production data, Informatica has made this little nerdy corner of the IT world easier and cheaper to manage. Second, those lower barriers make it easier for customers to overcome institutional inertia and cost-justify their testing spend. The lower the barriers, the greater the likelihood that the need for quality testing will be recognized and acted on.
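Part of what makes this worth automating is referential integrity: if a customer ID is masked one way in one table and a different way in another, every join in the test system breaks. A common technique for avoiding that (described generically here, not as Informatica’s specific method) is deterministic pseudonymization with a keyed hash, sketched against a hypothetical two-table schema:

```python
import hashlib
import hmac

SECRET = b"rotate-me"  # masking key; must never ship to the test environment

def pseudonymize(value, secret=SECRET):
    """Map a real identifier to a stable fake one.

    Because the mapping is deterministic, the same customer ID masks to
    the same token in every table, so foreign-key joins still line up.
    """
    return "ID-" + hmac.new(secret, value.encode(), hashlib.sha256).hexdigest()[:10]

# Hypothetical two-table schema with a shared key.
customers = [{"cust_id": "C1001", "name": "Acme"}]
orders = [{"order_id": "O1", "cust_id": "C1001"}]

masked_customers = [{**c, "cust_id": pseudonymize(c["cust_id"])} for c in customers]
masked_orders = [{**o, "cust_id": pseudonymize(o["cust_id"])} for o in orders]

# The join key survives masking even though the real ID is gone.
assert masked_customers[0]["cust_id"] == masked_orders[0]["cust_id"]
```

Using a keyed HMAC rather than a bare hash means someone with the masked data but not the key can’t rebuild the mapping by hashing guessed IDs.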
Then there are the links between testing and training, particularly when it comes to net-new application development. One of the implementation and go-live best practices I’ve been recommending for years is to train as much as possible, early and often. And I don’t mean big fat binders full of generic functional descriptions and 8-hour, mind-numbing, core-dump classroom training sessions. That’s a proven way to waste money and foment mediocrity.
What I’m talking about is training on real systems, using realistic data, in an environment as close to the actual production system as possible. Ideally, this is done in support of agile development, so that the process owners are testing the development system and using their experience both to inform the final development and to get a head start on building the test scripts and the end-user training system.
Not a lot of customers do this – or at least not enough, judging by how many implementation failures can be traced to lousy training – and not a lot of vendors or implementers promote these concepts. SAP recently released a new capability called Live Access that allows customers to train their end users on a real, production-quality system using a simulated data set. This radically raises the value of training and aligns the formerly boring world of end-user training with the well-established body of knowledge on the value of experience-based, hands-on training.
Informatica’s tools enable this too – the rules for test dataset generation and for otherwise ensuring that the wrong people don’t see the right data also serve the purpose of building training data sets that are as real as possible without being too real for safety. While it’s going to take a lot more to shift the majority of enterprises from the “training as an afterthought” camp to the “training as a strategic function” camp, the availability of these products removes one of the more important barriers to achieving this worthwhile goal.
A final word on test data and data masking. One of the best reasons to own a tool that facilitates these functions is simple: if you don’t have the capability, someone is currently wasting a lot of time and money doing it the hard way. It may be your own internal IT department, or it may be your systems integrator, inflating the budget with more of those recent college graduates you didn’t realize you were paying to train on the job.
Regardless, worrying about this nerdy corner of the enterprise software market doesn’t have to require a major leap of strategy. It should be enough to know that there are some pretty nice 21st-century tools that can save you a significant amount of time and money, which you’d probably rather spend on something a little less nerdy, like some of that cool social or mobile software you’ve been itching to get your hands on.