People, Process, and Technology: The Three Pillars of Data Quality
For many people, managing data quality seems like a daunting task. They may realize that it is an important issue with financial consequences for their organization, but they don’t know how to proceed in managing it. With the right strategy, however, any organization can reap the benefits of consistent data quality by focusing on three core principles: People, Process, and Technology.
Taken together, these three areas serve as the cornerstones of a structured approach to data quality that you can implement and manage. More importantly, they provide a framework that lets you track the ROI of successful data quality efforts. Let’s look at each of these in detail:
People

This is, frankly, where most organizations fail at the data quality game: they do not allocate dedicated gatekeepers for the health of their data. It is an easy mistake to make when budgets are tight, resources are focused on revenue-generating functions like sales or product development, and the business case for data quality gets lost amid a host of competing priorities.
The single biggest thing an organization can do for data quality is to devote dedicated resources to it. This becomes an easier sell once you look at the real costs of bad data: for example, research shows that 25% of all contact records contain bad data, a third of marketing leads use fake names, and half of all phone numbers provided won’t connect. Run these numbers across the direct costs of customer acquisition, add in missed sales opportunities, increased customer care costs, and even potential compliance fines, and you often have the financial justification for a data quality gatekeeper.
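To see how quickly those percentages turn into real money, here is a back-of-the-envelope sketch. The 25% bad-record rate comes from the figures above; the database size and per-record dollar cost are hypothetical assumptions chosen purely for illustration.

```python
# Illustrative cost model for bad contact data.
# The 25% bad-record rate is the article's figure; the database size and
# per-record cost below are hypothetical assumptions.

def bad_data_cost(total_records, cost_per_bad_record, bad_record_rate=0.25):
    """Estimate the annual cost of unusable contact records."""
    bad_records = total_records * bad_record_rate
    return bad_records * cost_per_bad_record

# Assumed example: 100,000 contacts, $10 wasted per bad record
# (acquisition spend, returned mail, rep time chasing dead numbers).
annual_cost = bad_data_cost(100_000, 10.0)
print(f"Estimated annual cost of bad records: ${annual_cost:,.0f}")
# → Estimated annual cost of bad records: $250,000
```

Even with conservative assumptions, a figure like this is usually more than enough to justify a dedicated data quality role.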
Process

How much control do you have over data entry points, data accuracy, and verification? For too many organizations, the answer is none – with resulting costs due to factors such as duplicate data entry, human error, or lack of verification. And who is responsible for maintaining the integrity of your business data? Too often, the answer is “no one,” in a world where data rarely ages well. An average of 70% of contact data goes bad in some form each year, which ushers in yet another level of direct and indirect costs.
One of the more important roles of a data gatekeeper is to have processes in place to manage the touch points for your data, engineer data quality in on the front end of customer and lead acquisition, and maintain this data over the course of its life cycle. Having the right policies and procedures in place gives you control over your data, and can make the mechanics of data quality frictionless and cost-effective. Or as your teachers used to put it, an ounce of prevention is worth a pound of cure.
Technology

Data quality solutions range from simply scanning spreadsheets for duplicates and mistakes, all the way to automated tools for tasks such as address validation, lead validation, and verification of email or phone contact information. And far too often, the solution of choice for an organization is to do nothing at all.
Ironically, using the best available automated tools for data quality is often a surprisingly cost-effective strategy, and can yield your best ROI. Automated tools can be as simple as verifying an address, or as sophisticated as creating a statistical ranking value for the quality of a lead or contact record. Used properly, these tools can put much of the hard work of data quality on autopilot for you and your organization.
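To make the "autopilot" idea concrete, here is a minimal sketch of two checks such tools automate: duplicate detection and basic email format validation. The contact records and the simple regex are illustrative assumptions; real products (address validation, lead scoring) go far beyond this.

```python
# Minimal sketch of two checks an automated data quality tool performs:
# duplicate detection and basic email format validation.
import re

# Deliberately simple email pattern, for illustration only.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def audit(contacts):
    """Flag duplicate and malformed-email records in a contact list."""
    seen, issues = set(), []
    for c in contacts:
        key = c["email"].strip().lower()  # normalize before comparing
        if key in seen:
            issues.append((c["email"], "duplicate"))
        elif not EMAIL_RE.match(key):
            issues.append((c["email"], "invalid email"))
        else:
            seen.add(key)
    return issues

contacts = [  # hypothetical records for illustration
    {"email": "ann@example.com"},
    {"email": "Ann@Example.com"},   # duplicate after normalization
    {"email": "not-an-email"},      # malformed
]
print(audit(contacts))
# → [('Ann@Example.com', 'duplicate'), ('not-an-email', 'invalid email')]
```

Run nightly against a CRM export, even a simple script like this catches problems at the entry point, before they compound downstream.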
Ensuring your organization’s data quality can seem like an overwhelming task. But broken into its component parts – your people, your process, and your technology – it becomes a set of logical steps that pay for themselves very quickly. It is a simple and profitable three-step strategy for any organization that runs on data.