
Data Quality, AI, and the Future

What do you think of when you hear the term “artificial intelligence” (or AI for short)? For many people, it conjures up images of robots, science fiction, and movies like “2001: A Space Odyssey,” where an evil computer locks the hero out of his spaceship to preserve itself.

Real AI is a little less dramatic than that, but still pretty exciting. At its root, it involves using machine learning – often trained on large samples of big data – to automate decision-making processes. Some of the more public examples of AI are computers squaring off against human chess masters or diagnosing complex problems with machinery. And you already use AI every time you ask your phone for directions or a spam filter keeps junk mail from reaching your inbox.

In the area of contact marketing and customer relationship management, some experts are now talking about using AI for applications such as predictive marketing, automated targeting, and personalized content creation. Many of these applications are still in the future, but product introductions aimed at early adopters are already making their way to the market.

Data Quality is Key in AI

One thing nearly everyone agrees on, however, is that data quality is a potential roadblock for AI. Even a small amount of bad data can easily steer a machine learning algorithm wrong. Imagine, for example, you are trying to do demographic targeting – but given the percentage of contact data that normally goes bad in the course of a year, your AI engine may soon be pitching winter coats to prospects in Miami.

Here is what some leading voices in the industry are saying about the data quality problem in AI:

  • Speaking at a recent Salesforce conference, Leadspace CEO Doug Bewsher described data quality as “AI’s Achilles heel,” going on to note that its effectiveness is crippled if you try using it with static CRM contact data or purchased datasets.
  • InformationWeek columnist Jessica Davis states in an opinion piece that “Data quality is really the foundation of your data and analytics program, whether it’s being used for reports and business intelligence or for more advanced AI and related technologies.”
  • A recent Compliance Week article calls data quality “the fuel that makes AI run,” noting that centralized data management will increasingly become a key issue in preventing “silos” of incompatible information.

The ROI of Accurate and Up-to-Date Contact Data is Larger than Ever

Naturally, this issue lies right in our wheelhouse. For years, we have been preaching the importance of data quality and data governance for contact data – particularly given the costs of bad data in time, human effort, marketing effectiveness, and customer reputation. But in an era where automation continues to march on, the ROI of good contact data is now growing larger than ever.

We aren’t predicting a world where your marketing efforts will be taken over by a robot – not anytime soon, at least. But AI is a very real trend, and one that deserves your attention from here on. Some exciting developments are on the horizon in marketing automation, and we are looking forward to what evolves over the next few years.

Find out more about how data quality and contact validation can help your business by visiting the Solutions section of our website.

Address Detective – Why it is so cool!

Service Objects has been providing USPS CASS-Certified Address Validation services for over 17 years. Over this time, we have developed one of the best systems for validating, correcting and appending useful data points to US addresses. Our address validation service specializes in fuzzy matching for address corrections and, more importantly, making sure that each and every address provided is NOT changed to something unexpected or incorrect.

While our address validation service is top notch, the focus on both USPS data and accuracy introduces necessary limits on how we treat addresses that are messy or missing key elements. This brings us to one of Service Objects’ more underappreciated offerings: our DOTS Address Detective service.


Address Detective and its Operations

Address Detective was born from a need to help our customers fill in the gaps and make sense of their very messy and/or incomplete addresses. This service is an ever-evolving collection of address utilities designed to help with the various problems that can arise from these messy or incomplete addresses. Currently, there are three operations available, each solving a uniquely different problem. It is helpful to understand what each operation does and how it can best be used to correct an address before you even start your implementation.

  • FindAddress – Uses name and phone number to assist with the processing of very messy or incomplete addresses.
  • FindAddressLines – Takes inputs that might be jumbled into the wrong columns and parses them into a usable result.
  • FindOutlyingAddresses – Digs into alternative, non-USPS data sets to identify addresses that, while not deliverable, may still be good addresses.


Address Detective’s Operations Explained: FindAddress

The flagship operation of Address Detective is FindAddress. It was designed to help clients with addresses so messy or incomplete that they are not obviously fixable, even to the human eye. FindAddress is given free rein to be more aggressive in its basic operation, but it also makes use of other data points, like name, business name or phone number, to assist with the validation.

Behind the scenes, the service digs into public and proprietary data sources to connect the dots between the given data points and return an accurate result. The service is not designed to invent an address where none is given; it is designed to analyze the given data against cross-referenced values in order to improve or validate an address that would normally be unvalidatable.

For example, perhaps the desired address is:

Taco Bell
821 N Milpas St
Santa Barbara, CA 93103

But what if the input address is something like:

Milpas Street
Santa Barbara, CA 93103

Clearly, not enough information is given for this address to pass validation. A house number is always required. DOTS Address Detective is able to use either the name “Taco Bell” or the phone number, (805) 962-1114, to properly identify and standardize the right location. The partial input values are still important: they are compared back against the result to make sure the most accurate match is returned.

What about addresses that are even messier, with misspelled or incorrect data:

Milpaaaas Str
Santa Bar, CF 93103

Given either “Taco Bell” or (805) 962-1114, there is still enough information to go on to compare, cleanse and return the correct standardized result.
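To make this concrete, here is a minimal sketch of what a FindAddress call might look like over REST. The endpoint path, parameter names, and JSON output shown are illustrative assumptions, not the literal API contract – consult the DOTS Address Detective documentation for the exact specification.

import requests

# Hypothetical endpoint and parameter names, for illustration only;
# see the DOTS Address Detective documentation for the real contract.
ENDPOINT = "https://trial.serviceobjects.com/AD/api.svc/FindAddressJson"

params = {
    "Address": "Milpaaaas Str",       # messy street input
    "City": "Santa Bar",              # misspelled city
    "State": "CF",                    # incorrect state code
    "PostalCode": "93103",
    "Name": "Taco Bell",              # supplemental data point
    "Phone": "805-962-1114",          # supplemental data point
    "LicenseKey": "YOUR_LICENSE_KEY",
}

response = requests.get(ENDPOINT, params=params, timeout=10)
response.raise_for_status()
print(response.json())  # expect a standardized result for 821 N Milpas St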


Address Detective’s Operations Explained: FindAddressLines

The second operation, FindAddressLines, solves a very different problem. We often run lists of addresses for clients who give us a .csv file with data points in unexpected locations. Perhaps they tracked multiple address lines, in which the third or fourth address line contained the normal “main” address line. For example, what if they had something like this:

Four Address Lines:

Address 1: Johnson Paper Bag Company
Address 2: C/O John Smith
Address 3: Floor 4
Address 4: 123 Main Street
City: Santa Barbara
State: California
ZIP: 93101

If the user does not know that the needed address in this case is Address 4 (123 Main Street), it is possible they may send the address “Johnson Paper Bag Company, C/O John Smith, Santa Barbara, CA, 93101,” which obviously would not be valid. Perhaps they have an even bigger problem: an error in how the address was stored, or a corrupted database, leading to something like this:

Corrupted Database Example:

Address 1: 123 Main St
City: Apt 5
State: Santa Barbara
ZIP: CA

Both of these cases are solved by FindAddressLines. FindAddressLines takes in a generic list of address inputs and analyzes them to figure out how to properly assign the inputs to the correct fields. The result is then validated, corrected and standardized as a normal address. While there is some synergy with the FindAddress operation here, in order to properly parse out an address, the input would have to at least look like an address. FindAddressLines would not be able to do anything with an address of “Milpas Street” as opposed to “821 Milpas Street”.
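As a rough sketch, the corrupted-database example above could be submitted by treating every stored column as just another generic address line. The endpoint path and parameter names here are illustrative assumptions, not the documented API contract:

import requests

# Columns from the corrupted record, passed as generic address lines.
lines = ["123 Main St", "Apt 5", "Santa Barbara", "CA"]

params = {f"AddressLine{i}": line for i, line in enumerate(lines, start=1)}
params["LicenseKey"] = "YOUR_LICENSE_KEY"

# Hypothetical endpoint path, for illustration only.
response = requests.get(
    "https://trial.serviceobjects.com/AD/api.svc/FindAddressLinesJson",
    params=params, timeout=10)
print(response.json())  # expect the inputs reassigned to the correct fields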


Address Detective’s Operations Explained: FindOutlyingAddresses

The final operation is FindOutlyingAddresses. This operation cross-references several massive non-USPS datasets to find likely good addresses when USPS cannot. While our Address Validation service is designed to accurately identify deliverable addresses and covers the vast majority of US-based addresses, it does not cover everything. Pockets of addresses, either in very rural areas or in some well-known areas like Mammoth Lakes, California, do not have deliverable houses; all mail is delivered to a local post office for pickup by residents.

FindOutlyingAddresses aims to fill in the blanks for these hard-to-find addresses. They may not be important for mail delivery, but they still play a vital role in identifying lead quality. While the data returned by this operation is not as complete as that from our Address Validation service, we will attempt to identify the data points at the lowest level we can. Do we know the house number exists? Maybe the house number does not exist, but we know the street does? This operation will return as much useful information as it can about these locations.


Address Validation + Address Detective = Powerful One-Two Punch

One of the best ways to ensure you have accurate and up-to-date address information is by combining our Address Validation service with Address Detective. This combination allows many of our customers to identify and repair addresses that they would have normally discarded.  We are always happy to help our clients set up this powerful one-two punch.

In its most basic form, Address Validation is used first to correct and verify all addresses. Addresses that cannot be validated or corrected by this initial, stricter validation process are then sent to our Address Detective service, where supplemental information helps ‘solve’ the address and return a viable result.
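In code, this one-two punch is a simple waterfall. The sketch below assumes hypothetical wrapper functions, endpoint paths, and response fields; the pattern, not the particulars, is the point.

import requests
from typing import Optional

BASE = "https://trial.serviceobjects.com"  # illustrative trial host
KEY = "YOUR_LICENSE_KEY"

def strict_validate(record: dict) -> Optional[dict]:
    """First pass: strict Address Validation (illustrative path and fields)."""
    resp = requests.get(f"{BASE}/AV3/api.svc/GetBestMatchesJson",
                        params={**record, "LicenseKey": KEY}, timeout=10).json()
    matches = resp.get("Addresses") or []
    return matches[0] if matches else None

def detective_solve(record: dict, name: str = "", phone: str = "") -> Optional[dict]:
    """Second pass: Address Detective FindAddress, assisted by name and phone."""
    resp = requests.get(f"{BASE}/AD/api.svc/FindAddressJson",
                        params={**record, "Name": name, "Phone": phone,
                                "LicenseKey": KEY}, timeout=10).json()
    return resp.get("Address")

def clean_address(record: dict, name: str = "", phone: str = "") -> Optional[dict]:
    # Only addresses rejected by the stricter pass fall through to the detective.
    return strict_validate(record) or detective_solve(record, name, phone)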


What is next for Address Detective?

DOTS Address Detective is an ever-evolving collection of operations that were created to meet the needs of our clients. We are always looking for new algorithms, data sets and features we can add to meet these needs and help clients recover and update even more addresses.

One of the more recent requests we are working on is helping identify GDPR exposure. Our clients need to know if a contact record resides in any of the European countries that are covered by the far-reaching privacy protection regulations of the GDPR. It is always a little more fun to solve real-world problems that our clients are facing, and we are excited to be launching a new international address detective service in the coming week to help. (By the way, if you think it is simple to identify a country by an address, try taking this Country Quiz.)

We encourage clients and prospects alike to reach out and let us know if they have a need that does not seem to be covered by one of our current products.  Share your needs or try it today to see what DOTS Address Detective can do to help!



A New Data Privacy Challenge for Europe – and Beyond

New privacy regulations in Europe have recently become a very hot topic again within the business community. And no, we aren’t talking about the recent GDPR law.

A new privacy initiative, known as the ePrivacy Regulation, deals with electronic communications. Technically a revision to the EU’s existing ePrivacy Directive or “cookie law,” and pending review by the European Union’s member states, it could go into effect as early as this year. And according to The New York Times, it is facing strong opposition from many technology giants, including Google, Facebook, Microsoft and others.

Data privacy meets the app generation

Among other things, the new ePrivacy Regulation requires explicit permission from consumers for applications to use tracking codes or collect data about their private communications, particularly through messaging services such as Skype, iMessage, games and dating apps.  Companies will have to disclose up front how they plan to use this personal data, and perhaps more importantly, must offer the same access to services whether permission is granted or not.

Ironically, by relying on browser tracking settings, this new law will also remove the previous directive’s incessant “cookie notices,” even as it tightens the use of private data. This will be a mixed blessing for online services, because a simple default browser setting can now lock out the use of tracking cookies that many consumers routinely approved under the old pop-up notices. In opposition to these new rules, trade groups are painting a picture of slashed revenues, fewer free services and curbs on innovation for trends such as the Internet of Things (IoT).

A longstanding saying about online services is that “when something is free, you are the product,” and this new initiative is one of the more visible efforts for consumers to push back and take control of the use of their information. And Europe isn’t alone in this kind of initiative – for example, the new California Consumer Privacy Act, slated for the late 2018 ballot, will also require companies to provide clear opt-out instructions for consumers who do not wish their data to be shared or sold.

The future: more than just European privacy laws

So what does this mean for you and your business? No one can precisely foretell the future of these regulations and others, but the trend over time is clear: consumer privacy legislation will continue to get tighter and tighter. And the days of unfettered access to the personal data of your customers and prospects are increasingly coming to an end. This means that data quality standards will continue to loom larger than ever for businesses, ranging from stricter process controls to maintaining accurate consumer contact information.

We frankly have always seen this trend as an opportunity. As with GDPR, regulations such as these have sprung from past excesses that lie at the intersection of interruptive marketing, big data and the loss of consumer privacy. Consumers are tired of endless spam and corporations knowing their every move, and legislators are responding. But more importantly, we believe these moves will ultimately lead businesses to offer more value and authenticity to their customers in return for a marketing relationship.

Freshly Squeezed…Never Frozen

Data gets stale over time. You rely on us to keep this data fresh, and we in turn rely on a host of others – including you! The information we serve you is the product of partnerships at many levels, and any data we mine or get from third party providers needs to be up-to-date.

This means that we rely on other organizations to keep their data current, but when you use our products, it is still our name on the door. Here at Service Objects, we use a three-step process to do our part in providing you with fresh data:

Who: We don’t make partnerships with just anyone.  Before we take on a new vendor, we fully vet them to be sure this partnership will meet our standards, now and in the future. To paraphrase the late President Reagan, we take a “trust but verify” approach to every organization we team up with.

What: We run tests to make sure that data is in fact how we expect it to be. This runs the gamut from simple format tests to ensuring that results are accurate and appropriate.

When: Some of the data we work with is updated in real time, while other data is updated daily, weekly, or monthly.  Depending on what type of data it is, we set up the most appropriate update schedule for the data we use.

At the same time, we realize this is a partnership between us and you – so to get the most out of our data and the best results, we suggest re-checking some of your data points periodically, regardless of whether you are using our API or our batch processing system. Some of the more obvious reasons for this are that people move, phone numbers change, emails change, areas get redistricted, and so on. To maintain your data and keep it current, we recommend periodically revalidating it against our services.

Often businesses will implement our services to check data at the point of entry into their system, and also perform a one-time cleanse to create a baseline. This is all a good thing, especially when you make sure that data is going into your systems properly and is as clean as possible. However, it is important to remember that in 6-12 months some of this data will no longer be current. Going the extra step of creating a periodic review of your data is a best practice and is strongly recommended.

We also suggest keeping some sort of time stamp associated with when a record was validated, so that when you have events such as a new email campaign and some records have not been validated for a long time – for example, 12 months or more – you can re-run those records through our service.  This way you will ensure that you are getting the most out of your campaign, and at the same time protect your reputation by reducing bounces.
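As a simple illustration of that practice, the sketch below keeps a last_validated time stamp on each record and selects anything older than 12 months for another pass through the service. The schema and threshold are illustrative, not prescriptive.

from datetime import datetime, timedelta

REVALIDATE_AFTER = timedelta(days=365)  # example threshold: 12 months

def stale_records(records, now):
    """Return contacts whose last validation is older than the threshold."""
    return [r for r in records if now - r["last_validated"] > REVALIDATE_AFTER]

contacts = [
    {"email": "jane@example.com", "last_validated": datetime(2017, 1, 15)},
    {"email": "raj@example.com", "last_validated": datetime(2018, 4, 2)},
]
# Anything returned here gets re-run through the validation service
# before the campaign launches.
print(stale_records(contacts, now=datetime(2018, 5, 1)))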

Finally, here is a pro tip to reduce your shipping costs: in our Address Validation service, we return an IsResidential indicator that identifies an address as being residential or not.  If this indicator changes, having the most recent results will help your business make the most cost-effective shipping decisions.
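For instance, a shipping decision keyed off this indicator might look like the sketch below; the indicator value and the rate class names shown are illustrative assumptions rather than documented response values.

def pick_rate_class(validated: dict) -> str:
    """Choose a carrier rate class from the IsResidential indicator."""
    # The indicator values and rate class names here are assumptions;
    # check the Address Validation response documentation for the real ones.
    if str(validated.get("IsResidential", "")).lower() == "true":
        return "residential-ground"  # placeholder rate class
    return "commercial-ground"       # placeholder rate class

print(pick_rate_class({"IsResidential": "true"}))  # residential-ground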

For both us and you, keeping your data fresh helps you get the most out of these powerful automation tools. In the end, there is no specific revalidation interval we can recommend that will suit every business across the board, and there will be cases where it isn’t necessary to keep revalidating your data: the intervals you choose will depend mostly on your application. But this is still an important factor to keep in mind as you design and evaluate your data quality process.

To learn more about how our data quality solutions can help your business, visit the Solutions section of our website.


The Unmeasured Costs of Bad Customer and Prospect Data

Perhaps Thomas Redman’s most important recent article is “Seizing Opportunity in Data Quality.”  Sloan Management Review published it in November 2017, and it appears below.  Here he expands on the “unmeasured” and “unmeasurable” costs of bad data, particularly in the context of customer data, and why companies need to initiate data quality strategies.

Here is the article, reprinted in its entirety with permission from Sloan Management Review.

The cost of bad data is an astonishing 15% to 25% of revenue for most companies.

Getting in front on data quality presents a terrific opportunity to improve business performance. Better data means fewer mistakes, lower costs, better decisions, and better products. Further, I predict that many companies that don’t give data quality its due will struggle to survive in the business environment of the future.

Bad data is the norm. Every day, businesses send packages to customers, managers decide which candidate to hire, and executives make long-term plans based on data provided by others. When that data is incomplete, poorly defined, or wrong, there are immediate consequences: angry customers, wasted time, and added difficulties in the execution of strategy. You know the sound bites — “decisions are no better than the data on which they’re based” and “garbage in, garbage out.” But do you know the price tag to your organization?

Based on recent research by Experian plc, as well as by consultants James Price of Experience Matters and Martin Spratt of Clear Strategic IT Partners Pty. Ltd., we estimate the cost of bad data to be 15% to 25% of revenue for most companies (more on this research later). These costs come as people accommodate bad data by correcting errors, seeking confirmation from other sources, and dealing with the inevitable mistakes that follow.

Fewer errors mean lower costs, and the key to fewer errors lies in finding and eliminating their root causes. Fortunately, this is not too difficult in most cases. All told, we estimate that two-thirds of these costs can be identified and eliminated — permanently.

In the past, I could understand a company’s lack of attention to data quality because the business case seemed complex, disjointed, and incomplete. But recent work fills important gaps.

The case builds on four interrelated components: the current state of data quality, the immediate consequences of bad data, the associated costs, and the benefits of getting in front on data quality. Let’s consider each in turn.

Four Reasons to Pay Attention to Data Quality Now

The Current Level of Data Quality Is Extremely Low

A new study that I recently completed with Tadhg Nagle and Dave Sammon (both of Cork University Business School) looked at data quality levels in actual practice and shows just how terrible the situation is.

We had 75 executives identify the last 100 units of work their departments had done — essentially 100 data records — and then review that work’s quality. Only 3% of the collections fell within the “acceptable” range of error. Nearly 50% of newly created data records had critical errors.

Said differently, the vast majority of data is simply unacceptable, and much of it is atrocious. Unless you have hard evidence to the contrary, you must assume that your data is in similar shape.

Bad Data Has Immediate Consequences

Virtually everyone, at every level, agrees that high-quality data is critical to their work. Many people go to great lengths to check data, seeking confirmation from secondary sources and making corrections. These efforts constitute what I call “hidden data factories” and reflect a reactive approach to data quality. Accommodating bad data this way wastes time, is expensive, and doesn’t work well. Even worse, the underlying problems that created the bad data never go away.

One consequence is that knowledge workers waste up to 50% of their time dealing with mundane data quality issues. For data scientists, this number may go as high as 80%.

A second consequence is mistakes, errors in operations, bad decisions, bad analytics, and bad algorithms. Indeed, “big garbage in, big garbage out” is the new “garbage in, garbage out.”

Finally, bad data erodes trust. In fact, only 16% of managers fully trust the data they use to make important decisions.

Frankly, given the quality levels noted above, it is a wonder that anyone trusts any data.

When Totaled, the Business Costs Are Enormous

Obviously, the errors, wasted time, and lack of trust that are bred by bad data come at high costs.

Companies throw away 20% of their revenue dealing with data quality issues. This figure synthesizes estimates provided by Experian (worldwide, bad data cost companies 23% of revenue), Price of Experience Matters ($20,000/employee cost to bad data), and Spratt of Clear Strategic IT Partners (16% to 32% wasted effort dealing with data). The total cost to the U.S. economy: an estimated $3.1 trillion per year, according to IBM.

The costs to businesses of angry customers and bad decisions resulting from bad data are immeasurable — but enormous.

Finally, it is much more difficult to become data-driven when a company can’t depend on its data. In the data space, everything begins and ends with quality. You can’t expect to make much of a business selling or licensing bad data. You should not trust analytics if you don’t trust the data. And you can’t expect people to use data they don’t trust when making decisions.

Two-Thirds of These Costs Can Be Eliminated by Getting in Front on Data Quality

“Getting in front on data quality” stands in contrast to the reactive approach most companies take today. It involves attacking data quality proactively by searching out and eliminating the root causes of errors. To be clear, this is about management, not technology — data quality is a business problem, not an IT problem.

Companies that have invested in fixing the sources of poor data — including AT&T, Royal Dutch Shell, Chevron, and Morningstar — have found great success. They lead us to conclude that the root causes of 80% or more of errors can be eliminated; that up to two-thirds of the measurable costs can be permanently eliminated; and that trust improves as the data does.

Which Companies Should Be Addressing Data Quality?

While attacking data quality is important for all, it carries a special urgency for four kinds of companies and government agencies:

Those that must keep an eye on costs. Examples include retailers, especially those competing with Amazon.com Inc.; oil and gas companies, which have seen prices cut in half in the past four years; government agencies, tasked with doing more with less; and companies in health care, which simply must do a better job containing costs. Paring costs by purging the waste and hidden data factories created by bad data makes far more sense than indiscriminate layoffs — and strengthens a company in the process.

Those seeking to put their data to work. Companies include those that sell or license data, those seeking to monetize data, those deploying analytics more broadly, those experimenting with artificial intelligence, and those that want to digitize operations. Organizations can, of course, pursue such objectives using data loaded with errors, and many companies do. But the chances of success increase as the data improves.

Those unsure where primary responsibility for data should reside. Most businesspeople readily admit that data quality is a problem, but claim it is the province of IT. IT people also readily admit that data quality is an issue, but they claim it is the province of the business — and a sort of uneasy stasis results. It is time to put an end to this folly. Senior management must assign primary responsibility for data to the business.

Those who are simply sick and tired of making decisions using data they don’t trust. Better data means better decisions with less stress. Better data also frees up time to focus on the really important and complex decisions.

Next Steps for Senior Executives

In my experience, many executives find reasons to discount or even dismiss the bad news about bad data. Common refrains include, “The numbers seem too big, they can’t be right,” and “I’ve been in this business 20 years, and trust me, our data is as good as it can be,” and “It’s my job to make the best possible call even in the face of bad data.”

But I encourage each executive to think deeply about the implications of these statistics for his or her own company, department, or agency, and then develop a business case for tackling the problem. Senior executives must explore the implications of data quality given their own unique markets, capabilities, and challenges.

The first step is to connect the organization or department’s most important business objectives to data. Which decisions and activities and goals depend on what kinds of data?

The second step is to establish a data quality baseline. I find that many executives make this step overly complex. A simple process is to select one of the activities identified in the first step — such as setting up a customer account or delivering a product — and then do a quick quality review of the last 100 times the organization did that activity. I call this the Friday Afternoon Measurement because it can be done with a small team in an hour or two.

The third step is to estimate the consequences and their costs for bad data. Again, keep the focus narrow — managers who need to keep an eye on costs should concentrate on hidden data factories; those focusing on AI can concentrate on wasted time and the increased risk of failure; and so forth.

Finally, for the fourth step, estimate the benefits — cost savings, lower risk, better decisions — that your organization will reap if you can eliminate 80% of the most common errors. These form your targets going forward.

Chances are that after your organization sees the improvements generated by only the first few projects, it will find far more opportunity in data quality than it had thought possible. And if you move quickly, while bad data is still the norm, you may also find an unexpected opportunity to put some distance between yourself and your competitors.

______________________________________________________________________

Service Objects spoke with the author, Tom Redman, and he gave us an update on the Sloan Management article reprinted above, particularly as it relates to the subject of the costs associated with bad customer data.

Please focus first on the measurable costs of bad customer data.  Included are items such as the cost of the work Sales does to fix up bad prospect data it receives from Marketing, the costs of making good for a customer when Operations sends him or her the wrong stuff, and the cost of work needed to get the various systems which house customer data to “talk.”  These costs are enormous.  For all data, it amounts to roughly twenty percent of revenue.

But how about these costs:

  • The revenue lost when a prospect doesn’t get your flyer because you mailed it to the wrong address.
  • The revenue lost when a customer quits buying from you because fixing a billing problem was such a chore.
  • The additional revenue lost when he/she tells a friend about his or her experiences.

This list could go on and on.

Most items involve lost revenue and, unfortunately, we don’t know how to estimate “sales you would have made.”  But they do call to mind similar unmeasurable costs associated with poor manufacturing in the 1970s and 80s.  While expert opinion varied, a good first estimate was that the unmeasured costs roughly equaled the measured costs.

If the added costs in the Seizing Opportunity article above don’t scare you into action, add in a similar estimate for lost revenue.

The only recourse is to professionally manage the quality of prospect and customer data.  It is not hyperbole to note that such data are among a company’s most important assets and demand no less.

©2018, Data Quality Solutions


The Role of Data Quality in GDPR

If you do business with clients in the European Union, you have probably heard of the new General Data Protection Regulation (GDPR) that takes effect in Spring 2018. This new EU regulation ushers in strict new requirements for safeguarding the security and privacy of personal data, along with requiring active opt-in permission and easy ways for consumers to change this permission.

Most articles you read about GDPR nowadays focus on the risks of non-compliance, and penalties are indeed stiff: up to €20 million or 4 percent of annual turnover. However, we recently hosted a webinar at Service Objects with two experts on GDPR, and they had a refreshing perspective on the issue – in their view, regulators are in fact helping your business by fundamentally improving your relationship with your customers. As presenter Tom Redman put it, “Regulators are people (and customers) too!”

Dr. Redman, known as the Data Doc, is the author of three books on data quality, the founder of Data Quality Solutions, and the former head of AT&T’s Data Quality Lab. He was joined on our webinar by Daragh O’Brien, founder and CEO of Castlebridge, an information strategy, governance, and privacy consultancy based in Ireland. Together they made the case that GDPR is, in a sense, a healthy evolution across Europe’s different cultures and legal systems, one that takes a lead role in how we interact with our customers.

As Daragh put it, “(What) we’re currently calling data are simply a representation of something that exists in the real world who is a living breathing person with feelings, with emotions, with rights, and with aspirations and hopes, and how we handle their data has an impact on all of those things.” And Tom painted a picture of a world where proactive data quality management becomes a corporate imperative, undertaken to benefit an organization rather than simply avoid the wrath of a regulator.

At Service Objects, we like Tom and Daragh’s worldview a great deal. For our entire 15-plus year history, we have always preached the value of engineering data quality into your business processes, to reap benefits that range from cost savings and customer satisfaction all the way to a stronger brand in the marketplace. And seen through the lens of recent developments such as GDPR, we are part of a world that is rapidly moving away from interruptive marketing and towards customer engagement.

We would like to help you be part of this revolution as well. (And, in the process, help ensure your compliance with GDPR for your European clients.) There are several ways we can help:

1) View the on-demand replay of this recent webinar, at the following link: https://www.serviceobjects.com/resources/videos-tutorials/gdpr-webinar

2) Download our free white paper on GDPR compliance: https://www.serviceobjects.com/resources/articles-whitepapers/general-data-protection-regulation

3) Finally, contact us for a free one-on-one GDPR data quality assessment: https://www.serviceobjects.com/contact-us

In a very real sense, we too are trying to create a more interactive relationship with our own clients based on service and customer engagement. This is why we offer a rich variety of information, resources and personal connections, rather than simply tooting our horn and bugging you to purchase something. This way we all benefit, and close to 2500 existing customers agree with us. We feel it is time to welcome the brave new customer-focused world being ushered in by regulations such as GDPR, and for us to help you become part of it.

Three Building Blocks to General Data Protection Regulation (GDPR) Compliance

Is your business ready for the GDPR? On May 25, 2018, a sweeping change in global consumer privacy – one that will fundamentally change the way companies around the world perform outbound marketing – becomes law. This is the date that enforcement commences for the European Union’s new General Data Protection Regulation (GDPR), governing the use of personal data for over 500 million EU residents. US companies that market to customers or prospects in Europe will now face strict regulations surrounding the use and storage of consumer data, backed by potentially hefty revenue-based fines.

However, recent studies have shown that many businesses are woefully unprepared for GDPR, which will require changes ranging from point-of-entry data validation to the management of changing contact information. So, what is a good way to get started on the road to compliance? Start with these three building blocks.

For most organizations, GDPR compliance pivots around three fundamental building blocks: consent management, data protection, and data quality.

The first two of these building blocks will revolve around process change for most organizations. In the first case, consent management means that you will now need to prove that you have permission to use someone’s personal data for marketing purposes, and maintain records of this permission.

There are no exceptions to this rule for previously captured data, which means that consent may need to be re-acquired under mechanisms acceptable under GDPR. This also extends to providing easy and accessible ways for consumers to reverse this permission, extending all the way to Europe’s concept of “the right to be forgotten”—requiring you to erase all traces of a person’s contact information if requested by a consumer.
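To make this concrete, here is a minimal sketch of what keeping records of permission might look like at the data level. The schema is entirely illustrative – GDPR mandates the outcome, not a particular format.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """Illustrative consent ledger entry: who agreed, to what, when, and how."""
    contact_id: str
    purpose: str               # e.g., "email marketing"
    granted_at: datetime
    source: str                # proof of mechanism, e.g., "signup form v3"
    revoked_at: Optional[datetime] = None

ledger = []  # in production, durable and auditable storage

def has_valid_consent(contact_id: str, purpose: str) -> bool:
    """Permission must exist and must not have been reversed."""
    return any(r.contact_id == contact_id and r.purpose == purpose
               and r.revoked_at is None for r in ledger)

def forget(contact_id: str) -> None:
    """The 'right to be forgotten': erase all traces of the contact."""
    ledger[:] = [r for r in ledger if r.contact_id != contact_id]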

The second building block, data protection, involves deploying processes—and possibly specific people—designed to protect consumers’ personal data from unauthorized disclosure.

At a process level, this means that organizations will need to show that they have safeguards in place against personal data being stolen or misused. One popular approach involves pseudonymization, where key personal information is kept separate and secure until actual use. Unlike anonymization, where ownership of data cannot be reconstructed, pseudonymization allows certain identifying characteristics to be used as a “password” to combine other separately stored components of information at the time of use.
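Here is a minimal sketch of the pseudonymization pattern just described, with a plain in-memory dictionary standing in for what would really be an encrypted, access-controlled store:

import secrets

identity_vault = {}  # stand-in for a separate, secured system

def pseudonymize(contact: dict) -> dict:
    """Split a contact into a vaulted identity and a safe working record."""
    token = secrets.token_hex(16)
    identity_vault[token] = {"name": contact["name"], "email": contact["email"]}
    return {"token": token, "segment": contact["segment"]}  # no personal data

def reidentify(record: dict) -> dict:
    """At time of use, the token recombines the separately stored pieces."""
    return {**record, **identity_vault[record["token"]]}

working = pseudonymize({"name": "Jane Doe", "email": "jane@example.com",
                        "segment": "B2B"})
print(working)              # safe for analytics or storage
print(reidentify(working))  # full contact, reconstructed only when needed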

If your organization is large enough, GDPR may also require the formal role of a Data Protection Officer (DPO), with dedicated responsibilities within an organization for protecting personal data. The specific criterion for needing a DPO is “large-scale systematic monitoring of individuals,” along with more specific situations such as public authorities and organizations handling large-scale processing of data relating to criminal convictions. With or without a formal DPO, companies will be expected to have a documented game plan for protecting consumer information.

Finally, data quality serves as the third building block. Once upon a time incorrect, fraudulent or changing contact records were seen as an annoyance, or perhaps an unavoidable expense—and if people received unsolicited marketing materials or contacts as a result, it was their problem to endure or resolve. Today, in the era of GDPR, data quality issues can lead to compliance problems with serious financial consequences. This means that data must be verified and corrected, both at the point of entry and time of use.

Of all three of these building blocks, data quality is the one that probably represents the largest ongoing responsibility for most organizations. Thankfully, it is also the one that is most amenable to automation.

Interested in finding out more about the role contact data plays in the General Data Protection Regulation (GDPR)? Visit our GDPR Solutions page, which contains a variety of resources that explain the key principles of GDPR compliance for contact data, and how automated data quality tools can protect your marketing efforts in the European marketplace.

Email Marketing Tip: Dealing With Role Addresses

Do you have any friends named “info” or “customerservice”?

If you do, our sympathies, because their parents were probably way over-invested in their careers. But in all likelihood, you probably don’t. Which leads to a very important principle about your email marketing: you always need to make sure you are marketing to real people.

Email addresses like “info@mycompany.com” or “customerservice@bigorganization.com” are examples of what we call role addresses. They are not addressed to a person, but rather to a job function, and generally include a number of people on the distribution list. They serve a valuable purpose, particularly in larger organizations – if you have a problem with Amazon.com, for example, you don’t want to wait for Cindy to get back from vacation before someone responds to you.

You probably realize that role email addresses create the same problems as any other non-person in your marketing database: wasted human effort, lower response rates, bounces, and the like. However, there are several other important reasons to purge role addresses from your contact database:

Bounce Rate. Role emails are generally the responsibility of an email administrator. These administrators are not always kept in the loop when individuals move on to other positions or leave the company. This can result in a role email’s distribution list not being up-to-date and emails being sent to inactive email addresses. These inactive addresses are usually set to automatically bounce emails, resulting in a higher bounce rate and poorer campaign performance.

Blacklisting. Spamming a role email address doesn’t just annoy people. As one article points out, it can trigger spam complaints and damage your sender reputation – in fact, role accounts are often used as spam traps by account holders. This can lead to your IP being blacklisted for the entire organization, cutting you off from leads or even existing customers far beyond the original email.

CAN-SPAM compliance. Permission to send email is fundamentally a contract with an individual, and marketing to a role email address risks having your materials go to people who did not opt-in or agree to your terms and conditions – putting you at risk for being in violation of the US CAN-SPAM act that governs email marketing.

New laws. In Europe, the new General Data Protection Regulation (GDPR) takes effect in 2018, severely restricting unsolicited email marketing. While it is not always clear that you are mailing to Europe (for example, many people do not realize that household names like Bayer and Unilever are based there), you are still bound by their laws and potentially stiff penalties. Eliminating role accounts from your contact database is an important part of mitigating this exposure.

Exponential risk. When it comes to risk, role addresses are the gift that keeps on giving. One of these addresses may go to 10 different people or more – and only one of them needs to complain to get you in trouble. Moreover, you can easily get multiple complaints for the price of one errant message.

Customer reputation. When someone signs up for your contact list using a role address, it is a form of “friendly fraud” that absolves them from personally receiving your emails – much like the person who signs up as “Donald Duck” to receive a free marketing goodie. But when other people start receiving your materials without their permission as a result, it is not a good way to start a customer relationship.

Thankfully, avoiding role-based addresses is relatively easy. In fact, many large email marketing providers won’t import these addresses in the first place. Or if you manage your contact database from within your own applications environment, we can help. Our email validation capabilities flag role-based addresses in your database like sales, admin, support, webmaster, billing, and many more. In addition, we perform over 50 verification tests, clean up common spelling and syntax errors, and return a quantitative quality score that helps you accept or reject addresses at the point of import.
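If you want a quick first-pass screen of your own before (or alongside) a full validation call, a simple check of the local part against common role names goes a long way. A minimal sketch – the role list here is illustrative and far shorter than what a validation service actually checks:

ROLE_LOCAL_PARTS = {"info", "sales", "admin", "support",
                    "webmaster", "billing", "customerservice"}

def is_role_address(email: str) -> bool:
    """Flag role-based addresses such as info@ or billing@."""
    local_part = email.split("@", 1)[0].lower()
    return local_part in ROLE_LOCAL_PARTS

print(is_role_address("info@mycompany.com"))      # True
print(is_role_address("jane.doe@mycompany.com"))  # False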

So, with pun fully intended, your role in data quality is to ensure that your online marketing only goes to live, real people who welcome your message. Our role is to automate this process to make it as frictionless as possible. Together, we can keep your email contact data ready to roll!

Character Limitations in Shipping Address Fields – There is a Solution

If you are using an Address Validation service for shipping labels, you may occasionally run into character count limitations with the Address1 field. Whether you are using UPS, FedEx, ShipStation or any other shipping solution, most character limits tend to range between 30 and 35 characters (some are as low as 25 characters). While most addresses tend to be under this limit, there are always outliers that you’ll want your business solution to be ready to handle.

If you are using a DOTS Address Validation solution, you are in luck! The response from our API not only validates and corrects bad addresses but also allows you to customize address lines to meet your business needs.  Whether you are looking to have your address lines be under a certain limit, want to place apartment or unit information on a separate line, or customize the address line in some other way, we can show you how to integrate the Address Validation response from Service Objects’ API into your business logic.

Below is a brief example using our DOTS Address Validation US 3 service to demonstrate the fragments that are returned in a typical valid response:

FragmentHouse
FragmentPreDir
FragmentStreet
FragmentSuffix
FragmentPostDir
FragmentUnit
Fragment
FragmentPMBPrefix
FragmentPMBNumber

If you are worried about exceeding a certain character limit, you can programmatically check the Address1 line result from our service to see if it exceeds a particular limit.

Check the Result – Not the Input

There are two obvious reasons you should check the result of the service instead of the input. First, you want to use validated and corrected addresses on your mailing label. Second, the input address may be too long before validating, but post-validation, the corrected address could meet the requirements, with no customizations needed to fit within the character limitations.

With this understanding, if the resulting validated street address in the Address1 line is over the character limitation, then your application can go about splitting up the address in ways that best suit your needs.

For example, let’s say you have a long address line like the following:

12345 W FAKE INDUSTRIAL ST NE STE 130, #678

This is obviously a fake street, but it helps demonstrate some of the different ways you can handle long address lines. In the example, the address ends up being around 45 characters long, including spaces. The service would return the following fragments for this address:

FragmentHouse: 12345
FragmentPreDir: W
FragmentStreet: Fake Industrial
FragmentSuffix: St
FragmentPostDir: NE
FragmentUnit: STE
Fragment: 130
FragmentPMBPrefix: #
FragmentPMBNumber: 678

With this example, one solution to meet the character limit would be to move the Suite and Mail Box information to a separate address line, so it would appear like so:

12345 W FAKE INDUSTRIAL ST NE
STE 130, #678

You may need to fine-tune the logic in your business application from this basic algorithm, but it can help you get started with tailoring your validated address information to meet different character limitations.
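Here is a minimal sketch of that basic algorithm, using the fragment names shown above; the assembly order and the 35-character limit are illustrative, and (as discussed below) you would skip this split for PO Boxes.

def split_address(fragments: dict, limit: int = 35) -> tuple:
    """Rebuild Address1 from fragments; move unit/PMB info to line 2 if too long.

    Note: check the DPVNotes field first and skip this logic for PO Boxes."""
    street = " ".join(filter(None, (fragments.get(k) for k in
        ("FragmentHouse", "FragmentPreDir", "FragmentStreet",
         "FragmentSuffix", "FragmentPostDir"))))
    unit = " ".join(filter(None, (fragments.get("FragmentUnit"),
                                  fragments.get("Fragment"))))
    pmb = (fragments.get("FragmentPMBPrefix", "") +
           fragments.get("FragmentPMBNumber", ""))
    tail = ", ".join(filter(None, (unit, pmb)))
    single_line = " ".join(filter(None, (street, tail)))
    if len(single_line) <= limit:
        return single_line, ""
    return street, tail  # over the limit: split across two lines

fragments = {
    "FragmentHouse": "12345", "FragmentPreDir": "W",
    "FragmentStreet": "Fake Industrial", "FragmentSuffix": "St",
    "FragmentPostDir": "NE", "FragmentUnit": "STE", "Fragment": "130",
    "FragmentPMBPrefix": "#", "FragmentPMBNumber": "678",
}
print(split_address(fragments))
# ('12345 W Fake Industrial St NE', 'STE 130, #678')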

In most cases, the following can be used in Address line 1:

  • FragmentHouse
  • FragmentPreDir
  • FragmentStreet
  • FragmentSuffix
  • FragmentPostDir

And the following in Address line 2:

  • FragmentUnit
  • Fragment
  • FragmentPMBPrefix
  • FragmentPMBNumber

PO Boxes

There is an important exception to be aware of – PO Boxes. You will need to determine whether an address is a PO Box to avoid applying the above logic to this type of address. This is simple to do by checking the DPVNotes field returned from the Address Validation service. PO Boxes will typically fit under character length limitations, but some organizations choose to rebuild addresses from fragments regardless of field length. If this is the case and you have a PO Box, then the fragments to rebuild it are:

  • FragmentStreet
  • FragmentHouse
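Continuing the sketch above, a PO Box line can then be rebuilt from just those two fragments (the sample values are illustrative):

def rebuild_po_box(fragments: dict) -> str:
    """Rebuild a PO Box address line from its two fragments."""
    return " ".join(filter(None, (fragments.get("FragmentStreet"),
                                  fragments.get("FragmentHouse"))))

print(rebuild_po_box({"FragmentStreet": "PO Box", "FragmentHouse": "123"}))
# PO Box 123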

Highly Customizable

The examples above may require some fine-tuning to meet your business requirements, but hopefully they have also demonstrated the highly customizable nature of the address validation service and how it can be tailored to meet your address validation needs.

If you have any questions about different integrations into your particular application, contact our support team at support@serviceobjects.com and we will gladly provide any support that we can!