Posts Tagged ‘Contact Data’


A New Data Privacy Challenge for Europe – and Beyond

New privacy regulations in Europe have recently become a very hot topic again within the business community. And no, we aren’t talking about the recent GDPR law.

A new privacy initiative, known as the ePrivacy Regulation, deals with electronic communications. Technically a revision to the EU’s existing ePrivacy Directive or “cookie law,” and pending review by the European Union’s member states, it could go into effect as early as this year. And according to The New York Times, it is facing strong opposition from many technology giants, including Google, Facebook, Microsoft and others.

Data privacy meets the app generation

Among other things, the new ePrivacy Regulation requires explicit permission from consumers before applications can use tracking codes or collect data about their private communications, particularly through messaging services such as Skype and iMessage, as well as games and dating apps. Companies will have to disclose up front how they plan to use this personal data and, perhaps more importantly, must offer the same access to services whether or not permission is granted.

Ironically, this new law will also remove the previous directive’s need for the incessant “cookie notices” consumers now receive, relying instead on browser tracking settings, even as it tightens the use of private data. This will be a mixed blessing for online services, because a simple default browser setting can now lock out the use of tracking cookies that many consumers routinely approved under the old pop-up notices. In their opposition to these new rules, trade groups are painting a picture of slashed revenues, fewer free services and curbs on innovation for trends such as the Internet of Things (IoT).

A longstanding saying about online services is that “when something is free, you are the product,” and this new initiative is one of the more visible efforts for consumers to push back and take control of the use of their information. And Europe isn’t alone in this kind of initiative – for example, the new California Consumer Privacy Act, slated for the late 2018 ballot, will also require companies to provide clear opt-out instructions for consumers who do not wish their data to be shared or sold.

The future: more than just European privacy laws

So what does this mean for you and your business? No one can precisely foretell the future of these regulations and others, but the trend over time is clear: consumer privacy legislation will continue to get tighter and tighter. And the days of unfettered access to the personal data of your customers and prospects are increasingly coming to an end. This means that data quality standards will continue to loom larger than ever for businesses, ranging from stricter process controls to maintaining accurate consumer contact information.

We frankly have always seen this trend as an opportunity. As with GDPR, regulations such as these have sprung from past excesses that lie at the intersection of interruptive marketing, big data and the loss of consumer privacy. Consumers are tired of endless spam and corporations knowing their every move, and legislators are responding. But more importantly, we believe these moves will ultimately lead businesses to offer more value and authenticity to their customers in return for a marketing relationship.

Freshly Squeezed…Never Frozen

Data gets stale over time. You rely on us to keep this data fresh, and we in turn rely on a host of others – including you! The information we serve you is the product of partnerships at many levels, and any data we mine or get from third party providers needs to be up-to-date.

This means that we rely on other organizations to keep their data current, but when you use our products, it is still our name on the door. Here at Service Objects, we use a three-step process to do our part in providing you with fresh data:

Who: We don’t make partnerships with just anyone.  Before we take on a new vendor, we fully vet them to be sure this partnership will meet our standards, now and in the future. To paraphrase the late President Reagan, we take a “trust but verify” approach to every organization we team up with.

What: We run tests to make sure that data is in fact how we expect it to be. This runs the gamut from simple format tests to ensuring that results are accurate and appropriate.

When: Some of the data we work with is updated in real time, while other data is updated daily, weekly, or monthly.  Depending on what type of data it is, we set up the most appropriate update schedule for the data we use.

At the same time, we realize this is a partnership between us and you – so to get the most out of our data, and for you to have the best results, we always suggest that you make sure to re-check some of your data points periodically, regardless of whether you are using our API or our batch processing system. Some of the more obvious reasons for this are that people move, phone numbers change, emails change, areas get redistricted, and so on. To maintain your data and keep it current, we recommend periodically revalidating it against our services.

Often businesses will implement our services to check data at the point of entry into their system, and also to perform a one-time cleanse to create a sort of baseline. This is all a good thing, especially when you make sure that data is going into your systems properly and is as clean as possible. However, it is important to remember that in 6-12 months some of this data will no longer be current. Going the extra step to create a periodic review of your data is a best practice and is strongly recommended.

We also suggest keeping some sort of time stamp associated with when a record was validated, so that when you have events such as a new email campaign and some records have not been validated for a long time – for example, 12 months or more – you can re-run those records through our service.  This way you will ensure that you are getting the most out of your campaign, and at the same time protect your reputation by reducing bounces.
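A time stamp check of this kind can be as simple as comparing each record’s last validation date against your chosen window. Here is a minimal sketch in Python; the record layout and the 12-month window are illustrative assumptions, not part of our API:

```python
from datetime import datetime, timedelta

# Hypothetical contact records; in practice these would come from your CRM,
# with a time stamp written each time a record passes validation.
contacts = [
    {"email": "ann@example.com", "last_validated": datetime(2017, 1, 15)},
    {"email": "bob@example.com", "last_validated": datetime(2018, 3, 2)},
]

REVALIDATION_WINDOW = timedelta(days=365)  # e.g. 12 months or more

def needs_revalidation(record, now):
    """True if the record has not been validated within the window."""
    return now - record["last_validated"] > REVALIDATION_WINDOW

# Records flagged here would be re-run through a validation service
# before the next campaign.
stale = [r for r in contacts if needs_revalidation(r, now=datetime(2018, 6, 1))]
```

The same comparison works equally well as a SQL `WHERE` clause against a `last_validated` column if your contacts live in a database.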

Finally, here is a pro tip to reduce your shipping costs: in our Address Validation service, we return an IsResidential indicator that identifies an address as being residential or not.  If this indicator changes, having the most recent results will help your business make the most cost-effective shipping decisions.

For both us and you, keeping your data fresh helps you get the most out of these powerful automation tools. In the end there is no specific time span we can recommend for verification that will suit every business across the board, and there will be cases where it isn’t always necessary to keep revalidating your data: the intervals you decide to use for your application will depend mostly on your application. But this is still an important factor to keep in mind as you design and evaluate your data quality process.

To learn more about how our data quality solutions can help your business, visit the Solutions section of our website.


The Unmeasured Costs of Bad Customer and Prospect Data

Perhaps Thomas Redman’s most important recent article is “Seizing Opportunity in Data Quality.”  Sloan Management Review published it in November 2017, and it appears below.  Here he expands on the “unmeasured” and “unmeasurable” costs of bad data, particularly in the context of customer data, and why companies need to initiate data quality strategies.

Here is the article, reprinted in its entirety with permission from Sloan Management Review.

The cost of bad data is an astonishing 15% to 25% of revenue for most companies.

Getting in front on data quality presents a terrific opportunity to improve business performance. Better data means fewer mistakes, lower costs, better decisions, and better products. Further, I predict that many companies that don’t give data quality its due will struggle to survive in the business environment of the future.

Bad data is the norm. Every day, businesses send packages to customers, managers decide which candidate to hire, and executives make long-term plans based on data provided by others. When that data is incomplete, poorly defined, or wrong, there are immediate consequences: angry customers, wasted time, and added difficulties in the execution of strategy. You know the sound bites — “decisions are no better than the data on which they’re based” and “garbage in, garbage out.” But do you know the price tag to your organization?

Based on recent research by Experian plc, as well as by consultants James Price of Experience Matters and Martin Spratt of Clear Strategic IT Partners Pty. Ltd., we estimate the cost of bad data to be 15% to 25% of revenue for most companies (more on this research later). These costs come as people accommodate bad data by correcting errors, seeking confirmation from other sources, and dealing with the inevitable mistakes that follow.

Fewer errors mean lower costs, and the key to fewer errors lies in finding and eliminating their root causes. Fortunately, this is not too difficult in most cases. All told, we estimate that two-thirds of these costs can be identified and eliminated — permanently.

In the past, I could understand a company’s lack of attention to data quality because the business case seemed complex, disjointed, and incomplete. But recent work fills important gaps.

The case builds on four interrelated components: the current state of data quality, the immediate consequences of bad data, the associated costs, and the benefits of getting in front on data quality. Let’s consider each in turn.

Four Reasons to Pay Attention to Data Quality Now

The Current Level of Data Quality Is Extremely Low

A new study that I recently completed with Tadhg Nagle and Dave Sammon (both of Cork University Business School) looked at data quality levels in actual practice and shows just how terrible the situation is.


We had 75 executives identify the last 100 units of work their departments had done — essentially 100 data records — and then review that work’s quality. Only 3% of the collections fell within the “acceptable” range of error. Nearly 50% of newly created data records had critical errors.

Said differently, the vast majority of data is simply unacceptable, and much of it is atrocious. Unless you have hard evidence to the contrary, you must assume that your data is in similar shape.

Bad Data Has Immediate Consequences

Virtually everyone, at every level, agrees that high-quality data is critical to their work. Many people go to great lengths to check data, seeking confirmation from secondary sources and making corrections. These efforts constitute what I call “hidden data factories” and reflect a reactive approach to data quality. Accommodating bad data this way wastes time, is expensive, and doesn’t work well. Even worse, the underlying problems that created the bad data never go away.

One consequence is that knowledge workers waste up to 50% of their time dealing with mundane data quality issues. For data scientists, this number may go as high as 80%.

A second consequence is mistakes, errors in operations, bad decisions, bad analytics, and bad algorithms. Indeed, “big garbage in, big garbage out” is the new “garbage in, garbage out.”

Finally, bad data erodes trust. In fact, only 16% of managers fully trust the data they use to make important decisions.

Frankly, given the quality levels noted above, it is a wonder that anyone trusts any data.

When Totaled, the Business Costs Are Enormous

Obviously, the errors, wasted time, and lack of trust that are bred by bad data come at high costs.

Companies throw away 20% of their revenue dealing with data quality issues. This figure synthesizes estimates provided by Experian (worldwide, bad data cost companies 23% of revenue), Price of Experience Matters ($20,000-per-employee cost of bad data), and Spratt of Clear Strategic IT Partners (16% to 32% wasted effort dealing with data). The total cost to the U.S. economy: an estimated $3.1 trillion per year, according to IBM.

The costs to businesses of angry customers and bad decisions resulting from bad data are immeasurable — but enormous.

Finally, it is much more difficult to become data-driven when a company can’t depend on its data. In the data space, everything begins and ends with quality. You can’t expect to make much of a business selling or licensing bad data. You should not trust analytics if you don’t trust the data. And you can’t expect people to use data they don’t trust when making decisions.

Two-Thirds of These Costs Can Be Eliminated by Getting in Front on Data Quality

“Getting in front on data quality” stands in contrast to the reactive approach most companies take today. It involves attacking data quality proactively by searching out and eliminating the root causes of errors. To be clear, this is about management, not technology — data quality is a business problem, not an IT problem.

Companies that have invested in fixing the sources of poor data — including AT&T, Royal Dutch Shell, Chevron, and Morningstar — have found great success. They lead us to conclude that the root causes of 80% or more of errors can be eliminated; that up to two-thirds of the measurable costs can be permanently eliminated; and that trust improves as the data does.

Which Companies Should Be Addressing Data Quality?

While attacking data quality is important for all, it carries a special urgency for four kinds of companies and government agencies:

Those that must keep an eye on costs. Examples include retailers, especially those competing with Amazon.com Inc.; oil and gas companies, which have seen prices cut in half in the past four years; government agencies, tasked with doing more with less; and companies in health care, which simply must do a better job containing costs. Paring costs by purging the waste and hidden data factories created by bad data makes far more sense than indiscriminate layoffs — and strengthens a company in the process.

Those seeking to put their data to work. Companies include those that sell or license data, those seeking to monetize data, those deploying analytics more broadly, those experimenting with artificial intelligence, and those that want to digitize operations. Organizations can, of course, pursue such objectives using data loaded with errors, and many companies do. But the chances of success increase as the data improves.

Those unsure where primary responsibility for data should reside. Most businesspeople readily admit that data quality is a problem, but claim it is the province of IT. IT people also readily admit that data quality is an issue, but they claim it is the province of the business — and a sort of uneasy stasis results. It is time to put an end to this folly. Senior management must assign primary responsibility for data to the business.

Those who are simply sick and tired of making decisions using data they don’t trust. Better data means better decisions with less stress. Better data also frees up time to focus on the really important and complex decisions.

Next Steps for Senior Executives

In my experience, many executives find reasons to discount or even dismiss the bad news about bad data. Common refrains include, “The numbers seem too big, they can’t be right,” and “I’ve been in this business 20 years, and trust me, our data is as good as it can be,” and “It’s my job to make the best possible call even in the face of bad data.”

But I encourage each executive to think deeply about the implications of these statistics for his or her own company, department, or agency, and then develop a business case for tackling the problem. Senior executives must explore the implications of data quality given their own unique markets, capabilities, and challenges.

The first step is to connect the organization or department’s most important business objectives to data. Which decisions and activities and goals depend on what kinds of data?

The second step is to establish a data quality baseline. I find that many executives make this step overly complex. A simple process is to select one of the activities identified in the first step — such as setting up a customer account or delivering a product — and then do a quick quality review of the last 100 times the organization did that activity. I call this the Friday Afternoon Measurement because it can be done with a small team in an hour or two.
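The arithmetic behind such a baseline is trivial to automate once the team has flagged each reviewed record. A toy sketch in Python, with invented review results standing in for the team’s judgments:

```python
# One entry per reviewed unit of work: True if the review found a
# critical error in that record. Toy data: 10 flawed records out of 100.
review_results = [i % 10 == 0 for i in range(100)]

flawed = sum(review_results)                  # count of records with errors
error_rate = flawed / len(review_results)     # fraction of flawed records
```

The hard part of the Friday Afternoon Measurement is the review itself; the tally at the end is a one-liner.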

The third step is to estimate the consequences and their costs for bad data. Again, keep the focus narrow — managers who need to keep an eye on costs should concentrate on hidden data factories; those focusing on AI can concentrate on wasted time and the increased risk of failure; and so forth.

Finally, for the fourth step, estimate the benefits — cost savings, lower risk, better decisions — that your organization will reap if you can eliminate 80% of the most common errors. These form your targets going forward.

Chances are that after your organization sees the improvements generated by only the first few projects, it will find far more opportunity in data quality than it had thought possible. And if you move quickly, while bad data is still the norm, you may also find an unexpected opportunity to put some distance between yourself and your competitors.

______________________________________________________________________

Service Objects spoke with the author, Tom Redman, and he gave us an update on the Sloan Management article reprinted above, particularly as it relates to the subject of the costs associated with bad customer data.

Please focus first on the measurable costs of bad customer data.  Included are items such as the cost of the work Sales does to fix up bad prospect data it receives from Marketing, the costs of making good for a customer when Operations sends him or her the wrong stuff, and the cost of work needed to get the various systems which house customer data to “talk.”  These costs are enormous.  For all data, it amounts to roughly twenty percent of revenue.

But how about these costs:

  • The revenue lost when a prospect doesn’t get your flyer because you mailed it to the wrong address.
  • The revenue lost when a customer quits buying from you because fixing a billing problem was such a chore.
  • The additional revenue lost when he/she tells a friend about his or her experiences.

This list could go on and on.

Most items involve lost revenue and, unfortunately, we don’t know how to estimate “sales you would have made.”  But they do call to mind similar unmeasurable costs associated with poor manufacturing in the 1970s and 80s.  While expert opinion varied, a good first estimate was that the unmeasured costs roughly equaled the measured costs.

If the added costs in the Seizing Opportunity article above don’t scare you into action, add in a similar estimate for lost revenue.

The only recourse is to professionally manage the quality of prospect and customer data.  It is not hyperbole to note that such data are among a company’s most important assets and demand no less.

©2018, Data Quality Solutions


The Role of Data Quality in GDPR

If you do business with clients in the European Union, you have probably heard of the new General Data Protection Regulation (GDPR) that takes effect in Spring 2018. This new EU regulation ushers in strict new requirements for safeguarding the security and privacy of personal data, along with requiring active opt-in permission and ease of changing this permission.

Most articles you read about GDPR nowadays focus on the risks on non-compliance, and penalties are indeed stiff: up to €20 million or 4 percent of annual turnover. However, we recently hosted a webinar at Service Objects with two experts on GDPR, and they had a refreshing perspective on the issue – in their view, regulators are in fact helping your business by fundamentally improving your relationship with your customers. As presenter Tom Redman put it, “Regulators are people (and customers) too!”

Dr. Redman, known as the Data Doc, is the author of three books on data quality, the founder of Data Quality Solutions, and the former head of AT&T’s Data Quality Lab. He was joined on our webinar by Daragh O’Brien, founder and CEO of Castlebridge, an information strategy, governance, and privacy consultancy based in Ireland. Together they made the case that GDPR is, in a sense, a healthy evolution across Europe’s different cultures and legal systems, taking a lead role in how we interact with our customers.

As Daragh put it, “(What) we’re currently calling data are simply a representation of something that exists in the real world who is a living breathing person with feelings, with emotions, with rights, and with aspirations and hopes, and how we handle their data has an impact on all of those things.” And Tom painted a picture of a world where proactive data quality management becomes a corporate imperative, undertaken to benefit an organization rather than simply avoid the wrath of a regulator.

At Service Objects, we like Tom and Daragh’s worldview a great deal. For our entire 15-plus year history, we have always preached the value of engineering data quality into your business processes, to reap benefits that range from cost savings and customer satisfaction all the way to a stronger brand in the marketplace. And seen through the lens of recent developments such as GDPR, we are part of a world that is rapidly moving away from interruptive marketing and towards customer engagement.

We would like to help you be part of this revolution as well. (And, in the process, help ensure your compliance with GDPR for your European clients.) There are several ways we can help:

1) View the on-demand replay of this recent webinar, at the following link: https://www.serviceobjects.com/resources/videos-tutorials/gdpr-webinar

2) Download our free white paper on GDPR compliance: https://www.serviceobjects.com/resources/articles-whitepapers/general-data-protection-regulation

3) Finally, contact us for a free one-on-one GDPR data quality assessment: https://www.serviceobjects.com/contact-us

In a very real sense, we too are trying to create a more interactive relationship with our own clients based on service and customer engagement. This is why we offer a rich variety of information, resources and personal connections, rather than simply tooting our horn and bugging you to purchase something. This way we all benefit, and close to 2500 existing customers agree with us. We feel it is time to welcome the brave new customer-focused world being ushered in by regulations such as GDPR, and for us to help you become part of it.

Three Building Blocks to General Data Protection Regulation (GDPR) Compliance

Is your business ready for the GDPR? On May 25, 2018, a sweeping change in global consumer privacy, one that will fundamentally change the way companies around the world perform outbound marketing, takes effect. This is the date that enforcement commences for the European Union’s new General Data Protection Regulation (GDPR), governing the use of personal data for over 500 million EU residents. US companies who market to customers or prospects in Europe will now face strict regulations surrounding the use and storage of consumer data, backed by potentially hefty revenue-based fines.

However, recent studies have shown that many businesses are woefully unprepared for GDPR, which will require changes ranging from point-of-entry data validation to the management of changing contact information. So, what is a good way to get started on the road to compliance? Start with these three building blocks.

For most organizations, GDPR compliance pivots around three fundamental building blocks: consent management, data protection, and data quality.

The first two of these building blocks will revolve around process change for most organizations. In the first case, consent management means that you will now need to prove that you have permission to use someone’s personal data for marketing purposes, and maintain records of this permission.

There are no exceptions to this rule for previously captured data, which means that consent may need to be re-acquired under mechanisms acceptable under GDPR. This also extends to providing easy and accessible ways for consumers to reverse this permission, extending all the way to Europe’s concept of “the right to be forgotten”—requiring you to erase all traces of a person’s contact information if requested by a consumer.
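One way to picture consent management is as a ledger that records when each permission was granted or withdrawn, and that can erase a person entirely on request. The sketch below is a simplified illustration of that idea, not a compliance-certified design; the class and field names are invented:

```python
from datetime import datetime

class ConsentStore:
    """Toy consent ledger: who granted permission, for what, and when."""

    def __init__(self):
        self._records = {}  # email -> {purpose: {"granted_at", "revoked_at"}}

    def grant(self, email, purpose):
        self._records.setdefault(email, {})[purpose] = {
            "granted_at": datetime.utcnow(), "revoked_at": None}

    def revoke(self, email, purpose):
        record = self._records.get(email, {}).get(purpose)
        if record:
            record["revoked_at"] = datetime.utcnow()

    def has_consent(self, email, purpose):
        record = self._records.get(email, {}).get(purpose)
        return record is not None and record["revoked_at"] is None

    def forget(self, email):
        """'Right to be forgotten': erase every trace of this person."""
        self._records.pop(email, None)

# Walk through the lifecycle: grant, revoke, then full erasure.
store = ConsentStore()
store.grant("ann@example.com", "newsletter")
granted = store.has_consent("ann@example.com", "newsletter")
store.revoke("ann@example.com", "newsletter")
after_revoke = store.has_consent("ann@example.com", "newsletter")
store.forget("ann@example.com")
after_forget = store.has_consent("ann@example.com", "newsletter")
```

A production system would also need durable storage and an audit trail of the consent events themselves, since GDPR requires you to prove when and how permission was obtained.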

The second building block, data protection, involves deploying processes—and possibly specific people—designed to protect consumers’ personal data from unauthorized disclosure.

At a process level, this means that organizations will need to show that they have safeguards in place against personal data being stolen or misused. One popular approach involves pseudonymization, where key personal information is kept separate and secure until actual use. Unlike anonymization, where ownership of data cannot be reconstructed, pseudonymization allows certain identifying characteristics to be used as a “password” to combine other separately stored components of information at the time of use.
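A minimal sketch of the idea in Python follows; the store names and fields are invented for illustration, and a production design would also encrypt and access-control the secured mapping:

```python
import secrets

identity_to_token = {}   # secured store: real identity -> random pseudonym
pseudonymous_data = {}   # working store: pseudonym -> non-identifying payload

def pseudonymize(email, payload):
    """File the payload under a random token instead of the person's identity."""
    token = secrets.token_hex(16)
    identity_to_token[email] = token
    pseudonymous_data[token] = payload
    return token

def reidentify(email):
    """Re-combine the two stores only at the time of use."""
    token = identity_to_token.get(email)
    return pseudonymous_data.get(token) if token else None

pseudonymize("ann@example.com", {"last_order": "2018-01-05"})
```

Note that the working store never contains the person’s identity: a breach of `pseudonymous_data` alone exposes no one.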

If your organization is large enough, GDPR may also require the formal role of a Data Protection Officer (DPO), with dedicated responsibilities within an organization for protecting personal data. The specific criterion for needing a DPO is “large-scale systematic monitoring of individuals,” along with more specific situations such as public authorities and organizations handling large-scale data processing of criminal convictions. With or without a formal DPO, companies will be expected to have a documented game plan for protecting consumer information.

Finally, data quality serves as the third building block. Once upon a time incorrect, fraudulent or changing contact records were seen as an annoyance, or perhaps an unavoidable expense—and if people received unsolicited marketing materials or contacts as a result, it was their problem to endure or resolve. Today, in the era of GDPR, data quality issues can lead to compliance problems with serious financial consequences. This means that data must be verified and corrected, both at the point of entry and time of use.

Of all three of these building blocks, data quality is the one that probably represents the largest ongoing responsibility for most organizations. Thankfully, it is also the one that is most amenable to automation.

Interested in finding out more about the role contact data plays in the General Data Protection Regulation (GDPR)? Visit our GDPR Solutions page, which contains a variety of resources that explain the key principles of GDPR compliance for contact data, and how automated data quality tools can protect your marketing efforts in the European marketplace.

Email Marketing Tip: Dealing With Role Addresses

Do you have any friends named “info” or “customerservice”?

If you do, our sympathies, because their parents were probably way over-invested in their careers. But in all likelihood, you probably don’t. Which leads to a very important principle about your email marketing: you always need to make sure you are marketing to real people.

Email addresses like “info@mycompany.com” or “customerservice@bigorganization.com” are examples of what we call role addresses. They are not addressed to a person, but rather to a job function, and generally include a number of people on the distribution list. They serve a valuable purpose, particularly in larger organizations – if you have a problem with Amazon.com, for example, you don’t want to wait for Cindy to get back from vacation before someone responds to you.

You probably realize that role email addresses create the same problems as any other non-person in your marketing database: wasted human effort, lower response rates, bounces, and the like. However, there are several other important reasons to purge role addresses from your contact database:

Bounce Rate. Role emails are generally the responsibility of an email administrator. These administrators are not always kept in the loop when individuals move on to other positions or leave the company. This can result in a role email’s distribution list not being up to date and emails being sent to inactive email addresses. These inactive addresses are usually set to automatically bounce emails, resulting in a higher bounce rate and poorer campaign performance.

Blacklisting. Spamming a role email address doesn’t just annoy people. As one article points out, it can trigger spam complaints and damage your sender reputation – in fact, role accounts are often used as spam traps by account holders. This can lead to your IP being blacklisted for the entire organization, cutting you off from leads or even existing customers far beyond the original email.

CAN-SPAM compliance. Permission to send email is fundamentally a contract with an individual, and marketing to a role email address risks having your materials go to people who did not opt-in or agree to your terms and conditions – putting you at risk for being in violation of the US CAN-SPAM act that governs email marketing.

New laws. In Europe, the new General Data Protection Regulation (GDPR) takes effect in 2018, severely restricting unsolicited email marketing. While it is not always clear that you are mailing to Europe (for example, many people do not realize that household names like Bayer and Unilever are based there), you are still bound by their laws and potentially stiff penalties. Eliminating role accounts from your contact database is an important part of mitigating this exposure.

Exponential risk. When it comes to risk, role addresses are the gift that keeps on giving. One of these addresses may go to 10 different people or more – and only one of them needs to complain to get you in trouble. Moreover, you can easily get multiple complaints for the price of one errant message.

Customer reputation. When someone signs up for your contact list using a role address, it is a form of “friendly fraud” that absolves them from personally receiving your emails – much like the person who signs up as “Donald Duck” to receive a free marketing goodie. But when other people start receiving your materials without their permission as a result, it is not a good way to start a customer relationship.

Thankfully, avoiding role-based addresses is relatively easy. In fact, many large email marketing providers won’t import these addresses in the first place. Or if you manage your contact database from within your own applications environment, we can help. Our email validation capabilities flag role-based addresses in your database like sales, admin, support, webmaster, billing, and much more. In addition, we perform over 50 verification tests, clean up common spelling and syntax errors, and return a quantitative quality score that helps you accept or reject addresses at the point of import.
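If you want a first-pass filter of your own before (or in addition to) a validation service, checking the local part of each address against a list of common role names goes a long way. A simplified sketch in Python; the list here is illustrative and far shorter than what a real service checks:

```python
# Common role mailbox names; a real validation service checks a much
# longer list and many variants of these.
ROLE_LOCAL_PARTS = {
    "info", "sales", "admin", "support", "webmaster",
    "billing", "customerservice", "help", "contact",
}

def is_role_address(email):
    """Flag addresses aimed at a job function rather than a person."""
    local_part = email.split("@", 1)[0].lower()
    return local_part in ROLE_LOCAL_PARTS
```

A filter like this catches the obvious cases at import time; the harder variants (departmental aliases, localized role names) are where a dedicated service earns its keep.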

So, with pun fully intended, your role in data quality is to ensure that your online marketing only goes to live, real people who welcome your message. Our role is to automate this process to make it as frictionless as possible. Together, we can keep your email contact data ready to roll!

Character Limitations in Shipping Address Fields – There is a Solution

If you are using an Address Validation service for shipping labels, then you may occasionally run into character count limitations with the Address1 field. Whether you are using UPS, FedEx, ShipStation or any other shipping solution, most character limits tend to range between 30 and 35 characters (some are even as low as 25 characters). While most addresses tend to be under this limit, there are always outliers that you’ll want your business solution to be ready to handle.

If you are using a DOTS Address Validation solution, you are in luck! The response from our API not only validates and corrects bad addresses but also allows you to customize address lines to meet your business needs. Whether you want to keep your address lines under a certain limit, place apartment or unit information on a separate line, or customize the address line in some other way, we can show you how to integrate the Address Validation response from Service Objects’ API into your business logic.

Below is a brief example using our DOTS Address Validation US 3 service to demonstrate the fragments that are returned in a typical valid response:

  • FragmentHouse
  • FragmentPreDir
  • FragmentStreet
  • FragmentSuffix
  • FragmentPostDir
  • FragmentUnit
  • Fragment
  • FragmentPMBPrefix
  • FragmentPMBNumber

If you are worried about exceeding a certain character limit, you can programmatically check the Address1 line result from our service to see if it exceeds a particular limit.

Check the Result – Not the Input

There are two obvious reasons you should check the result of the service instead of the input. First, you want to use validated and corrected addresses on your mailing label. Second, the input address may be too long before validation, but post-validation the corrected address could meet the requirements, in which case no customization is needed to fit within the character limitations.

With this understanding, if the resulting validated street address in Address1 line is over the character limitation, then your application can go about splitting up the address in ways that best suit your needs.
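
As a sketch, assuming the validated response has already been parsed into a dictionary with an Address1 field, the length check is straightforward (the 30-character limit below is just an example):

```python
# Sketch: check the validated Address1 from a parsed API response (represented
# here as a plain dict) against a carrier's character limit before labeling.
ADDRESS1_LIMIT = 30  # example limit; adjust to your carrier's requirement

def exceeds_limit(response: dict, limit: int = ADDRESS1_LIMIT) -> bool:
    """Return True if the validated Address1 line is longer than the limit."""
    address1 = response.get("Address1", "")
    return len(address1) > limit

validated = {"Address1": "12345 W FAKE INDUSTRIAL ST NE STE 130, #678"}
if exceeds_limit(validated):
    print("Address1 is too long; split it using the returned fragments.")
```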

For example, let’s say you have a long address line like the following:

12345 W FAKE INDUSTRIAL ST NE STE 130, #678

This is obviously a fake street, but it helps demonstrate some of the different ways you can handle long address lines. In this example, the address is 43 characters long, including spaces. The service would return the following fragments for this address:

FragmentHouse: 12345
FragmentPreDir: W
FragmentStreet: Fake Industrial
FragmentSuffix: St
FragmentPostDir: NE
FragmentUnit: STE
Fragment: 130
FragmentPMBPrefix: #
FragmentPMBNumber: 678

With this example, one solution to stay under the character limit is to move the suite and mailbox information to a separate address line, so it would appear like so:

12345 W FAKE INDUSTRIAL ST NE
STE 130, #678

You may need to fine-tune the logic in your business application beyond this basic algorithm, but it can help you get started with tailoring your validated address information to meet different character limitations.

In most cases, the following can be used in Address line 1:

  • FragmentHouse
  • FragmentPreDir
  • FragmentStreet
  • FragmentSuffix
  • FragmentPostDir

And the following in Address line 2:

  • FragmentUnit
  • Fragment
  • FragmentPMBPrefix
  • FragmentPMBNumber
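
Putting this together, here is a sketch of the two-line split in Python, assuming the fragment fields are available as a dictionary keyed by the names above (the exact response parsing will depend on your integration):

```python
def split_address(fragments: dict) -> tuple:
    """Rebuild a long street address as two lines from validation fragments.

    Line 1: house number, pre-directional, street, suffix, post-directional.
    Line 2: unit information plus private mailbox (PMB) details.
    """
    line1 = " ".join(filter(None, [
        fragments.get("FragmentHouse", ""),
        fragments.get("FragmentPreDir", ""),
        fragments.get("FragmentStreet", ""),
        fragments.get("FragmentSuffix", ""),
        fragments.get("FragmentPostDir", ""),
    ]))
    # Keep the PMB prefix attached to its number, e.g. "#678".
    pmb = fragments.get("FragmentPMBPrefix", "") + fragments.get("FragmentPMBNumber", "")
    line2 = " ".join(filter(None, [
        fragments.get("FragmentUnit", ""),
        fragments.get("Fragment", ""),
        pmb,
    ]))
    return line1, line2

fragments = {
    "FragmentHouse": "12345", "FragmentPreDir": "W",
    "FragmentStreet": "FAKE INDUSTRIAL", "FragmentSuffix": "ST",
    "FragmentPostDir": "NE", "FragmentUnit": "STE", "Fragment": "130",
    "FragmentPMBPrefix": "#", "FragmentPMBNumber": "678",
}
print(split_address(fragments))
# -> ('12345 W FAKE INDUSTRIAL ST NE', 'STE 130 #678')
```

Here line 1 comes out at 29 characters, comfortably under a 30-character carrier limit, while the suite and mailbox details move to line 2.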

PO Boxes

There is an important exception to be aware of: PO Boxes. You will need to determine whether an address is a PO Box so you can avoid applying the above logic to it. This is simple to do by checking the DPVNotes field returned by the Address Validation service. PO Boxes typically fit under character length limitations, but some organizations choose to rebuild addresses from fragments regardless of field length. If this is the case and you have a PO Box, then the fragments to rebuild it are:

  • FragmentStreet
  • FragmentHouse
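
Assuming you have already confirmed via the DPVNotes field that the address is a PO Box, the rebuild is a simple concatenation of those two fragments (the sample values below are illustrative):

```python
def rebuild_po_box(fragments: dict) -> str:
    """Rebuild a PO Box address line from validation fragments.

    For PO Box results, FragmentStreet holds the "PO Box" designation and
    FragmentHouse holds the box number, so the order is street then house.
    """
    return " ".join(filter(None, [
        fragments.get("FragmentStreet", ""),
        fragments.get("FragmentHouse", ""),
    ]))

print(rebuild_po_box({"FragmentStreet": "PO BOX", "FragmentHouse": "123"}))
# -> PO BOX 123
```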

Highly Customizable

The examples above may require some fine-tuning to meet your business requirements, but hopefully they have also demonstrated the highly customizable nature of the address validation service and how it can be tailored to meet your address validation needs.

If you have any questions about integrating the service into your particular application, contact our support team at support@serviceobjects.com and we will gladly provide any support that we can!

Now or Later? When to Clean Your Marketo Database

If you were to make a list of the things people love to do, data cleanup would usually rank pretty low on the list. (Except for us here at Service Objects. We rather enjoy data cleanup. But then again, we’ve always been a little different.) This naturally leads to another question: should you clean up your contact data BEFORE you put it into Marketo, or LATER, before you actually use it in a campaign?

We have a three-part answer to this question: yes, yes, and automate the process.

Here’s why: there are irreplaceable benefits to each process. And when you properly automate it with the right tools, the process becomes frictionless and institutionalizes the ROI of these benefits. Let’s explore this in more detail.

Validating contact data such as names, email, physical addresses and phone numbers BEFORE loading them into Marketo has several advantages:

Saving money.  Your Marketo pricing tier depends on the number of leads in your database. By cleaning this data on the front end, you can often delay or perhaps even avoid entirely the problem of moving to a higher tier and paying more for non-viable leads. And within your tier, fewer bad leads translate directly into less human intervention throughout the marketing cycle and more accurate analytics.

Garbage in, garbage out. Putting dirty data into your marketing database skews whatever metrics or analyses you might do beyond marketing campaigns, including the all-important conversion rate. And catching bad contact information in real-time lets you message the user at time of entry so they can correct it, preserving valuable leads and preventing possible customer service issues.

Detecting bogus names and fraudulent leads. What good is a database full of Donald Ducks and Ninja Turtles, who faked you out to get a free report? Tools such as name validation can programmatically catch and keep fraudulent contact information out of your lead database in the first place.

Lead preservation. Conversely, your bad contact data can be a hidden source of leads and revenue – if you use automated tools to correct bad addresses or append missing information such as contact phone numbers.

Finally, there is the broader question of lead quality. Marketo’s own lead scoring – based on tracking activities, behavior and demographics – is important but may not provide front-end protection from fraudulent or bad data. Contact-level lead validation adds a quantitative value for lead quality, based on over 200 criteria, that lets you decide to fast-track a lead, put them in your drip campaign to see how they respond, or even discard the lead.
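
As an illustration of how such a quality score might drive routing decisions, here is a hypothetical Python sketch; the thresholds are examples for demonstration, not recommendations from the service:

```python
# Hypothetical routing logic driven by a 0-100 lead quality score returned
# by a lead validation service. Thresholds here are purely illustrative.
def route_lead(quality_score: int) -> str:
    """Map a lead quality score to a marketing action."""
    if quality_score >= 80:
        return "fast-track"   # hand straight to sales
    elif quality_score >= 40:
        return "drip"         # nurture campaign, watch the response
    else:
        return "discard"      # likely bogus or fraudulent

print(route_lead(92))  # -> fast-track
```

In practice you would tune these cutoffs against your own conversion data rather than hard-coding them.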

Now, let’s look at the other side of the coin. Validating lead data LATER at regular intervals, particularly at the time you use it, has several advantages as well.

Coping with change. Over 70% of contact data will go bad in the course of just a year. Lead validation tools can check your existing leads and then correct, update, or remove them based on the results. This saves you money by only keeping and paying for viable leads, allowing you to better identify sources of high and low quality leads and providing more accurate reporting.

Taking care of your customers. By triggering emails or other contacts to customers who appear to have changed their addresses, using tools such as our national change-of-address (NCOA Live) capabilities, you provide better service and proactively avoid future service or delivery failures.

Making your IT department happy. Lead and contact validation tools from Service Objects are easily automated within Marketo using our webhooks, which can be found on Marketo’s LaunchPoint marketplace. In addition, we offer convenient offline batch processing for contact data files without a technical interface.

Of course, automated contact and lead validation are not the only forms of data cleanup that can help – this blog by Perkuto’s John Hill touches on other useful areas such as screening out competitors, inactive leads and people with unresponsive email addresses. With a clear process in place – and the right automation partner – it can be easy and inexpensive to optimize the value of your Marketo database at EVERY contact touch point.

GDPR Compliance: Is Your Business Ready?

If you conduct business in Europe, May 2018 will be an important month: this is when the European Union’s General Data Protection Regulation (GDPR) is scheduled to take effect.

GDPR represents a sweeping set of privacy regulations that impact your use of personal data from European citizens. If you conduct business with people from Europe – whether they are your customers, employees, or job prospects – GDPR affects you as well. It will require you to have policies in place to protect people’s personal data, as well as require notification when this data has been breached. And penalties for violations will be extremely stiff, up to the greater of 20 million Euros or 4% of your gross turnover.

GDPR starts with the definition of “personal data.” This is an extremely broad net: a recent article from Software Development magazine notes that the European Commission’s guidelines include both obvious data such as name, address or email, and associated data ranging from bank accounts to photos and social media posts. Even the IP address a European is using on their computer is considered part of this personal data.

Much like the HIPAA requirements on electronic health care data in the United States, GDPR will require organizations to safeguard the personal data they collect and store in the course of doing business. At one level, this will involve technology such as encrypted data storage, password protection, and other approaches, along with policies and procedures for protecting this data. At another level, it obligates you to inform European consumers about your privacy policies, gain explicit consent to collect and use their personal data and provide them with the ability to control or opt-out of data collection. And in the event personal data is compromised, you need a plan for reaching people affected by the breach.

Each of these levels has important areas where data quality and GDPR compliance efforts intersect. Some of the questions businesses will have to ask themselves include:

  • Do we have accurate contact information for people we do business with in Europe?
  • Is there a notification procedure in place for our privacy and data policies, including opting out of data collection or making changes to personal data?
  • If a breach notification were necessary, do we have the means to quickly reach all affected parties?
  • How do we handle changes to contact information? What if a person in your database moves, changes jobs, or gets a new email address?

This means that your GDPR and data quality strategies will need to be closely linked. Tools such as international address verification, lead validation and name validation can help make sure data is complete and correct as it enters your system, and stays correct when it is needed later. As a recent article in Information Management points out, the key to GDPR compliance lies in proactively analyzing your data and performing a thorough risk assessment long before an actual privacy issue arises.

The European Union has long been on the vanguard of consumer protection legislation, and the new GDPR regulations are the latest in an effort to level the playing field between big data and the individual rights of its citizens. They have a global reach, whether you do business in Europe or serve Europeans from elsewhere. At a broader level, GDPR is part of a new reality that businesses will soon need to work with, one that is part of a larger trend toward increasing privacy regulations.

May 2018 is coming soon – is your business ready?