Posts Tagged ‘Data Quality’

The High Cost of Poor Data Quality

If you are a marketing, order fulfillment or data manager, there are some things you don’t want to be greeted with when you come to work in the morning – and a surprising number of them revolve around issues with your contact data. For example:

  • You’ve sent someone’s sensitive personal information to a similar but incorrect address that was fat-fingered during data entry, and it’s become a news story.
  • 20% of your marketing budget was spent on direct mail pieces to people with names like “Mickey Mouse” and “SpongeBob SquarePants” who faked out your lead magnets to get free bonuses.
  • You shipped several high-ticket items to a fraudster using fake contact information.
  • You are facing legal action for violations of the Telephone Consumer Protection Act (TCPA), over unsolicited telemarketing to wireless numbers that once belonged to your contacts but have since changed hands.

According to Gartner, the cost of bad data to US businesses is roughly $15 billion per year as of 2018, and UK site MyCustomer notes that bad customer data alone costs UK businesses 5.9% of annual revenue. But those are just aggregate numbers that often don’t mean anything to the average business. Here, let’s look at some of the real ground-level consequences of bad contact data.

Marketing inefficiency

According to one industry estimate, the average company spends $180,000 per year simply on direct mail that is misdelivered due to inaccurate data. Your cost per converted lead is directly impacted by the quality of your data, in a chain that runs through areas such as direct mail costs, list maintenance, human intervention, and the yield and ROI of your campaigns.

Reputational damage

Misdelivered packages. Service failures. Customer service issues. Problems like these are what, down in the trenches, create negative brand reputations that no amount of advertising or marketing can overcome – particularly in an era of social media, where your failings are always on display. Conversely, when people can count on you for quality in all of their interactions with you, it builds consumer trust and a good word-of-mouth reputation.

Compliance issues

This is one area where the cost of bad contact data becomes very real and tangible, as newer data privacy regulations have introduced serious penalties for compliance violations. For example, the European Union’s GDPR includes potential fines of up to 4% of annual revenue, while the TCPA mentioned above carries penalties of up to $1,500 per individual violation, resulting in numerous multi-million-dollar judgments against consumer firms nationwide. As a result, compliance issues alone have become a major reason for an increasing focus on data quality.

Pay now or pay later

One way to look at the impact of data quality on your business is what we call the “1-10-100” rule:

  • Catching bad data at the time of data entry may cost one cent per entry
  • Correcting bad data at the time of use may cost ten cents
  • Managing the consequences of using bad data may cost a dollar – or more
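
To put rough numbers on this rule, here is a quick back-of-the-envelope sketch in Python. The 100,000-record volume is a hypothetical example; the per-record costs are the ones listed above:

```python
# Hypothetical illustration of the "1-10-100" rule: the same batch of bad
# records costs dramatically more the later in the pipeline it is handled.
BAD_RECORDS = 100_000  # assumed volume, for illustration only

COST_PER_RECORD = {
    "caught at data entry": 0.01,      # one cent per entry
    "corrected at time of use": 0.10,  # ten cents per entry
    "consequences of bad data": 1.00,  # a dollar (or more) per entry
}

for stage, unit_cost in COST_PER_RECORD.items():
    print(f"{stage}: ${BAD_RECORDS * unit_cost:,.2f}")
```

At 100,000 records, the same problem costs $1,000 to prevent at entry but $100,000 to clean up after the fact.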

Scaling this to an organizational level, a proactive approach to data hygiene is far and away the most cost-effective way to avoid the negative financial and reputational consequences of bad contact data.

This means having processes that encompass all of your contact data touch points, including marketing, shipping, customer service and more. In particular, it means ensuring clean contact data at both the time of data entry and the time of deployment, since data decays at a substantial rate every year.

Today this also means automating the process of contact data quality, by integrating tools such as address validation, email validation, lead and order validation directly into your marketing automation or CRM environment.

Want to learn more? Visit our solutions pages online, or download our free white paper Hitting the Data Trifecta: Three Secrets of Achieving Data Quality Excellence.

Third-Party Platform Integration

Today I am writing about the ways to integrate with Service Objects APIs. This will be different from our other discussions where we talk about running batches and hooking up our APIs in code. Here I am going to discuss leveraging our APIs in third-party platforms and embedding our services directly into those underlying technologies. Third-party platforms have varying degrees of available integration channels, and Service Objects products can easily be injected into most of these channels.

Why data quality is important for third-party platforms

First off, what do we mean by third-party platforms? For the most part, we are talking about CRM systems. From the Service Objects perspective, we mean any platform out there that interfaces with data such as names, addresses, emails, phone numbers, IP addresses and so on. These are systems that, at their very basic core, allow users to interact with data in their systems in a human-readable way. Beyond the basics, these platforms have much more functionality built into them that is tailored to solve specific problems and address specific needs.

It is this broad range of functionality that makes third-party platforms rely on good data and supplemental data points. As the old phrase goes, “garbage in, garbage out.” Without good data you cannot rely on your underlying business processes to be carried out effectively. Companies pour countless hours and dollars into implementing their business logic in the systems and platforms they build or subscribe to, and they are constantly refining those processes to adapt to changing business practices and improve efficiency. If the data in your system is “garbage,” all the hard work put toward creating great processes cannot be realized. That’s where Service Objects steps in, ensuring your data is as clean, up-to-date and accurate as possible.

Adding value to your data

On the other hand, your data may be accurate out of the gate, but you may want to append additional data points to it, or evaluate how these data points work together, so you can properly feed a downstream process. Service Objects’ web services help with that too. When we validate a record, we return many additional data points beyond the validation itself that can be used to add value to your records.

Here are a few examples of how this added value can be used:

  • Your organization may be trying to feed leads to sales staff in the most appropriate time zone. Our service appends time zone data to your name and phone records, so you can add this logic to your lead distribution.
  • With phone validations, we can return the date of porting, line type (e.g. whether it is a landline, wireless or VOIP), contacts associated with the phone line, and SIC codes.
  • For addresses, we can provide fields such as barcode digits (useful for matching duplicate addresses), carrier route, congressional district codes, latitude/longitude, designated market area codes and address fragments.
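
As a concrete illustration of the first bullet above, here is a minimal Python sketch of routing leads by an appended time-zone field. The record layout, the "time_zone" field name and the rep addresses are hypothetical placeholders, not our service's actual output schema:

```python
# Route each lead to a sales rep in the matching time zone. In a real
# integration, "time_zone" would be appended by the validation service.
REPS_BY_ZONE = {
    "PST": "alice@example.com",
    "EST": "bob@example.com",
}

def route_lead(lead):
    """Return the rep responsible for this lead's appended time zone,
    falling back to a catch-all queue for unknown zones."""
    return REPS_BY_ZONE.get(lead.get("time_zone"), "unassigned-queue@example.com")

lead = {"name": "Pat Smith", "phone": "805-555-0100", "time_zone": "PST"}
print(route_lead(lead))  # alice@example.com
```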

These are just a few examples from a couple of our services, which offer much more. A quick way to gauge the value we can add to your organization is to browse our developer guides or try out our services on our lookup pages.

There is an enormous amount of functionality that businesses require that often does not come out of the box in third-party platforms. Organizations keep their data well-tuned and supplemented by using the built-in features of these platforms that allow them to integrate more complex external logic that is not possible (or not easily possible) with the out-of-the-box product.

Support for a wide range of interfaces

Nowadays third-party platforms are opening their systems up to more and more customization; Salesforce, for example, has been a leader in this area since the early 2000s. The way platforms typically open up their systems is by allowing external connections to web resources such as web APIs. With these APIs, organizations are given the ability to push and pull data from the platform. This means that instead of trying to build logic into a third-party platform, organizations can rely on experts in their respective fields to do the heavy lifting for them, and simply make a call to an API.

With Service Objects, you can be up and running in minutes, validating and appending data to your records while we deal with the complexities under the hood. Other examples of platforms that Service Objects web service APIs can be called from are Marketo, Zoho, SugarCRM, Eloqua, Dell Boomi, and many others. We have, or can create, step-by-step documentation for most of these, so if there is a platform your organization needs help integrating us with, simply reach out to us.

Not all platforms expose the ability to call web service APIs directly from their applications. Many of these, though, provide an API themselves that you can program against. In this case, you could write a bit of code to pull records from the third-party platform from outside of it, call our API to validate the records, and then push them back into the platform. This approach is a little more work, but it is a task that can usually be completed quickly and is not particularly resource-heavy. HubSpot is a good example of a platform that only allows you to connect to it externally.
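
The pull-validate-push pattern just described can be sketched in a few lines of Python. Everything here is a stand-in: the in-memory platform client takes the place of a real platform SDK, and validate_address() takes the place of an actual call to our validation API:

```python
class InMemoryPlatform:
    """Stand-in for a real third-party platform API client."""

    def __init__(self, contacts):
        self.contacts = contacts

    def get_contacts(self):
        return list(self.contacts)

    def update_contacts(self, records):
        self.contacts = list(records)


def validate_address(record):
    """Placeholder for a real validation API call: tidy whitespace and
    casing, and flag whether an address is present at all."""
    cleaned = dict(record)
    cleaned["address"] = cleaned.get("address", "").strip().title()
    cleaned["validated"] = bool(cleaned["address"])
    return cleaned


def sync(platform):
    """Pull records, validate each one, then push the results back."""
    cleaned = [validate_address(r) for r in platform.get_contacts()]
    platform.update_contacts(cleaned)


platform = InMemoryPlatform([{"name": "Pat", "address": "  27 main st  "}])
sync(platform)
print(platform.contacts[0]["address"])  # 27 Main St
```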

There is also another kind of platform out there that acts more like a hub for APIs. The most obvious example of this is Mulesoft. Mulesoft can do many things, but from our vantage point it is a platform that acts as a hub for your internal and external APIs and allows you to maintain fine-grained control over how they are used. When paired with Service Objects, you can inject our data validation services into any of your applications connected through Mulesoft and implement logic around them.

Need programming support? We’ve got you covered.

The last thing we want to touch on is our sample code and our programming language support. Off the top, we support and have sample code for C#, Java, PHP, Ruby, Python, NodeJS, Classic ASP, Apex for Salesforce and ColdFusion. We also have sample code for integrating right into Microsoft SQL Server. We handle a lot of interfaces, and our capabilities are so wide-ranging that if you need a sample we don’t directly have, we can typically build one for you on demand. Want to learn more about interfacing with your third-party platform? Visit our online developer’s portal, or contact us anytime.

A Shakespearean Sonnet to Data Quality

A little bit of summertime fun with the Bard and data quality.

“What’s in a NAPE?
That which we call a rose
By any other name would smell as sweet.”
-William Shakespeare, Romeo and Juliet

OK, Shakespeare said ‘name’ not NAPE. But the sentiment rings true for data quality. Whatever you call it, good data quality practices will make your marketing perform sweeter. For us, NAPE is a simple acronym we use to describe the four most important elements of a contact record, and where you want to make sure your contact data is accurate:

N – Name

A – Address

P – Phone

E – Email

It is pretty clear to see why: having someone’s name wrong is not a great way to start off a relationship, and their address, phone and email need to be correct to ensure your messages reach them. Now, let’s look at why each of these NAPE categories is important to you, and see how our services can support them.

How Macbeth would handle your data problems

“Fair is Foul, Foul is Fair”
-William Shakespeare, Macbeth

This quote is a good metaphor for the life cycle of contact data and its quality, because good leads age over time (Fair to Foul), but our automated services can turn bad leads into good ones again, or flag them for removal (Foul to Fair).

Here are some of the ways we do this:

Name. Not every marketing lead or order you get has a legitimate name, for reasons ranging from fat-fingered data entry to sidestepping lead collection or committing fraud. Our DOTS Name Validation product flags possibly fraudulent names – including William Shakespeare! – by providing individual scores for vulgarity, celebrity, bogus, garbage and dictionary names, as well as an overall quality score, Name Score.
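
To show how scores like these might be used, here is a small Python sketch that screens incoming leads. The 0-100 scale, field names and thresholds are illustrative assumptions, not the product's documented output:

```python
# Screen leads using hypothetical name-quality scores. A real integration
# would populate these fields from the validation service's response.
def is_probably_bogus(result, min_name_score=40, max_sub_score=80):
    """Flag a record whose overall name score is low, or whose bogus or
    garbage sub-scores are high."""
    if result.get("name_score", 0) < min_name_score:
        return True
    return (result.get("bogus_score", 0) > max_sub_score
            or result.get("garbage_score", 0) > max_sub_score)

good = {"name": "Jane Doe", "name_score": 92, "bogus_score": 2, "garbage_score": 0}
fake = {"name": "Mickey Mouse", "name_score": 15, "bogus_score": 95, "garbage_score": 5}

print(is_probably_bogus(good))  # False
print(is_probably_bogus(fake))  # True
```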

Address. Bad address data can occur as a result of factors such as data entry error, fraudulent data, or the natural decay of contact data records over time. Knowing where a contact record is from is critical for global compliance issues such as the European Union’s GDPR. Our flagship Address Validation products for US, Canadian or international addresses will correct, validate and append mailing addresses from over 240 countries, to improve accuracy and increase customer satisfaction.

Phone. Contacting wrong or changed phone numbers wastes your time, annoys prospects or customers, and can lead to huge compliance fines in the wake of laws such as the recently updated Telephone Consumer Protection Act (TCPA). Our Phone Validation products can provide contact information for over 400 million US and Canadian numbers, as well as line type, porting and geocoding information for numbers around the world.

Email. Email validation improves deliverability, decreases bounce rates, helps preserve your sender reputation, and can be important for compliance issues such as the US CAN-SPAM Act. Our DOTS Email Validation takes a five-stage approach to validating email addresses, weeding out invalid, undeliverable, and bogus email addresses worldwide while correcting common domain errors.

Finally, there is cross-validation between all of these items – for example, making sure your order for Stratford-upon-Avon isn’t coming from an IP address in an obscure third-world country. Tools such as DOTS Lead Validation and DOTS Order Validation will check your leads or orders against more than 130 data points to return overall quality and confidence scores.

Of course, data validation tools like these have gone from being a luxury to a necessity in recent years, for reasons that include marketing cost-efficiency and yield, competitive factors, customer reputation, and regulatory compliance. If you ignore good data hygiene nowadays then alas, poor Yorick, you face consequences ranging from stiff penalties to loss of market share.

How to avoid a Shakespearean data tragedy

In The Merchant of Venice, the Bard penned, “I am not bound to please thee with my answers.” We take a very different approach. Our friendly technical staff loves to give answers to your questions and explore customized solutions with you, with no sales pressure at all. You can even test-drive our tools online, explore all of our documentation, or get a free trial key for up to 500 transactions. Contact us anytime and let us help you sleep, perchance to dream, better about your contact data.

Explaining Data Quality to Third Graders, Upper Management, and Other Non-Experts

Data quality is very important. But that doesn’t always mean it is clear to everyone. In this blog article, I’d like to offer some strategies for discussing it with all the less-technical people in your life – ranging from your boss to your children.

Our field has a terminology all its own, not to mention the underlying technology behind it. Unfortunately, for some people terms like API sound more like the name of a hot new crime drama (“API – Las Vegas”), and things like SOAP and REST are about showering and going to bed. Worse, sometimes these are the very same people who control your budget for data quality. So let’s look at some strategies for reaching them better:

Start with terminology. Lose the acronyms and the tech-speak. For example, an API becomes “capabilities that are embedded in our marketing automation platform,” or simpler yet “plug-in tools that add value.” A good way to think about this is how you would describe something to a third grader: figure that out, and you are a long way towards describing it to your upper management, or other non-experts. Use words of two syllables or less where possible, stick to plain English, and never stop trying to simplify and summarize what you are saying.

Work backwards from benefits. Who cares about things like JSON? You do, of course, if you’re a programmer. But your boss might too, if he or she realizes that standard output formats make it easier to integrate the latest data quality tools with your current automation platforms. You will always have more success selling the idea THAT something is important if you can articulate WHY it is important.

Get out of the weeds. When the World Wide Web became big in the 1990s, some people saw it as a protocol for transferring information between computers. Others saw it as a tool that could enable everything from e-commerce to a world of information on demand. (And still others couldn’t explain it to people at all!) In much the same way, your ability to start with the big picture of data quality adds depth and credibility to whatever you are presenting.

Use analogies and metaphors. A tool like Address Validation makes more sense to people when you can describe it as a filter that keeps bad data out. Likewise, geocoding is often best described visually on a map. These are just two examples of how putting things in everyday terms opens people’s minds to what you are saying.

From technical data to telling stories

There is one other important reason for learning to describe technology in human terms – it builds your skills as a good storyteller, and in turn, a better expert. We naturally think in terms of stories, and your ability to weave a good one often makes you stand out as a leader. Think back to some of the most influential speeches you have heard, on television or in person, and I’ll bet that a good story was at the heart of them.

The reality is that, no matter how technical we are, we are all salespeople: every single one of us is in the business of selling ideas to other people. Learning to communicate these ideas well to non-experts can become an important part of our professional toolkit, and ultimately help us accomplish more of our goals as technology experts.

Why Is Pricing So Hard?

Pricing is – by far – the most common question asked when we start talking with prospective customers. And of course, we understand that you need to know the costs as part of your decision-making process. For us, we want to make sure you have the right solution for your needs and then the best pricing for your solution. In this article, I would like to share a little more about how our pricing works.

It starts with us understanding what features and capabilities are best for your specific needs, so we can work together to save you money. Let’s start with a few examples.

1. Getting the right products for your use case

We have a total of 24 different products, whose costs vary and whose capabilities sometimes overlap. We are always happy to share the prices of individual products with you – but we have learned over time that it is even more important to start from your specific business problems, as opposed to a price list. This way we can ensure that you are spending as little as possible, but getting all the capabilities you need.

Here are some common examples that highlight this:

Example 1: Your customer and lead addresses are one of your most valuable assets, and you want the best available address validation capabilities to ensure they are at the highest level of data quality and accuracy.

Our most advanced – and expensive – address validation service is DOTS Address Detective, a sophisticated product that can resolve and validate even some of the most difficult or fatal addressing issues. But we would actually suggest a much more cost-effective solution that will give you everything you need: namely, running 100% of this data through our flagship DOTS Address Validation – US product first. This will often validate and correct around 95% of your data, and then you can run the remaining 5% through Address Detective. This way you save money by using a less expensive product for the majority of your addresses and only use the more costly solution for the more problematic 5% of your data.
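
The two-tier workflow above is easy to sketch in Python. Both validator functions here are toy placeholders standing in for calls to the flagship service and to Address Detective respectively; only the control flow is the point:

```python
def cheap_validate(address):
    """Placeholder for the flagship validator (catches most addresses).
    The toy rule here just checks for a comma-separated address."""
    return "," in address

def expensive_validate(address):
    """Placeholder for the fuzzy-logic fallback validator."""
    return bool(address.strip())

def validate_tiered(addresses):
    """Run everything through the cheap validator first, and send only
    the failures to the expensive one."""
    first_pass, leftovers = [], []
    for addr in addresses:
        (first_pass if cheap_validate(addr) else leftovers).append(addr)
    # Only the leftovers incur the cost of the expensive service.
    rescued = [a for a in leftovers if expensive_validate(a)]
    return {"first_pass": first_pass, "rescued": rescued}

result = validate_tiered(["27 Main St, Springfield IL", "742 Evergreen Terrace"])
print(result["rescued"])  # ['742 Evergreen Terrace']
```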

Example 2: You have international addresses in your data set, but we determine that only 25% of these addresses are international and 75% are US-based.

Here we would recommend using Address Validation – US for the 75% (as it is less expensive) and DOTS Address Validation – International for the other 25%. In addition, the US validation service also comes with additional features that do not apply to international addresses, such as Residential Delivery Indicator (RDI), delivery point validation (DPV), generation of ZIP+4 codes, and more.
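
Splitting such a mixed file is straightforward. In this hedged Python sketch, the "country" field name and the set of US spellings are assumptions for illustration; US records go to one bucket and everything else to another:

```python
US_NAMES = {"US", "USA", "UNITED STATES"}  # assumed spellings, for illustration

def split_by_country(records):
    """Partition records so US addresses can use the less expensive US
    service and the rest can use the international service."""
    us, intl = [], []
    for rec in records:
        country = rec.get("country", "").strip().upper()
        (us if country in US_NAMES else intl).append(rec)
    return us, intl

records = [{"country": "USA"}, {"country": "France"}, {"country": "us"}]
us, intl = split_by_country(records)
print(len(us), len(intl))  # 2 1
```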

Example 3: You ask for pricing for DOTS GeoPhone, but after speaking to our product experts, learn you want mobile data as well.

This is one case where we would suggest a more expensive solution than what you were looking for because our DOTS GeoPhone Plus product offers the mobile data that you need.

Example 4: You might think you need “lead validation,” but your lead forms only collect names and email addresses.

In a case such as this, we would actually steer you away from our DOTS Lead Validation product; since you do not collect addresses or phone numbers, you would not be able to take advantage of the cross-validation checks this product offers and would be overpaying for a solution that would be more cost-effectively solved with DOTS Name Validation and DOTS Email Validation.

As you can see, the potential combinations of needs are endless, but this is where our expertise with both our products and their prices comes in. More often than not, once our product experts have a better understanding of what you are trying to accomplish, we can suggest products or combinations of products that save you more money and/or better meet your needs.

2. Understanding product pricing

We are often asked why our validation products are priced differently – and why they are sometimes priced very differently from competitors.

First, a lot of this has to do with the features bundled with our products. So when comparing us to our competitors, we might first appear more expensive. BUT – the devil is in the details – or in this case, the extra costs are in our competitors’ add-ons. Be sure you are comparing apples to apples when you can. (For example, our Address Validation – US product includes many features that are sold at additional cost by other companies.) We find that once you clearly define your needs and understand that many of our competitors charge extra for additional features you need, we are very cost-competitive. Then factor in our commitment to customer support, guaranteed uptime and overall product quality, and we are a clear top option.

Second, you get what you pay for. When it comes to our competitors, not all data validation companies are created equal. Like most industries, there is a range to the quality of service and support that is provided. With almost two decades of experience in the space, we proudly sit at the top end of the quality range and continuously commit resources to stay there.

Third, not all lookups are created equal. From a purely practical point of view, for example, phone data is far more expensive to gather and maintain than BIN data. Most of our products leverage sophisticated third-party databases and other resources. As a result, our costs vary per product based on factors ranging from licensing costs to update frequency – so, for example, your dollar goes farther with DOTS Email Validation than it does with the more sophisticated DOTS Lead Validation – International (which includes our Email Validation service). We believe in breaking these costs out by product so that those who need fewer capabilities can save more money while still having a high-quality data validation solution.

3. Are you a power user?

If you process a lot of data, that’s great! In this case, we can save you even more money, because we are able to offer volume discounts on our products. These discounts vary per product – talk with us for more details.

4. You are paying for quality as well as capabilities

Most of our customers use our products in mission-critical applications, where uptime and responsive support are crucial. This is why we offer responsive technical support 24/7 for urgent issues, as well as the industry’s only financially-backed Service Level Agreement (SLA) guaranteeing 99.999% uptime or better. When you make pricing decisions for important data quality applications, it is important to factor in the potential risks and very real costs of poor support and implementation problems from other vendors.

5. We can get creative

We love to work with our clients to understand what they are trying to accomplish, and if needed, come up with some pretty creative custom solutions. Our customers are experts in their fields, and we are experts in ours. Put the two of us together and we can come up with some pretty aweSOme solutions – ones that are cost-effective and move the needle.

Questions? We’re here to help

Our whole company is based around a service-oriented model. From putting our entire technical documentation online, to letting you test-drive our products live on our website (at 3 in the morning, if you wish!), to having friendly technical people to consult with you about your specific needs, our goal is always to help you understand our products AND our prices, so you can choose the best solutions.

So please – jump on a call with us, tell us your needs, and give us an opportunity to come up with a tailored solution for you.

How Data Quality Tools Help Save Mother’s Day

We have many mission-critical applications for our products. But every year in May, one of the most critical ones is keeping moms happy on Mother’s Day. After all, customer satisfaction is important in any industry, but no one EVER wants to disappoint their mom on her special day.

Do you know how many flowers are sent every year on Mother’s Day? A lot! This ‘very technical’ answer comes from my first-hand experience as a teenager.  When I was young, my girlfriend’s parents owned a floral shop, and Mother’s Day was the biggest day of their year.  It took us all week to prepare orders and plan deliveries.

A more precise answer comes from the Society of American Florists: one article notes that Mother’s Day flower sales were nearly $2 billion US in 2016, edging out Valentine’s Day by volume and representing 64 percent of all gifts to mom that day.

What you might not know is that Service Objects plays a major role in the logistics of Mother’s Day. Delivering perishable floral products on a specific date, with little margin for error, is a complex challenge requiring bulletproof data quality. We serve many important customers within the floriculture and floral industry and wanted to share some of our tools they use to help make moms smile all over the US.

How We Power the Flowers

Here are some of the key tools our customers use to help ensure a successful Mother’s Day:

Address Validation. Integrating our DOTS Address Validation – US product within your order entry process makes sure that Mom’s address is correctly captured at the time of ordering. This is particularly important for a holiday like Mother’s Day, where addresses are generally not being entered by the recipient and are more prone to error.

Our Address Validation service matches and, where possible, corrects addresses in real time at the point of data entry, using up-to-date USPS and proprietary databases. Our USPS CASS Certified™ database engine also flags residential versus business addresses and returns a full ZIP+4 postal code.

Delivery Point Validation (DPV). This feature is part of the output from Address Validation, and it is critically important when a third party enters the address: Delivery Point Validation ensures that an address is not only correct but also recognized as being deliverable by the USPS. A simple example is knowing that a unit number is required for a multi-unit building and flagging the address as incomplete.
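
To illustrate the flag in that example, here is a toy Python sketch. Real DPV is performed by the validation service against USPS data; this stand-in simply shows the kind of incompleteness flag it can return (the building list and field names are invented for illustration):

```python
MULTI_UNIT_STREETS = {"123 MAIN ST"}  # hypothetical list of multi-unit buildings

def dpv_flag(street, unit=""):
    """Return a deliverability flag: an address at a known multi-unit
    building is incomplete without a unit number."""
    if street.strip().upper() in MULTI_UNIT_STREETS and not unit:
        return "missing_unit"
    return "deliverable"

print(dpv_flag("123 Main St"))        # missing_unit
print(dpv_flag("123 Main St", "4B"))  # deliverable
```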

Address Geocoding. Sometimes a delivery address isn’t a mailing address. For example, a rural address may not be recognized as deliverable by the USPS (because its mail goes to a PO box or general delivery), but it still corresponds to a real, physical location where Mom actually resides and other delivery services can reach. Our DOTS Address Geocoding product can determine the latitude and longitude of a delivery address with property-level match accuracy of up to 99.8%, ensuring that flowers get delivered to the front door and helping delivery drivers plan efficient routes.

Address Detective. What happens when an address is incorrect or undeliverable? When it is entered by a well-meaning but hasty, last-minute son, it can too often result in a lost sale. Our DOTS Address Detective saves many of these sales by using fuzzy logic with available data points to correct and append address data for “bad” addresses – so that Mom still receives her much-deserved bouquet.

50 Million Mothers Can’t Be Wrong

With an average of almost $40 per person spent on flowers and delivery, Mother’s Day is more than just a day of recognition – it is big business, serving roughly 50 million customers in just one weekend. It has come a long way since US President Woodrow Wilson issued a proclamation for the first national Mother’s Day in 1914.

We are proud to play a key role in ensuring the delivery of all these flowers every year – after all, we love to celebrate our moms too.

And we have a present for you too: a free access key that lets you try out 500 transactions for any of these or other capabilities in your own applications. Want to learn more? Contact our friendly and knowledgeable product team and we’ll take it from there.

Happy Mother’s Day, Mom!


Data Privacy and Security: The Next Big Thing for the US?

Unless you’ve been living under a rock for the past couple of years, you know that data privacy and security laws have become a big thing worldwide. Between Europe’s GDPR, Canada’s PIPEDA laws and others, consumers’ rights over their own personal data became one of the biggest issues of 2018 for CIOs and CDOs who do business internationally. But what about here in the United States?

Now we have some numbers behind public opinions on this issue, thanks to a recent survey from software giant SAS. The results show that many of the same concerns that led to regulations such as GDPR are top-of-mind among Americans, and should inform the way data professionals look at their contact data assets in 2019 and beyond.

What the survey says

In July 2018, SAS surveyed over 500 adult US consumers from a variety of socioeconomic levels about their opinions on data privacy. Here are some of the key conclusions from this survey:

People are concerned. Nearly three-quarters of respondents are more concerned about data privacy than they were a few years ago, with more than two-thirds also feeling their data is less secure. The biggest areas of concern? Identity theft, fraud, and personal data being used or sold without consent.

They want more regulation. 67% of respondents felt that government should do more to protect data privacy, while fully 83% would like the right to tell an organization not to share or sell their personal information. A large majority would also like the right to know how their data is being used, and to whom it is being sold.

Consumers are more savvy about privacy. Roughly two-thirds of respondents (66 percent) acknowledge that primary responsibility for their data security rests with them, and a majority are able to do things like changing privacy settings. Notably, close to a third of people have reduced their social media usage and online shopping over these concerns.

Trust must be earned. Trust in organizations to keep personal data secure varies widely, from highs of 46-47% for healthcare and banking organizations to roughly 15% for travel companies and social media.

Age matters. Older consumers value privacy more than younger ones and are less willing to provide personal information in return for something (36% of Baby Boomers would, versus 45% of Millennials). However, this does not mean that young consumers live in a post-privacy world: 66% of Millennials express concern over the security of their personal data.

What this means for data privacy – and for you

One important take-away from this study is that, whether or not we have a US version of GDPR some day – a direction favored by these survey results – the trend is clearly toward increasing consumer concern over data privacy and security. This means that data professionals need to prepare for the very real possibility of increased regulation and compliance issues on the horizon.

These survey results also mean that even in the absence of regulation, your organization’s data policies can have a very real and tangible impact on brand image and consumer trust, which in turn affect your bottom line. The fact that some people are reducing their social media use and online shopping, for example, should be a warning for everyone to start paying more attention to data privacy and security concerns.

Finally, these results are another sign that more than ever, businesses need to get serious about contact data quality in 2019. Tools from Service Objects such as address, email and phone validation can help ensure that your contact data assets are accurate, and prevent unsolicited marketing contacts to mistaken or bogus entities – and in the process, give you higher quality leads and contacts.

Want to learn more? Contact us to speak with one of our knowledgeable product experts about improving your data quality in the new year.


Why Data Quality is Important for Data Migration

Change is a constant in life. And when it comes to your data, even more so. Given enough time, migrating your data to new systems and platforms will be a fact of life for most businesses. Whether it involves a corporate merger, a new application vendor, or other reasons, data migration is one of those predictable “stress points” that can put your contact data assets at risk without the right strategy.

According to a recent post by Dylan Jones on Data Quality Pro, data quality issues are one of the key reasons for the high failure rate of data migration projects. He cites a recent survey showing that 84% of these projects run over time and/or budget – and in his view, an important part of avoiding this involves advance planning and modeling.

Data quality best practices for contact data migration

From our perspective, there are at least four best practices you should consider for preserving contact data quality during a data migration process:

Measure twice, cut once. Jones describes the use of what he calls landscape analysis, or a “migration simulation” in plain English, to anticipate problems before the migration begins in earnest. This involves testing a subset of your database against planned conversion rules and protocols, to help ensure that the results are likely to go as planned.
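This kind of migration simulation can be approximated with a small dry run: pull a sample of records, apply the planned conversion rules, and count how many come out invalid. The sketch below is purely illustrative; `convert_record` and the field checks are hypothetical stand-ins for whatever your actual conversion rules and target schema require.

```python
import random
import re

def convert_record(record):
    """Stand-in for your planned conversion rules (hypothetical)."""
    return {
        "name": record.get("name", "").strip(),
        "street": record.get("street", "").strip(),
        "zip": record.get("zip", "").strip(),
    }

def is_valid(converted):
    """Minimal sanity checks on a converted record."""
    return (
        bool(converted["name"])
        and bool(converted["street"])
        and re.fullmatch(r"\d{5}", converted["zip"]) is not None
    )

def simulate_migration(records, sample_size=100, seed=42):
    """Run conversion rules on a random sample and report the failure rate."""
    sample = random.Random(seed).sample(records, min(sample_size, len(records)))
    failures = [r for r in sample if not is_valid(convert_record(r))]
    return len(failures) / len(sample)

# A tiny sample database: one clean record, one with a bad ZIP code.
db = [
    {"name": "John Smith", "street": "123 Mayberry Street", "zip": "93101"},
    {"name": "Jane Doe", "street": "456 Elm St", "zip": "ABCDE"},
]
print(simulate_migration(db))  # 0.5 (half the sample failed conversion checks)
```

A failure rate well above zero on the sample is the cue to revisit the conversion rules before touching the full database.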

Validate addresses before conversion. The term “garbage in, garbage out” applies here, with clean data being an important factor on the front end of the migration process.

Validate addresses after conversion. What is the one thing that is worse than bad data? Reformatting data to make it even worse. Bad things can happen when old data goes into new fields, and if John Smith at 123 Mayberry Street becomes “John” at “Smith 123 Mayberry” in the new database, his value as a contact can go completely out the window. Never blindly trust converted data without a validation and review step to flag bad contact data in real time.

It ain’t over until it’s over. Yogi Berra’s famous baseball saying applies equally to data migration, because you aren’t finished with data quality when the initial conversion is done. Contact information is a perishable resource that goes stale over time, as people come, go, and change jobs and addresses. This means that your contact data migration isn’t really finished until you have implemented and tested an infrastructure for ongoing contact data validation and database cleaning.

Data migration: Not just a technical problem

According to a recent Infosys white paper, treating data migration as an “IT problem” is often a fatal mistake in terms of data quality – in their words, “Business has to not just care about data migration, but command it.” Put another way, no one can sweat the details on data quality like the stakeholders who will be using this data in the long run.

This raises one more important issue: a major data migration might also be the time to start thinking about a more formal data governance strategy, if one isn’t in place already. We’ve discussed this issue on our blog before, and it is particularly relevant here: major changes such as a data migration can often serve as a catalyst to build professional data expertise at a business-wide level. Either way, putting data quality front and center is one of the most important factors in creating a smooth transition to a new environment.

Have any additional questions on the important role data quality plays in the data migration process? Contact us and we will be happy to answer your questions.


Canada’s New PIPEDA Law: What It Means for You

If you do business with customers in Canada, an important new privacy law has taken effect as of November 2018: The Personal Information Protection and Electronic Documents Act (PIPEDA). People are already starting to refer to PIPEDA as Canada’s version of GDPR, the sweeping privacy regulations implemented in May 2018 by the European Union.

There are some common denominators between PIPEDA and GDPR. Both mandate acquiring explicit customer permission for the use of personal information, as well as disclosure of how this information will be used. Both also require breach notification in cases where personal information has been compromised: in Canada’s case, notification must be made to that country’s Privacy Commissioner as well as to affected parties. Other common threads include requirements to maintain accurate and secure data, giving individuals access to their own data, and the need for a formal compliance officer.

Getting started with PIPEDA

The Canadian government has published a downloadable guide to help organizations understand and become compliant with the new PIPEDA law, entitled Privacy Toolkit: A Guide for Businesses And Organizations. It provides an overview of the law and its principles, together with descriptions of its complaint handling procedures and audit provisions.

PIPEDA compliance revolves around ten principles that businesses must follow:

1. Accountability. Comply with these principles, appoint an individual responsible for compliance, protect information handled by you and third parties, and develop policies and practices for personal information.

2. Identifying purposes. Document and inform individuals why information is being collected, before or at the time it is collected.

3. Valid, informed consent. Specify what information is being collected, used or disclosed along with its purpose, and obtain explicit consent – before collection, and again if a new use of their personal information is identified.

4. Limiting collection. Do not collect personal information indiscriminately, or deceive or mislead individuals about the reasons for collecting personal information.

5. Limiting use, disclosure, and retention. Use or disclose personal information only for the purpose for which it was collected or consented to, keep personal information only as long as necessary, and have policies for the retention and destruction of information that is no longer required.

6. Accuracy. Minimize the possibility of using incorrect information when making a decision about a person or when disclosing information to third parties.

7. Safeguards. Protect personal information against loss or theft, as well as unauthorized access, disclosure, copying, use or modification.

8. Openness. Inform customers, clients and employees that you have policies and practices for the management of personal information, and make them understandable and easily available.

9. Individual access. Provide individuals with access to their personal information on file with you, along with how and to whom it has been disclosed, as well as the ability to correct or amend this information.

10. Challenging compliance. Develop simple and easily accessible complaint procedures, inform complainants of their avenues of recourse, investigate all complaints received, and take appropriate measures to correct information handling practices and policies.

Some important distinctions

While the goals of PIPEDA are very similar to those of other privacy regulations such as GDPR – and many of the same compliance strategies will apply to both markets – there are some key differences with Canada’s new regulations. Here are two of the more important ones:

A focus on mediation. Compared with other global privacy regulations, which often carry stiff financial penalties, PIPEDA is designed to enforce privacy laws through mediation where possible. However, this does not mean that the law is without teeth: both complainants and Canada’s Privacy Commissioner can apply for a Federal Court hearing and potential damage awards. In addition, specific violations such as intentional destruction of requested personal information or whistleblower retaliation may be prosecuted as offenses.

Limits on scope for employee data. Unlike GDPR, PIPEDA applies to employee data only for federally regulated entities such as banks, airlines and shipping companies (although some provinces have stricter provincial privacy laws). For consumer data, however, PIPEDA applies to personal data from all Canadians.

Knowing the location of customers is key to PIPEDA compliance

Contact data quality is no longer optional when dealing with the Canadian market. Service Objects has been at the forefront of helping firms with their compliance efforts for data privacy regulations, including flagging the geographic location of customers and prospects, which is key to getting started with any compliance effort.

Contact us for more information about how our data quality solutions can help your business.


A Year End Message From Our CEO

With 2018 coming to a close in just a few short weeks, we enter our busiest time of year validating millions of deliveries for our customers every day. Just like Santa’s elves, we’ve been preparing for the season all year long.

2018 has been a good year at Service Objects, with strong growth and record-setting validations for our customers. Our mission continues, helping businesses weed-out fraud and operate more efficiently through data quality.

We made substantial improvements in many facets of our services. We achieved certification for GDPR Privacy Shield. We introduced a new version of Address Insight. We implemented ambitious security policies. Most importantly, we achieved a customer satisfaction score (NPS) of 66. Although “66” may sound low, it ranks us among the top 1% of all technology companies nationwide. Our NPS score proves we have a personal connection with our customers.

Of course, these wonderful results are only possible due to the work of each and every team member here, and the core values we share in our work together. This year we welcomed Naoko Rico, Richard Delsi, Samantha Haentjens, Joy Adelente, Cary Carlton, and Sophia Graniela to the company. We are SO happy to have them onboard. Our team, and the values we foster, underpin all the good we do. It’s through our collective efforts that we make the greatest difference.

I’m very excited about 2019. We have so many exciting projects on deck, such as global geocoding, AI-based name validation, and much more.

The success we have had in 2018 is a credit to you, our customers. Thank you for allowing us to participate in your business. It is our privilege to be a part of your work.

From all of us at Service Objects, we wish you a very happy and healthy holiday and a bright and hopeful new year.

Happy holidays!

Geoffrey W. Grow
Founder and CEO

Bad Email Addresses: A Rogue’s Gallery

Once upon a time, many businesses simply lived with bad email addresses as an inevitable cost of doing business. Today this has changed dramatically, in the face of increasing costs and regulatory consequences. According to figures from Adestra’s 2017 Email Marketing Industry Census, nearly 80% of firms proactively cleanse their email marketing lists.

What can go wrong with email contact addresses? Plenty. And what you don’t know can really hurt you. Here are just a few examples of bad email addresses and their consequences:

The faker:

You build a list of leads by offering people something of value in return for their email address. Unfortunately some people want the goodie, but have no intention of ever hearing from you again. So they make up a bogus address that goes nowhere, wasting your time and resources.

The fat-fingered:

Someone gives you their email address with the best of intentions, but types it in wrong, misspelling the name or domain. So your future correspondence to them never arrives, with consequences ranging from lost market opportunities to customer dissatisfaction.

The trap:

Here you have rented a list of email addresses, or worse, taken them from publicly available sources. But some of these addresses are “honeypots”: fake addresses designed to trap spammers. Send an email to one of them, and you will get blacklisted by that entire domain—which could be a real problem if this domain is a major corporation or source of leads and customers.

The fraudster:

Someone places an expensive order with you, using a stolen credit card—and a bogus email address that never existed and cannot be traced. Better to flag these fraudulent orders ahead of time, instead of after the horse has left the barn—or in this case, the shipping dock.

Of course, this isn’t an exhaustive list. But these are all good examples of cases where email validation can save your time, money and reputation.

What is email validation?

Think of it as a filter that weeds out the good email addresses from the bad ones. In reality, a good email validation service will examine multiple dimensions of what can go wrong with an email address—and in some cases, can even fix erroneous addresses to make them usable. But at its root, email validation takes your email contact data and makes it clean, safe and usable.

Here are some of the specific things that Email Validation can do for your business:

Make sure the format is correct

Our standard service uses server-side scripting on Web forms to check if email address data includes a name, the “@” symbol, and a valid top-level domain (TLD).
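The basic format test described here (a name, the “@” symbol, and a valid TLD) can be sketched as a simple pattern check. This is a minimal illustration only, not the actual service logic, and the TLD list below is a tiny invented subset rather than the full IANA registry.

```python
import re

# A tiny illustrative subset of valid TLDs; a real check would consult
# the full IANA top-level domain list.
KNOWN_TLDS = {"com", "org", "net", "edu", "gov", "io", "co", "uk", "ca"}

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.([A-Za-z]{2,})$")

def has_valid_format(email):
    """Check for a name, the '@' symbol, and a recognized TLD."""
    match = EMAIL_RE.match(email)
    return match is not None and match.group(1).lower() in KNOWN_TLDS

print(has_valid_format("jane.doe@example.com"))  # True
print(has_valid_format("jane.doe@example"))      # False: no TLD
print(has_valid_format("jane.doe.example.com"))  # False: no '@' symbol
```

A format check like this only proves an address is well-formed, which is why the deliverability checks below matter.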

See if the address works

Instead of relying on stagnant, aggregated lists for verification, our real-time email validation checks the authenticity of email contact data instantaneously through kinetic and responsive two-way communication with email service providers (ESPs).

Perform advanced validation checks

Advanced checks can examine things such as:
• Checking if valid data exists on both sides of the “@” symbol, in both the username and the domain name
• Verifying that the domain in the email address exists and has a valid MX record associated with it
• Testing the mailbox to determine if it actually receives mail
• Detecting and flagging bogus, vulgar or malicious addresses that may be cluttering up your list
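The structural parts of these checks (data on both sides of the “@”, flagging bogus local parts) can be done offline, as in the sketch below; the MX-record and mailbox tests require live DNS and SMTP queries, so they are only noted in comments. The blocklist is a tiny invented sample, not a real proprietary list.

```python
# A tiny illustrative blocklist; a production service maintains far
# larger proprietary lists of bogus, vulgar, and spamtrap addresses.
BOGUS_LOCAL_PARTS = {"test", "noemail", "nobody", "asdf", "fake"}

def advanced_checks(email):
    """Return a list of problems found with an email address."""
    problems = []
    local, sep, domain = email.partition("@")
    if not sep or not local or not domain:
        problems.append("missing data on one side of '@'")
        return problems
    if "." not in domain:
        problems.append("domain has no dot")
    if local.lower() in BOGUS_LOCAL_PARTS:
        problems.append("bogus local part")
    # Further checks need network access (not shown here):
    #   - query DNS for an MX record on the domain
    #   - open an SMTP session to test whether the mailbox accepts mail
    return problems

print(advanced_checks("asdf@example.com"))  # ['bogus local part']
print(advanced_checks("@example.com"))      # ["missing data on one side of '@'"]
```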

Correct fixable errors

The email hygiene component of this service identifies and corrects invalid emails with fixable errors such as typos, extraneous text, common domain misspellings and syntax problems.
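A simplified version of the domain-misspelling correction can be sketched with a lookup table; the table below contains only a few illustrative entries, whereas a real hygiene service covers far more misspellings and syntax problems.

```python
# A few common domain misspellings (illustrative, not exhaustive).
DOMAIN_FIXES = {
    "gmial.com": "gmail.com",
    "gamil.com": "gmail.com",
    "yaho.com": "yahoo.com",
    "hotmial.com": "hotmail.com",
}

def fix_email(email):
    """Strip whitespace, normalize case, and correct known domain typos."""
    email = email.strip().lower()
    local, sep, domain = email.partition("@")
    if sep and domain in DOMAIN_FIXES:
        return local + "@" + DOMAIN_FIXES[domain]
    return email

print(fix_email(" Jane.Doe@GMIAL.com "))  # jane.doe@gmail.com
```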

Real-time email validation improves lead quality, saves time processing and pursuing leads, and protects your company from blacklists and regulatory traps. Download our free whitepaper, The ROI of Real-Time Email Validation, to learn more about bad email contact data and how email validation can help you correct inaccurate contact data and reject bogus email addresses.

The Benefits of Email Marketing

Few marketing channels share the power of email. It is immediate, urgent, personalized and inexpensive. And according to the Direct Marketing Association, it has the highest ROI of any marketing channel: an amazing 4300%.

Here are some of the key benefits of good email campaigns:

  • The cost per contact of email is extremely low compared with other channels.
  • Email is easily personalized by customer, market segment, or demographic.
  • Email marketing is much kinder to the environment than direct mail, which consumes paper and other natural resources.
  • Your email assets can help you make more informed decisions, develop more effective marketing strategies and strengthen customer/prospect relationships.

That said, your email marketing strategy is only as good as the quality of your email list.

The importance of data quality and email

The allure of email has always been its scalability: with one press of the “Return” key, your message can go out to dozens, hundreds, or even millions of people. Once you absorb the cost of acquiring email contact data, the costs of its re-use are minimal. So once upon a time, not that many years ago, marketers simply accepted a certain percentage of bad or misdirected email addresses as part of the process.

Today this is no longer the case. As people’s in-boxes have become flooded with spam, and marketers compete more than ever for busy people’s eyeballs, the quality of your email contact list has become extremely important. Here are just a few of the reasons why:

Time and money

As email lists continue to grow and expand, the human costs of processing bad data and updating contact lists continues to grow as well.

Brand image and customer reputation

Misdirected email is almost universally unwelcome and perceived as spam, which in turn affects the public reputation of your brand and organization.

Wasted effort

When someone provides a bogus email address, particularly in conjunction with other contact information, adding them to your list of leads potentially wastes marketing resources in all of your channels.

Regulatory compliance

Laws and regulations such as the US CAN-SPAM Act or the European Union’s General Data Protection Regulation (GDPR) now restrict unsolicited email marketing, with potentially severe penalties.

Lost marketing opportunities

Send unwanted email to the wrong address, and you could be blacklisted from an entire corporate domain, losing access to all of their prospects and customers.

These last two reasons are especially important, because bad data now has the potential to do real harm to your business. And all of these factors add up to a future where more accurate and careful email marketing has become an increasing necessity. Email lists are a valuable business asset, but data quality, particularly authenticity and accuracy, always wins out over quantity.

That’s where real-time email validation comes in to separate the good emails from the bad ones.

An advanced email address validation and verification service, such as Service Objects’ DOTS Email Validation, uses sophisticated algorithms and dozens of rules and tests to instantly weed out invalid email addresses. It will also cross-reference proprietary data for known bogus emails or spamtraps. Every email validation system should also check for the following:

  • Email address syntax
  • Individual domain specific mailbox rules
  • Improbable names (vulgar, famous, bogus, or suspicious keystroke sequences)
  • Mail exchange record of domain is valid and accepting mail
  • SMTP server for domain
  • Mailbox is accepting mail (when possible)

Want to learn more? Download our free whitepaper, The ROI of Real-Time Email Validation, to explore how to get the most profitability and customer engagement from your email marketing. The strategies presented will not only improve your response rates and effectiveness, they will help protect your organization from a host of issues including fraud, blacklisting and regulatory concerns.

Garbage In, Garbage Out – How Bad Data Hurts Your Business

The old saying “garbage in, garbage out” has been around since the early days of modern computing. Shorthand for operator error or bad data, the adage implies that the output of a program is only as good as the input supplied by the user. With more data being collected, stored, and used than ever before, data quality at the point of entry should be a top priority for all organizations.

Now that data is informing more aspects of our businesses, it’s not difficult to imagine a future where data accuracy is vital. Think of delivery drones, which have been tested all over the US and UK in recent years. If a contact’s bad address information goes unchecked, it could feed a drone the wrong coordinates resulting in misdeliveries and lost products.

Data quality affects every aspect of your business, from sales and development to marketing and customer care. Yet a 2017 survey by Harvard Business Review found that “47% of newly-created data records have at least one critical (e.g., work-impacting) error.” So, what constitutes garbage input? It depends on the data and how it enters your system.

What Is Bad Data?

Of course, there are many kinds of data an organization may choose to collect, but here we will focus on one of the most critical – contact data. This includes a contact’s name, address, phone number and email, all of which are crucial to marketing, sales, fulfillment, and service.

Some common issues that make a contact record bad:

  • Inaccuracies like bad abbreviations or missing zip code
  • Typos caused by speed or carelessness
  • Fraudulent information
  • Moving data from one platform to another without appropriate mapping
  • Data decay as contacts move, get new phone numbers, and change positions

So, how does a bad contact record make it into your database?

Contacts entering their data online, whether downloading a whitepaper or ordering a product, are usually the first to commit a data quality error. Filling out an inquiry form on a mobile device, rushing through a purchase, or providing inexact information (such as missing “West” before a street name) are all examples of bad data leading to inaccurate contact records.

Your sales and customer service team can compound poor data quality by manually updating information that looks incorrect and making mistakes in the process. Good business practices can help mitigate operator error, but if a record was poor to begin with that likely won’t matter – you already know what happens to garbage when it ages.

What Is the Cost of Bad Data?

Much like garbage, bad data only gets worse over time.

Poor data quality can cause an organization’s sales team to waste time and effort chasing bad leads. According to a 2018 study by SiriusDecisions, 10% to 25% of contact records in B2B databases contain critical errors. That means up to 1 in 4 leads could have bad phone or email information attached, so follow-up communications may never reach the intended contact.

The customer service department loses time and money first in dealing with unhappy customers – even if they provided poor contact information, they’ll still blame your business for a package that never arrives. Additional time is spent troubleshooting problems and clarifying bad information, leading to major inefficiencies and frustration.

Overall costs to businesses include reputation related losses that occur when upset customers take to the internet to air their grievances. Time is lost due to the hidden data factories that arise within an organization when individuals working with bad data take it upon themselves to make “corrections” without understanding why or how the data is incorrect. Lastly, poor data robs a business of the ability to take full advantage of business tools like marketing, sales automation systems, and CRM.

How Can My Business Fix Bad Data?

Tightening up business policies around collecting and managing data is a start, but implementing data validation services will help ensure your data is as genuine, accurate, and up-to-date as possible and keep contacts current through frequent updates. Contact data validation can be integrated in a number of ways to best meet the needs of individual organizations, including:

  • Real-time RESTful API – cleanse, validate and enhance contact data by integrating our data quality APIs into your custom application.
  • Cloud Connectors – connect with major marketing, sales, and ecommerce platforms like Marketo, Salesforce, and Magento to help you gather and maintain accurate records.
  • List Processing – securely cleanse lists for use in marketing and sales to help mitigate data decay.
  • Quick Lookups – spot-check a verbal order or cleanse a small batch.
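A real-time API integration typically follows the pattern below: compose a request with the address to validate and your credentials, call the endpoint, and act on the verdict. The endpoint URL and parameter names here are placeholders, not Service Objects’ actual interface; consult the vendor’s developer documentation for the real endpoints and response formats.

```python
import urllib.parse

# Placeholder endpoint; the real service's URL, parameter names, and
# response format will differ. See the vendor's API documentation.
API_BASE = "https://api.example.com/validate-email"

def build_validation_request(email, license_key):
    """Compose the request URL for a (hypothetical) validation endpoint."""
    params = {"Email": email, "LicenseKey": license_key}
    return API_BASE + "?" + urllib.parse.urlencode(params)

url = build_validation_request("jane.doe@example.com", "YOUR-KEY")
print(url)
# The actual call would be e.g. urllib.request.urlopen(url), followed by
# parsing the JSON or XML response for the validation verdict.
```

Wiring this call into your web form or CRM workflow is what makes the validation “real-time”: bad data is caught at the point of entry rather than after it has propagated.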

Service Objects’ validation services correct contact data, including name, address, phone number, and email, and cross-check it against hundreds of databases to avoid garbage input. The result: cleaner transactions and more efficient processes across all aspects of your business.

Contact our team to determine which of our services can help you collect and maintain the highest quality data and kick the garbage to the curb.

The Role Data Quality Plays in Master Data Management

Enterprises are dealing with more data than ever before, which means that proper information architecture and storage is crucial. What’s more, the quality of the data you store affects your business more than ever nowadays. This goes double for your contact data records, because you need the most accurate and up-to-date lead and customer data to ensure that marketing, sales, and fulfillment all run smoothly.

Master Data Management, or MDM, involves creating one single master reference source for critical business information such as contact data. The goal of MDM is to reduce redundancies and errors by centrally storing records, as well as increasing access. No matter how well organized your database might be, the quality of its data will ultimately determine the effectiveness of your MDM efforts.

Why is master data management important?

Organizations look to MDM to improve the quality of their key data, and ensure that it is uniform and current. By bringing all data together in a hub, a company can create consistency in record formatting and ensure that updates are available company-wide.

MDM sounds simple in theory, but think of how (and how much) information is created and collected within an entire organization. Let’s take a single customer contact as an example. Over the course of a month, John Smith provides information to your company in three different instances:

  1. First, he is an existing customer of your services division, and his data exists in their customer records.
  2. Second, he visits your website and downloads a whitepaper, getting added to your marketing department’s database of leads.
  3. Third, he calls the sales team for one of your specialty products, and gets added to their database of contacts.

In a typical organization with departmental “silos,” John Smith is now part of as many as three separate databases in your company, none of which have a longitudinal view of his relationship with you as both a customer and a lead. Worse, slight differences in contact records could even turn John into three separate people in a master database. If each of these touch points get assigned to different automation drips and your sales reps are calling and emailing, you have a real disconnect in your efforts and a high likelihood of spamming your contact.
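The three John Smith records can be reconciled by keying on a normalized identifier, such as a lowercased, trimmed email address. The following is a minimal sketch of that merge step, assuming contacts are plain dicts; real MDM merge/purge logic uses far more sophisticated matching than a single field.

```python
def normalize_key(record):
    """Use a lowercased, stripped email address as the merge key."""
    return record.get("email", "").strip().lower()

def merge_contacts(records):
    """Collapse duplicate contacts into one master record per key."""
    master = {}
    for record in records:
        key = normalize_key(record)
        # Later records fill in fields the master copy is still missing.
        merged = master.setdefault(key, {})
        for field, value in record.items():
            if value and not merged.get(field):
                merged[field] = value
    return master

# John Smith as he appears in three departmental silos.
silos = [
    {"email": "JSmith@example.com", "name": "John Smith", "dept": "services"},
    {"email": "jsmith@example.com ", "name": "John Smith", "source": "whitepaper"},
    {"email": "jsmith@example.com", "name": "J. Smith", "source": "sales call"},
]
master = merge_contacts(silos)
print(len(master))  # 1 (three silo records collapse into one master contact)
```

Without the normalization step, the case difference and stray whitespace alone would have produced three “different” people in the master database.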

Having good data collection and storage practices is the first step to a good MDM program, but that alone may not solve John Smith’s problem. Ensuring the data he provides is correct, current and most importantly consistent at the point of entry is what guarantees that the best quality contact information is stored as master data. Implementing a lead validation service could help solve this issue at the point of entry by cross-validating each contact record with multiple databases in real time, facilitating accurate merge/purge operations later.

Lead validation not only corrects and appends contact data, it can also feed into your CRM and automation tools to help you further qualify your leads, so your sales team is only dealing with the highest quality information and pursuing genuine prospects. Once your prospects become customers, their contact data records will be passed on to other departments to complete any transactions – and through your MDM they will also have the most up to date and accurate information to conduct their business.

The costs of poor data quality

In a 2018 study conducted by Vanson Bourne, 74% of respondents agreed that while their organizations have more data than ever, they are struggling to generate useful insights. Some say this is because they are not effectively sharing data between departments, but only 29% of respondents say they have complete trust in the data their organization keeps.

Every aspect of your organization can be impacted by poor data quality, especially if your contact data is lacking. If 71% of your sales team cannot fully trust their lead information, how effectively can they do their jobs?

Here are just a few of the ways that bad data can hurt your business:

  • Marketing – sending new-subscriber discount offers to existing customers, spending extra money creating mailers for addresses that don’t exist
  • Sales – Multiple sales reps calling the same lead, the same sales rep calling the same lead multiple times, wasting time on bogus leads
  • Order fulfillment – bad or incomplete address data causing failed deliveries and chargebacks
  • Accounts receivable – sending invoices to incorrect addresses or old business contacts, billing addresses not matching credit card records
  • Business development – making bad decisions for growth in a particular market based on inaccurate data

Each of these issues could be solved by creating a master data management system and making sure it is only fed validated contact data from the point of entry. Further, frequently updating the MDM system by validating name, address, phone, email, IP, and other key pieces of information ensures that each data record is as current as possible.

Finally, one major cost of poor data quality – and a key reason for the growth of MDM – is the compliance risk as new data privacy and security regulations have emerged in recent years. For example, the European Union’s recent General Data Protection Regulation (GDPR) and the US Telephone Consumer Protection Act (TCPA) both have severe penalties for unwanted marketing contacts, even when such contact is inadvertently due to data quality issues like bad contact data or changed phone numbers. Accurate, centralized data is now an important part of compliance strategies for any organization.

Trends in master data management and data quality

The competitive advantages of consistent centralized data – together with the advent of cloud-based tools for data quality – have made MDM a growing trend for organizations of all sizes. Moreover, David Jones of the Forbes Technology Council predicts that increasing regulation and the rise of cybersecurity risks are converging to change the way data is managed at the business level. We now live in a world where things like validating contact data, rating lead quality, verifying accurate and current phone or email data, and other data quality tools have now become a new standard for customer-facing businesses.

Today, adding a contact data validation service to your processes before storing master data will improve overall data quality and increase efficiencies among all of your teams. Service Objects offers 24 contact data validation APIs that can stop bad data from undermining your master data management system. Want to learn more? Contact our team to see which API is best for your business type.

Data Quality and the 2020 Census

We talk a lot on these pages about how data quality affects your business. But once in a while, we also feel it is important to look at how data quality affects society as a whole. And one of the best examples of this in recent memory is the upcoming 2020 United States Census.

Every 10 years, the United States goes through a demographic headcount of its inhabitants. The results of this survey are pretty far-reaching, involving everything from how the Federal government allocates more than $600 billion in funding to who represents you in Congress. But this year, for the first time ever, technology and data quality loom among the biggest issues facing the next Census.

2020 census data quality doubts

These concerns are serious enough that the American Academy of Family Physicians, a healthcare advocacy organization, recently introduced a resolution entitled “Maintaining Validity and Comprehensiveness of U.S. Census Data” that has now been accepted by the American Medical Association together with other healthcare groups. It breaks down a number of data quality concerns currently facing the Census, including the following:

  • This will be the first year that a majority of responses are planned to be collected online, introducing possible sources of data error.
  • Sampling and data quality errors may disproportionately affect vulnerable populations subject to health care disparities, such as minorities and women.
  • In addition to human and data errors, there are concerns that mistrust of technology and worries about privacy may prevent some people from completing the Census survey.
  • Above all, there are concerns over the impact of scaled-back funding for the 2020 Census, together with the departure of its director, in terms of how this will affect preparations for new technologies and survey methods.

Where data and politics converge

It isn’t just stakeholders like healthcare providers who are raising a red flag about the next Census: the government itself shares many of the same concerns. In its 2020 Census Operational Plan, the U.S. Department of Commerce points to data quality as one of its key program-level risks, stating that “If the innovations implemented to meet the 2020 Census cost goals result in unanticipated negative impacts to data quality, then additional unplanned efforts may be necessary in order to increase the quality of the census data.”

This is a case where political issues also intersect with data concerns: in addition to the ongoing battle over funding levels for the 2020 Census, others have raised concerns over a proposed new citizenship question that is potentially a hot button for areas with large Hispanic and immigrant populations. According to the Brookings Institution, both of these issues may have far-reaching impacts on the quality of this next decennial Census, and recently the Attorneys General of several states drafted a joint letter raising these as potential quality issues.

The impact of contact data quality

Finally, in an area near and dear to our hearts, the 2020 Census serves as an example of where contact data quality will have a huge impact on both costs and quality – because many addresses change over the course of a decade, and the current practice of canvassing non-responders on foot (up to six times) can be costly, time-consuming and error-prone. In 2015 the government responded to this issue by conducting address validation tests across a limited population sample, and to be fair, they must also contend with many non-standard locations (such as people living in basements or illegally subdivided units, or people experiencing homelessness). But clearly, accurate address validation and geolocation will loom larger than ever for the census of the future.

These concerns are examples of some of the potential social impact of data quality issues, as society bases more of its decisions and funding choices on collected data. At a deeper level, they point to a world where data scientists may even ultimately have as much impact on these social issues as politicians and voters do. Either way, technology is playing more of a role than ever in social change.

The takeaway for all of us – in business, and increasingly in life itself – is that our world is increasingly becoming data-driven, and paying strategic attention to the use of this data is going to become progressively more important over time. And in the near future, this will include making sure that every American is accurately and properly counted in the next Census.

Why Data Quality Isn’t a Do-It-Yourself Project

Here is a question that potential customers often ask: is it easier or more cost-effective to use data validation services such as ours, versus building and maintaining in-house capabilities?

We are a little biased, of course – but with good reason. Let's look at why our services are a lot easier to use than in-house coding, and save you money in the process.

Collecting and using information is a crucial part of most business workflows. This could be as simple as taking a name for an order of a burger and fries, or as complex as gathering every available data point from a client. Either way, companies often find themselves with databases full of client info. This data is a valuable business asset, and at the same time often the subject of controversy, misuse, or even data privacy laws.

The collection and use of client information carries an important responsibility when it comes to security, utility, and accuracy. If you find yourself tasked with keeping your data genuine, accurate, and up-to-date, let’s compare the resources you would need for creating your own data validation solution versus using a quality third-party service.

Using your resources

First, let’s look at some of the resources you would need for your own solution:

Human. The development, testing, deployment, and maintenance of an in-house solution requires multiple teams to do properly, including software engineers, quality assurance engineers, database engineers, and system administrators. As the number of data points you collect increases, so does the complexity of these human resources.

Financial. In addition to the inherent cost of employing a team of data validation experts, there is the cost of acquiring access to verified and continually updated lists of data (address, phone, name, etc.) that you can cross-validate your data against.

Proficiency. Consider how long it would take to develop necessary expertise in the data validation field. (At Service Objects, for example, our engineers have spent 15+ years increasing their proficiency in this space, and there is still more to learn!) There is a constant need to gain more knowledge and translate that into new and improved data validation services.

Security. Keeping your sensitive data secure is important, and often requires the services of a dedicated team. Worse still is the cost of failing to take the necessary steps to ensure privacy, which can lead to legal trouble. Some environments need as much as bank-grade security to protect their information.

Using our resources

The points above are meant to show that any data validation solution, even an in-house one, carries costs with it. Now, let’s look at what you get in return for the relatively small cost of a third-party solution such as ours for data validation:

Done-for-you capabilities. When you use services such as our flagship Address Validation, Lead Validation, geocoding, or others, you leverage all the capabilities we have in place: CASS Certified™ validation against continually updated USPS data, lead quality scores computed from over 100 data points, cross-correlation against geographic or demographic data, global reach, and much, much more.

Easy interfacing. We have multiple levels of interfacing, ranging from easy to really easy. For small or well-defined jobs, our batch list processing capabilities can clean or process your data with no programming required. Or integrate our capabilities directly into your platform using enterprise-grade RESTful API calls, or cloud connectors interfacing to popular marketing, sales and e-commerce platforms. You can also spot-check specific data online – and sample it right now if you wish, with no registration required!
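To give a feel for how simple a RESTful integration can be, here is a hedged sketch of a GET-based lookup. The host, path, and parameter names below are placeholders for illustration only – they are not the actual Service Objects API, whose endpoints and fields are documented in the developer guides:

```python
# Illustrative only: endpoint and parameter names are hypothetical.
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.example.com/AddressValidate"  # placeholder host


def build_request_url(address: str, city: str, state: str, license_key: str) -> str:
    """Compose the GET request URL for a single address lookup."""
    params = urllib.parse.urlencode({
        "Address": address,
        "City": city,
        "State": state,
        "LicenseKey": license_key,
    })
    return f"{API_BASE}?{params}"


def validate_address(address: str, city: str, state: str, license_key: str) -> dict:
    """Perform the lookup; assumes the service returns JSON."""
    url = build_request_url(address, city, state, license_key)
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)
```

A real integration would also handle timeouts and failover between servers, but the basic shape – build a parameterized URL, make the call, parse the response – is this simple.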

Support and documentation. Our experts are at your service anytime, including 24/7/365 access in an emergency. And we combine this with extensive developer guides and documentation. We are proud of our support, and equally proud of all the customers who never even need to contact us.

Quality. Our services come with a 99.999% uptime guarantee – if you're counting, that allows less than a second of downtime per day on average, or roughly five minutes per year. We employ multiple failover servers to make sure we are here when you need us.
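The arithmetic behind an uptime percentage is easy to check for yourself – at five nines, the allowed downtime is a fraction of a second per day:

```python
# Downtime allowed by an uptime guarantee over a given period.
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400
SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY


def allowed_downtime_seconds(uptime_pct: float, period_seconds: int) -> float:
    return period_seconds * (1 - uptime_pct / 100)


per_day = allowed_downtime_seconds(99.999, SECONDS_PER_DAY)    # ~0.86 seconds
per_year = allowed_downtime_seconds(99.999, SECONDS_PER_YEAR)  # ~5.3 minutes
```

For comparison, a three-nines (99.9%) service could be down for nearly a minute and a half every day and still meet its guarantee.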

We aren’t trying to say that creating your own in-house data validation solution can’t be done. But for most people, it is a huge job that comes at the expense of multiple company resources. This is where we come in at Service Objects, for over 2500 companies – including some of the nation’s biggest brands, like Amazon, American Express, and Microsoft.

The combination of smart data collection/storage choices on your end and the expert knowledge we’ve gained over 15 years in the data validation space can help to ensure your data is accurate, genuine and up-to-date. Be sure to look at the real costs of ensuring your data quality, talk with us, and then leave the fuss of maintaining software and security updates and researching and developing new data validation techniques to us.

Saving More of Your Labor this Labor Day

Labor Day is much more than the traditional end of summer in America: it pays tribute to the efforts of working people. It dates back well over a century, with one labor leader in the 1800s describing it as a day to honor those “who from rude nature have delved and carved all the grandeur we behold.” And we aren’t forgetting our friends in Europe and elsewhere, who celebrate workers as well with holidays such as May Day.

As we celebrate work and the labor movement – and enjoy a long holiday weekend – we wanted to take a look at some of the ways that we help you save labor, as you try to carve grandeur from your organization’s data. Here are some of the more important ones:

Validation and more.

Let’s start with the big one. For nearly two decades, the main purpose of our existence has been to take the human effort out of cleaning, validating, appending, and rating the quality of your contact and lead data. Whether your needs involve marketing, customer service, compliance or fraud prevention, these tools save labor in two ways: first, by saving you and your organization from re-inventing the wheel or doing manual verification, and second, by saving you from the substantial human costs of bad data.

Ease of integration.

What is the single worst data quality solution? The one that gets implemented badly, or not at all. One of the biggest things our customers praise us for is how easy our tools are to implement, working almost invisibly in their environment. We offer everything from API integration and webhooks for common platforms, all the way to programming-free batch interfaces for smaller or simpler environments – backed by clear documentation, free trial licenses and expert support.

Speed and reliability.

As one customer put it, “milliseconds matter” – particularly in real-time applications where, for example, you are validating customer contact data as they are in the process of entering it. Our APIs are built for speed and reliability, with a longstanding 99.999% uptime and multiple failover servers, as well as sub-second response times for many services – so you don’t waste time tearing your hair out or troubleshooting responsiveness issues.

Better analytics.

Your contact data is a business asset – put it to work as a tool to gain business insight for faster, more informed decision-making and market targeting. You can target leads by demographics or geocoding, enhance your leads with missing phone or contact information, or leverage your customer base for better decision support, among many other applications.

Customer support.

We recently interviewed a major longtime customer about using our products, and when we asked them about support they gave us the highest compliment of all: “We never need to call you!” But those who do call know that our best-in-class support, staffed by caring, knowledgeable experts who are available 24/7/365, represents a large savings of time and effort for our clients.

We hope you enjoy this Labor Day holiday. And when you get back, contact one of our product experts for a friendly, pressure-free discussion about how we can create less labor for you and your organization!


Data Quality, AI, and the Future

What do you think of when you hear the term "artificial intelligence" (or AI for short)? For many people, it conjures up images of robots, science fiction, and movies like "2001: A Space Odyssey," where an evil computer refuses to let the hero back onto his spaceship in order to preserve itself.

Real AI is a little less dramatic than that, but still pretty exciting. At its root, it involves using machine learning – often based on large samples of big data – to automate decision-making processes. Some of the more public examples of AI are when computers square off against human chess masters or diagnose complex problems with machinery. And you already use AI every time you ask your phone for directions or a spam filter keeps junk mail from reaching your inbox.

In the area of contact marketing and customer relationship management, some experts are now talking about using AI for applications such as predictive marketing, automated targeting, and personalized content creation. Many of these applications are still in the future, but product introductions aimed at early adopters are already making their way to the market.

Data quality is key in AI

One thing nearly everyone agrees on, however, is that data quality is a potential roadblock for AI. Even a small amount of bad data can easily steer a machine learning algorithm wrong. Imagine, for example, you are trying to do demographic targeting – but given the percentage of contact data that normally goes bad in the course of a year, your AI engine may soon be pitching winter coats to prospects in Miami.

Here are what some leadership voices in the industry are saying about the data quality problem in AI:

  • Speaking at a recent Salesforce conference, Leadspace CEO Doug Bewsher described data quality as “AI’s Achilles heel,” going on to note that its effectiveness is crippled if you try using it with static CRM contact data or purchased datasets.
  • Information Week columnist Jessica Davis states in an opinion piece that “Data quality is really the foundation of your data and analytics program, whether it’s being used for reports and business intelligence or for more advanced AI and related technologies.”
  • A recent Compliance Week article calls data quality “the fuel that makes AI run,” noting that centralized data management will increasingly become a key issue in preventing “silos” of incompatible information.

The ROI of accurate and up-to-date contact data is larger than ever

Naturally, this issue lies right in our wheelhouse. For years, we have been preaching the importance of data quality and data governance for contact data – particularly given the costs of bad data in time, human effort, marketing effectiveness, and customer reputation. But in an era where automation continues to march on, the ROI of good contact data is now growing larger than ever.

We aren’t predicting a world where your marketing efforts will be taken over by a robot – not anytime soon, at least. But AI is a very real trend, one which deserves your attention from here. Some exciting developments are on the horizon in marketing automation, and we are looking forward to what evolves over the next few years.

Find out more about how data quality and contact validation can help your business by visiting the Solutions section of our website.

Marketing Strategies for the New Digital Privacy Era

In a world of big data, information for sale, and people oversharing on social media, this past decade has lulled many marketers into believing in a post-privacy era of virtually unfettered access to consumer and prospect data.

Even consumers themselves share this perception: according to an Accenture survey, 80% of consumers between the ages of 20 and 40 feel that total digital privacy is a thing of the past. But today this Wild West scenario is becoming increasingly regulated, with growing constraints on the acquisition and use of people’s personal data. Directives such as the European Union’s GDPR and ePrivacy regulations, along with other initiatives around the globe, are ushering in a new landscape of privacy protections.

Much has been written about how to comply with these new regulations and avoid penalties, on this blog and elsewhere. But this new environment is also a marketing opportunity for savvy organizations. Here, we examine some specific ways you can position yourself to grow in a changing world of privacy.

Leverage data quality with these five key marketing strategies

Be transparent. The 2018 State of the Connected Customer survey found that 86% of customers would be more likely to trust companies with their information if they explain how it will provide them with a better experience.

Offer value. The Accenture survey mentioned above notes that over 60% of customers feel that getting relevant offers is more important than keeping their online activity private, with nearly half saying that they would not mind companies tracking their buying behavior if this led to more relevant offers.

Give customers what they want. According to European CRM firm SuperOffice, the post-GDPR world represents an opportunity to create segmented customer lists, through techniques such as separate website pop-ups for different areas of interest and content marketing via social media.

Look at the entire customer life cycle. Many firms offer a one-time free incentive, such as a report or webinar, in exchange for contact data and marketing permission. However, this can lead to fraudulent information being offered to get the goodie (we can help with that), or even a real but never-checked “wastebasket” email address. Instead, consider offering a regular stream of high-value information that keeps customers connected with your brand.

Change your perspective. This is perhaps the most important strategy of all: start looking at your customers as partners instead of prospects. Recent regulations are, at their root, a response to interruptive marketing strategies that revolve around bugging the many to sell to the few. Instead, focus on cultivating high-value client relationships with people who want products and services you offer.

More consumer privacy can be a good thing

Whether businesses are ready or not, they are increasingly facing a world of marketing to smaller prospect lists of people who choose to hear from them for specific purposes, starting with Europe and spreading elsewhere. But this can be a good thing, and indeed a market opportunity. By changing your selling focus from a numbers game to one of deeper and mutually beneficial customer relationships, you can potentially gain more loyal customers and lower marketing expenses. In the process, this new era of consumer privacy could end up being one of the best things ever to happen to your business.

Protecting your customers’ privacy and creating a mutually beneficial relationship starts with having the most genuine, accurate and up-to-date data for your contacts.  Download our white paper, Marketing with Bad Contact Data, to learn more about how quickly customer data ages and the impact on your business.

Customer Expectations are Getting…Younger

Being based in the college town of Santa Barbara, California, we notice something interesting: the students seem to get younger every year. Of course, it is actually our own ages that continue to change. But this illusion contains a valuable marketing lesson for all of us.

The rise of the generational customer

According to the latest State of the Connected Customer survey, consumers really are getting younger, as markets shift over time from older customers such as Baby Boomers to Generations X/Y and the Millennials. In fact, this year marks the first time that adult consumers exist who have never lived in the 20th century.

This trend means a lot more than having customers who don’t remember the 9/11 attacks, or realize that Paul McCartney was in a band before Wings. Some of the key points from this survey include:

  • Millennials and Generation Z live in an omnichannel world, using an average of 11 digital channels versus nine for traditional/Baby Boomer customers.
  • Nearly twice as many Millennials prefer to use mobile channels versus traditional/Baby Boomer customers (61% versus 31%), with 90% of Millennials using this channel versus 72% of older customers.
  • Traditional and Baby Boomer customers use less technology than their younger counterparts, but they aren’t dead yet: over 70% of them use channels such as mobile, text/SMS, and online portals and knowledge bases. However, usage falls off sharply with age for newer channels such as social media and voice-activated personal assistants like Siri and Alexa.
  • Between 77% and 86% of survey respondents believe that technologies such as chatbots, voice-activated assistants, and the Internet of Things (IoT) will transform their expectations of companies. The most important ones? AI and cybersecurity, at 87% each.
  • Over two-thirds of all consumers surveyed (67%) prefer to purchase through digital channels.

Overall, one of the key takeaways from this survey was the growing importance of customer experience. Eighty percent of respondents stated that the experience provided by a company was every bit as important as its products and services. This in turn involves greater connectivity between companies and their customers, with 70% of customers noting that connected processes are very important to winning their business.

It all comes down to data

What does this mean for the future of marketing? For one thing, it is clearly becoming more data-driven. While your oldest consumers still remember ordering from catalogs, your youngest ones expect to engage with you on their tablets and smartphones, with little tolerance for error. This also means that both your marketing and your customer service are increasingly becoming electronic.

We welcome this trend at Service Objects: our company was originally founded in 2001 around reducing the waste stream from direct mail. But this trend also creates a mandate for us – and for you – to keep looking beyond simple contact data validation, into a world of data analyses that range from demographic screening to compliance with growing privacy laws. It is a major challenge, but also an opportunity for all of us – and frankly a big part of what keeps us young.

Thoughts on this Independence Day

If you live in the United States, you probably think that Independence Day is synonymous with the fourth of July. In reality, Independence Day holidays are celebrated around the world, commemorating freedom under a variety of dates and names. Whether it is our neighbor to the north’s Canada Day (July 1), Singapore’s National Day (August 9), Brazil’s Dia da Independência (September 7), or a host of similar holidays, much of the globe marks and cherishes the birth of their nation’s own self-determination.

Declaring our independence – from waste

At Service Objects, our own Independence Day dates back to 2001. That was the year our founder Geoff Grow – a mathematician and an environmentalist – looked at this nation’s flood of wasted and misdirected junk mail, along with its cost to the environment, and realized that he didn’t have to just sit back passively and accept it. This led to the beginnings of our flagship Address Validation service, a first step in helping clients deliver to valid addresses every time.

Of course, we have grown substantially since then: not just in a revenue sense, but in the breadth of products we offer. Nowadays our solutions encompass areas that include fraud prevention, regulatory compliance, marketing optimization and customer insight. But they all tie back to one core concept: automated tools for data quality. And in turn, a concern for the environment.

Nowadays we are proud of serving over 2,500 customers and processing over three billion transactions. But we are also proud of serving the environment as well as our great clients. You’ll see this in a company where most people ride bikes to work, recycling is a fact of life, and our products continue to reduce the waste stream in this country – on the order of over a million trees and nearly half a billion gallons of water saved. And to this day, we still plant ten trees for every new customer we serve.

Declaring your independence – from bad data

Naturally, we enjoy having the opportunity to serve you too. And this Independence Day, we invite you to look at some of the data quality problems you can break free from.

Whatever stands between you and leveraging the full power of your contact data assets, we probably have a solution for it, ranging from worldwide address validation to US tax rates. Whether it is a specific data quality problem, or a planned strategy for effective data hygiene, we have cost-effective solutions to make your life easier.

We can also set you free from implementation worries. Our services can be integrated directly with your CRM, marketing automation or other systems, using real-time API integration or cloud connectors. We can also provide convenient batch list processing for your databases, without the need for systems integration. And you can try out our products for free, with real-time output right on our website and/or free API keys.

Whatever is holding your business back, we’re glad to help get your Independence Day party started – just contact us for a free consultation, with no sales pressure. (Just remember that we’re closed on July 4th! Except for our 24/7 support, of course.) We look forward to helping you declare your own independence from data quality problems.


A New Data Privacy Challenge for Europe – and Beyond

New privacy regulations in Europe have recently become a very hot topic again within the business community. And no, we aren’t talking about the recent GDPR law.

A new privacy initiative, known as the ePrivacy Regulation, deals with electronic communications. Technically a revision to the EU's existing ePrivacy Directive or "cookie law," and pending review by the European Union's member states, it could go into effect as early as this year. And according to the New York Times, it is facing strong opposition from many technology giants, including Google, Facebook, Microsoft and others.

Data privacy meets the app generation

Among other things, the new ePrivacy Regulation requires explicit permission from consumers for applications to use tracking codes or collect data about their private communications, particularly through messaging services such as Skype, iMessage, games and dating apps.  Companies will have to disclose up front how they plan to use this personal data, and perhaps more importantly, must offer the same access to services whether permission is granted or not.

Ironically, this new law will also remove the previous directive's incessant "cookie notices" by relying on browser tracking settings, even as it tightens the use of private data. This will be a mixed blessing for online services, because a simple default browser setting can now lock out the use of tracking cookies that many consumers routinely approved under the old pop-up notices. In opposing these new rules, trade groups paint a picture of slashed revenues, fewer free services and curbs on innovation for trends such as the Internet of Things (IoT).

A longstanding saying about online services is that “when something is free, you are the product,” and this new initiative is one of the more visible efforts for consumers to push back and take control of the use of their information. And Europe isn’t alone in this kind of initiative – for example, the new California Consumer Privacy Act, slated for the late 2018 ballot, will also require companies to provide clear opt-out instructions for consumers who do not wish their data to be shared or sold.

The future: more than just European privacy laws

So what does this mean for you and your business? No one can precisely foretell the future of these regulations and others, but the trend over time is clear: consumer privacy legislation will continue to get tighter and tighter. And the days of unfettered access to the personal data of your customers and prospects are increasingly coming to an end. This means that data quality standards will continue to loom larger than ever for businesses, ranging from stricter process controls to maintaining accurate consumer contact information.

We frankly have always seen this trend as an opportunity. As with GDPR, regulations such as these have sprung from past excesses that lie at the intersection of interruptive marketing, big data and the loss of consumer privacy. Consumers are tired of endless spam and corporations knowing their every move, and legislators are responding. But more importantly, we believe these moves will ultimately lead businesses to offer more value and authenticity to their customers in return for a marketing relationship.

Freshly Squeezed…Never Frozen

Data gets stale over time. You rely on us to keep this data fresh, and we in turn rely on a host of others – including you! The information we serve you is the product of partnerships at many levels, and any data we mine or get from third party providers needs to be up-to-date.

This means that we rely on other organizations to keep their data current, but when you use our products, it is still our name on the door. Here at Service Objects, we use a three-step process to do our part in providing you with fresh data:

Who: We don’t make partnerships with just anyone.  Before we take on a new vendor, we fully vet them to be sure this partnership will meet our standards, now and in the future. To paraphrase the late President Reagan, we take a “trust but verify” approach to every organization we team up with.

What: We run tests to make sure that data is in fact how we expect it to be. This runs the gamut from simple format tests to ensuring that results are accurate and appropriate.

When: Some of the data we work with is updated in real time, while other data is updated daily, weekly, or monthly.  Depending on what type of data it is, we set up the most appropriate update schedule for the data we use.

At the same time, we realize this is a partnership between us and you – so to get the best results from our data, we suggest re-checking some of your data points periodically, regardless of whether you are using our API or our batch processing system. The reasons are obvious enough: people move, phone numbers change, emails change, areas get redistricted, and so on. To keep your data current, we recommend periodically revalidating it against our services.

Often businesses will implement our services to check data at the point of entry into their system, and also perform a one-time cleanse to create a sort of baseline. This is all a good thing, especially when you make sure that data is going into your systems properly and is as clean as possible. However, it is important to remember that in 6-12 months some of this data will no longer be current.  Going the extra step to create a periodic review of your data is a best practice and is strongly recommended.

We also suggest keeping some sort of time stamp associated with when a record was validated, so that when you have events such as a new email campaign and some records have not been validated for a long time – for example, 12 months or more – you can re-run those records through our service.  This way you will ensure that you are getting the most out of your campaign, and at the same time protect your reputation by reducing bounces.
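The timestamp idea above can be sketched in a few lines. This is an illustrative example only: the dictionary-per-record layout, the `last_validated` field name, and the 12-month interval are all assumptions you would adapt to your own schema and business rules.

```python
from datetime import datetime, timedelta

# Hypothetical record layout: each contact carries a "last_validated" timestamp.
REVALIDATION_INTERVAL = timedelta(days=365)  # e.g. 12 months; tune per business

def needs_revalidation(record, now=None):
    """Return True if the record has never been validated or has gone stale."""
    now = now or datetime.utcnow()
    last = record.get("last_validated")
    if last is None:
        return True
    return (now - last) > REVALIDATION_INTERVAL

contacts = [
    {"email": "a@example.com", "last_validated": datetime(2017, 1, 15)},
    {"email": "b@example.com", "last_validated": None},
    {"email": "c@example.com", "last_validated": datetime(2018, 4, 1)},
]

now = datetime(2018, 6, 1)
stale = [c["email"] for c in contacts if needs_revalidation(c, now)]
# a@example.com and b@example.com would be queued for revalidation
```

Records flagged this way can then be batched through a validation service before your next campaign, rather than revalidating your entire database every time.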

Finally, here is a pro tip to reduce your shipping costs: in our Address Validation service, we return an IsResidential indicator that identifies an address as being residential or not.  If this indicator changes, having the most recent results will help your business make the most cost-effective shipping decisions.
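As a minimal sketch of how that indicator might feed a shipping decision: the rate figures below (in cents) are purely illustrative assumptions, not real carrier surcharges, and the parsed-response dictionary stands in for however you deserialize the Address Validation result.

```python
# Hypothetical rates, in cents, for illustration only; real carrier
# surcharges vary. The IsResidential indicator comes back from the
# address validation response, parsed here into a plain dict.
def shipping_cost_cents(address_result, base_cents=1000, residential_surcharge_cents=380):
    """Add a residential-delivery surcharge when the validated
    address is flagged as residential."""
    is_residential = str(address_result.get("IsResidential")).lower() == "true"
    return base_cents + (residential_surcharge_cents if is_residential else 0)

assert shipping_cost_cents({"IsResidential": "true"}) == 1380
assert shipping_cost_cents({"IsResidential": "false"}) == 1000
```

The point is simply that a stale IsResidential value silently picks the wrong rate, which is why re-checking it matters.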

For both us and you, keeping your data fresh helps you get the most out of these powerful automation tools. In the end there is no specific verification time span we can recommend that will suit every business across the board, and there will be cases where it isn't necessary to keep revalidating your data: the intervals you choose will depend mostly on your application. But this is still an important factor to keep in mind as you design and evaluate your data quality process.

To learn more about how our data quality solutions can help your business, visit the Solutions section of our website.

Around the World with Data Privacy Laws

If you work with data, you have certainly heard by now about GDPR: the new European Union laws surrounding consumer data privacy that went into effect May 25, 2018. But how about PIPEDA, NDB, APPI, CCPA, and SHIELD?

These acronyms represent data privacy regulations in other countries (in these cases for Canada, Australia, Japan, California and New York respectively). Many are new or recently expanded, and all are examples of how your legal responsibilities to customers don’t stop with GDPR. More importantly, they represent an opportunity for you and your business to use data quality and 21st century marketing practices to differentiate yourself from your competition.

Data protection and privacy laws are becoming increasingly popular

Let’s discuss some of these new regulations. According to authentication vendor Auth0, there are a wide range of reasons for their recent proliferation. First, the rollout of GDPR has implications for other countries, including whether their personal data can flow into the EU – meaning that their data quality and protection regulations must align sufficiently with EU rules to be “whitelisted” by them. New laws now being adopted by other countries address issues such as breach notification, the use of genetic and biometric data, and the rights of individuals to stop their data from being sold.

Moreover, data privacy and security doesn’t stop with Europe and GDPR. Other countries are now starting to explore the rights of consumers in this new era of online information gathering and big data. For example, Japan and other countries now have additional regulations surrounding the use of personal information codes to identify data records, and there is increasing scrutiny on personal data that is gathered through means such as social media.

Contact data plays a key role in compliance

Now, let’s talk about your contact data. It often isn’t ready for global data regulations, due to practices such as not gathering country information at the point of data entry, or onerous location data entry requirements (like putting “United States” at the end of a long pull-down menu of countries) that encourage false responses. Worse, existing contact data often has serious information gaps or incorrect information, and it goes bad very quickly: for example, nearly 20% of phone numbers and 35% of email addresses change every year.

Finally, let’s talk about you. In the face of a growing list of data privacy and security regulations, your job isn’t just to become GDPR-compliant. It is to build and maintain a best-practices approach to data quality, which in turn keeps you up to date with both today’s consumer data laws and tomorrow’s.

Data quality best practices are a competitive differentiator

Taking a step back from this flood of new regulations, we would also suggest that an ideal goal isn’t just compliance – it is to leverage today’s data quality environment as a competitive opportunity. Why do these new laws exist? Because of consumer demand. People are tired of interruptive broad-brush marketing, invasive spam, and unwanted telemarketing. When you build your own marketing strategy around better targeting, curated customer relationships, and respect for the consumer, your focus can shift from avoiding penalties to growing your brand and market share faster.

We can help with both of these objectives. For starters, we now offer our Country Detective service, which can process up to 500 contact records and append correct countries to them to help guide your compliance efforts. And for the longer term we offer a free Global Data Assessment, where our team will consult with you at no charge about strategies for data quality in today’s new regulatory and market environment. Interested? Contact us to get the ball rolling, and take the next step in your global market growth.

Instead of focusing on “cleaning dirty customer data,” organizations should focus on the connection between investments in data quality and customer service metrics.

Data Quality and Customer Experience

Once upon a time, customer service and support operations were viewed as the “complaint department” – a back-office function, a necessary evil, and above all a cost center whose role should be reduced as much as possible. These days, it has become increasingly clear that businesses must prioritize data quality. As Thomas Redman advised in a recent guest post, “Getting in front on data quality presents a terrific opportunity to improve business performance.”

While some organizations still have a break/fix mentality about customer support, the very best organizations now view their customer contact operations as the strategic voice of the customer – and leverage customer engagement as a strategic asset. Thanks to tools ranging from CRM and social media, many businesses manage their customer experience as closely as they manage their products and services.

The Strategic Role of Data Quality

This leads us to an important analogy about data quality. Like the “complaint department” days of customer service, many organizations still view data quality as little more than catching and fixing bad contact data. In reality, our experience with a base of nearly 2500 customers has taught us that data quality plays a very strategic role in areas like cost control, marketing reach, and brand reputation in the marketplace.

This worldview is still evolving slowly. For example, according to a 2017 CIO survey by Talend, data quality and data governance remain the biggest concerns of IT departments, at 33 and 37 percent respectively – and yet their top priorities reflect trendier objectives such as big data and real-time analytics. And back in 2012, Forrester vice president Kate Leggett observed that data quality often remains the domain of the IT department, and that data projects for customer service rarely get funded.

Meanwhile, data quality has also become an important component of customer experience. Leggett notes that instead of an IT-driven process of “cleaning dirty customer data,” organizations should reframe the conversation towards the impact of data quality on customer-facing functions, and understand the connection between investments in data quality and customer service metrics.

Here at Service Objects, we see three key areas in the link between data quality and customer experience:

Customer Engagement

When you have good data integrated with effective CRM, you have the ability to market appropriately and serve customers responsively. You can target your messages to the right people, react responsively in real time to customer needs, and create systems that satisfy and delight the people you serve.

Service Failures

Mis-deliver a package because of a bad address, and you make a customer very unhappy. Do things like this even a small percentage of the time, and you gain a reputation as a company that doesn’t execute. Keep doing it and even many customers who haven’t been wronged yet will seek other options where possible, because of the “herd mentality” that builds around those who do complain publicly and on social media.

Strategic Visibility

Your customer data is an important asset that gives you the ability to analyze numerous aspects of your customer relationships and react appropriately. It holds the knowledge of everything from demographics to purchasing patterns, as well as their direct feedback through service and support. Having accurate customer data is central to leveraging this data strategically.

One heartening trend is that more organizations than ever now see the connection between data quality and their customer relationships. For example, one 2017 article cited a European customer data survey showing that nearly three-quarters of life sciences respondents feel that having a complete and real-time view of customers is a top priority – while only 40% are satisfied with how well they are doing this. We are seeing similar figures across other industries nowadays, and view this as a good sign that we are moving over time towards a smoother, more data-driven relationship between organizations and their customers.

Recognizing the vital role contact data quality plays in GDPR compliance, Service Objects is offering affected businesses a free data quality assessment.

Free Data Quality Assessment Helps Businesses Gauge GDPR Compliance Ahead of May Deadline

As the May 25, 2018, deadline looms, Service Objects, the leading provider of real-time global contact validation solutions, is offering a GDPR Data Quality Assessment to help companies evaluate whether they are prepared for the new set of privacy rules and regulations.

“Our goal is to help you get a better understanding of the role your client data plays in GDPR compliance,” says Geoff Grow, CEO and Founder, Service Objects. “With our free GDPR Data Quality Assessment, companies will receive an honest, third-party analysis of the accuracy of their contact records and customer database.”

Under the GDPR, personal data includes any information related to a natural person or ‘Data Subject’ that can be used to identify the person directly or indirectly. It can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address.

Even if an organization is not based in the EU, it may still need to observe the rules and regulations of GDPR. That’s because the GDPR not only applies to businesses located in the EU but to any companies offering goods or services within the European Union. In addition, if a business monitors the behavior of any EU data subjects, including the processing and holding of personal data, the GDPR applies.

Recognizing the vital role contact data quality plays in GDPR compliance, Service Objects decided to offer a free data quality assessment to help those industries affected by the regulation measure the accuracy of their contact records and prepare for the May 2018 deadline.

The evaluation will include an analysis of up to 500 records, testing for accuracy across a set of inputs including name, phone, address, email, IP, and country. After the assessment is complete, a composite score will be provided, giving businesses an understanding of how close they are to being compliant with GDPR’s Article 5.

Article 5 of the GDPR requires organizations collecting and processing personal information of individuals within the European Union (EU) to ensure all current and future customer information is accurate and up-to-date. Not adhering to the rules and regulations of the GDPR can result in a fine of up to 4% of annual global turnover or €20 Million (whichever is greater).

“To avoid the significant fines and penalties associated with the GDPR, businesses are required to make every effort to keep their contact data accurate and up-to-date,” Grow added. “Service Objects’ data quality solutions enable global businesses to fulfill the regulatory requirements of Article 5 and establish a basis for data quality best practices as part of a broader operational strategy.”


For more information on how to get started with your free GDPR Data Quality Assessment, please visit our website today.

When that data is incomplete, poorly defined, or wrong, there are immediate consequences: angry customers, wasted time, and difficult execution of strategy. Employing data quality best practices presents a terrific opportunity to improve business performance.

The Unmeasured Costs of Bad Customer and Prospect Data

Perhaps Thomas Redman’s most important recent article is “Seizing Opportunity in Data Quality.”  Sloan Management Review published it in November 2017, and it appears below.  Here he expands on the “unmeasured” and “unmeasurable” costs of bad data, particularly in the context of customer data, and why companies need to initiate data quality strategies.

Here is the article, reprinted in its entirety with permission from Sloan Management Review.

The cost of bad data is an astonishing 15% to 25% of revenue for most companies.

Getting in front on data quality presents a terrific opportunity to improve business performance. Better data means fewer mistakes, lower costs, better decisions, and better products. Further, I predict that many companies that don’t give data quality its due will struggle to survive in the business environment of the future.

Bad data is the norm. Every day, businesses send packages to customers, managers decide which candidate to hire, and executives make long-term plans based on data provided by others. When that data is incomplete, poorly defined, or wrong, there are immediate consequences: angry customers, wasted time, and added difficulties in the execution of strategy. You know the sound bites — “decisions are no better than the data on which they’re based” and “garbage in, garbage out.” But do you know the price tag to your organization?

Based on recent research by Experian plc, as well as by consultants James Price of Experience Matters and Martin Spratt of Clear Strategic IT Partners Pty. Ltd., we estimate the cost of bad data to be 15% to 25% of revenue for most companies (more on this research later). These costs come as people accommodate bad data by correcting errors, seeking confirmation from other sources, and dealing with the inevitable mistakes that follow.

Fewer errors mean lower costs, and the key to fewer errors lies in finding and eliminating their root causes. Fortunately, this is not too difficult in most cases. All told, we estimate that two-thirds of these costs can be identified and eliminated — permanently.

In the past, I could understand a company’s lack of attention to data quality because the business case seemed complex, disjointed, and incomplete. But recent work fills important gaps.

The case builds on four interrelated components: the current state of data quality, the immediate consequences of bad data, the associated costs, and the benefits of getting in front on data quality. Let’s consider each in turn.

Four Reasons to Pay Attention to Data Quality Now

The Current Level of Data Quality Is Extremely Low

A new study that I recently completed with Tadhg Nagle and Dave Sammon (both of Cork University Business School) looked at data quality levels in actual practice and shows just how terrible the situation is.

We had 75 executives identify the last 100 units of work their departments had done — essentially 100 data records — and then review that work’s quality. Only 3% of the collections fell within the “acceptable” range of error. Nearly 50% of newly created data records had critical errors.

Said differently, the vast majority of data is simply unacceptable, and much of it is atrocious. Unless you have hard evidence to the contrary, you must assume that your data is in similar shape.

Bad Data Has Immediate Consequences

Virtually everyone, at every level, agrees that high-quality data is critical to their work. Many people go to great lengths to check data, seeking confirmation from secondary sources and making corrections. These efforts constitute what I call “hidden data factories” and reflect a reactive approach to data quality. Accommodating bad data this way wastes time, is expensive, and doesn’t work well. Even worse, the underlying problems that created the bad data never go away.

One consequence is that knowledge workers waste up to 50% of their time dealing with mundane data quality issues. For data scientists, this number may go as high as 80%.

A second consequence is mistakes, errors in operations, bad decisions, bad analytics, and bad algorithms. Indeed, “big garbage in, big garbage out” is the new “garbage in, garbage out.”

Finally, bad data erodes trust. In fact, only 16% of managers fully trust the data they use to make important decisions.

Frankly, given the quality levels noted above, it is a wonder that anyone trusts any data.

When Totaled, the Business Costs Are Enormous

Obviously, the errors, wasted time, and lack of trust that are bred by bad data come at high costs.

Companies throw away 20% of their revenue dealing with data quality issues. This figure synthesizes estimates provided by Experian (worldwide, bad data cost companies 23% of revenue), Price of Experience Matters ($20,000/employee cost to bad data), and Spratt of Clear Strategic IT Partners (16% to 32% wasted effort dealing with data). The total cost to the U.S. economy: an estimated $3.1 trillion per year, according to IBM.
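To make these percentages concrete, here is a back-of-envelope calculation using the 15% to 25% range and the two-thirds recoverable estimate from this article. The $50M revenue figure is purely illustrative.

```python
def bad_data_cost_range(annual_revenue, low_pct=0.15, high_pct=0.25, recoverable_share=2/3):
    """Rough range for the yearly cost of bad data, plus the share Redman
    estimates can be permanently eliminated by fixing root causes."""
    low, high = annual_revenue * low_pct, annual_revenue * high_pct
    return {
        "cost_low": low,
        "cost_high": high,
        "recoverable_low": low * recoverable_share,
        "recoverable_high": high * recoverable_share,
    }

est = bad_data_cost_range(50_000_000)  # a hypothetical $50M-revenue company
# roughly $7.5M to $12.5M per year, of which about two-thirds is recoverable
```

Even at the low end of the range, the recoverable portion usually dwarfs the cost of a data quality program, which is the heart of the business case that follows.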

The costs to businesses of angry customers and bad decisions resulting from bad data are immeasurable — but enormous.

Finally, it is much more difficult to become data-driven when a company can’t depend on its data. In the data space, everything begins and ends with quality. You can’t expect to make much of a business selling or licensing bad data. You should not trust analytics if you don’t trust the data. And you can’t expect people to use data they don’t trust when making decisions.

Two-Thirds of These Costs Can Be Eliminated by Getting in Front on Data Quality

“Getting in front on data quality” stands in contrast to the reactive approach most companies take today. It involves attacking data quality proactively by searching out and eliminating the root causes of errors. To be clear, this is about management, not technology — data quality is a business problem, not an IT problem.

Companies that have invested in fixing the sources of poor data — including AT&T, Royal Dutch Shell, Chevron, and Morningstar — have found great success. They lead us to conclude that the root causes of 80% or more of errors can be eliminated; that up to two-thirds of the measurable costs can be permanently eliminated; and that trust improves as the data does.

Which Companies Should Be Addressing Data Quality?

While attacking data quality is important for all, it carries a special urgency for four kinds of companies and government agencies:

Those that must keep an eye on costs. Examples include retailers, especially those competing with Inc.; oil and gas companies, which have seen prices cut in half in the past four years; government agencies, tasked with doing more with less; and companies in health care, which simply must do a better job containing costs. Paring costs by purging the waste and hidden data factories created by bad data makes far more sense than indiscriminate layoffs — and strengthens a company in the process.

Those seeking to put their data to work. Companies include those that sell or license data, those seeking to monetize data, those deploying analytics more broadly, those experimenting with artificial intelligence, and those that want to digitize operations. Organizations can, of course, pursue such objectives using data loaded with errors, and many companies do. But the chances of success increase as the data improves.

Those unsure where primary responsibility for data should reside. Most businesspeople readily admit that data quality is a problem, but claim it is the province of IT. IT people also readily admit that data quality is an issue, but they claim it is the province of the business — and a sort of uneasy stasis results. It is time to put an end to this folly. Senior management must assign primary responsibility for data to the business.

Those who are simply sick and tired of making decisions using data they don’t trust. Better data means better decisions with less stress. Better data also frees up time to focus on the really important and complex decisions.

Next Steps for Senior Executives

In my experience, many executives find reasons to discount or even dismiss the bad news about bad data. Common refrains include, “The numbers seem too big, they can’t be right,” and “I’ve been in this business 20 years, and trust me, our data is as good as it can be,” and “It’s my job to make the best possible call even in the face of bad data.”

But I encourage each executive to think deeply about the implications of these statistics for his or her own company, department, or agency, and then develop a business case for tackling the problem. Senior executives must explore the implications of data quality given their own unique markets, capabilities, and challenges.

The first step is to connect the organization or department’s most important business objectives to data. Which decisions and activities and goals depend on what kinds of data?

The second step is to establish a data quality baseline. I find that many executives make this step overly complex. A simple process is to select one of the activities identified in the first step — such as setting up a customer account or delivering a product — and then do a quick quality review of the last 100 times the organization did that activity. I call this the Friday Afternoon Measurement because it can be done with a small team in an hour or two.
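The measurement just described can be sketched in a few lines. Note that the record layout and the rule for what counts as a critical error are illustrative assumptions; the method leaves that definition to each team, and the 3% "acceptable" default here simply echoes the study's acceptable range.

```python
def friday_afternoon_measurement(records, has_critical_error, acceptable_rate=0.03):
    """Sketch of the Friday Afternoon Measurement: review the last ~100
    units of work, count those with critical errors, and compare the error
    rate to whatever rate your team deems acceptable."""
    sample = records[-100:]
    flawed = sum(1 for r in sample if has_critical_error(r))
    rate = flawed / len(sample)
    return {"sample_size": len(sample), "flawed": flawed,
            "error_rate": rate, "acceptable": rate <= acceptable_rate}

# Illustrative rule: a record missing an email address counts as critically flawed.
records = [{"email": "a@x.com"}, {"email": ""}, {"email": "b@y.org"}, {"email": None}]
result = friday_afternoon_measurement(records, lambda r: not r.get("email"))
# 2 of 4 records flawed -> error_rate 0.5, well outside the acceptable range
```

The value of the exercise is less the exact number than the shared, concrete baseline it gives a small team in an hour or two.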

The third step is to estimate the consequences and their costs for bad data. Again, keep the focus narrow — managers who need to keep an eye on costs should concentrate on hidden data factories; those focusing on AI can concentrate on wasted time and the increased risk of failure; and so forth.

Finally, for the fourth step, estimate the benefits — cost savings, lower risk, better decisions — that your organization will reap if you can eliminate 80% of the most common errors. These form your targets going forward.

Chances are that after your organization sees the improvements generated by only the first few projects, it will find far more opportunity in data quality than it had thought possible. And if you move quickly, while bad data is still the norm, you may also find an unexpected opportunity to put some distance between yourself and your competitors.


Service Objects spoke with the author, Tom Redman, and he gave us an update on the Sloan Management article reprinted above, particularly as it relates to the subject of the costs associated with bad customer data.

Please focus first on the measurable costs of bad customer data.  Included are items such as the cost of the work Sales does to fix up bad prospect data it receives from Marketing, the costs of making good for a customer when Operations sends him or her the wrong stuff, and the cost of work needed to get the various systems which house customer data to “talk.”  These costs are enormous.  For all data, it amounts to roughly twenty percent of revenue.

But how about these costs:

  • The revenue lost when a prospect doesn’t get your flyer because you mailed it to the wrong address.
  • The revenue lost when a customer quits buying from you because fixing a billing problem was such a chore.
  • The additional revenue lost when he/she tells a friend about his or her experiences.

This list could go on and on.

Most items involve lost revenue and, unfortunately, we don’t know how to estimate “sales you would have made.”  But they do call to mind similar unmeasurable costs associated with poor manufacturing in the 1970s and 80s.  While expert opinion varied, a good first estimate was that the unmeasured costs roughly equaled the measured costs.

If the added costs in the Seizing Opportunity article above don’t scare you into action, add in a similar estimate for lost revenue.

The only recourse is to professionally manage the quality of prospect and customer data.  It is not hyperbole to note that such data are among a company’s most important assets and demand no less.

©2018, Data Quality Solutions


Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Salesforce Data Quality Tools Integration Series – Part 4 – Lightning App

We are back now with the fourth blog in our Salesforce Data Quality Tools Integration Series.  In previous blogs, we covered topics such as creating a plug-in that can be dropped on a flow, triggers, and Apex classes.  From these blogs, you can learn how Service Objects APIs will help improve your data quality and, in turn, the performance of your Salesforce instance.  Like the VisualForce app demonstration, this demo shows how you’ll be able to extend our services for your own purposes in Salesforce’s Lightning framework.  By the end of this blog, you’ll have all the code you need to get started, so don’t worry about implementing this step by step.

This Lightning app is going to serve as a quick email contact input and validation tool.  We will use separate custom objects and custom fields to make this app stand alone from your other objects in Salesforce.  With that said, please note that everything I demonstrate is totally customizable.  For the purposes here, that means you can adjust which objects and fields (standard or custom) you want to use.  You will also be able to customize the call to our Email Validation API and insert your own business-specific logic.  There are a lot of code files in this project, but do not be discouraged.  It only means that the code is broken down into bite-size parts and abstracted to keep the logic and UI separate.

First things first, we are going to start with some basic setup.  Unlike VisualForce, before we can work with the Lightning framework, we need to turn on My Domain in Salesforce.  You need your own subdomain for your Salesforce org, and that is what My Domain provides: it allows you to create a custom subdomain.  You can find the settings under Company Settings when using the Lightning Experience.  This link will take you through the details of setting it up.  After you have activated it, you may need to wait several minutes before it is ready; Salesforce will email you when it is done.

The next part of the setup is the Service Objects endpoint.  I am not going to go over it this time because I covered it in the first and second parts of this series, so if you need help with setting this up or want a description of what this is, I would refer you to those blogs.  If you have been following along from the first two blogs, then you will have completed this part already.

In the VisualForce demo, we jumped right into creating the custom fields; however, this time we first need to create the custom object that will house our custom fields.  In the Object Manager, click on Create and select Custom Object.  From there, you will be prompted to fill in several details of our new custom object.  Here is what you will need to add:

  • Label
    • Email Contact
  • Plural Label
    • Email Contacts
  • Starts with vowel sound
    • Check
  • Record Name
    • Name
  • Launch New Custom Tab Wizard after saving this custom object
    • Check

That last check for launching the New Custom Tab Wizard will create a custom tab for you to be able to add/edit/delete during testing before we create the final home for our app.

Next, you will need to add the following custom fields to the newly created Email Contact object.  If you are customizing this for your own purposes, you will want to add these fields to the object you are working with.  If you want to map more of the fields that we return from our service, you’ll have to create the appropriate fields on the object if there isn’t an existing field at your disposal.  If you are using existing objects for these fields, you will want to take into consideration the names of the fields, to prevent conflicts or confusion in your system.

  • Field name
    • Internal Salesforce name
    • Type
    • Service Objects field name
  • Email
    • Email__c
    • Email (Unique and required)
    • EmailAddress
  • Status
    • Status__c
    • Picklist
      • Review (Default)
      • Accepted
      • Rejected
    • None
  • Score
    • Score__c
    • Number (1,0)
    • Score
  • Notes
    • Notes__c
    • Text (255 and default set to “None”)
    • NotesDescription
  • Warnings
    • Warnings__c
    • Text (255 and default set to “None”)
    • WarningDescriptions
  • Errors
    • Errors__c
    • Text (255 and default set to “None”)
    • Type, Error.Description
  • Is Deliverable
    • Is_Deliverable__c
    • Picklist
      • Unknown (Default)
      • True
      • False
    • IsDeliverable
  • Is Catch All Domain
    • Is_Catch_All_Domain__c
    • Picklist
      • Unknown (Default)
      • True
      • False
    • IsCatchAllDomain
  • Is SMTP Server Good
    • Is_SMTP_Server_Good__c
    • Picklist
      • Unknown (Default)
      • True
      • False
    • IsSMTPServerGood
  • Is SMTP Mailbox Good
    • Is_SMTP_Mailbox_Good__c
    • Picklist
      • Unknown (Default)
      • True
      • False
    • IsSMTPMailBoxGood

OK, so I never promised this wasn’t going to be tedious, but at least it wasn’t hard.  This time around, we are adding more custom fields with default values and picklists.  Do not skip the default values; they are key to how some of the code works.

Now that we have that out of the way, let’s take a look at what we are going to build (I fudged some of the values so we could see all states of a validated email on the interface).


Below is a better view of the functionality.


Simply put, we will be building out a UI that will take a name and an email as input, validate the email against Service Objects’ Email Validation API, and display the results to the screen.  The validation portion will automatically evaluate the score of the email from the service and assign the appropriate status flag to the record.  In our scenario, I decided to let all the emails with scores of 0 and 1 automatically be accepted and all those with scores of 3 and 4 automatically be rejected.  The emails that scored 2, I left in the review state for the status field so that the end user can make the final analysis and assign the appropriate status flag.
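The actual implementation in this post lives in Apex, but the accept/review/reject rule just described is easy to state language-neutrally. Here it is as a Python sketch, mapping the service's score to the Status picklist values we defined earlier:

```python
def status_from_score(score):
    """Map an Email Validation score to this app's Status picklist.
    Mirrors the rule in this post: scores 0-1 auto-accept, 3-4 auto-reject,
    and 2 stays in 'Review' for a human decision."""
    if score in (0, 1):
        return "Accepted"
    if score in (3, 4):
        return "Rejected"
    return "Review"

assert [status_from_score(s) for s in range(5)] == \
    ["Accepted", "Accepted", "Review", "Rejected", "Rejected"]
```

Where you draw these lines is a business decision; a more conservative shop might route score 1 to Review as well, and the Apex version below centralizes this rule so it is easy to change.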

In order to figure out all the different components we are initially going to need, we will re-examine the layout from above.


Taking a closer look, there are five parts identified in the UI that need to be created.  There are a couple more parts beyond what I have highlighted in the screenshot, but I will get to those as we go.  At this point, we have identified that the following components are needed:

  • Email Contact Container
    • A place to put all the parts
    • Filename: AddEmailContactContainer.cmp
  • Email Contact Header
    • Self explanatory
    • Filename: AddEmailContactHeader.cmp
  • Email Contact Form
    • Somewhere to add information
    • Filename: AddEmailContactForm.cmp
  • Email Contact List
    • A place to add the results
    • Filename: AddEmailContactList.cmp
  • Email Contact Item
    • A single result
    • Filename: AddEmailContactItem.cmp

From there, the UI could be broken down into even smaller parts, but for this demo, this is as far as we will need to go.  Note: in retrospect, I would have used different filenames that don’t start with “Add,” but what is done is done.  Next, I am going to go through each of these and briefly talk about the code.

I am actually going to start with a file I didn’t mention above: the file that the app runs through.  It is simply three lines of markup that state which resources the page is going to use (slds for styling the page) and an element which is our app container, the AddEmailContactContainer component.
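
As a rough sketch, those three lines of markup might look like this (the `c` namespace prefix is the default for your org; treat this as an illustration rather than the exact file):

```html
<aura:application extends="force:slds" access="global">
    <c:AddEmailContactContainer />
</aura:application>
```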


Well, that makes an easy transition to the next page, the AddEmailContactContainer.cmp page.  The first thing you will notice is that we assigned the EmailContactUtil as the controller for this component.  The EmailContactUtil.apxc is the Apex class file that will act as the server side controller where, in contrast, a page called AddEmailContactContainerController.js will serve as the client side controller for the component.  Another important thing to note is the list after “implements.”  I am not going to go into it here, but you will need this if you expect to drag and drop this component in the Lightning App Builder in the future.  Be sure to set access to global as well.
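
For illustration, the top of the component might look something like this (a hedged sketch; the exact interface list you implement will depend on where you need the component to be available):

```html
<aura:component controller="EmailContactUtil"
                implements="flexipage:availableForAllPageTypes,force:appHostable"
                access="global">
    <!-- attributes, handlers and child components go here -->
</aura:component>
```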


This component takes in a list of EmailContact custom objects in the first attribute element, and that list will be used to display the current EmailContact records.  After the attribute element, we have three handler elements named init, updateEmailContactItem and createNewEmailContactItem.  The first initializes the page, and the other two handle events that will be fired later on by the other components: this component will listen for the creation of a new EmailContactItem and for updates to an EmailContactItem.
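
A sketch of that attribute and handler markup follows; the event name `c:AddEmailContactEvent` and the `EmailContact__c` type name are assumptions, while the handler and action names come from the description in this post:

```html
<aura:attribute name="emailContactItems" type="EmailContact__c[]" />
<aura:handler name="init" value="{!this}" action="{!c.doInit}" />
<aura:handler name="createNewEmailContactItem" event="c:AddEmailContactEvent"
              action="{!c.handleAddEmailContactUpdate}" />
<aura:handler name="updateEmailContactItem" event="c:AddEmailContactEvent"
              action="{!c.handleUpdateEmailContactUpdate}" />
```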


The init handler is important to mention now because it is how the app gets the list of EmailContacts to display.  The init handler calls the doInit method in the AddEmailContactContainerController.js page.  Asynchronously, the method creates an action to call the server side controller to pull back the needed records.  It makes sense that we are calling the server side controller since we need to be retrieving stored records.  So this method makes a call to the getEmailContacts method on the server side controller and creates a callback that will update our emailContactItems attribute on the main controller so that we can display the records found.
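
As a rough sketch, the doInit flow looks something like this; `component` and `$A` stand in for the objects the Aura framework provides, and the method and attribute names follow the description above:

```javascript
// Client-side controller sketch: fetch stored EmailContact records from the
// server side controller and bind them to the emailContactItems attribute.
function doInit(component, $A) {
  var action = component.get("c.getEmailContacts"); // server-side Apex method
  action.setCallback(this, function (response) {
    if (response.getState() === "SUCCESS") {
      component.set("v.emailContactItems", response.getReturnValue());
    }
  });
  $A.enqueueAction(action); // runs asynchronously; the callback fires on return
}
```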


This process will be called on the page being loaded initially and whenever there is some sort of post back, which is useful since we want to display the latest list when new items are added.  And that’s it for the init functionality.

I mentioned the two controller files earlier, but there is one more file associated with the AddEmailContactContainer component, a helper file called AddEmailContactContainerHelper.js.  I will run through these associated files later.  The remainder of this markup page simply references the other components that this page is composed of: AddEmailContactHeader, AddEmailContactForm, and AddEmailContactList.  The AddEmailContactList component takes the input list from this page as a parameter, as we will see coming up.

Next, I will quickly run through the header page.  There really is not much to it.  Besides some out of the box styling and accessing an icon from the Salesforce predefined set of icons, it is not much more than a title, which makes sense since this is a simple header component.


The AddEmailContactForm is a simple form component.  The parts of this page are an attribute to hold the input data, the registration of an event, and the form input section itself.  The attribute will be the variable that holds the input data and will be of the EmailContact type, even though we are only populating the Name and Email portions of the object.


Next, we register the event that this component will be firing off whenever the button is clicked. This is the same event that the AddEmailContactContainer component was set to listen for and to handle.


Events need an event file, something that I would describe more as a signature for the event.  It’s a very basic file, so I am going to pop in and out of it quickly.  It is defined within the event element and has an attribute defined with a type, which is our EmailContact object, and a name.
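
A sketch of what that event file might contain (the attribute name and object type are assumptions):

```html
<aura:event type="COMPONENT">
    <aura:attribute name="emailContactItem" type="EmailContact__c" />
</aura:event>
```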


Now back to the form file.  The form section will have input areas for the Name and Email of the contact, followed by a button to submit the entry to the system.  In this case, both inputs are required and will display errors when they are missing or if the email is not in the proper email format.  This component also has a controller and a helper file, AddEmailContactFormController.js and AddEmailContactFormHelper.js respectively.  Both files are there to do basic form validation, help set up and fire the button click event, and clear the form.


AddEmailContactFormController first checks whether the inputs to the form are valid.  You’ll notice that the inputs in the form share the same aura:id, which allows us, at this point, to pull all of the inputs back at once in an array.  Once the inputs are valid, we pull the newEmailContactItem variable that was populated with the inputs and send it off to the helper page.


The helper file instantiates the event and sets the parameter for it, which, in this case, is the newEmailContactItem.  Then the event is fired, and the input attribute that fills the form is reset to make the form ready to take in more contacts from the user.


In order to continue with the flow of what is happening with the event, I am going to jump back to the main component and look at its controller and helper files.  At this point, the user has entered data in the form, clicked the button, and the code has set up and fired the event.  Now, the AddEmailContactContainer takes over because it has been listening for this event and is ready to handle it with the createNewEmailContactItem handler mentioned earlier.  The action on the handler is to pass control to the handleAddEmailContactUpdate method in the client side controller, AddEmailContactContainerController, which pulls the EmailContact variable from the event that was fired and sends it off to the addEmailContact method in the helper file.


Now on to the AddEmailContactContainerHelper.js file.  This app is going to both create email contacts and update them.  Since updating and creating are very similar in our scenario, we will have both the update and create functions call one save function.  Much of the code is the same, so it makes sense to reuse code here.  Since we are following the path of the create event, we will look at it from that perspective here.  The two methods we will run through are the addEmailContact method and the saveEmailContact method.  The addEmailContact method immediately calls the saveEmailContact method, but it provides parameters indicating whether we are doing an update or a create, and it also supplies a callback function.  The callback gets the list of new records and sends them to the AddEmailContactContainer after the new EmailContact record is created.  The new EmailContact record itself is created in the server side controller, which is called from the saveEmailContact method.
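
The create/update reuse can be sketched like this.  In the real helper, saveEmailContact enqueues a call to the Apex controller; here a `server` stand-in models that call so the control flow is visible, and the function names follow the description above:

```javascript
// One shared save routine serves both the create and update paths.
// "server" is a stand-in for the enqueued Apex call, not a real API.
function saveEmailContact(item, isUpdate, callback, server) {
  var saved = server.save(item, isUpdate); // create or update the record
  if (callback) {
    callback(server.getEmailContacts()); // create path: refresh the list
  }
  return saved;
}

function addEmailContact(item, onSaved, server) {
  return saveEmailContact(item, false, onSaved, server);
}

function updateEmailContact(item, server) {
  return saveEmailContact(item, true, null, server); // no callback on update
}
```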


And that completes the code flow of creating a new Email Contact record.  Next, I am going to jump into the last portion of the code, and that starts with the AddEmailContactList component, where we will look at the displaying of the Email Contact items and the event to update the Status field when there is a button click.

The AddEmailContactList component is another simple and straightforward page.  It handles displaying the list of Email Contacts sent to it through the emailContactItems attribute.  The code iterates over the emailContactItems attribute, dynamically displaying an AddEmailContactItem on the screen for each record.


Moving to the AddEmailContactItem, we see that it is a little longer than the rest of the code, but taken in its parts, it is still pretty straightforward.  So what are the parts?  We see there is a graphic and a title, a couple of fields, some buttons, a status and, finally, some fine details.  We could have broken this out into smaller components, but for this demo, this is enough.


In the code, two things happen before the header markup is added.  First, just like the event we registered for the button that adds a new contact, we need to register an event for the buttons specific to each EmailContactItem on the page.  The purpose of these buttons is to allow the user to accept or reject an email based on the information from the Service Objects Email Validation API operation.  The buttons should not be visible unless the score returned from the service is 2.  Scores of 0 are no-brainers: those are valid emails, and we can mark them as accepted without hesitation.  A score of 1 has a very high certainty of being valid, so we also automatically set that status to accepted.  Emails with a score of 4 are bad; again, we automate these, but this time we set the status to rejected.  And 3’s are near-certainly bad emails, so those are also automatically marked as rejected.  The grey area is those that get validated with a score of 2.  This score represents unknown for various reasons, and we suggest some kind of manual review or inspection.  For more information about the score field and deeper details about the service, you can check out our developer guides here.

After we register the event, we set up the attribute element, creating a local variable input that takes a single EmailContactItem.  This should be familiar by now based on some of the other pages we discussed.


Now we get to the graphic/icon and title code.  Nothing too interesting here really, so I am going to skip over it.  But I do want to mention that, besides displaying our item icon and the title, this code actually houses the item’s parts: all of the rest of the item’s components will go in here.  If you want to change the icon, you can see a list of what is available at this link.


The next bit of code does several things.  First, it displays the contact name and the associated email.  Then, based on the score determined by Service Objects’ Email Validation operation, we will, in the markup, decide if we should display the buttons used for accepting or rejecting the email address.
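
That show-the-buttons-only-for-a-score-of-2 decision can be sketched with the if/else markup elements like so; the field name `Score__c` and the controller method names are assumptions:

```html
<aura:if isTrue="{!v.emailContactItem.Score__c == 2}">
    <!-- Only a score of 2 needs a human decision -->
    <lightning:button label="Accept" onclick="{!c.acceptEmail}" />
    <lightning:button label="Reject" onclick="{!c.rejectEmail}" />
</aura:if>
```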


Next, similar markup and logic are used to determine the background color of the status and score sections.


There are other techniques for doing this dynamic display besides the if/else markup elements, but I had just learned about them, and I wanted to give them a shot.  Otherwise, I would probably have used a different solution involving Javascript, and this code would have been much shorter.

The last part of this component is the Lightning accordion section.  I added a two-part accordion so I had a place to display some details that came back from the Email Validation operation without consuming too much space on the screen.  I have it default to opening the second section of the accordion first.  The first part contains the warnings, notes and/or errors.
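
A sketch of the two-part accordion (the section names and labels are assumptions; setting activeSectionName to the second section is what makes it open first):

```html
<lightning:accordion activeSectionName="details">
    <lightning:accordionSection name="notes" label="Warnings, Notes and Errors">
        <!-- warnings, notes and errors from the validation go here -->
    </lightning:accordionSection>
    <lightning:accordionSection name="details" label="Details">
        <!-- deliverability, catch-all and SMTP details go here -->
    </lightning:accordionSection>
</lightning:accordion>
```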


The second part contains other details, such as whether the email is deliverable, whether it is on a catch-all domain, and whether the SMTP server and mailbox are good.  If you wanted to adjust the code and display more data returned from the Email Validation operation, I would recommend adding those fields to these sections or adding more accordion sections with more fields to display.  This is the code for the second section.


That does it for the markup pages.  Now, let’s follow through the code for when either one of the buttons is clicked.  The rest of the functionality rests on the controller and helper of the AddEmailContactItem page, but also on some of the pages we already went through for the first event we discussed.

The accept and reject buttons each have their own method they call when clicked.  I could have used just one method here and checked the name or id of the calling event, but I thought this would keep things clearer.  These are really just pass-through methods that indicate whether the accept or reject button was clicked by passing true or false to the updateEmailContactItem method in the helper file.


In the AddEmailContactItemHelper.js file, the method updateEmailContactItem starts by setting the status field for the contact item based on the input parameter.


Once that is set, we create the event, fire it and hide the buttons from the screen.


You may have noticed that we are not creating a new event file.  Since the signature for the create and update events is the same, we are able to reuse the one we already have.  Now, like before, the AddEmailContactContainer is listening for the event, but this time it is listening for the update event.  When the event is fired, the main controller will catch the event and call the handleUpdateEmailContactUpdate method in the AddEmailContactContainerController.  The handleUpdateEmailContactUpdate method pulls out the EmailContactItem that was passed in the event and sends it along to the updateEmailContact method in the helper.


Earlier, when we were running through the create functionality, we came to a point where we wrote one save method because we knew it could serve both the create and update purposes.  Now we get to utilize what we set up earlier and piggyback off of the save code.  In the AddEmailContactContainerHelper, the method for updating, updateEmailContact, will call our saveEmailContact from before, but this time we are not sending along a callback method.  The saveEmailContact method then sets up a couple of parameters and calls saveEmailContact on the server side controller, and that controller saves or updates the record.

Now that we have gone over all the code, you can see that none of this is terribly difficult to understand, but there are a lot of files for a simple app.  We could have reduced the file count by moving all the helper code back up into the controller code.  We also could have reduced the number by creating fewer components and grouping everything together.  And though you can do all that, there are reasons I didn’t.  One reason to break the application out this way is so that we can focus on smaller parts and make each page more basic.  Another reason is that as you extend this solution to meet your own requirements you may be adding a lot more code, and keeping the code compartmentalized like this will help with code readability and maintenance.  When you look at the files for this project, you will see that there are several CSS files that we didn’t go over here.  You can review those as you see fit.  Another thing that I didn’t do is add more error checking, which you will certainly want to do, as well as writing your own unit tests.

The last thing you need to make sure of is to get a license key (live or trial) from us and add it to the EmailContactUtil.apxc file; otherwise you will not be able to validate email addresses against our API.  If you get a trial key, you need only update the key in the code.  However, if you get a live key, then you will want to update both endpoints in the code to the production URLs in addition to the key.  In the example below, I changed the key to contain X’s instead of real values.


So, where do you go from here?  One thing you will likely want to do is customize the solution.  Maybe you want to incorporate some of the standard objects in Salesforce, or perhaps you want to use or create your own custom objects.  Maybe there is additional business logic that you want to apply to the results of our Email Validation service.  After that, you will likely be adding this component to a Lightning app.  Or maybe you’ll just use the code for the Email Validation alone for something completely different.  A side note to make here is that you can even apply this demonstration to many of our other services like Address Validation, Address Detective, Lead Validation, Name Validation and more.  When you look at this solution, you should notice that though the part of the code that dealt with calling our Email Validation API was very short, it had a huge impact on the utility of the app.  If you want some specific code that can help with this solution, or with applying this solution to our other services, then simply reach out, and we’ll do our best to accommodate.

Address Suggestion with Validation: A Match Made in Heaven

In an ideal world, data entry would always be perfect. Sadly, it isn’t – human errors happen to end users and call center employees alike. And while we make a good living cleaning existing bad data here at Service Objects, we would still much rather see your downstream systems be as clean as possible.

To help with that, many organizations are getting an assist from Google, in the form of the Autocomplete feature of their Places API.  If you set up your form properly and use their API, you can have address suggestions appear in a dropdown for your end users, helping them enter correct data into your system. That’s great, isn’t it?

It does sound great on the surface, but when you dig a little deeper there are two problems:

  • First, the Google Places API often does not suggest locations down to the apartment or suite level of detail. This matters because a considerable segment of the population lives in apartments or does business on separate floors, suites or buildings.
  • Second, the locations the Google Places API suggests are often not mail deliverable. For instance, a business may have a physical location at one address and receive mail at a completely different address.  Or sometimes Google will just make approximations as to where an address should be.

For example, check this address out on Google Maps: 08 Kings Rd, Brighton BN1 1NS, UK.  It looks like a legitimate address, but as the street view shows, it does not seem to correspond to anything.

These issues can leave gaping holes in your data validation process.  So, what can you do? Contact us at Service Objects, because we have the perfect solution: our Address Suggestion form tool. When combined with the Google Places API you will have a powerful tool that will both save time and keep the data in your system clean and valid.

This form tool is a composite of the Google Places API and our Address Validation International service.  The process consists of the data entry portion, the Google Places API lookup, then the Address Validation International service call, and finally displaying selectable results to the end user.

Let’s start by discussing the Google API Key, and then the form, and finally the methods required to make that Google Places API call.

The Google Places API requires a key to access it.  You can learn more about this API here.  Depending on your purposes, you may be able to get away with using only Google’s free API key, but if you are going to be dealing with large volumes of data then the premium API key will be needed.  That doesn’t mean you can’t get started with the free version: we in fact use it to put our demos together, and it works quite well.

When setting up your key with Google, remember to also turn on the Google Maps Javascript API, or else calls to the Places API will fail.  Also, pay particular attention to the part about restricting access with the API key.  When you have a premium key this will be very important because it will allow you to set the level at which you want the key to be locked down, so that others can’t simply look at your Javascript code and use your key elsewhere.

The form we need to create will look like a standard address data entry form, but with some important details to note.  First let’s look at the country select box: we recommend that this be the first selection that the user makes. Choosing a country first benefits both you and the user, because it will limit suggested places to that country, and will also reduce the number of transactions against your Google Places API key.  Here is a link to how Google calculates its transaction totals.

Another important note is that we need to have the Apt/Suite data entry field.  As mentioned earlier, the Google Places API often does not return this level of resolution on an address, so we add this field so that the information can be provided by the end user.

The rest of the fields are really up to you in how you display them.  In our case, we display the parsed-out components of the results from the selected address back into the rest of the address fields.  We keep all the address input fields editable so that the end user can make any final adjustments they want.

The methods associated with this process can be summarized by a set of initializations that happen in two places: first, when a country is selected, and second, when the focus is placed on the Address field by a user clicking into it.  For our purposes we default the country selection to the United States; however, when the country is changed, the Autocomplete gets reinitialized to the selected country. And when a user clicks into the Address field, the initialization creates a so-called bias, i.e. Autocomplete returns results based on the location of your browser.  For this functionality to work, the end user’s browser will ask for permission to let Google know its location.  If the user does not permit Google to know this, the suggestion is turned off and does not work.
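
The country-change reinitialization can be sketched like this, assuming the Google Maps JavaScript API (with the Places library) has already been loaded on the page:

```javascript
// Reinitialize Places Autocomplete whenever the country selection changes,
// restricting suggestions to that country. countryCode is an ISO 3166-1
// alpha-2 code such as "us" or "gb".
function initAutocomplete(addressInput, countryCode) {
  return new google.maps.places.Autocomplete(addressInput, {
    types: ["address"], // suggest full street addresses only
    componentRestrictions: { country: countryCode }
  });
}
```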

This bias has a couple of interesting features.  For instance, you can change the code to not utilize the user’s browser location but instead supply a custom latitude and longitude.  In our example, the address suggestion does not end up using the user’s current position when the selected country is not the country the user is in.  But when the user is in the selected country, the results returned by the Google Places API are prioritized by proximity to the user’s location.  This means that if you are in Santa Barbara, CA and select the United States as the country, when you start typing a United States address you will first see matching addresses in Santa Barbara, and then work outward from there.

You can customize the form bias to any particular location that you have a latitude and longitude for.  The ability to change this bias is very useful in that setting the proper bias will reduce the number of lookups against the Google Places API before finding an address match, and will also save manual typing time.

Now let’s discuss the Address Validation International service API call, which consists of a key, the call to the service and setting up best practices for failover.

Let’s start with the key.  You will need either a live or a free trial license key from us, the latter of which can be obtained here.  For this example, a trial key will work fine for exploring and tinkering with this integration.  One of the great things about using our service is that when you want to turn this into a live production-ready solution, all you have to do is switch out the trial key for the production key and switch the endpoint to the production URL, both of which can be done in minutes.

The call to the Address Validation International service will be made to either the trial or production endpoint, depending on the key that you are using.  The details of the service and how to integrate with it can be found in our developer guides.  In the Javascript code you will round up all the data in the fields that were populated by the address suggestion selection and send it off to the service for validation.  The code that manages the call to the Address Validation International service needs to be executed on some back-end server.

Making the call to the service directly from Javascript is strongly discouraged, because it will expose your license key and allow someone to take it and use your transactions maliciously.  You can read more about those dangers here.  Also, here is a blog about how to make a call to another one of our services using a proxy.  The basic idea is that your Javascript will call the proxy method that contains your license key, essentially hiding it from the public.  This proxy method will make the final call to the Address Validation International service, get the results from it and pass those results back to the original call in the Javascript.  In this situation, the best place to implement failover is in the proxy method.

So what is failover? Failover, from the perspective of an end-user developer, is just a secondary data center to call in the unlikely event that one of our primary data centers goes down or does not respond in a timely manner.  Our developer guides can again help with this topic.  There you will also find code snippets that demonstrate our best practice failover.
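
The basic shape of that failover can be sketched as follows.  This is shown synchronously for clarity; the endpoint URLs are placeholders, not Service Objects’ real URLs, and `callService` stands in for whatever HTTP call your proxy makes:

```javascript
// Failover sketch for the proxy method: try the primary data center first,
// and retry against the backup only if the primary call throws.
function validateWithFailover(address, callService) {
  var PRIMARY = "https://primary.example.com/AVI"; // placeholder endpoint
  var BACKUP  = "https://backup.example.com/AVI";  // placeholder endpoint
  try {
    return callService(PRIMARY, address);
  } catch (err) {
    // Primary unreachable or errored; retry once against the backup.
    return callService(BACKUP, address);
  }
}
```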

Once this call is set up, all that is left is evaluating the results and displaying the appropriate message back to the end user. While you can go through our developer guides to figure this out, the first important field to examine in the response from the Address Validation International service is the Status field – here is a table of what is expected to be returned:

Address Status

  • Invalid: For addresses where postal and/or street level data is available, and the street was not found, bad building number, etc.
  • InvalidFormat: For addresses where postal data is unavailable and only the address format is known.
  • InvalidAmbiguous: For possibly valid addresses that may be correctable, but ultimately failed validation.
  • Valid: For addresses with postal level data that passed validation.
  • ValidInferred: For addresses where potentially far-reaching corrections/changes were made to infer the return address.
  • ValidApproximate: For addresses where premise data is unavailable and interpolation was performed, such as Canadian addresses.
  • ValidFormat: For addresses where postal data is unavailable and only the address format is known.


Another important field will be the ResolutionLevel, which can be one of the three following values: Premise, Street and City.  The values returned in these two fields will help you make a decision in the code with respect to what exactly you want to display back to the end user.  What we do in our demo is display the Status and ResolutionLevel to the end user along with the resulting address.  Then we give the user a side-by-side view of both the resulting address just mentioned and the original address the user entered.  This way the end user can make a decision based on everything we found. In the case shown here, for example, we updated Scotland to Dunbartonshire and validated to the premise resolution level.
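
One way to combine the two fields in code, sketched under the assumption that you only auto-accept premise-level validations (your own business rules may well differ):

```javascript
// Treat any "Valid*" status at Premise resolution as good enough to accept
// automatically; surface everything else for a side-by-side review.
function shouldAutoAccept(status, resolutionLevel) {
  return status.indexOf("Valid") === 0 && resolutionLevel === "Premise";
}
```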

There are many customizations that can be made to this demo, such as the example we mentioned earlier about setting up the bias.  Additionally, instead of using the Address Validation International service you could also create an implementation of this demo using our Address Validation US or our Address Validation Canada products.

Want to try this out for yourself? Check out our Address Suggestor video tutorial or contact one of our Account Executives to get the code for this demo – we’ll be glad to help.

Service Objects Processes 3 Billion Transactions and Saves 1.6 Million Trees

In March 2018, Service Objects reached a major milestone in the company’s history by validating over 3 billion transactions. Since 2001, Service Objects has enabled businesses located around the world to validate their contacts’ name, location, phone, email address and device against hundreds of authoritative international data sources, all with the goal of reducing waste and fraud.

The fast-paced growth of the company, which has over 2,500 customers including Amazon, LendingTree, American Express, MasterCard, and Verizon, has been instrumental in reaching this record number of transactions. Processing over 3 billion transactions is of particular significance to Service Objects because corporate conservation is one of the reasons the company was founded. The positive impact that achieving this milestone has had on the environment includes:

  • 193 million total pounds of paper saved
  • 1.6 million trees saved
  • 6 million gallons of oil saved
  • 670 million gallons of water saved
  • 393 million kilowatt hours of energy saved
  • 295 thousand cubic yards of landfill space saved

According to Geoff Grow, Founder and CEO of Service Objects, the idea to start Service Objects came from his desire to apply math and big data to the problem of waste caused by incorrect contact information. Surpassing the 3 billion transaction mark translates into saving 193 million pounds of paper. He would never have dreamed 16 years ago that Service Objects would be able to fulfill its commitment to corporate conservation in such a significant way.

Corporate conservation is embedded in the company’s culture, as evidenced by its commitment to recycling, using highly efficient virtualized servers, using sustainable wind power, purchasing sustainable office supplies, encouraging employees to bike to work and embracing a paperless philosophy. Every employee is conscious of how they can positively impact conservation efforts.

To read the entire Service Objects’ story, click here.

A Daisy Chain of Hidden Customer Data Factories

I published the provocatively titled article "Bad Data Costs the United States $3 Trillion per Year" at Harvard Business Review in September 2016. It is of special importance to those who need prospect, customer, and contact data in the course of their work.

First read the article.

Consider this figure: $136 billion per year. That’s the research firm IDC’s estimate of the size of the big data market, worldwide, in 2016. This figure should surprise no one with an interest in big data.

But here’s another number: $3.1 trillion, IBM’s estimate of the yearly cost of poor quality data, in the US alone, in 2016. While most people who deal in data every day know that bad data is costly, this figure stuns.

While the numbers are not directly comparable, and there is considerable variation around each, one can only conclude that right now, improving data quality represents the far larger data opportunity. Leaders are well-advised to develop a deeper appreciation for the opportunities that improving data quality presents, and to take fuller advantage of them than they do today.

The reason bad data costs so much is that decision makers, managers, knowledge workers, data scientists, and others must accommodate it in their everyday work. And doing so is both time-consuming and expensive. The data they need has plenty of errors, and in the face of a critical deadline, many individuals simply make corrections themselves to complete the task at hand. They don’t think to reach out to the data creator, explain their requirements, and help eliminate root causes.

Quite quickly, this business of checking the data and making corrections becomes just another fact of work life.  Take a look at the figure below. Department B, in addition to doing its own work, must add steps to accommodate errors created by Department A. It corrects most errors, though some leak through to customers. Thus Department B must also deal with the consequences of those errors that leak through, which may include such issues as angry customers (and bosses!), packages sent to the wrong address, and requests for lower invoices.

The hidden data factory

Visualizing the extra steps required to correct the costly and time consuming data errors.

I call the added steps the "hidden data factory." Companies, government agencies, and other organizations are rife with hidden data factories. Salespeople waste time dealing with flawed prospect data; service delivery people waste time correcting flawed customer orders received from sales. Data scientists spend an inordinate amount of time cleaning data; IT expends enormous effort lining up systems that "don't talk." Senior executives hedge their plans because they don't trust the numbers from finance.

Such hidden data factories are expensive. They form the basis for IBM's $3.1 trillion per year figure. But quite naturally, managers should be more interested in the costs to their own organizations than to the economy as a whole. So consider two simple measures: the "Friday Afternoon Measurement," in which a team pulls its last 100 units of work and counts how many records are completely error-free, and the "rule of ten," which holds that it costs ten times as much to complete a unit of work when the input data are flawed as when they are correct.

There is no mystery in reducing the costs of bad data — you have to shine a harsh light on those hidden data factories and reduce them as much as possible. The aforementioned Friday Afternoon Measurement and the rule of ten help shine that harsh light. So too does the realization that hidden data factories represent non-value-added work.

To see this, look once more at the process above. If Department A does its work well, then Department B would not need to handle the added steps of finding, correcting, and dealing with the consequences of errors, obviating the need for the hidden factory. No reasonably well-informed external customer would pay more for these steps. Thus, the hidden data factory creates no value. By taking steps to remove these inefficiencies, you can spend more time on the more valuable work customers will pay for.

Note that in the very near term, you probably have to continue to do this work. It is simply irresponsible to use bad data or pass it on to a customer. At the same time, all good managers know that they must minimize such work.

It is clear enough that the way to reduce the size of the hidden data factories is to quit making so many errors. In the two-step process above, this means that Department B must reach out to Department A, explain its requirements, cite some example errors, and share measurements. Department A, for its part, must acknowledge that it is the source of added cost to Department B and work diligently to find and eliminate the root causes of error. Those that follow this regimen almost always reduce the costs associated with hidden data factories by two thirds and often by 90% or more.

I don’t want to make this sound simpler than it really is. It requires a new way of thinking. Sorting out your requirements as a customer can take some effort, it is not always clear where the data originate, and there is the occasional root cause that is tough to resolve. Still, the vast majority of data quality issues yield.

Importantly, the benefits of improving data quality go far beyond reduced costs. It is hard to imagine any sort of future in data when so much is so bad. Thus, improving data quality is a gift that keeps giving — it enables you to take out costs permanently and to more easily pursue other data strategies. For all but a few, there is no better opportunity in data.

The article above was originally written for Harvard Business Review and is reprinted with permission.

In January 2018, Service Objects spoke with the author, Tom Redman, and he gave us an update on the article above, particularly as it relates to the subject of data quality.

According to Tom, the original article anticipated people asking, “What’s going on?  Don’t people care about data quality?”

The answer is, “Of course they care.  A lot.  So much that they implement ‘hidden data factories’ to accommodate bad data so they can do their work.”  And the article explored such factories in a generic “two-department” scenario.

Of course, hidden data factories take a lot of time and cost a lot of money, both contributing to the $3T/year figure.  They also don’t work very well, allowing lots of errors to creep through, leading to another hidden data factory.  And another and another, forming a sort of “daisy chain” of hidden data factories.  Thus, when one extends the figure above and narrows the focus to customer data, one gets something like this:

I hope readers see the essential truth this picture conveys and are appalled.  Companies must get in front on data quality and make these hidden data factories go away!

©2018, Data Quality Solutions

Is Your Data Quality Strategy Gold Medal Worthy?

A lot of you – like many of us here at Service Objects – are enjoying watching the 2018 Winter Olympics in Pyeongchang, Korea this month. Every Olympics is a spectacle where people perform incredible feats of athleticism on the world stage.

Watching these athletes reminds us of how much hard work, preparation, and teamwork go into their success. Most of these athletes spend years behind the scenes perfecting their craft, with the aid of elite coaches, equipment, and sponsors. And the seemingly effortless performances you see are increasingly becoming data-driven as well.

Don’t worry, we aren’t going to put ourselves on the same pedestal as Olympic medalists. But many of the same traits behind successful athletes do also drive reliable real-time API providers for your business. Here are just a few of the qualities you should look for:

The right partners. You probably don’t have access to up-to-the-minute address and contact databases from sources around the world. Or a database of over 400 million phone numbers that is constantly kept current. We do have all of this, and much more – so you can leverage our infrastructure to assure your contact data quality.

The right experience. The average Olympic skater has invested at least three hours a day in training for over a decade by the time you see them twirling triple axels on TV, according to Forbes. Likewise, Service Objects has validated nearly three billion transactions since we were founded in 2001, with a server uptime reliability of 99.999 percent.

The right strategy. In sports where success is often measured in fractions of a second, gold medals are never earned by accident: athletes always work against strategic objectives. We follow a strategy as well. Our tools are purpose-built for the needs of over 2500 customers, ranging from marketing to customer service, with capabilities such as precise geolocation of tax data, composite lead quality scores based on over 130 criteria, or fraud detection based on IP address matching. And we never stop learning and growing.

The right tools. Olympic athletes need the very best equipment to be competitive, from ski boots to bobsleds. In much the same way our customers’ success is based around providing the best infrastructure, including enterprise-grade API interfaces, cloud connectors and web hooks for popular CRM, eCommerce and marketing automation platforms, and convenient batch list processing.

The right support. No one reaches Olympic success by themselves – every athlete is backed by a team of coaches, trainers, sponsors and many others. We back our customers with an industry-leading support team as well, including a 24×7 Quick Response Team for urgent mission-critical issues.

The common denominator between elite athletes and industry-leading data providers is that both work hard to be the best at what they do and aren’t afraid to make big investments to get there. And while we can’t offer you a gold, silver, or bronze medal, we can give you a free white paper on how to make your data quality hit the perfect trifecta of being genuine, accurate and up-to-date. Meanwhile, enjoy the Olympics!

Why Our Customers Love Data Quality

Every year, February 14th is a time when our thoughts turn to things like true love, flowers, chocolates … and data quality.

In fact, there is more in common between these things than you might think. If you look at the history of Valentine's Day, St. Valentine's intention was to protect his fellow man. In ancient Rome, St. Valentine accomplished this by secretly marrying couples so that the husbands would not have to go to war. This is how his name became synonymous with love and marriage. Along those lines, Service Objects also tries to help our fellow man – admittedly less romantically – by ensuring your data accuracy, automating regulatory compliance, and protecting you from fraud.

Nowadays, our customers love how our data quality solutions solve the following problems:

High quality contact data. When you communicate with your prospects or customers, the cost is directly linked to the accuracy and validity of your contact list. When you automate the quality of this contact data – often your biggest and most valuable data asset – the ROI will warm the hearts of the toughest CFOs.

Lead validation. Does she love me or does she not? Better to find out early in the relationship whether she gave you a fake email address, bad contact information, or is otherwise giving you the slip, with validation tools that check over 130 data points to give you a numerical rating of lead quality.

Delivery accuracy. Nothing will make your customers fall out of love with you quicker than misdirected deliveries – even though in the US alone, 40 million of them don’t help matters by changing their addresses every year, while many others mistype their address at the time they order. When you automatically verify these addresses against continually updated postal databases, you help ensure a good relationship.

Compliance strategies. When government regulators come calling, they aren’t bringing you flowers. New rules on consumer privacy in the US and Europe have changed the game of outbound marketing, including stiff financial penalties for non-compliance, and sales tax policies are constantly changing. Automated compliance verification tools can help prevent problems from happening in the first place, and also provide quantifiable proof of your efforts.

Fraud prevention. Cupid isn’t the only one aiming his arrows at you. Fraudsters are constantly trying to separate you from your inventory and money, particularly during your busiest periods. We can help with solutions ranging from address, BIN, email and IP validation to tools that provide you an overall order quality score, to help keep the bad guys out.

Finally, it turns out we have one other connection to St. Valentine. In Italy, he is still commemorated by a charm known as St. Valentine’s Key, which is supposed to unlock the hearts of lovers. We have a key for you as well: a free trial key to any of our API products, yours for the asking. Happy Valentine’s Day!

How to Convince Your Boss Your Business Needs a Data Quality Solution

Many developers come to Service Objects because they recognize their company has a need for a data quality solution. Maybe you were tasked by a manager to help find a solution, perhaps someone in Marketing mentioned one of their pain points, or maybe you just naturally saw opportunities for improvement. Whatever the reason was that got you looking for a data quality solution, you most likely need to get someone in management to sign off on your solution. Often, especially in larger organizations, this can be a challenge. So how are others accomplishing this?

Service Objects has been in business for over 16 years and in that time frame, we have helped potential customers just like you get sign off for our services. In addition to being armed with information from your Account Executive on how you can achieve ROI for the service you have chosen, the following recommendations are the ones we have found to be the most useful.

Test out our services

If you haven’t already, get a free trial key for the service you are interested in.  We allow you to try out as many of our 23 data quality solutions as you would like.  My blog, Taking Service Objects for a Test Drive, can help you figure out which method of testing our services is best for you.

One of the values in testing our services is that it reinforces your own understanding of our products. Testing allows you to quickly visualize and show others in your organization the benefits of utilizing a data quality solution. Showing your boss an actual test integration you did with our API is more powerful than just explaining how it works. Or, if you decide to run a test batch with us, you will see firsthand how we can improve your contact data.

You can also leverage our quick lookup page: set up a screen share with a larger team and run live examples for them. Let your team choose some data to run live and see the results firsthand.

Data summaries

Data summaries are a great visual aid for seeing how our services help. Take a subset of any of your data sets and send it over for a batch test. Each batch test includes a comprehensive summary, showing how we validated your data and what the results mean. Your Account Executive can review this and guide you through the results. Having a detailed report that clearly shows the current state of your data is a great tool to have when you are ready to go forward with any type of presentation to your team. A recent blog shows what one of our reports includes. All data has a story, and with our batch summaries we try to tell that story.

Integration plan

Having an integration plan is an asset when getting sign off for our services.  You are bound to get some questions after showing your plan, such as “who?” “how?” and “how long?”.  It is a good idea to be prepared to answer these questions.

Integrating with us is not a complicated task, and we have several resources to guide you. Our sample code can quickly get you up and running, and we can even customize it for your specific use case. We also have best practices available to help you with your integration. For instance, you may need to check whether there are any network or firewall changes your IT team needs to complete. If you are switching from another solution or vendor, you may need to sync your integration timeline with turning off the old service and turning ours on. And never forget about testing, which is one of the most important parts of any integration. Finally, you may need to account for any training you want to provide to your team about how to work with our data or your new system. Having answers to these questions can help arm you with everything you need to keep the ball rolling.

Customer service and support

Support. It is just one word, but it is key – sometimes even the most important word. You can purchase a top-notch product or service, but if you can't get adequate customer support when you need it, your purchase loses most of its value. With Service Objects, this is never the case. In fact, customer service is one of our core values. We care about your success and truly want you to succeed. If you want to get buy-in from your team, it is very important to discuss our customer service expertise. Having a dedicated customer support resource, composed of engineers, will ease your transition to our offerings and, in the end, will save your organization significant time and money.

Value proposition

At Service Objects, our value proposition is very clear. Our data quality solutions ensure your data is as genuine, accurate and up-to-date as it can possibly be. A large percentage of our customers have been with us for many years, and this is due to the effort we put into the quality, accuracy, speed, reliability and expertise we deliver.

Getting buy-in from colleagues

It is always helpful to get people on your side when you want to present a new service to your company. Talking it over with colleagues and managers is great, and many of the things written in this blog can help get them on your side to support your idea. It also goes a long way in educating your team on the data quality issues you are facing and how a Service Objects solution can alleviate many of those problems.

I hope some of these ideas or tips can help you along the way in presenting our services to your team or manager.  The last thing you want to do is have a conversation with your boss without adequate preparation.  Your ideas are good, so take your time and plan the right action in getting them implemented.  In the end, the results our services will provide for your company will be impactful and worth your time.

Visit our product page to get started testing any of our services.

To Be Customer-Centric, You Have To Be Data-Centric

In today’s fast-paced world, customers have become more demanding than ever before. Customer-centric organizations need to build their models after critically analyzing their customers, and this requires them to be data-centric.

Today, customers expect companies to be available 24/7 to solve their queries. They expect companies to provide them with seamless information about their products and services. Not getting such features can have a serious impact on their buying decision. Customer-centric organizations need to adopt a data-centric approach not just for targeted customer interactions, but also to survive competition from peers who are data-centric.

Customer-centric organizations need to go data-centric


Today, customers enquire a lot before making a decision. And social media enquiries are the most widely used by customers. A satisfactory experience is not limited to prompt answers to customer queries alone. Customers need a good experience before making the purchase, during the installation of a product or deployment of a service, and even after making the purchase. Thus, to retain a customer's loyalty and to drive in regular profits, companies need to be customer-centric. And that can only happen if companies adopt a data-centric approach.

Big data comes in handy in developing a model that gives optimum customer experience. It helps build a profound understanding of the customer, such as what they like and value the most and the customer lifetime value to the company. Besides, every department in a data-centric organization can have the same view about its customer, which guarantees efficient customer interactions. Companies like Amazon and Zappos are the best examples of customer-centric organizations that heavily depend on data to provide a unique experience to their customers. This has clearly helped them come a long way.


Companies can collect a lot of information that can help them become customer-centric. Here are some ways in which they can do so:

  • Keep a close eye on any kind of new data that could help them stay competitive, such as their product prices and the money they invest in logistics and in-product promotion. They need to constantly monitor the data that tells them about the drivers of these income sources.
  • Reach out to customers from different fields with varying skill sets to derive as many insights as possible so as to help make a better product.
  • Develop a full-scale data project that will help them collect data and apply it to make a good data strategy and develop a successful business case.

Today, there is no escaping customer expectations. Companies need to satisfy customers for repeat business. Customer satisfaction is the backbone of selling products and services, maintaining a steady flow of revenue, and for the certainty of business. And for all of that to happen, companies need to gather and democratize as much information about their customers as possible.

Reprinted with permission from the author. View original post here.

Author: Naveen Joshi, Founder and CEO of Allerin

What Role Does Data Quality Play in the General Data Protection Regulation (GDPR)?

With the arrival of 2018, many businesses are realizing the deadline to comply with the General Data Protection Regulation (GDPR) is looming in the not too distant future. In fact, according to Forrester Research, 80% of firms affected by the GDPR will not be able to comply with the regulation by the May 2018 deadline. GDPR has many different components that a business will need to meet to achieve compliance; one of which is making sure contact data is kept accurate and up-to-date.

Understanding the important role that data quality plays in complying with GDPR, Service Objects will host a webinar to provide information to those looking to get started. This live, informational webinar on the topic of “The Role of Data Quality in the General Data Protection Regulation (GDPR),” will take place on January 24, 2018 at 9:00am.

The interactive event will feature data quality expert Tom Redman, President of Data Quality Solutions, who will provide an overview of the regulation recently enacted by the EU, and why US businesses need to pay attention. He will also discuss how engaging in data quality best practices not only makes it easier to achieve compliance, but also makes good business sense.

According to Geoff Grow, Founder and CEO of Service Objects, businesses can incur fines of up to 20 million Euros or 4% of a company's annual revenue for not complying with the GDPR. The recently enacted EU regulation requires that every effort is taken to keep contact data accurate and up-to-date. Service Objects' data quality solutions can help businesses meet this requirement, as well as identify which contacts reside in the EU and therefore fall under the GDPR.

To register for this complimentary webinar, visit our website. All those who register will also receive a link to the recording after the event.

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Salesforce Data Quality Tools Integration Series – Part 1 – Apex Insert Trigger

With Salesforce being the dominant platform in the Customer Relationship Management (CRM) universe, we are excited to demonstrate how our tools can benefit the quality of your contact data in this robust system. Salesforce is highly customizable, and one of the great things about it is that you don't need to be a developer to create a rich user experience on the system, although being a developer will help with understanding some of the coding concepts involved with some of the components. With all the data in your organization, don't you want it to be as clean, accurate, and monetizable as possible? The Service Objects suite of validation API tools is here to help you achieve those goals. Our tools make your data better so you can gain deeper insights, make better decisions and achieve better results.

This blog will demonstrate various ways that a Salesforce administrator or developer can boost the quality of their data. We will look at flows, triggers, classes, Visualforce components, Lightning components, and more as we demonstrate the ease of integration and the power of our tools in this series.

To get started, we will focus on demonstrating an Apex trigger with a future class.  I will show you the code, part by part, and discuss how it all fits together and why.  You will also learn how to create an Apex trigger, if you didn’t already know.

Apex triggers allow you to react to changes on a Salesforce object. For our purposes, those objects will typically be ones that carry some kind of contact information, such as a name, address, email, phone, or IP address, in various combinations. In this example, we will use the Contact Salesforce object and specifically the mailing address fields, though the Account object would have made just as much sense to demo. Service Objects has services for validating United States, Canadian and international addresses. In this example, I am going to use our US address validation API.

We are going to design this trigger to work with both a single Contact insert and bulk Contact inserts.  This trigger will have two parts: the Apex trigger and an Apex future class.  We will use the trigger to gather all of the Contact Ids being inserted and the future class will process them with a web service call to our address validation API.

It is important to note a couple of things about the future method and why it is used. First, using a future call tells the platform to process the method at the next convenient time for the platform. Remember that Salesforce is a multi-tenant platform, meaning organizations on the platform share, among other things, processing resources. With that in mind, the platform governs processing so that everyone on the platform gets the same experience and no one organization can monopolize system resources. Future calls typically get initiated very quickly, but there are no guarantees on timing; you can, however, be sure that the future method will run soon after it is called. Second, callouts to a web service cannot be executed within a trigger, so the future method acts as a proxy for that functionality. There are many more details and ways a web service can be called, and you can dive deep into the topic in the Salesforce documentation. If for no other reason, this pattern forces you to separate the web service call from your trigger, in turn exposing your future call to other code you may want to write.

Once we have finished the trigger and the future class, we will test the functionality, and then you will have some working code ready to deploy from your sandbox or developer edition to your live org. But wait – don't forget to write those unit tests and get over 75% coverage (shoot for 100%). If you don't know what I am talking about with unit tests, I suggest you review the documentation on Salesforce.

The results of the future method will update mailing address fields and custom fields on the Contact object.  For this example, here are the fields you will need to create and their respective data types.  We will not need these until we get to the future method but it is a good idea to get them created and out of the way first.

  Field name | Internal Salesforce name | Type | Service Objects field name
  dotsAddrVal_DPVScore | AddrValDPVScore__c | Number(1, 0) | DPV
  dotsAddrVal_DPVDescription | AddrValDPVDescription__c | Text(110) | DPVNotesDesc
  dotsAddrVal_DPVNotes | AddrValDPVNotes__c | Long Text Area(3000) | DPVNotesDesc
  dotsAddrVal_IsResidential | AddrValIsResidential__c | Checkbox | IsResidential
  dotsAddrVal_ErrorDescription | AddrValErrorDescription__c | Long Text Area(3000) | Desc
  dotsAddrVal_ErrorType | AddrValErrorType__c | Text(35) | Type

The last thing we will need to set up before we get started is registering the Service Objects endpoint with Salesforce so that we can make web service calls to the address validation API. The page to add the URL to is called "Remote Site Settings" and can be found at Home->Settings->Security->Remote Site Settings. This is the line I added for this blog.


Be aware that this will only work with trial keys. With a live/production key, you will want to add entries for both the primary production endpoint and the backup endpoint; we'll explain more about that later. We named this one ServiceObjectsAV3, but you can name it whatever you want.

Let’s get started with the trigger code. The first thing needed is to set up the standard signature of the call.

The method will be acting on the Contact object and will execute after an insert.  Next, we will loop through the Contact records that were inserted pulling out all the associated Contact Ids.  Here you can add logic to filter out contacts or implement other business logic before adding the contact Id to the Id list of contacts to update.

Once we have gathered all the Ids, we will send them to the future method, which expects a list of Ids.

As you can see, this will work on one-off inserts or bulk inserts.  Since there is not much code to this trigger, I’ll show you the entire sample code for it here.
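The original code screen shot is not reproduced here, but a minimal sketch of such a trigger might look like the following (the trigger and class names are illustrative assumptions, not the original code):

```apex
// Fires after Contacts are inserted, singly or in bulk.
trigger ValidateContactAddress on Contact (after insert) {
    List<Id> contactIds = new List<Id>();
    for (Contact c : Trigger.new) {
        // Add any filtering or other business logic here before
        // queuing the contact for validation.
        contactIds.add(c.Id);
    }
    if (!contactIds.isEmpty()) {
        // Hand off to the @future method, which makes the callout.
        ContactAddressValidator.validateAddresses(contactIds);
    }
}
```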

So that was painless. Let’s move on to the future class, where we will see how easy it is to make the call to the web service as well.

Future methods need to be static and return void, since they do not return any values. They should also be decorated with the @future(callout=true) annotation.

It will be more efficient to update the newly inserted records all at once instead of one at a time, so with that in mind we will store the results from our address validation web service in a new list of Contacts.

Based on the results from the service, we will update the mailing address on the Contact record and/or the DPV note descriptions or errors, as well as the Is Residential flag. Some of these fields are standard on the Contact object and some are the custom ones we created at the beginning of this project. Here is a sample of initiating the loop over the Contact Ids that we passed into this method from the trigger, followed by the SOQL call to retrieve the details.
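As a rough sketch (the class, method, and variable names are assumptions), the skeleton of the future method with its loop and per-Id SOQL query might look like:

```apex
public class ContactAddressValidator {
    @future(callout=true)
    public static void validateAddresses(List<Id> contactIds) {
        // Collect updated records so we can do one bulk DML at the end.
        List<Contact> contactsToUpdate = new List<Contact>();
        for (Id contactId : contactIds) {
            Contact c = [SELECT Id, MailingStreet, MailingCity,
                                MailingState, MailingPostalCode
                         FROM Contact
                         WHERE Id = :contactId];
            // ... call the address validation API and apply the
            //     results to c, then add c to contactsToUpdate ...
        }
        update contactsToUpdate;
    }
}
```

Note that production code would typically retrieve all the Contacts in a single SOQL statement (`WHERE Id IN :contactIds`) rather than querying inside the loop, to stay well within Salesforce governor limits.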

In case you are wondering why we didn’t just create a list of Contacts and send those in from the trigger instead of building up a list of Contact Ids: the reason is a limitation of @future calls. You can only pass in primitive types such as Ids, strings, integers and so on. So we went with a list of Ids, since in Salesforce Id is its own primitive type.

Demonstrated in the code, which is shown in the next screen shot, are our best practices for service failover, to help ensure 100% uptime when making calls to the API. Note that with a live/production key for our API, the first URL would need to point to the production endpoint, and the second one, the one inside the “if” statement, to the backup production endpoint.

I left both of these pointing to the trial endpoint because most people starting out will be testing their integration with a free trial key. In the screen shot you will see that I have added the API license key to the call, “ws-72-XXXX-XXXX”. This key is fictitious; you will need to replace it with your proper key, based on the live/production or trial endpoint your key is associated with. A best practice suggestion is to “hide” the key in a custom variable or custom configuration instead of exposing it here in this method.
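The failover pattern can be sketched roughly as follows; the endpoint constants and the exact error check are assumptions here, so consult the developer guide for the authoritative best-practice code:

```apex
// Try the primary endpoint first; on any failure, retry against the
// backup endpoint. With a trial key both constants point to the trial
// environment; with a live key, use the production and backup
// production URLs instead.
HttpResponse res;
Http http = new Http();
HttpRequest req = new HttpRequest();
req.setMethod('GET');
req.setEndpoint(PRIMARY_ENDPOINT + queryString); // includes LicenseKey
try {
    res = http.send(req);
} catch (Exception e) {
    res = null;
}
if (res == null || res.getStatusCode() != 200) {
    req.setEndpoint(BACKUP_ENDPOINT + queryString);
    res = http.send(req);
}
```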

Once we get a response back from the call to the API and everything is okay, we set up some variables and start parsing the response. There are several ways to parse JSON, and certainly better ones than the approach described here, but this is not an exercise in parsing JSON; it is an example of how to integrate. In this example, we loop through the JSON looking for the field names we are interested in. We are looking for:

  • Address1
  • City
  • State
  • Zip
  • DPV
  • DPVDesc
  • DPVNotesDesc
  • IsResidential
  • Type
  • Desc
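
One simple way to pick these fields out of the response is Apex’s streaming JSON parser. This is a rough sketch, not the exact code from the screen shots:

```apex
JSONParser parser = JSON.createParser(res.getBody());
String address1, city, state, zip, dpv;
while (parser.nextToken() != null) {
    if (parser.getCurrentToken() == JSONToken.FIELD_NAME) {
        String fieldName = parser.getText();
        parser.nextToken(); // advance to the field's value
        if (fieldName == 'Address1')       { address1 = parser.getText(); }
        else if (fieldName == 'City')      { city = parser.getText(); }
        else if (fieldName == 'State')     { state = parser.getText(); }
        else if (fieldName == 'Zip')       { zip = parser.getText(); }
        else if (fieldName == 'DPV')       { dpv = parser.getText(); }
        // ...and likewise for DPVDesc, DPVNotesDesc, IsResidential,
        // Type and Desc.
    }
}
```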

But the service returns many other valuable fields, which you can learn about from our comprehensive developer guide found here, along with other helpful information. Remember, if you do end up using more fields from the service and you want to display them or save them in your records, you will need to create corresponding custom fields for them. The next screen shot is just the part of the code that pulls the data from the response.

In practice, you may want to base the decision to update the original address on more criteria, but in this example we are basing it on the DPV score alone. You can find out more about the different DPV codes back in the documentation. When the DPV value is 1, the service has returned a valid mailing address. Corrections to the address may have occurred, so it is best to update the address fields on the Contact record, and that is what we are doing here just before adding the updated Contact to our new list of Contacts.
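In sketch form (variable names are assumed, and the custom field names match the ones created earlier), that decision looks something like:

```apex
// A DPV score of 1 means the service returned a valid, deliverable
// address, so copy the (possibly corrected) address back to the record.
if (dpv == '1') {
    c.MailingStreet = address1;
    c.MailingCity = city;
    c.MailingState = state;
    c.MailingPostalCode = zip;
}
// Store the validation details in the custom fields created earlier;
// dpvDesc and dpvNotesDesc are parsed from the response the same way.
c.AddrValDPVScore__c = Integer.valueOf(dpv);
c.AddrValDPVDescription__c = dpvDesc;
c.AddrValDPVNotes__c = dpvNotesDesc;
contactsToUpdate.add(c);
```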

Once we have looped through the entire list of Ids that we sent into this method, we are ready to do the update in Salesforce. Before this point, nothing has been saved.
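That final save is a single bulk DML statement (`contactsToUpdate` is the assumed name of the list of Contacts built up along the way):

```apex
// One bulk update persists every validated Contact at once; nothing
// is saved to Salesforce until this statement runs.
if (!contactsToUpdate.isEmpty()) {
    update contactsToUpdate;
}
```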

And there you have it: go ahead and add some new contacts with addresses and test it out. Over at the Contacts tab, I added a new contact and then refreshed the page to see the results. I purposely made an error in the address so we can see more of the validation results.

The address we added is for our office and there are several errors in the street name, city and zip code.  Let’s see if our system gets it right.

The address validation API fixed the address and returned that the fixed address is the correct address to use based on the inputs.  Next, we will demonstrate a bad, non-salvageable address. You will see more than a few things wrong here.

There were so many problems that the address was not salvageable at all.

Let’s try one more, but this time instead of a completely bad address, we will add one that is partially valid but missing key parts.

The input address is still not good enough to be deliverable, but this time we were able to get details back that can help with fixing the problems.

From the results, we can see that the address exists, but a unit number or some other key component is missing for full delivery by the USPS.

In conclusion, there are a few things to take into consideration when integrating data quality tools into Salesforce.

First, you need to do more error checking. These were simple examples to show how to integrate our service, and the error checking was the bare minimum, not something I would expect to see in a production environment. So, please…please, more error checking.

Second, don’t forget to write the unit tests and try to get 100% coverage. Salesforce only requires 75% to be able to deploy your code, but we highly recommend striving for 100%; it is definitely attainable. Salesforce has this requirement for several reasons, one being that when Salesforce makes updates to their platform, they can run all the unit tests in all the organizations on the platform and ensure they are not going to break anyone’s system. It is just good practice to do so. There is plenty of documentation on Salesforce that will help you down the right path when it comes to testing.

Lastly, we didn’t make any provision in the code for the situation where a contact being inserted doesn’t have an address to validate, or enough address components. Clearly, you would want to add a check to the code to see if you have a valid combination of data that will allow for an address validation against our API. At minimum, one of the following combinations of fields is required:

  • Combination 1
    • Address
    • City
    • State
  • Combination 2
    • Address
    • Zip code
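A guard along these lines (using the standard Contact mailing address fields; illustrative only) could be added before attempting validation:

```apex
// Require Address + City + State, or Address + Zip, before calling
// the validation API; skip the contact otherwise.
Boolean hasMinimumInputs =
    String.isNotBlank(c.MailingStreet) &&
    ((String.isNotBlank(c.MailingCity) && String.isNotBlank(c.MailingState))
        || String.isNotBlank(c.MailingPostalCode));
if (!hasMinimumInputs) {
    continue; // not enough data to validate this contact
}
```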

You can find the sample code for this blog on our web site with the file names TestTrigger.trigger and TestUtil.cls at this link.

The Direct and Indirect Costs of Poor Data Quality

Imagine that your company runs an online ad for a product. But when your customer gets to your website, the product has actually been discontinued. And from then on, every time the customer surfs the web, they are constantly served banner ads for this non-existent product.

This scenario really happens more often than you think. And it is a perfect example of marketing at its worst: spending your budget to lose customers, ruin your service reputation, and destroy your brand.

We often talk about data quality on this blog, but this time I would like to focus on the results of lack of data quality. In the case above, poor linkages between inventory data and marketing lead to a bad customer experience AND wasted marketing dollars. Much the same thing is true with our specialties of contact data and marketing leads: bad data leads to a wellspring of avoidable costs.

First there are tangible costs. Bad leads and incorrect customer addresses lead to specific, measurable outcomes. Wasting human effort. Throwing away precious marketing capital. Penalties for non-compliance with telephone and email contact laws. Re-shipment costs and inventory losses. According to one Harvard Business Review article, the real costs of poor data quality now exceed $3 trillion per year in the United States alone.

Then there is the part few people pay enough attention to: the indirect costs of poor data quality. In a recent piece for CustomerThink, data management expert Chirag Shivalker points to factors such as sales that never materialize, potential customers who turn away, and the subtle loss of repeat business. Whether it is a misdirected marketing piece, an unwanted pitch on someone’s cell phone, or a poor customer experience, some of the biggest consequences of poor data quality are never quantified – and, perhaps more importantly, never noticed.

Finally there is the fact that data, like your car, is a depreciating asset. Even the most skillfully crafted database will degrade over time. Contact information is particularly vulnerable to decay, with estimates showing that as much as 70% of it goes bad every year. A recent article from insideBIGDATA put the scope of this in very stark terms: each and every hour, over 500 business addresses, 800 phone numbers and 150 companies change – part of a growing ball of data that, per IDC, will swell to over 160 zettabytes (for the uninitiated, a zettabyte is one sextillion, or 10 to the 21st power, bytes). And the bill for validating and cleaning up this data can average $100-200K or more for the average organization. So an ongoing approach is needed to preserve the asset value of this data, as well as prevent the costs and negative consequences of simply letting it age.

A recent data brief from SiriusDecisions breaks down these costs of poor data quality into four key areas: hard costs, productivity costs, hidden costs, and opportunity costs. And it is not like these costs are exactly a surprise: according to Accenture, data inaccuracy and data inconsistency are the leading concerns of more than three-quarters of the executives they surveyed. Their solution? Better use of technology, combined with process improvements such as formal data governance efforts.

The people in our original example above probably had no idea what kind of lost business and negative brand image they were creating through poor data quality. And ironically, I would say let’s keep it that way. Why? Because it is a competitive advantage for people like you and me who pay attention to data quality – or better yet, build processes to automate it. If you are ready to get started, take a look at some of our solutions and let’s talk.

Service Objects’ Application Engineers: Helping You Get Up and Running From Day 1

At Service Objects, one of our Core Values is Customer Service Above All. As part of this commitment, our Application Engineers are always available to answer any technical questions from prospects and customers. Whether users are beginning their initial investigation or need help with integration and deployment, our Engineers are standing by. While we continually make our services as easy to integrate as possible, we’d like to touch on a few common topics that are particularly helpful for users just getting started.

Network Issues

Are you experiencing networking issues while making requests to our web services? It is a very common problem for outbound requests to be limited by your firewall, and a simple rule update can often solve the issue. When matters extend beyond simple rule changes, we are more than happy to schedule a meeting between our networking team and yours to get to the root cause and resolve it.

Understanding the Service Outputs

Another common question revolves around the service outputs: how they should look and how they can be interpreted. From a high level, it is easy to understand what the service can provide, but when it comes down to parsing the outputs, it can sometimes be a bit trickier. Luckily, there is documentation for every service and each of its operations. Our developer guides are the first place to check if you are having trouble understanding how individual fields can be interpreted and applied to your business logic. Every output has a description that provides insight into what that field means. Beyond the documentation, our Application Engineering team is available via multiple channels to answer your questions, including email, live chat, and phone.

Making the Move from Development to Production

Eventually, everyone who moves from being a trial user to a production user goes through the same steps. Luckily for our customers, moving code from development to production is as easy as changing two items.

  • The first step is swapping out your trial license key for a production key.
  • The second step is to point your web service calls from our trial environment to our production environment. Our trial environment mirrors the exact outputs that you will find in production so no other code changes are necessary.

We understand that, even though we say it is easy, making the move to production can be daunting. That is why we are committed to providing your business with 24/7/365 technical support. We want the process to go as smoothly as possible and members of our team are standing by to help at a moment’s notice.

We have highlighted only a few broad cases that we have handled throughout our 16 years of providing genuine, accurate, and up-to-date data validation. Many technical questions are unique and our goal is to tackle them head on. If a question arises during your initial investigation, integration, move to production, or beyond, please don’t hesitate to contact us.

Baseball and Data Quality: America’s National Pastimes

By the time October rolls around, the top Major League baseball teams in the country are locked in combat, in the playoffs and then the World Series. And as teams take the field and managers sit in the dugout, everyone has one thing on their mind.

Data.
Honestly, I am not just using a cheap sports analogy here. Many people don’t realize that before my current career in data quality, I was a young pitcher with a 90+ MPH fastball. I eventually made it as far as the Triple-A level of the Pittsburgh Pirates organization. So I know a little bit about the game and how data plays into it. We really ARE thinking about data, almost every moment of the game.

One batter may have a history of struggling to hit a curve ball. Another has a good track record against left-handed pitching. Still another one tends to pull balls to the left when they are low in the strike zone. All of this has been captured as data. Have you noticed that position players shift their location for every new batter that comes to the plate? They are responding to data.

Long before there were even computers, baseball statisticians tracked everything about what happens in a game. Today, with real-time access to stats, and the ability to use data analytics tools against what is now a considerable pool of big data, baseball has become one of the world’s most data-driven sports. The game’s top managers are distinguished for what is on their laptops and tablets nowadays, every bit as much as for who is on their rosters.

And then there are the people watching the game who help pay for all of this – remember, baseball is fundamentally in the entertainment business. They are all about the data too.

A recent interview article with the CIO of the 2016 World Champion Chicago Cubs underscored how a successful baseball franchise leverages fan data at several levels: for example, tracking fan preferences for an optimal game experience, analyzing crowd flow to optimize the placement of concessions and restrooms, and preparing for a rush of merchandise orders in the wake of winning the World Series (although, as a lifelong Cubs fan, I realize that they’ve only had to do that once so far since 1908). For any major league team, every moment of the in-game experience – from how many hot dogs to prepare to the “walk up” music the organist plays when someone comes up to bat – is choreographed on the back of customer data.

Baseball has truly become a metaphor for how data has become one of the most valuable business assets for any organization – and for a competitive environment where data quality is now more important than ever. I couldn’t afford to pitch with bad data on opposing players, and you can’t afford to pursue bad marketing leads, ship products to wrong customer addresses, or accept fraudulent orders. Not if your competitors are paying closer attention to data quality than you are.

So, pun intended, here’s my pitch: look into the ROI of automating your own data quality, in areas such as marketing leads, contact data verification, fraud prevention, compliance, and more. Or better yet, leverage our demographic and contact enhancement databases for better and more profitable customer analytics. By engineering the best data quality tools right into your applications and processes, you can take your business results to a new level and knock it out of the park.

Testing Through Batches or Integration: At Service Objects, It’s Your Choice

More often than not, you have to buy something to really try it.  At Service Objects, we think it makes more sense to try before you buy.  We are confident that our services will exceed expectations and are happy to have prospects try them before spending any money.  We have been doing this from the day we opened our doors.  With Service Objects, you can sign up for a free trial key for any of our services and do all your testing before spending a single cent.  You can learn about the multiple ways to test drive our services from our blog, “Taking Service Objects for a Test Drive.” Today, however, I am focusing on batch testing and trial integration.

Having someone give their best explanation of purpose or functionality can be worthwhile but, as the saying goes, a picture is worth a thousand words.  If you want to know how our services work, the best way is simply to try them out for yourself.  With minimal effort, we can run a test batch for you and have it turned around within a couple of hours, often less.  Another way we encourage prospects to test is by directly integrating our API services into their systems.  That way you see exactly how the services behave and get a better feel for our sub-second response times.  The difference between the two approaches: a test batch shows you the results, while testing through integration demonstrates how the system behaves in delivering those results.


Test batches are great.  They give you an opportunity to see the results from the service firsthand, including all the different fields we return.  Our Account Executives are happy to review the results in detail, and you always have the support of the Applications Engineering team to help you along.  With test batches, you can quickly see that a lot of information is returned regardless of the service you are interested in.  Most find it is far more information than expected, and clients often find that the additional information helps them solve other problems beyond their initial purpose.  Another aspect that becomes clearer is the meaning of the fields: you get to see them in their natural environment and gain a better understanding than strict academic definitions provide.  Lastly, it is important to see how your own data fares through the service; it is far more powerful to show how your data can be improved than to discuss it conceptually.  That is where our clients get really excited about our services.


Testing through integration is a solid way to gain an understanding of how the service behaves and what its results look like.  It is a great way to get a feel for the responses that come back and how long they take.  More importantly, you can identify and fix issues in your process long before you start paying for the service.  Plus, our support team is here to assist you through any part of the integration process.  Our services are built to be straightforward and simple to integrate, and most developers complete the work in a short period of time.  Regardless, we are always here to help.  Although we highly recommend prospects run their own records through the service, we also provide sample data to help you get started.  The important part is that you have a chance to try the service in its environment before making a commitment.

Going forward with either of these approaches will quickly demonstrate how valuable our services are. Even more powerful is when you combine the two testing procedures with your own data for the best understanding of how they will behave together.

With all that said, if you’re still unsure how to best begin, just give us a call at 805-963-1700 or sign up for a free trial key and we’ll help you get started.

CASS and DPV: A Higher Standard for Address Accuracy

If you market to or serve people by mail, there are two acronyms you should get to know: CASS and DPV. Here is a quick summary of both of them:

  • CASS stands for the Coding Accuracy Support System™. As the name implies, its function is to support address verification software vendors with a measurable standard for accuracy. It also represents a very high bar set by the US Postal Service to ensure that address verification meets very strict quality standards.
  • DPV stands for Delivery Point Validation™. This is a further capability supported under CASS, making sure that an address is deliverable.

You may ask, “If an address is accurate, why do we have to check to make sure it is also deliverable?” The answer lies in the broader definition of what an address is – a placeholder for a residence or business that could receive mail. Not every address is, in fact, deliverable: for example, 45 Elm Street might be someone’s residence, while 47 Elm Street might currently be a vacant lot – or not exist at all. Another example is multi-unit dwellings that share an address: 100 State Street, Apartment 4 may be deliverable, while 100 State Street, Apartment 5 may not exist. So you want to ensure addressability AND deliverability for every address within your contact database.

Now, here is why you need to care about CASS and DPV in particular:

Rigorous. CASS certification is truly the data quality equivalent of Navy SEAL training. The first step is an optional (Stage I) test that lets developers run a sample address file for testing and debugging purposes. Next is Stage II, a blind 150,000-address test that only returns scores from USPS, not results. To obtain CASS certification, these scores must meet strict passing criteria ranging between 98.5% and 100% in specific categories.

Recurring. CASS certification is not a lifetime badge of honor. The USPS requires software providers to renew their certification every year, with a fresh round of testing required. Service Objects has not only been continuously CASS-certified for much of the past decade, but has also forged a unique partnership with USPS to update and refresh its CASS-certified address data every two weeks.

Reliable. DPV capabilities are based on the master list of delivery points registered with the USPS, which stores actual deliverable addresses in the form of an 11-digit code, incorporating data such as address, unit, and ZIP+4 codes. While the codes themselves can (and do) change frequently, the real key in address deliverability is having up-to-date access to current USPS data. Service Objects licenses DPV tools as an integral part of its address validation capabilities.

Our CASS-certified address engine and continuously updated USPS address data are two of the critical components behind our proprietary address database. Whether you run your addresses through our USPS address validation API in your application or use a convenient batch process, those addresses are instantly compared, validated, corrected, and/or appended to provide accurate results.

If you’ve read this far, it is probably clear that CASS certification and DPV capabilities are critically important for managing your contact data quality. So be sure to partner with a vendor that maintains continuous CASS certification with full support of DPV. Like Service Objects, of course. Contact us to learn what we can do for your contact addresses and marketing leads today!

Getting the Most Out of Data-Driven Marketing

How well do you know your prospects and customers?

This question lies at the heart of what we call data-driven marketing. Because the more you know about the people you contact, the better you can target your offerings. Nowadays smart marketers are increasingly taking advantage of data to get the most bang from their marketing budgets.

Suppose that you offer a deal on a new razor, and limit the audience to adult men. Or take people who already eat fish at your restaurant on Tuesdays, and promote a Friday fish fry. Or laser-target a new lifestyle product to the exact demographic group that is most likely to purchase it. All of these are examples where a little bit of data analytics can make a big difference in the success and response rate of a marketing campaign.

According to UK data marketing firm Jaywing, 95% of marketers surveyed personalize their offerings based on data, although less than half currently measure the ROI of these efforts, and less than 10% take advantage of full one-to-one cross-channel personalization. But these efforts are poised to keep growing, notes their Data Management Practice Director Inderjit Mund: “Data availability is growing exponentially. Adopting best practice data management is the only way marketers can maintain a competitive advantage.”

Of course, data-driven marketing can also go sideways. For example, bestselling business author and television host Carol Roth once found herself peppered with offers for baby merchandise – including an unsolicited package of baby formula – even though she is not the least bit pregnant. Her suspicion? Purchasing baby oil regularly from a major chain store, which she uses in the shower, made their data wonks mistakenly think that she was a new mother. Worse yet, this kind of targeted marketing also led the same chain to unwittingly tip off a father that his daughter was pregnant.

This really sums up the promise, and the peril, of using data to guide your marketing efforts. Do it wrong, and you not only waste marketing resources – you risk appearing inept, or worse, offending a poorly targeted segment of your market base. But when you do it right, you can dramatically improve the reach and efficiency of your marketing for a minimal cost.

This aligns very closely with our view of a marketing environment that is increasingly fueled by data. Among the best practices recommended by Jaywing for data-driven marketing, data quality is front and center with guidelines such as focusing on data management, having the right technology in place, and partnering with data experts. And they are not alone: according to a recent KPMG CEO survey, nearly half of respondents are concerned about the integrity of the data on which they base decisions.

There is a clear consensus nowadays that powering your marketing with data is no longer just an option. This starts with ensuring clean contact data, at the time of data entry and the time of use. Beyond that, smart firms leverage this contact data to gain customer insight in demographic areas such as location, census and socioeconomic data, to add fuel to their address or email-based marketing. With cost-effective tools that automate these processes inside or outside of your applications, the days of scattershot, data-blind marketing efforts are quickly becoming a thing of the past.

The Power of DOTS FastTax

What is DOTS FastTax?

The DOTS FastTax web service provides sales and use tax rate information for all US areas, based on several different inputs. The operations offered within FastTax take input parameters such as address, city, state, and postal code. The service also provides an operation that takes a Canadian province and returns the proper Canadian tax rate information.

How can it be used?

At its core, FastTax is an address-to-tax-rate lookup system. You provide the service with a location and it returns the tax rate or rates for the given area. From there, you can use this data in conjunction with your own business logic to easily determine the proper tax rate you should be charging. A common use case is online retail companies that need to determine the rate to charge for an order. Rates vary greatly depending on where the client is located and whether the company has a sales tax nexus in that state.  Nexus, also known as sufficient physical presence, is a legal term that refers to the requirement for companies doing business in a state to collect and pay tax on sales in that state. Calculating the proper rate is as easy as determining where your company has nexus and then performing a tax rate lookup via the FastTax web service. These two steps can be done programmatically, thus streamlining your business workflow.
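As a purely hypothetical sketch, the two programmatic steps could look like the following. The `FastTaxClient` wrapper, its method, and the custom order fields are invented for illustration and are not part of the service:

```apex
// Charge sales tax only in states where the company has nexus.
Set<String> nexusStates = new Set<String>{ 'CA', 'TX' };
if (nexusStates.contains(order.ShippingState)) {
    // Hypothetical helper that calls the FastTax web service and
    // returns the combined tax rate for the given address.
    Decimal rate = FastTaxClient.getTotalTaxRate(
        order.ShippingStreet, order.ShippingCity,
        order.ShippingState, order.ShippingPostalCode);
    order.TaxAmount__c = (order.Subtotal__c * rate).setScale(2);
}
```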

What makes FastTax so powerful?

It may seem like a simple task to take an address and perform a lookup on a tax rate database. In theory it is just identifying a location and then finding the relevant tax rates for it. However, in reality there are many more factors that need to be accounted for to ensure the tax rate being returned is accurate, up to date, and truly relevant for the input address. It is in these aspects where Service Objects’ FastTax goes above and beyond. On top of our tax rate databases that are actively maintained to provide the latest and most accurate tax rate data, our operations benefit from the other services we specialize in. Namely, our address validation and address geocoding services.

How does FastTax go above and beyond?

Through the use of our address validation engine, we are able to take an input address, determine its correctness, and standardize it into its most usable form. Having an address corrected and standardized allows us to more accurately match the location with its corresponding tax rate. On top of address validation, our use of geocoordinates and spatial data allows us to identify boundaries between areas. This could be the difference between charging the proper rate for an area or misidentifying it and missing rates such as county, county district, city district, or even special district rates. Another extremely important distinction that geocoordinates allow us to make is for areas that are unincorporated. FastTax provides an “IsUnincorporated” flag when an address is in an unincorporated area. This allows your business logic to correctly tax the address by removing any city or city district rates.

FastTax in action

To see the power of FastTax in action, it helps to take a look at Google Maps. Let’s take the city of Littleton, Colorado. In fig.1 the city perimeter is outlined in red and its contents shaded in; the Google Maps result shows the officially recognized city limits. Comparing that to the pin shown in fig.2, it is clear that the address in this example falls beyond the city limits. Technically it is identified as part of the city of Littleton, but it is in an unincorporated area. Tax rates for this address need to properly account for this geospatial and city boundary information. FastTax excels at identifying these areas and provides the “IsUnincorporated” flag to indicate that this address is a special case. With the indicator flag in hand, you can properly account for the difference in tax rates.

See how FastTax can help your business. Sign up for your free trial key or send us a list and test up to 500 transactions.

How Millennials Will Impact Your Data Quality Strategy

The so-called Millennial generation now represents the single largest population group in the United States. If they don't already, they will soon represent your largest base of customers and a majority of the workforce. What does that mean for the rest of us?

It doesn’t necessarily mean that you have to start playing Adele on your hold music, or offering free-range organic lattes in the company cafeteria. What it does mean, according to numerous social observers, is that expectations of quality are changing radically.

The Baby Boomer generation, now dethroned as the largest population group, grew up in a world of amazing technological and social change – but also a world where wrong numbers and shoddy products were an annoying but inevitable part of life. Generation X and Y never completely escaped this either:  ask anyone who ever drove a Yugo or sat on an airport tarmac for hours. But there is growing evidence that millennials, who came of age in a world where consumer choices are as close as their smartphones, are much more likely to abandon your brand if you don’t deliver.

This demographic change also means you can no longer depend on your father's enterprise data strategy, with its focus on things like security and privacy. For one thing, according to USA Today, millennials couldn't care less about privacy. The generation that grew up oversharing on Instagram and Facebook understands that in a world where information is free, they – and others – are the product. Everyone agrees, however, that what they do care about is access to quality data.

This also extends to how you manage a changing workforce. According to this article, which notes that millennials will make up three quarters of the workforce by 2020, dirty data will become a business liability: data that can't be trusted for strategic purposes, whether it is being used to address revenues, costs or risk. This makes millennials much more likely to demand automated strategies for data quality and data governance, and to push to engineer these capabilities into the enterprise.

Here’s our take: more than ever, the next generation of both consumers and employees will expect data to simply work. There will be less tolerance than ever for bad addresses, mis-delivered orders and unwanted telemarketing. And when young professionals are launching a marketing campaign, serving their customers, or rolling out a new technology, working with a database riddled with bad contacts or missing information will feel like having one foot on the accelerator and one foot on the brake.

We are already a couple of steps ahead of the millennials – our focus is on API-based tools that are built right into your applications, linking them in real time to authoritative data sources like the USPS as well as a host of proprietary databases. They help ensure clean data at the point of entry AND at the time of use, for everything from contact data to scoring the quality of a marketing lead. These tools can also fuel their e-commerce capabilities by automating sales and use tax calculations, or ensure regulatory compliance with telephone consumer protection regulations.

In a world where an increasing number of both our customers and employees will have been born in the 21st century, and big data becomes a fact of modern life, change is inevitable in the way we do business. We like this trend, and feel it points the way towards a world where automated data quality finally becomes a reality for most of us.

Making an (email) list and checking it twice: Best practices for email validation

For most organizations, one of the most critical assets of their marketing operations is their email contact database. Email is still the lingua franca of business: according to the Radicati Group, over a quarter of a trillion email messages are sent every business day, and the number of email users is expected to top 4 billion by 2021 – roughly half of the world’s population. This article will explore current best practices for protecting the ROI and integrity of this asset, by validating its data quality.

The title of this article is not just a cute play on words – and it has nothing to do with Santa. Rather, it describes an important principle for your game plan for email data quality. By implementing a strong two-step email validation process, as we describe here, you will dramatically reduce deliverability problems, fraud and blacklisting from your email marketing and communications efforts.

The main reason we recommend checking emails in two stages revolves around the time these checks take: many checks can be performed live using a real-time API, particularly as email addresses are entered by users, but server validation in particular may require a longer processing time and interfere with user experience. Here are 3 of the most important checks that are part of the email validation process:

• Syntax (FAST): This check determines if an email address has the correct syntax and physical properties of an email address.

• DNS (FAST): We can quickly check the DNS record to ensure the validity of the email domain (MX record) for the email address. (There are some exceptions – for example, when the domain's DNS is hosted by a slow or unreliable registry and the results take longer to come back.)

• Email Server (VARIABLE, and not within the email validation tool’s control): Although this check can take from milliseconds to minutes, it is one of the most important checks you can make – it ensures that you have a deliverable address. This response time is dependent on the email server provider (ESP) and can vary widely: large ESPs like Gmail or MSN normally respond quickly, while corporate or other domains may take longer.
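
The first two checks above can be sketched in a few lines of code. This is a deliberately simplified illustration, not Service Objects' actual implementation: the syntax regex catches only obvious malformations (the full RFC grammar is far more permissive), and the DNS check takes an injected resolver function so the sketch works offline – in practice you would use a real resolver library such as dnspython.

```python
import re

# Simplified sketch of the "fast" email checks described above.
# The regex is intentionally loose compared to the email RFCs.
SYNTAX_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")

def has_valid_syntax(email: str) -> bool:
    """Syntax check: does the string look like an email address?"""
    return bool(SYNTAX_RE.match(email))

def check_dns(domain: str, resolve_mx=None) -> bool:
    """DNS check: does the domain publish any MX records?
    `resolve_mx` is injected so this sketch stays testable offline."""
    if resolve_mx is None:
        return False  # no resolver available in this sketch
    return len(resolve_mx(domain)) > 0

print(has_valid_syntax("jane.doe@example.com"))  # True
print(has_valid_syntax("not-an-email"))          # False
```

The third check (a live mailbox probe against the receiving email server) is the slow, variable one, which is exactly why the two-stage process described below separates fast point-of-entry checks from deeper pre-campaign validation.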

There are many more checks in Service Objects’ Email Validation tool, including areas such as malicious activity, data integrity, and much more – over 50 verification tests in all! We auto-correct addresses for common spelling and syntax errors, flag bogus or vulgar address entries, and calculate an overall quality score you can use to accept or reject the email address. (For a deeper dive, take a look at this article to see many of the features of an advanced EV tool.)

Here are the two stages we recommend for your email validation process:

Stage 1: At point of entry.

Here, you validate emails in real time, as they are captured. This gives the user the opportunity to correct mistakes in the moment, such as typos or data entry errors. You can use our EV software to check for issues like syntax, DNS and the email server – however, for the sake of customer experience, we recommend configuring the API to wait no more than a couple of seconds. At this stage, either the user or the validation software has a chance to correct bad addresses.
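
One common pattern for enforcing that short wait is to run the validation call under a hard timeout and, if it can't finish in time, accept the address provisionally and defer the full check to stage two. The sketch below illustrates the idea with Python's standard library; `fast_check` is a stand-in for a real validation API call, not an actual Service Objects client.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Illustrative point-of-entry pattern: cap the validation call at a
# couple of seconds so the user is never left waiting on a slow
# email server response.

def validate_with_timeout(email, check, timeout_seconds=2.0):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(check, email)
        try:
            return future.result(timeout=timeout_seconds)
        except TimeoutError:
            # Don't block the user: accept provisionally and defer
            # the deeper check to the pre-campaign (stage two) pass.
            return {"status": "deferred", "email": email}

def fast_check(email):
    """Stand-in for a real validation API call."""
    return {"status": "valid" if "@" in email else "invalid",
            "email": email}

print(validate_with_timeout("jane@example.com", fast_check))
```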

Stage 2 – Before sending a campaign.

Validate the emails in your database – using the API – after the email has been captured and the user is no longer available in real-time to make corrections. In this stage, you have more flexibility to wait for responses from the ESPs, providing more confidence in your list.

It is estimated that 10-15% of emails entered are not usable, for reasons ranging from data entry errors to fraud, and 30% of email addresses change each year. Together these two steps ensure that you are using clean and up-to-date email data every time – and the benefit to you will be fewer rejected addresses, a better sender reputation, and a greater overall ROI from your email contact data.

The Top 7 Skills of Successful Marketing Professionals

Good marketing is the bedrock of most businesses' revenue pipelines: marketing's number one job, in many instances, is to generate high-quality leads through a variety of channels that can be converted into sales. Add to that the responsibility for creating, managing and communicating the entire organization's brand, and the importance of marketing's role becomes clear.

So what are some of the ingredients of a successful marketing professional? Here are some of the key traits of the very best ones:

Creativity. We put this first for a reason. More than anything, marketing creates “a-ha” moments by framing what businesses do in a new light. Where did Apple’s call to “think different,” Progressive Insurance’s Flo, or Dos Equis’s Most Interesting Man in the World come from? From the minds of people who thought far beyond MP3 players, insurance policies, or beer.

Communication. Marketing inherently tells a story. And whether that story involves quality, productivity, or success, good marketers place customers in the middle of a credible narrative that improves their lives. When you searched on Google, purchased a book or a dust mop on Amazon, or drove off in a new Tesla, you bought into a story that promised to tangibly make your life better.

Project Management. When you watch a football game or a musical performance, you are seeing a team executing specific roles under the direction of a good coach or bandleader. Marketing is also a thoughtfully composed performance, led by people who can get stakeholders like product developers, data analysts, sales managers and operations staff to all play in harmony.

Flexibility. Marketing is the polar opposite of the person who makes the same widget for 20 years. Markets change, opportunities develop, and competition never stops. Hockey great Wayne Gretzky once said that the best players don’t skate to where the puck is, but to where the puck is going – and in much the same way, good marketing professionals are always thinking three steps ahead.

Results. Professional comedians make their craft look easy on stage, but in reality, their acts are refined from months or years of experience about what works best with their audience. Likewise, good brands are fueled by information, market research, and outcomes evaluation.

Market savvy. Whether it is a manufacturer selling airplanes to airlines, or a hipster hoping their product video goes viral, every market has its culture and norms. Good marketing professionals “get” things like what strategies work with what market segments, what the size and potential of their market are, and what their competitive landscape looks like.

Data savvy. We saved the best for last. Marketers from a generation ago would never recognize how much data drives the revenue stream of today's businesses. Smart marketers recognize that they need tools to help them make better decisions about the customers they serve. In addition, to maximize the value of lead data and communicate effectively with customers and prospects, marketers need data quality tools in place to be sure the contact information in their database is genuine, accurate and up-to-date.

This is where we come in. Service Objects came into being nearly a generation ago – and nearly 3 billion contact records ago – to do something about the estimated 1 in 4 contact records that are inaccurate, incomplete, fraudulent, or out-of-date. Our proprietary tools, which combine up-to-date USPS, phone and demographic databases with sophisticated capabilities for lead validation and customer insight, add power (and revenue) to your marketing efforts. We can validate contact information, append missing information, and even score leads for quality, across a suite of products that plug in to your application or data processing. Visit for more information.

Why Data Quality is Key to the Sales and Marketing Relationship

History is full of famous “frenemies,” from opposing politicians to the latest Hollywood gossip – people who work closely together but get under each other’s skin. But in your workplace, one of the most common frenemy relationships is between sales and marketing.

On paper, of course, both teams drive the revenue side of their organization. Their functions are critical to each other, and they support each other’s efforts. But scratch the surface, and you’ll often find some built-in sources of conflict:

“Marketing doesn’t give us enough good leads.”

“Sales piddles around and then blames us for not closing the deal.”

“Marketing doesn’t listen to our needs.”

“Sales is always making unrealistic demands about lead quality.”

In reality, both teams are linked to a common shared goal, and often frustrate each other when these goals don’t happen as planned. And very often, the culprit is data quality.

The problem in most organizations is that data quality is nobody's job. Marketing is focused on lead acquisition, and sales is focused on closing contracts. Making sure that contact data is accurate, names aren't fraudulent, or leads are qualified all takes time away from people's daily workflow. And over time the problem compounds, as more than 70% of this data goes stale as changes happen. Unfortunately, the result is that bad data is accepted as part of the status quo – or worse, leads to finger-pointing.

The solution to this problem is obvious: automate the process of data quality. Thankfully, solutions exist nowadays for turning your raw contact data into a stronger revenue generation engine. Here are some of the capabilities you can build right into your contact intake and marketing process:

  • Lead Validation can verify contact addresses against real-time USPS and Canada Post databases, cross-validate these addresses with phone, email and IP address data, and then return a lead quality of 1-100 from an analysis of over 130 data points.
  • Phone Append can take your contact data and find corresponding phone numbers, using a proprietary database of over 800 million consumer, business and government phone number listings, with up to 75% accuracy.
  • GeoPhone capabilities can produce latitude and longitude data from your phone contact data for geographically-based marketing efforts – or even find corresponding mailing and SMS/MMS addresses, for over 400 million available phone numbers in North America.
  • For outbound telemarketing campaigns, Phone Exchange can verify the accuracy and type of your phone contact records. In addition to lead accuracy, this can help you discover numbers that have changed hands since your last campaign, particularly wireless numbers – and help keep you from running afoul of the Telephone Consumer Protection Act (TCPA), where fines for unwanted calls can run as high as $1500 per violation.
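
As a concrete example of how a lead quality score like the 1-100 value above can remove sales-versus-marketing friction, business logic can route each lead automatically instead of leaving the judgment call to either team. The thresholds below are purely illustrative assumptions, not recommendations from the Lead Validation product.

```python
# Hypothetical sketch of routing leads by a 1-100 quality score
# like the one Lead Validation returns. Threshold values are
# illustrative assumptions only.

def route_lead(lead):
    score = lead.get("quality_score", 0)
    if score >= 75:
        return "sales"    # high-confidence lead: hand straight to sales
    if score >= 40:
        return "nurture"  # plausible but unverified: marketing drip
    return "review"       # likely bogus or fraudulent: hold back

print(route_lead({"name": "Jane Doe", "quality_score": 88}))
```

With a rule like this in place, sales only sees leads that cleared an agreed bar, and marketing gets objective feedback on lead quality rather than anecdotes.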

Capabilities like these yield an immediate ROI for the effectiveness of your sales and marketing efforts, which are fueled by the quality of your contact data. In addition, as prospects turn into customers, they can play a key role in preventing fraud and maintaining customer satisfaction.

This is a situation where a little technology can make a real difference in the dynamics of your sales and marketing teams. Here is an analogy: with real life “frenemies,” family therapists generally try to find solutions that help both sides feel like they are winning. Data quality tools are like family therapy for your sales and marketing team: they take their most common points of conflict and turn them into revenue-building solutions that everyone can be happy with.

The Path to Data Quality Excellence

“In the era of big data and software as a service, we are witnessing a major industry transformation. In order to stay competitive, businesses have reduced the time it takes to deploy a new application from months to minutes.” – Geoff Grow, Founder and CEO, Service Objects

The big data revolution has ushered in a major change in the way we develop software, with applications webified and big data tools woven in. Until recently, data quality tools that ensure data is genuine had not kept pace. As a result, developers have had little choice but to leave data validation out of their applications.

In this video, Geoff will show you why data validation is critical to reducing waste, identifying fraud, and maximizing operation efficiency – and how on-demand tools are the best way to ensure that this data is genuine, accurate, and up-to-date. If you develop applications with IP connectivity, watch this video and discover what 2,400 other organizations have learned about building data quality right into their software.

Celebrating Earth Day

April 22 marks the annual celebration of Earth Day, a day of environmental awareness that is now approaching its first half century. Founded by US Senator Gaylord Nelson in 1970 as a nationwide teach-in on the environment, Earth Day is now the largest secular observance in the world, celebrated by over a billion people.

Earth Day has a special meaning here in our hometown of Santa Barbara, California. It was a massive 1969 oil spill off our coast that first led Senator Nelson to propose a day of public awareness and political action. Both were sorely needed back then: the first Earth Day came at a time when there was no US Environmental Protection Agency, environmental groups such as Greenpeace and the Natural Resources Defense Council were in their infancy, and pollution was simply a fact of life for many people.

If you visit our hometown today, you will find the spirit of Earth Day to be alive and well. We love our beaches and the outdoors, this area boasts over 50 local environmental organizations, and our city recently approved a master plan for bicycles that recognizes the importance of clean human-powered transportation. And in general, the level of environmental and conservation awareness here is part of the culture of this beautiful place.

Earth Day

It also has a special meaning for us here at Service Objects. Our founder and CEO Geoff Grow, an ardent environmentalist, started this company from an explicit desire to apply mathematics to the problem of wasted resources from incorrect and duplicate mailings. Today, our concern for the environment is codified as one of the company’s four core values, which reads as follows:

“Corporate Conservation – In addition to preventing about 300 tons of paper from landing in landfills each month with our Address Validation APIs, we practice what we preach: we recycle, use highly efficient virtualized servers, and use sustainable office supplies. Every employee is conscious of how they can positively impact our conservation efforts.”

Today, as Earth Day nears the end of its fifth decade, and Service Objects marks over 15 years in business, our own contributions to the environment have continued to grow. Here are just a few of the numbers behind the impact of our data validation products – so far, we have saved:

  • Over 85 thousand tons of paper
  • A million and a half trees
  • 32 million gallons of oil
  • More than half a billion gallons of water
  • Close to 50 million pounds of air pollution
  • A quarter of a million cubic yards of landfill space
  • 346 million KWH of energy

All of this is an outgrowth of more than two and a half billion transactions validated – and counting! (If you are ever curious about how we are doing in the future, just check the main page of our website: there is a real-time clock with the latest totals there.) And we are always looking for ways to continue making lives better through data validation tools.

We hope you, too, will join us in celebrating Earth Day. And the best way possible to do this is to examine the impact of your own business and community on the environment, and take positive steps to make the earth a better place. Even small changes can create a big impact over time. The original Earth Day was the catalyst for a movement that has made a real difference in our world – and by working together, there is much more good to come!

The Impact of Data Quality on Your Direct Mail

Some things are – sadly – a fact of life. Less than a third of people floss their teeth every day. The average US household has over $16,000 in credit card debt.  And according to the United States Postal Service, undeliverable mail costs businesses roughly $20 billion every year.

Ironically, the quality of your contact data is far and away the most easily fixed of these three things. We can't stop people from using credit cards, and we can't make flossing your teeth more fun. But we can easily and inexpensively automate the quality of your mailings – and in the process, save you from some very real and tangible costs.

Let’s look at some of the real costs of poor data quality for direct mail:


Wasted labor.

Direct mailing remains a labor-intensive process, where sorted physical pieces of mail are prepared for delivery. And when some of these are addressed to an out-of-date lead who has moved – or someone gave you a fake name and address, like Bugs Bunny in Rabbitville, Wisconsin – you are wasting human effort at each step of the life cycle of the process, from mail preparation to updating undeliverable addresses in your database.


Wasted money.

As if bad contact data weren't enough of a problem, Biznology reports that over 70% of it changes every year as people move, change jobs, or get new contact information. Multiply this across the sunk costs of a direct mail campaign, from printing to postage to manpower, and you are looking at a substantial drain on your marketing budget.


Environmental impact.

Is your company "green"? Not if you aren't strategically addressing your data quality. The USPS alone estimates that it handles over 6 billion pieces of undeliverable mail annually. Multiply this by the impact on trees, energy use, water and landfill space, and you have a huge and largely preventable impact on our environmental waste stream.

Customer satisfaction.

The impact of data quality gets even worse when you don’t deliver what you promised, and your customer reputation takes a hit. Add in the costs of inventory loss, re-shipping, and bad publicity on channels such as social media, and you risk a loss of customer good will, repeat business and market share.

Missed market opportunities.

They call them leads for a reason – and if your lead is sitting in a landfill somewhere because of bad contact data, they become the customer that never happened. And then the actual costs of this bad data get compounded by the loss of potential future business.

The worst thing about each of these costs is that they are all completely preventable. Real-time contact data validation is an easy, inexpensive capability that can be built right into your applications, or used directly on your lists via the Web. Once in place, these tools leverage the power of continually updated contact databases from the USPS and others, and you reap the financial benefits of good data quality forever after. It is truly a situation where an ounce of prevention is worth much more than a pound of cure.

Service Objects Lands on CIOReview’s Top 20 Most Promising API Solutions

Service Objects is very proud to have been recently selected as one of CIOReview’s Top 20 Most Promising API Solution Providers for 2016, judged by a distinguished panel comprised of CEOs, CIOs and VPs of IT, including CIOReview’s editorial board.

Now if you are reading this, you probably have one of two reactions: “Wow, that’s cool!” Or perhaps, “What’s an API?”

If it is the latter, allow us to explain. An API, short for Application Programming Interface, is code that allows our data validation capabilities to be built into your software. This means that applications ranging from marketing automation packages to CRM systems can reach into our extensive network of contact validation databases and logic, without ever leaving the application.
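
In practice, an application typically calls a validation API by sending an HTTP request with the address fields and a license key, then parsing the response. The sketch below shows only the request-building half of that pattern; the endpoint URL and parameter names are hypothetical placeholders, not the actual Service Objects API.

```python
from urllib.parse import urlencode

# Illustrative sketch of how an application assembles a call to a
# validation API. The endpoint and parameter names below are
# assumptions for illustration, not a real API specification.

BASE_URL = "https://api.example.com/addressvalidation"

def build_request_url(address, city, state, license_key):
    params = {
        "Address": address,
        "City": city,
        "State": state,
        "LicenseKey": license_key,
    }
    return BASE_URL + "?" + urlencode(params)

url = build_request_url("27 E Cota St", "Santa Barbara", "CA", "YOUR-KEY")
print(url)
```

A real integration would then issue the GET request, parse the JSON or XML response, and write the corrected, validated address back into the host application's database.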

What this means for them is seamless integration, real time results and better data quality. Their databases have correct, validated addresses. Their leads are scored for quality, so they are mailing to real people instead of “Howdy Doody.” Their orders are scanned for potential fraud, ranging from BIN validation on credit cards to geolocation for IP addresses, so that you know when an order for someone in Utah is originating in Uzbekistan.

What this means for you is that the applications you use are powered by the hundreds of authoritative data sources available through Service Objects – even if you never see it. Of course, we have many other ways to use our products, including real-time validation of lists using our PC-based DataTumbler application, batch FTP processing of lists, and even the ability to quickly look up specific addresses via the Web. But we are proud of our history of providing world-class data validation tools to application developers and systems integrators.

Now, if APIs are old hat to you, this award represents something important to you too: it recognizes our track record within the developer community of providing SaaS tools with superior commercial applicability, data security, uptime and technical support. As a companion article in CIOReview points out, “Service Objects is the only company to combine the freshest USPS address data with exclusive phone and demographic data. Continuous expansion of their authoritative data sets allows Service Objects to validate billions of addresses and phone numbers from around the world, making their information exceptionally accurate and complete.”

There is much more coming in the future, for systems integrators and end users alike. Our CEO Geoff Grow shared with CIOReview that one key focus is “more international data, as many of our clients are doing business outside the United States and Canada … The European and Asian markets are becoming increasingly important places (and) it is important for us to expand our product offerings and our expertise in more regions of the world.” And of course, our product offerings continue to grow and expand for clients in each of the markets we serve.

If you are a developer, we make it easy to put the power of Service Objects’ data validation capabilities in your own applications. Visit our website for complete documentation and sample code, or download a free trial API key for one of our 25 data quality solutions. We know you will see why our peers rank us as one of the best in the industry!

My Bright Orange Swedish Pension

Our professions don’t exempt us from real life. Doctors get sick, contractors have leaky roofs – and people who work at Service Objects receive misaddressed mail, just like the rest of us. But one piece of junk mail that recently arrived at my house wasn’t just a mistake: it was a full-fledged tutorial on everything that can go wrong in a direct mail campaign.

For starters, it was a big orange envelope with a bold message on the front – in Swedish. Which I don’t speak. A quick visit to Google Translate revealed that by opening the envelope, I could discover how to see my entire Swedish pension online.

Alas, I don’t have a rich Swedish uncle who has left me a pension. However, the person who used to live in my house did speak Swedish. So this mailing might have been useful to her when she lived here. Unfortunately, that was over 12 years ago.

So now, let’s suppose that this was meant for her, and that she in fact would like to learn about her Swedish pension. The next problem was that her last name was incorrect. Or more accurately, it would have been correct had she not gotten married 18 years ago and taken her husband’s last name.

But that’s not all. The street address was incorrect as well. Actually, they kinda sorta got it right, which is why it probably ended up at my house. But the street name was translated into the same kind of pidgin Swedish that I haven’t seen since the prank subtitles in Monty Python and the Holy Grail. For example, “Saint” = Sankt and “Anne” should be “Ann”.

Mercifully, they did get my city of Santa Barbara, California correct. But it was written in European format, with the ZIP code first (e.g. 93109 Santa Barbara CA). And apparently they don’t do commas in Sweden.

Finally, they did at least make sure that this went to the United States. Because they put this no less than three times in the address, in three different styles (US, USA, and U.S.A.)

Of course, spending a little quality time with Service Objects could have fixed all of these problems, easily and automatically:

  • Our Address Validation 3 product would have turned this address into a correctly formatted, CASS-certified USPS address.
  • More important, our National Change of Address (NCOA) Live would have produced a current, up-to-the-minute address for the intended recipient.
  • Finally, our Lead Validation product could have validated their contact record and assessed the overall accuracy and viability before sending.

This incident was pretty funny. But at another level, it is also sad. Think of all the resources that were expended sending this piece of junk mail across the Atlantic. Now multiply this by all the other misaddressed pieces of mail that were probably sent out in this campaign. Then multiply it again by the amount of direct mail that crosses the globe every day. That sum could pay for a lot of Swedish pensions.

If there is one silver lining to this story – aside from hopefully entertaining our blog readers – it is that at least this piece of mail will not end up in a landfill somewhere. It now hangs proudly on our Wall of Shame here at Service Objects, as a reminder for why we do what we do. And how we can help YOU save money and resources.

Data Quality in Marketing: Trends and Directions for 2017

Asking whether data quality is important to your marketing efforts is a little like asking if apple pie and motherhood are important – of course, the answer will always be “yes.” Recently, however, some interesting quantitative research was published that shed light on just *how* important it has become.

Marketing research firm Ascend2 performed a survey of 250 mostly senior people in marketing, to see what they thought about data quality. Over 80% of the respondents were in management roles, with more than a quarter of the sample holding C-level positions. Fully half were associated with large companies with over 500 employees, and more than 85% had over 50 employees. Respondents were also equally split between short and complex sales cycles.

The results showed very clearly that data quality has risen to become a critical consideration in marketing success nowadays. Here are some of their key findings:

Improving data quality is their most important strategic objective. With 62% of respondents rating this as their top objective, data quality now ranks far above more traditional marketing objectives such as improving marketing data analytics (45%), improving user experience (43%), optimizing the lead funnel (26%), and even acquiring an adequate budget (20%).

Data quality is also their biggest challenge. Respondents also ranked data quality as their most critical current challenge, in smaller numbers (46%) but in similar proportion to the other factors mentioned above.

But things are getting better. Fully 83% of respondents feel that their marketing data strategy is at least somewhat successful at achieving objectives, with over one-third (34%) rating their own efforts as “very successful (best-in-class).” Similar numbers also feel that their tactical effectiveness is improving as well. While 14% feel that they have been unsuccessful in achieving objectives to some degree, only 3% consider themselves to be very unsuccessful.

Data quality is a downstream process. Respondents clearly favored cleaning up contact data over constraining how it is collected. Nearly half (49%) felt that validating contact data was the most important tactic for improving marketing data quality, while less than a quarter (24%) felt that standardizing lead capture forms was important. Other upstream measures, such as standardizing the data upload process (34%) and developing segmentation criteria (33%), were also in the minority.

Call in the experts. An overwhelming majority of respondents (82%) outsource either some or all of the resources they use to improve marketing data quality, with over a quarter (26%) using no in-house resources at all.

The results of the survey clearly show that data quality is one of the largest challenges that marketers are currently dealing with. Whether you are frustrated with incomplete or inaccurate sales lead data, tired of bad contact data causing customer service issues, or wasting money on marketing campaigns with results negatively impacted by poor contact data, understanding the quality of your data is the first step in identifying the true costs that poor data quality is having on your organization.

Medical Data is Bigger than You May Think

What do medical centers have in common with businesses like Uber, Travelocity, or Amazon? They have a treasure trove of data, that’s what! The quality of that data and what’s done with it can help organizations work more efficiently, more profitably, and more competitively. More importantly for medical centers, data quality can lead to even better quality care.

Here’s just a brief sampling of the types of data a typical hospital, clinic, or medical center generates:

Patient contact information
Medical records with health histories
Insurance records
Payment information
Geographic data for determining “Prime Distance” and “Drive Time Standards”
Employee and payroll data
Ambulance response times
Vaccination data
Patient satisfaction data

Within each of these categories, there may be massive amounts of sub-data, too. For example, medical billing relies on tens of thousands of medical codes. For a single patient, several addresses may be collected: the patient’s home and mailing addresses, the insurance company’s billing address, the employer’s address, and so forth.

This data must be collected, validated for accuracy, and managed, all in compliance with rigorous privacy and security regulations. Plus, it’s not just big data, it’s important data. A simple transposed number in an address can mean the difference between getting paid promptly or not at all. A pharmaceutical mix-up could mean the difference between life and death.

With so much important data, it’s easy to get overwhelmed. Who’s responsible? How is data quality ensured? How is it managed? Several roles can be involved:

Data stewards – Develop data governance policies and procedures.
Data owners – Generate the data and implement the policies and procedures.
Business users – Analyze and make use of the data.
Data managers – Information systems managers and developers who implement and manage the tools needed to capture, validate, and analyze the data.

Defining a data quality vision, assembling a data team, and investing in appropriate technology are all musts. With the right team and data validation tools in place, medical centers – and any organization – can get serious about data and data quality.

How Can Data Quality Lead to Quality Care?

Having the most accurate, authoritative and up-to-date information for patients can positively impact organizations in many ways. For example, when patients move, they don’t always think to inform their doctors, labs, hospitals, or radiology centers. With a real-time address validation API, not only could you instantly validate a patient’s address for billing and marketing purposes, you could confirm that the patient still lives within the insurance company’s “prime distance” radius before treatment begins.
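As a sketch of how a “prime distance” check might work once an address has been validated and geocoded, here is a minimal great-circle (haversine) distance test. The coordinates, tuple layout, and 30-mile default radius are illustrative assumptions, not a real insurer’s standard or any vendor’s actual API:

```python
import math

def within_prime_distance(patient, provider, radius_miles=30.0):
    """Haversine distance check between two (lat, lon) points.

    `patient` and `provider` would come from a geocoding step after
    address validation; `radius_miles` stands in for the insurer's
    "prime distance" standard. All values here are illustrative.
    """
    lat1, lon1, lat2, lon2 = map(math.radians, (*patient, *provider))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    miles = 3958.8 * 2 * math.asin(math.sqrt(a))  # Earth radius in miles
    return miles <= radius_miles
```

Run against the validated address before treatment begins, and flag patients who have moved outside the radius for follow-up rather than a billing surprise.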

Accurate address and demographic data can trim mailing costs and improve patient satisfaction with appropriate timing and personalization. Meanwhile, aggregated health data could be analyzed to look at health outcomes or reach out to patients proactively based on trends or health histories. Just as online retailers recommend products based on past purchases or purchases by customers like you, medical providers can use big data to recommend screenings based on health factors or demographic trends.

Developing a data quality initiative is a major, but worthwhile, undertaking for all types of organizations — and you don’t have to figure it all out on your own. Contact Service Objects today to learn more about our data validation tools.

Data Monetization: Leveraging Your Data as an Asset

Everyone knows that Michael Dell built a giant computer business from scratch in a college dorm room. Less well known is how he got started: by selling newspaper subscriptions in his hometown of Houston.

You see, most newspaper salespeople took lists of prospects and started cold-calling them. Most weren’t interested. In his biography, Dell describes using a different strategy: he found out who had recently married or purchased a house from public records – both groups that were much more likely to want new newspaper subscriptions – and pitched to them. He was so successful that he eventually surprised his parents by driving off to college in a new BMW.

This is an example of data monetization – the use of data as a revenue source to improve your bottom line. Dell used an example of indirect data monetization, where data makes your sales process or other operations more effective. There is also direct data monetization, where you profit directly from the sale of your data, or the intelligence attached to it.

Data monetization has become big business nowadays. According to PWC consulting firm Strategy&, the market for commercializing data is projected to grow to US $300 billion annually in the financial services sector alone, while business intelligence analyst Jeff Morris predicts a US $5 billion-plus market for retail data analytics by 2020. Even Michael Dell, clearly remembering his newspaper-selling days, is now predicting that data analytics will be the next trillion-dollar market.

This growth market is clearly being driven by massive growth in data sources themselves, ranging from social media to the Internet of Things (IoT) – there is now income and insight to be gained out of everything from Facebook posts to remote sensing devices. But for most businesses, the first and easiest source of data monetization lies in their contact and CRM data.

Understanding the behaviors and preferences of customers, prospects and stakeholders is the key to indirect data monetization (such as targeted offers and better response rates), and sometimes direct data monetization (such as selling contact lists or analytical insight). In both cases, your success lives or dies on data quality. Here’s why:

  • Bad data makes your insights worthless. For example, if you are analyzing the purchasing behavior of your prospects, and many of them entered false names or contact information to obtain free information, then what “Donald Duck” does may have little bearing on data from qualified purchasers.
  • The reputational cost of inaccurate data goes up substantially when you attempt to monetize it – for example, imagine sending offers of repeat business to new prospects, or vice-versa.
  • As big data gets bigger, the human and financial costs of responding to inaccurate information rise proportionately.
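As a crude illustration of the first point above, a pre-analysis filter might drop obviously fake names before any purchasing-behavior metrics are computed. The blocklist and field names below are hypothetical – a sketch, not a production-grade screening approach:

```python
# Hypothetical blocklist of names seen in faked lead-magnet signups.
FAKE_NAMES = {"mickey mouse", "donald duck", "spongebob squarepants", "test test"}

def credible_leads(leads):
    """Keep only leads whose full name is not an obvious fake.

    `leads` is a list of dicts with a "name" key; both the blocklist
    and the field name are illustrative assumptions.
    """
    return [lead for lead in leads
            if lead.get("name", "").strip().lower() not in FAKE_NAMES]
```

A real validation service would go much further (checking name plausibility, address, phone, email and device together), but even this toy filter shows why “Donald Duck” should never reach your analytics.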

Information Builders CIO Rado Kotorov puts it very succinctly: “Data monetization projects can only be successful if the data at hand is cleansed and ready for analysis.” This underscores the importance of using inexpensive, automated data verification and validation tools as part of your system. With the right partner, data monetization can become an important part of both your revenue stream and your brand – as you become known as a business that gives more customers what they want, more often.

Marketers and Data Scientists Improving Data Quality and Marketing Results Together

In the era of big data, marketing professionals have added basic data analysis to their toolboxes. However, the data they’re dealing with often requires significantly deeper analysis, and data quality (Is it Accurate? Current? Authentic?) is a huge concern. Thus, data scientists and marketers are more often working side by side to improve campaign efficiencies and results.

What is a Data Scientist?

Harvard Business Review called the data scientist profession “the sexiest job of the 21st century” and described the role of data scientist as “a hybrid of data hacker, analyst, communicator, and trusted adviser.”

The term data scientist itself is relatively new, with many data scientists lacking what we might call a data science degree. Rather, they may have a background in business, statistics, math, economics, or analytics. Data scientists understand business, patterns, and numbers. They tend to enjoy looking at diverse sets of data in search of similarities, differences, trends, and other discoveries. The ability to understand and communicate their discoveries makes data scientists a valuable addition to any marketing team.

Data scientists are in demand and command high salaries. In fact, Robert Half Technology’s 2017 Salary Guides suggest that data scientists will see a 6.5 percent bump in pay compared to 2016 (and their average starting salary range is already an impressive $116,000 to $163,500).

Why are Marketers Working with Data Scientists?

Marketers must deal with massive amounts of data and are increasingly concerned about data quality. They recognize that there’s likely valuable information buried within the data, yet making those discoveries requires time, expertise, and tools — each of which pulls them away from their other important tasks. Likewise, even the tried-and-true act of sending direct mail to the masses can benefit from a data scientist who can both dig into the demographic requirements and ensure data quality by cross-referencing address data against USPS databases.

In short, marketers need those data hackers, analysts, communicators, and trusted advisers in order to make sense of the data and ensure data quality.

A Look at the Marketer – Data Scientist Relationship

As with any collaboration, marketers and data scientists occasionally have differences. They come from different academic backgrounds, and have different perspectives. A marketer, for example, is highly creative whereas a data scientist is more accustomed to analyzing data.

However, when sharing a common goal and understanding their roles in achieving it, marketers and data scientists can forge a worthwhile partnership that positively impacts business success.

We all know that you’re only as good as your data, making data quality a top shared concern between marketers and data scientists alike. Using tools such as data validation APIs, data scientists ensure that the information marketers have is as accurate, authoritative, and up to date as possible. Whether pinpointing geographical trends or validating addresses prior to a massive direct mail campaign, the collaboration between marketers and data scientists leads to increased campaign efficiencies, results, and, ultimately, increased revenue for the company as a whole.

ERP: Data Quality and the Data-Driven Enterprise

Enterprise resource planning, or ERP for short, integrates the functions of an organization around a common database and applications suite. A brainchild of the late 20th century – the term was coined by the Gartner Group in the 1990s – the concept of ERP has grown to become ubiquitous for organizations of all sizes, in what is now a more than US $25 billion annual industry.

ERP systems often encompass areas such as human resources, manufacturing, finance, supply chain management, marketing and customer relationships. Their integration not only automates many of the operations of these functions, but provides a new level of strategic visibility about your business. In the ERP era, we can now explore questions like:

  • Where your most productive facilities are
  • How much it costs to manufacture a part
  • How best to optimize your delivery routes
  • The costs of your back office operations
  • And many more

Its functions often interface with customer relationship management or CRM (discussed in a previous blog post), which provides visibility on post-sale customer interactions. CRM is often integrated within ERP product suites, adding market intelligence to the business intelligence of ERP.

ERP data generally falls into one of three categories:

Organizational data, which describes the infrastructure of the organization, such as its divisions and facilities. For most firms, this data changes very slowly over time.

Master data, which encompasses entities associated with the organization such as customers, employees and suppliers. This data changes periodically with the normal flow of business.

Transactional data, based on sales and customer interactions. This data, which is the lifeblood of your revenue pipeline, is constantly changing.
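The three categories above can be pictured as distinct record types. This sketch is purely illustrative – the field names are assumptions, not any ERP vendor’s schema – but it shows where contact information enters each layer:

```python
from dataclasses import dataclass

# Illustrative record types mirroring the three ERP data categories.
# Fields marked "contact information" are where bad data can enter.

@dataclass
class OrganizationalData:       # changes very slowly
    division: str
    facility_address: str       # contact information

@dataclass
class MasterData:               # changes periodically
    customer_name: str
    customer_address: str       # contact information

@dataclass
class TransactionalData:        # changes constantly
    order_id: str
    ship_to_address: str        # contact information
```

Note that two of the three types carry address fields, which is exactly why contact validation matters so much in an ERP context.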

Note that two out of three of these key areas involve contact information, which in turn can come into the system from a variety of sources – each of which is a potential source of error. Causes of these errors can range from incorrect data entry to intentional fraud, not to mention the natural process of changing addresses, phone numbers and email addresses. And this bad data can propagate throughout the system, causing consequences that can include wasted manpower, incorrect shipments, missed sales and marketing opportunities, and more.

According to one research paper, data quality issues are often a key driver for moving to ERP, and yet remain a concern following ERP implementation as well. This leads to a key concept for making ERP work for you: automated systems require automated solutions for data quality. Solutions such as Service Objects’ data verification tools ensure that good data comes into the system in the first place, leveraging constantly updated databases from sources such as the USPS and others. The end result is contact data quality that doesn’t depend on human efforts, in a chain that has many human touch points.

ERP is part of a much larger trend in business computing toward centralized databases that streamline information flow, automate critical operations, and, more importantly, have strategic value for business intelligence. With the advent of inexpensive, cloud-based software, the use of these systems is spreading rapidly to businesses of all sizes. The result is a world that depends more than ever on good data quality – and the need for tools that ensure this quality automatically.

The Importance of Data Accuracy in Machine Learning

Imagine that someone calls your contact center – and before they even get to “Hello,” you know what they might be calling about, how frustrated they might be, and what additional products and services they might be interested in purchasing.

This is just one of the many promises of machine learning: a form of artificial intelligence (AI) that learns from the data itself, rather than from explicit programming. In the contact center example above, machine learning uses inputs ranging from CRM data to voice analysis to add predictive logic to your customer interactions. (One firm, in fact, cites call center sales efforts improving by over a third after implementing machine learning software.)

Machine learning applications nowadays range from image recognition to predictive analytics. One example of the latter happens every time you log into Facebook: by analyzing your interactions, it makes more intelligent choices about which of your hundreds of friends – and what sponsored content – ends up on your newsfeed. And a recent Forbes article predicts a wealth of new and specialized applications, including helping ships to avoid hitting whales, automating granting employee access credentials, and predicting who is at risk for hospital readmission – before they even leave the hospital the first time!

The common thread between most machine learning applications is deep learning, often fueled by high-speed cloud computing and big data. The data itself is the star of the process: for example, a computer can often learn to play games like an expert, without programming a strategy beforehand, by generating enough moves by trial-and-error to find patterns and create rules. This mimics the way the human brain itself often learns to process information, whether it is learning to walk around in a dark living room at night or finding something in the garage.
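As a toy illustration of that trial-and-error learning, the epsilon-greedy sketch below is given no strategy at all: it simply tries two options, tracks average reward, and converges on the better one. The two-arm setup and payout probabilities are made up for the example:

```python
import random

def learn_best_arm(payouts, trials=2000, epsilon=0.1, seed=0):
    """Toy trial-and-error learner (epsilon-greedy bandit).

    No strategy is programmed in: the agent mostly exploits its
    current best estimate, explores at random 10% of the time, and
    discovers the best arm from feedback alone. `payouts` maps each
    arm to an (illustrative) win probability.
    """
    rng = random.Random(seed)
    counts = {arm: 0 for arm in payouts}
    values = {arm: 0.0 for arm in payouts}
    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.choice(list(payouts))       # explore
        else:
            arm = max(values, key=values.get)     # exploit
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the running average reward for this arm.
        values[arm] += (reward - values[arm]) / counts[arm]
    return max(values, key=values.get)
```

After a couple of thousand trials the learner reliably identifies the higher-paying arm, despite never being told which one it is – the same pattern-from-feedback principle that drives the applications above.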

Since machine learning is fed by large amounts of data, its benefits can quickly fall apart when this data isn’t accurate. A humorous example of this was when a major department store chain decided (incorrectly) that CNBC host Carol Roth was pregnant – to the point where she was receiving samples of baby formula and other products – and Google targeted her as an older man. Multiply examples like this by the amount of bad data in many contact databases, and the principle of “garbage in, garbage out” can quickly lead to serious costs, particularly with larger datasets.

Putting some numbers to this issue, statistics from IT data quality firm Blazent show that while over two-thirds of senior-level IT staff intend to make use of machine learning, 60 percent lack confidence in the quality of their data – and 45 percent of their organizations simply react to data errors as they occur. This is not only costly, but in many cases totally unnecessary: with modern data quality management tools available, going without them is too often a matter of inertia or lack of ownership rather than ROI.

Truly unlocking the potential of machine learning will require a marriage between the promise of its applications and the practicalities of data quality. Like most marriages, this will involve good communication and clearly defined responsibilities, within a larger framework of good data governance. Done well, machine learning technology promises to represent another very important step in the process of leveraging your data as an asset.

The Role of a Data Steward

If you have ever dined at a *really* fine restaurant, it may have featured a wine steward: a person formally trained and certified to oversee every aspect of the restaurant’s wine collection. A sommelier, as they are known, not only tastes wines before serving them but sets policy for wine acquisition and its pairings with food, among other responsibilities. Training for this role may involve as much as a two-year college degree.

This is a good metaphor for a growing role in technology and business organizations – that of a data steward. Unlike a database administrator, who takes functional responsibility for repositories of data, a data steward has a broader role encompassing policies, procedures, and data quality. In a very real sense, a data steward is responsible for managing the overall value and long-term sustainability of an organization’s data assets.

According to Dataversity, the key role of a data steward is that they own an organization’s data. This links to the historical definition of a steward, from the Middle Ages – one who oversees the affairs of someone’s estate. This means that an effective data steward needs a broad background including areas like programming and database skills, data modeling and warehousing expertise, and above all good communications skills and business visibility. In larger organizations, Gartner sees this role as becoming increasingly formalized as a C-level position title, either as Chief Data Officer or incorporated as part of another C-level IT officer’s responsibilities.

One of the key advantages of having a formal data steward is that someone is accountable for your data quality. Too often, even in large organizations, this job falls to no one. Frequently, individual stakeholders are responsible for data entry or data usage, and strategically addressing bad data would demand bandwidth their jobs don’t allow. This is an example of the tragedy of the commons, where no one takes responsibility for the common good, and the organization ultimately incurs costs in time, missed marketing opportunities or poor customer relations by living with subpar data quality.

Another advantage of a data steward is that someone is tasked with evaluating and acquiring the right infrastructure for optimizing the value of your data. For example, automated tools exist that not only flag or correct contact data for accuracy, but enhance its value by appending publicly available information such as phone numbers or geographic locations. Or help control fraud and waste by screening your contact data per numerous criteria, and then assigning a quantitative lead score. Ironically, these tools are often inexpensive and make everyone’s life easier, but having a data steward can prevent a situation where implementing these tools is no one’s responsibility.
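A quantitative lead score of the kind just described might be sketched as follows. The checks, weights, and routing thresholds are hypothetical illustrations, not Service Objects’ actual scoring algorithm:

```python
# Hypothetical composite lead score: each validation check contributes
# points toward a 0-100 rating that drives routing decisions.

def score_lead(lead):
    """Sum illustrative weights for each passed validation check."""
    score = 0
    if lead.get("address_valid"):
        score += 40
    if lead.get("phone_valid"):
        score += 30
    if lead.get("email_valid"):
        score += 20
    if not lead.get("name_looks_fake"):
        score += 10
    return score

def route(lead):
    """Route high scorers to sales, middling ones to manual review."""
    s = score_lead(lead)
    return "sales" if s >= 70 else "review" if s >= 40 else "reject"
```

A real service would weigh far more signals (device verification, IP geolocation, and so on), but the principle is the same: turn many individual checks into one number a workflow can act on.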

Looking at a formal role of data stewardship in your own organization is a sign that you take data seriously as an asset, and can start making smart moves to protect and expand its value. It helps you think strategically about your data, and teaches everyone to be accountable for their role in it. This, in turn, can become the key to leveraging your organization’s data as a competitive advantage.

Data Quality and the Environment

Service Objects recently celebrated our 15th year in business, which made me reflect on something that is important to me and an underappreciated reason for improving your data quality: protecting our environmental resources.

Lots of companies talk about protecting the environment. Hotels ask you to re-use your towels, workplaces encourage you to recycle, and restaurants sometimes forego that automatic glass of ice water on your table. Good for them – it saves them all money as well as conserving resources. But our perspective is somewhat different because environmental conservation is one of the key reasons I founded this company in 2001.

Ever since I was a young man, I’ve been an avid outdoorsman who has felt a very strong connection to the natural world we inhabit. So one of the things I couldn’t help but notice was how much mislabeled direct mail showed up at my doorstep, as well as those of my friends. Some companies might even send three copies of the same thick catalog, addressed to different spellings of my name. Add in misdirected mail that never arrives, poor demographic targeting, and constant changes in workplace addresses, and you have a huge – and preventable – waste of resources.

As a mathematician and an engineer by training, thinking through the mathematics of how better data quality could affect this massive waste stream was a large part of the genesis of Service Objects. We discovered that the numbers involved were truly staggering. And we discovered that simple, automated filters driven by sophisticated database technology could make a huge difference in these figures.

Since then, our products have made a real difference. Over the past 15 years, our commitment to reducing waste associated with bad address data has saved over 1.2 million trees, and prevented over 150 million pounds of paper from winding up in landfills. We have also saved 520 million gallons of water and prevented 44 million pounds of air pollution. More important, these savings are driven by a growing enterprise that has now validated over two and a half billion contact records for over 2400 customers.

As a company, our concern for the environment goes far beyond the services we provide to customers. We encourage our staff to ride their bicycle to work instead of driving their car, use sustainable office supplies, and keep a sharp eye on our own resource usage. Corporate conservation is one of the four core values of our company’s culture. The result is a team I am proud of, with a shared vision and sense of purpose.

There are many great business reasons for using Service Objects’ data quality products, including cost savings, fraud prevention, more effective marketing, and improved customer loyalty. But to me personally, using a smaller footprint of the Earth’s resources is the core that underlies all of these benefits. It is a true example of doing well by doing good.

For any business – particularly those who do direct marketing, distribute print media or ship tangible products, among many others – improving your data quality with us can make a real difference to both your bottom line AND our planet’s resources. We are proud to play a part in protecting the environment, and look forward to serving you for the next 15 years and beyond.

Your Data’s New Year’s Resolutions

A new year is upon us once again. What are your resolutions for 2017? Hit the gym more often? Get organized? Improve the state of your bank account?

Guess what: your data has a very similar set of resolutions. And automating the quality of your contact data and marketing leads is one of the easiest and most cost-effective ways to reach them because the accuracy of this data has a ripple effect in so many of the areas that will make you successful in 2017.

Let’s look at a few of them:

Saving Money

The economic argument for automating your data quality is extremely well-established nowadays. A relatively small expenditure lets you tap into continually updated databases of addresses, phone numbers, emails and more, to validate and correct contact information before you use it. This not only saves you the costs of manually cleaning up your contact records, but reduces the manpower and time that bad data costs your business.

Meeting Budget Goals

There is much more to your financial picture than cost savings. Your top line revenue is often directly proportional to the quality of your marketing leads. Automated lead validation systems take the work out of optimizing the value of this asset, analyzing over 100 data points including name, address, phone, email, device verification and more to provide a quantitative rating of lead quality. The result? Targeted marketing to your highest quality leads, and a much better ROI on your marketing budget.

Gaining a Competitive Edge

When is contact data more than just contact data? When you can leverage it to append demographic information, phone contacts, or other enhanced lead data. Automated systems can help you turn the data you have into the competitive intelligence you need.

Improving Customer Service

Good service starts with responding to the right people at the right time – and many of your worst service failures start with incorrect or misdirected contact information, ranging from lack of response to lost shipments. And since we now live in a social media world where our service missteps are exposed to the public, the ripple effect of these service failures can quickly spread far and wide.

Other Benefits

Did you know that automated contact validation systems can be your front line partner in a host of other critical business objectives, including fraud control, phone contact and address standards compliance, or sales tax computation? When you harness the power of a third-party vendor’s databases and assessment criteria, you can support many tasks that influence your bottom line.

The New Year is a perfect time to reassess how hard your data is working for you – and more important, how you can painlessly improve these data assets to make your life easier and more profitable. And unlike most people’s personal resolutions, there is no need for them to fall by the wayside if you choose the right partner. By strategically exploring what possibilities you have for your own data, you can take a very big step towards making 2017 your best year yet.

Your Favorite Holiday Characters and Data Quality

The holidays are upon us – one of the busiest seasons for many businesses, particularly in the consumer sector. According to the National Retail Federation, as much as 40 percent of annual sales for small to medium-sized businesses takes place in the last two months of the year. And in 2015, consumers spent a record amount of money on gifts and other purchases celebrating Christmas, Hanukkah or Kwanzaa.

Part of what makes this time of year special for consumers is the lore surrounding the holidays, in timeless stories that get passed down from generation to generation. So in the spirit of the season, Service Objects thought we would take a look at some of your favorite holiday characters and see what they are up to nowadays in the world of data quality. Here is what a few of them are doing in the 21st century:

Jack Frost is no longer nipping at your nose – now he is freezing your credit because you’ve been getting too many chargebacks. Why? Because your holiday packages keep getting sent to wrong addresses provided by your fat-fingered customers. And before long, you learn that a little bit of automated address verification would warm things up nicely.

The Grinch doesn’t have to climb down chimneys to steal presents anymore – these days, he’s into online fraud. And if you don’t start validating his credit information, or checking into why his North Pole IP address doesn’t match his shipping location, you’ll keep getting tricked into making his balance sheet a lot greener.

Rudolph the Reindeer has long since stopped leading Santa around on Christmas Eve, ever since they started making headlights for sleighs. Instead, his nose now lights up red when bad contact data shows up on the list of naughty and nice children in Santa’s CRM database, using a UI that taps directly into extensive contact databases in the US and Canada. Which means that Santa now has far fewer surprises dropping off presents than in years past.

The Little Drummer Boy is now all grown up, and a corporate consultant beating the drum for greater service quality and competitive advantage. And one of the first places he swings his drumsticks towards nowadays is leveraging your contact data as an asset, by continually improving your data quality.

Scrooge is no longer a miser who exploits his workers at Christmas time. Now redeemed by the ghosts of Christmas Past, Present, and Future, today he saves money by reducing your marketing expenses – starting with validating every lead’s contact information and getting lead quality rankings ranging from “Bah, humbug!” to 100, to reduce your costs and manpower.

Service Objects spreads holiday cheer all year round by reducing the waste, fraud and poor customer service caused by bad data. We hope you have a very happy holiday season!

Validating Online Transactions Plays Key Role as Cyber Monday Sets US Online Sales Record

Cyber Monday shattered previous online sales records and set a new all-time high, with consumers opening their wallets and spending $3.45 billion, marking a 12.1% jump over last year’s figure and earning its place in retail history.

The data, compiled by Adobe Digital Insights, surpassed initial estimates and dispelled fears that consumer shopping during the Thanksgiving weekend would hurt sales on Cyber Monday, which is historically the busiest day of the year for internet shopping. Adobe’s data measured 80 percent of all online transactions from the top 100 U.S. retailers.

The record-breaking online shopping put retailers’ data quality at the top of their priority list. Many smart retailers turned to Service Objects to help them make informed decisions about their customers, relying on Service Objects to validate over 1 million of their online transactions on Cyber Monday.

An increase in order volume impacts everything from a store’s inventory levels to brand reputation. Implementing data quality solutions like the ones Service Objects offers allows retailers to:

  • Greatly reduce the number of fraudulent orders by validating that a consumer is really who they say they are;
  • Ensure customers’ orders are delivered to the correct location, heading off customer service nightmares and stopping harmful customer horror stories from going viral on social media;
  • Save significant money by eliminating bad or incorrect address data and increase the percentage of successful package deliveries;
  • Eliminate the headache of dealing with credit card chargebacks caused by missed shipments; and
  • Gain a competitive advantage and increase customer loyalty.

Using a data quality solution is fundamental to turning your customer data into a strategic asset. Read more about the different business challenges that data quality can solve.

What Does Address Validation Offer?

Our USPS CASS Certified™ Address Validation service improves internal mail processes and delivery rates by standardizing contact records against USPS data and flagging for vacancy, addresses returning mail, and general delivery addresses. Our industry-leading GetBestMatches operation now combines Delivery Point Validation (DPV), SuiteLink, and Residential Delivery Indicator (RDI) into one robust API call to our USPS CASS Certified™ database engine.

Delivery Point Validation (DPV)

The DPV 1-4 codes are our way of indicating the deliverability of an address. A quick glance at the DPV code can tell you if an address is deliverable according to the USPS.

DPV can be broken down into the following four codes and their corresponding descriptions:

1: Yes, the input record is a valid mailing address
2: No, the input record is not in the DPV database of valid mailing addresses
3: The apartment or rural route box number is not valid, although the house number or rural route is valid
4: The input record is a valid mailing address, but is missing the apartment or rural route box number
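As a sketch of how downstream code might branch on these codes, here is a minimal example. The function names and handling logic are illustrative only, not part of the service’s actual response schema:

```python
# Map DPV codes (1-4) to the deliverability outcomes listed above.
DPV_DESCRIPTIONS = {
    1: "Valid mailing address",
    2: "Not in the DPV database of valid mailing addresses",
    3: "House/rural route valid, but apartment or box number is not",
    4: "Valid address, but missing apartment or rural route box number",
}

def is_deliverable(dpv_code: int) -> bool:
    """Only DPV 1 is fully deliverable as-is."""
    return dpv_code == 1

def needs_suite_review(dpv_code: int) -> bool:
    """DPV 3 and 4 indicate a correctable secondary-number problem."""
    return dpv_code in (3, 4)
```

A quick check of the code against this table is often enough to decide whether a record can be mailed as-is, needs suite-level correction, or should be rejected.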

SuiteLink

The DOTS Address Validation 3 service has the ability to correct and/or append suite information to an address. Through the use of business names, the service will try to find or append the proper suite information. SuiteLink provides an added level of accuracy to Business to Business relationships by ensuring the proper address and suite information is included in your validated data.

Residential Delivery Indicator (RDI)

The Residential Delivery Indicator enables you to know if an address is residential. This is often important if you are looking for targeted marketing (Business to Consumer). By knowing your target address’s delivery type you can make more informed business decisions.

Coding Accuracy Support System (CASS)

The Coding Accuracy Support System (CASS) enables the United States Postal Service (USPS) to evaluate the accuracy of software that corrects and matches street addresses. This matters because it ensures that our validation system doesn’t make far-reaching changes to your input address. Because we comply with the CASS regulations, the validated address and additional information returned to you actually pertain to the original input address. A company that doesn’t comply with CASS could easily change your input address in a way that completely alters the intended location, rendering your data effectively useless. The DOTS Address Validation 3 service is CASS compliant, and any changes that are made will pertain to the proper address.

Try out Address Validation for your business, for free.

What Is Data Onboarding – And Why Is It Important?

What is the best marketing database of all?

Statistically, it is your own customers. It has long been common wisdom that existing customers are much easier to sell to than new prospects – but what you may not know is how valuable this market is. According to the Online Marketing Institute, repeat customers represent over 40 percent of online revenue in the United States, while being much less price-sensitive and much less costly to market to. Moreover, they are often your strongest brand advocates.

So how do you tap into these customers in your online marketing? They didn’t share their eBay account or their Facebook page with you – just their contact information. But the science of data onboarding helps you turn your offline data into online data for marketing. And then you can do content channel or social media marketing to people who are not just like your customers, but are your customers.

According to Wikipedia, data onboarding is the process of transferring offline data to an online environment for marketing purposes. It generally involves taking this offline data, anonymizing it to protect individual privacy, and matching components of it to online data sources such as social media or content providers. Beyond monetizing customer information such as your CRM data, it has a host of other applications, including:

  • Look-alike marketing, where you grow your business by marketing to people who behave like your customers
  • Marketing channel assessment, where you determine whether your ads were seen and led to increased sales across multiple channels
  • Personalization, where you target your marketing content to specific customer attributes
  • Benchmarking against customer behavior, where you test the effectiveness of your marketing efforts against actual customer purchasing trends

This leads directly to the question of data quality. The promise of marketing technologies such as data onboarding hinges on having accurate, up-to-date and verified data. Bad data always costs your marketing efforts time and resources, but the problem is magnified with identity-based marketing: you lose control of who you are marketing to and risk delivering inappropriate or confusing brand messages. Worse, you lose the benefits of customizing your message to your target market.

This means that data validation tools that verify customer data, such as email addresses, help preserve and enhance the asset value of your offline databases. Moreover, you can predictively assess the value of marketing leads through cross-validating data such as name, street address, phone number, email address and IP address, getting a composite score that lets you identify promising or high-value customer data at the point-of-entry.
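To make the idea of a composite score concrete, here is a toy sketch. The weights and field checks below are invented for illustration and are not Service Objects’ actual scoring model:

```python
# Toy composite lead score: each validated contact field contributes a
# weighted pass/fail signal, scaled to 0-100. Weights are arbitrary.
FIELD_WEIGHTS = {"name": 1, "address": 2, "phone": 2, "email": 2, "ip": 1}

def composite_score(checks: dict) -> int:
    """checks maps a field name to whether that field validated."""
    total = sum(FIELD_WEIGHTS.values())
    earned = sum(w for field, w in FIELD_WEIGHTS.items() if checks.get(field))
    return round(100 * earned / total)
```

For example, a record where only the email validates would score 25 under these invented weights, while a record where every field validates would score 100.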

Online marketers have always had many options for targeting people, based on their demographics, activity, or many other criteria. Now your existing customer database is part of this mix as well. As your data becomes an increasingly valuable marketing asset, taking a proactive approach to data quality is a simple and cost-effective way to guard the value of this information to your marketing – and ultimately, your bottom line.

People, Process, and Technology: The Three Pillars of Data Quality

For many people, managing data quality seems like a daunting task. They may realize that it is an important issue with financial consequences for their organization, but they don’t know how to proceed in managing it. With the right strategy, however, any organization can reap the benefits of consistent data quality, by focusing on three core principles: People, Process, and Technology.

Taken together, these three areas serve as the cornerstones of a structured approach to data quality that you can implement and manage. And more importantly, a framework that lets you track the ROI of successful data quality. Let’s look at each of these in detail:

People

This is frankly where most organizations fail at the data quality game: not allocating dedicated gatekeepers for the health of their data. It is a very easy mistake to make when budgets are tight, resources are focused on revenue-generating functions like sales or product development, and the business case for data quality gets lost amidst a host of competing priorities.

The single biggest thing an organization can do for data quality is to devote dedicated resources to it. This becomes an easier sell once you look at the real costs of bad data: for example, research shows that 25% of all contact records contain bad data, a third of marketing leads use fake names, and half of all phone numbers provided won’t connect. Run these numbers across the direct costs of customer acquisition, add in missed sales opportunities, increased customer care costs, and even potential compliance fines, and you often have the financial justification for a data quality gatekeeper.

Process

How much control do you have over data entry points, data accuracy, and verification? For too many organizations, the answer is none – with resulting costs due to factors such as duplicate data entry, human error, or lack of verification. And who is responsible for maintaining the integrity of your business data? Too often, the answer is “no one,” in a world where data rarely ages well. An average of 70% of contact data goes bad in some form each year, which ushers in yet another level of direct and indirect costs.

One of the more important roles of a data gatekeeper is to have processes in place to manage the touch points for your data, engineer data quality in on the front end of customer and lead acquisition, and maintain this data over the course of its life cycle. Having the right policies and procedures in place gives you control over your data, and can make the mechanics of data quality frictionless and cost-effective. Or as your teachers used to put it, an ounce of prevention is worth a pound of cure.

Technology

Data quality solutions range from simply scanning spreadsheets for duplicates and mistakes, all the way to automated tools for tasks such as address validation, lead validation, and verification of email or phone contact information. And far too often, the solution of choice for an organization is to do nothing at all.

Ironically, using the best available automated tools for data quality is often a surprisingly cost-effective strategy, and can yield your best ROI. Automated tools can be as simple as verifying an address, or as sophisticated as creating a statistical ranking value for the quality of a lead or contact record. Used properly, these tools can put much of the hard work of data quality on autopilot for you and your organization.

Ensuring your organization’s data quality can seem like an overwhelming task. But broken into its component parts – your people, your process, and your technology – it becomes a set of logical steps that pay for themselves very quickly. It is a simple and profitable three-step strategy for any organization that runs on data.

How To Have a Happy, and More Profitable, Holiday Season

The holidays are approaching. For many merchants, this is the busiest and most profitable time of year. Especially if you sell high-ticket items that are popular as gifts. Unfortunately, the holidays are also the high season for incorrect and fraudulent orders.

First of all, both your staff and your customers are human – and it is easy for both to be more human than ever over the holidays, because of high transaction volumes, rush orders, and the stress of the season. A bungled address or a credit card problem can have effects ranging from time and human intervention to the possible loss of merchandise. And your valuable customer service reputation can also take a hit from order and delivery errors.

Merchants are also often targeted for intentional fraud over the holidays. Did you know that stolen credit card numbers sell on the black market “dark web” for as little as $5 each? And that cards with what hackers call “full info,” including name, address, expiration date, verification info and CVV can be had worldwide for $30-40 apiece? It is a small price to pay for someone who uses that card with a fraudulent name and mailing address – and a large cost to you when they use it to order expensive merchandise from your business.

The cost of fraud to unsuspecting businesses can sink your profit margins. Small businesses, in particular, can be on the hook for merchandise ordered by thieves who never intend to pay, and this kind of fraud costs US businesses $14 billion per year. Online merchants have become particularly vulnerable, with a 9-12% year-over-year increase in fraud levels as of 2016. Beyond chargebacks for stolen or fraudulent credit cards, there are costs in your time and manpower involved – not to mention the aggravation of unexpected losses.

Thankfully, you can mitigate a great deal of this risk with a little planning. Here are some tips for putting a little more holiday cheer back into your busiest season:

Look for the red flags

Fraudsters often have a common modus operandi. Look for orders where the “Bill to” and “Ship to” addresses are different, unusually large orders from unknown sources, or international orders, particularly from developing countries. And particularly around the holidays, pay attention to last minute high-ticket orders with rush shipment, where merchandise can fall into the wrong hands before the fraud is discovered.
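Rules of thumb like these translate naturally into an automated pre-screen. The sketch below is illustrative only; the field names and dollar thresholds are invented, and a flagged order means “review manually,” not “fraud”:

```python
def fraud_red_flags(order: dict) -> list:
    """Return a list of red flags for manual review. An empty list means
    none of these heuristics fired; it does NOT prove the order is safe."""
    flags = []
    if order.get("bill_to") != order.get("ship_to"):
        flags.append("bill-to and ship-to addresses differ")
    if order.get("total", 0) > 1000 and order.get("known_customer") is False:
        flags.append("unusually large order from an unknown source")
    if order.get("country", "US") != "US":
        flags.append("international order")
    if order.get("rush_shipping") and order.get("total", 0) > 1000:
        flags.append("last-minute high-ticket rush order")
    return flags
```

A screen like this costs almost nothing to run on every order, and routes only the suspicious minority to a human for review.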

Have a policy

What procedures do you follow when you receive a questionable order? What procedures do you follow to make sure that orders are valid or accurate? Are there circumstances where it would be prudent to delay shipment or decline an order? Learn the norms for vendors in your business, and teach all of your employees to follow them.

Validate your orders

For most businesses, one of the most reliable solutions is to use an inexpensive online service to verify the authenticity of a customer. A validation service can compare your order information against existing databases to quickly answer critical questions like whether an address is valid, a name or its corresponding credit information is legitimate, whether the IP address of an online order matches the address you are shipping to, and much more.

One solution for this is Service Objects’ DOTS Order ValidationSM, a real-time API that verifies, standardizes and authenticates customer order information. It performs 200 proprietary tests including address, BIN, email and IP validation, giving you an overall order score of 0-100 to flag suspicious orders. Check it out today and make sure your business avoids the perils of fraudulent orders this holiday season.

1 Danielson, Tess, “Here’s exactly how much your stolen credit card info is worth to hackers,” Business Insider, Nov. 30, 2015.

2 Heggestuen, John, “The US Sees More Money Lost to Credit Card Fraud than the Rest of the World Combined,” Business Insider, Mar. 5, 2014.

3 LexisNexis, 2016 LexisNexis® True Cost of FraudSM Study

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Integrating Service Objects with Salesforce

Salesforce is a great CRM that allows businesses to easily put customers at the center of their attention. But even a tool like Salesforce can be hampered by poor-quality customer data. Luckily for businesses and their data, Salesforce allows its users to call outside APIs and web services like our data validation services. If this is something that you haven’t done before, it can be a bit tricky, but don’t worry! The integration specialists here at Service Objects have you covered, and we can show you how to get up and running in Salesforce in no time!

Remote Site Settings

One of the first things you will want to do to get your data validated is to add the Service Objects domain as an allowed site to access from within Salesforce. To do this, log into your Salesforce account and enter “Remote” in the Quick Find search bar as shown below.

[Screenshot: searching for “Remote” in the Quick Find bar]

Select “Remote Site Settings” and you will be taken to a page that lists all of the external sites that you can access from within Salesforce. Select “New Remote Site” and enter the information as shown below.

[Screenshot: the New Remote Site settings form]

Once you click “Save” you will be ready to validate your data through a Service Objects web service. You should also add our backup domain to your remote site settings, as this will allow you to build proper failover behavior into your application.


If you are using REST to access a Service Objects web service, then you are good to start validating data. You simply have to make the HTTP call to one of our web services and then decide how you want to implement your newly validated data.
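In any REST-capable environment, the call itself is just an HTTP GET with your input fields and license key as query parameters. Here is a minimal Python sketch; the host, path, and parameter names below are placeholders, so consult the service’s own documentation for the real endpoint:

```python
from urllib.parse import urlencode

def build_validation_url(host: str, path: str, params: dict) -> str:
    """Assemble the GET URL for a hypothetical validation endpoint.
    In production you would issue the request (e.g. with urllib.request)
    and retry against a backup host on failure."""
    return f"https://{host}/{path}?{urlencode(params)}"

url = build_validation_url(
    "service.example.com",                # placeholder host
    "AddressValidation/GetBestMatches",   # placeholder path
    {"Address": "27 E Cota St", "City": "Santa Barbara",
     "State": "CA", "PostalCode": "93101", "LicenseKey": "YOUR-KEY"},
)
```

The response (typically JSON or XML) then carries the standardized address and indicators such as the DPV code, which your application can branch on.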

If you happen to be using SOAP to connect to our services, you will need to alter the WSDL you use. A WSDL is a machine-readable document that tells a platform and coding language how to connect to and interact with a web service, and a few parts of it need to be adjusted for Salesforce. Lucky for you, we have already done this! We have a full set of updated WSDLs, Apex code examples, and handy guides for each of our services that will help you get up and running in Salesforce in no time.

To upload the edited WSDL, select the “Develop” link under the “Build” heading on the left-hand side of the main screen. Then select “Manage Apex Classes”.

[Screenshot: the Manage Apex Classes page]

On this screen, you can select the “Generate from WSDL” button and then choose the updated WSDL to upload to Salesforce. If the WSDL has been properly edited, all the necessary classes will be successfully created and you can begin accessing Service Objects web services through Salesforce.

Your data is now ready to be validated! Once you have ensured the integrity of your customer data, you can get back to using Salesforce to guarantee the best interactions possible with your customers!

Also, be sure to check out our Free Salesforce Chrome Extension for a quick and easy validation tool to use in your browser!

How Lead Validation Works

As a marketer, one of your jobs is to ensure that your sales team has access to high-quality leads. So how do you go about screening your lead lists for accuracy? Believe it or not, most marketers are doing very little to measure the quality of their leads, leaning heavily on the lead scoring tools in their marketing automation platforms. But this is not enough, and is only one small part of the bigger lead validation picture. In fact, according to a study conducted by Straight North, about half of the leads generated by your marketing campaigns are not actual sales leads.

In our previous blog, ‘Custom Lead Scoring and How it Works’, we touched on the perils of lead scoring. So what can you do to ensure you’re handing off high-quality leads to your sales team? Service Objects’ DOTS Lead Validation solution is a good place to begin.

How does it work?

Now, you’re probably wondering: how does Service Objects “know” whether a lead is a legitimate person or viable contact? Without getting too technical or giving away trade secrets, I can tell you that we use a combination of our best-of-breed data quality tools to look at key elements of a contact record. By examining a combination of the contact record’s data points, such as name, email, address, IP address, and device, we are able to derive an overall lead certainty and quality score for each contact record. This is what DOTS Lead Validation does.

Once you have these lead quality scores, you can create specific rules and actions based on the scores. To give you some real-world scenarios, let’s run through a couple quick examples showcasing how you can use Lead Quality scores to develop rules-based actions that will streamline your marketing and sales processes:

Chip’s call center

After implementing DOTS Lead Validation API into their lead capture process, “Chip’s Call Center” sets up the following rules for their incoming, real-time leads, making it more efficient for their sales team to prioritize and respond to their potential customers.

If the lead’s quality score is between:

  • 75-100, Lead is systematically moved to the top of the call center’s call list and assigned to the best performing sales reps.
  • 60-74, Lead is assigned secondary priority on call list and sent to second tier sales reps.
  • 50-59, Lead is sent an email asking for further confirmation/interaction
  • 25-49, Lead is placed on an email drip campaign (cost effective)
  • 0-24, These leads are largely considered time-wasters and ignored or placed on a monthly newsletter with low-conversion expectation
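Rules like Chip’s translate directly into a routing function. A minimal sketch, with invented tier labels:

```python
def route_lead(score: int) -> str:
    """Route an incoming lead by its 0-100 quality score,
    following the tiers described above."""
    if score >= 75:
        return "top of call list / best reps"
    if score >= 60:
        return "secondary call list / second-tier reps"
    if score >= 50:
        return "confirmation email"
    if score >= 25:
        return "email drip campaign"
    return "monthly newsletter"
```

Because the thresholds live in one place, Chip can tune the tiers as his team learns which scores actually convert.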

Martha’s marketing agency

Martha is driving traffic from multiple channels and audiences within these channels, and needs to decide the best way to allocate her monthly budget. The sales cycle for her company is 9-12 months, so she has little concrete data from which to base her decisions.

Most of Martha’s leads are delivered from external sources like AdWords, LinkedIn and Facebook. With Lead Validation, Martha can easily determine the lead quality scores for these channels. Here’s how that score report might look:

  • AdWords – Campaign #1, average Lead Quality score of 78
  • AdWords – Campaign #2, average score of 36
  • LinkedIn, CEO audience – 90
  • LinkedIn, IT/Developer audience – 54
  • Organic, average score of 87

Based on these average scores, Martha decides to place the majority of her marketing budget in the top scoring channels.

Heather’s house of iPhone accessories

Heather wants to buy a highly-targeted email list to drive traffic to her online store. In the past, she has purchased lists that did not perform, and she needs a reliable way to pre-determine the quality of these lists before buying them. As part of her negotiation with the list broker, she asks them to, “Please bounce your email list against Service Objects’ Lead Validation service and let me know the average score. I only want to buy leads that score a 50 or higher.”

The point I’m highlighting here is that by investing in a Lead Validation Service to determine Lead Quality scores, your company can create whatever strategies, rules, and actions that best fit YOUR experiences and needs. Even more importantly, you can preset sales quotas and expectations around the different lead quality scores.

The power of lead validation

DOTS Lead Validation is the first step in recouping the costs of the inevitable 50% of incoming sales leads that are unusable. It is a great resource for increasing ROI on your marketing campaigns, saving your company unnecessary costs, improving marketing and sales efficiency, and even elevating overall company morale. Incorporating a lead validation system will help you build a successful, sustainable business, and we would love to help you get started!

What’s Dragging Your Marketing Campaigns Down?

Savvy marketers know just how important it is to capitalize on every customer lead. And as technology advances and our customer lists expand, so does the need to streamline our marketing campaigns and processes. But so many businesses are missing the boat when it comes to keeping their contact records accurate, genuine, and up to date. The reason? Poor contact record data quality.

Let’s face it, marketing platforms can only take your campaigns so far. The reality is that they are only as good as the contact records within them, leaving a lot of room for error. Yes, it is humans entering the data, and we know that humans make mistakes. From duplicate records, falsified info, and simple data entry errors to aging, out-of-date, or lost contact records, these problems all contribute to the erosion of your company’s contact data.

To give you some perspective, SiriusDecisions reports that 25% of customer data is inaccurate, containing critical errors. CRITICAL! That means you may be missing out on converting 25% of your leads into customers; in other words, 1 out of 4 of your contact records is corrupt. The sooner you can acknowledge that contact data quality issues exist in your company, the sooner you can move towards a solution.

The Lead Form - by the Numbers
Bad Data Equates to Negative Impacts

The first step in fixing the problem is to locate the areas being impacted by bad contact record data and then pinpoint those areas within your marketing platform that need correcting. Quite simply, there are both soft and hard costs associated with poor data quality. These are the areas where you may see and feel the effects initially (soft costs):

  • Decreased productivity from your sales teams
  • Increased costs for lead and customer acquisition, marketing automation, and CRM fees
  • Misspent marketing labor costs
  • Increased risk of regulatory fines (e.g., Do Not Call compliance, SPAM traps)
  • Increased customer care efforts (resulting from unhappy customers)

Over time, these build up and begin to cause larger problems (hard costs), with the major impacts realized through increased costs and lost revenue. Perhaps you’re experiencing a high employee turnover rate, or realizing that customers are walking? These consequences could be happening as a result of poor-quality data.

The point is, you need to enact some cost effective measures that will identify where the bad data resides and then correct it. Once you’re back on track, you’ll want a maintenance and management system in place to keep your campaigns moving along smoothly and productively, i.e., via a data governance policy. Long story short, bad data is in your contact records and it is dragging down your marketing campaigns. But…it CAN be prevented!

Hungry for more? You can get the full picture about the impacts of poor data quality and how to prevent and improve it by downloading our white paper, “Marketing with Bad Contact Data: A Recipe for Disaster.”

Your Own ‘Big Data’ is Silently Being Data Mined to Connect the Dots

With apps like Facebook and Waze, and the release of iOS 9, your cell phone is now quietly mining data behind the scenes, and you probably didn’t even realize it. Don’t be afraid, though. This isn’t Big Brother trying to watch your every move; it’s data scientists trying to help you get the most out of your phone and its applications, ultimately trying to make your life easier.

Here are a few things your phone is doing:

Data mining your email for contacts

Since it was released late last year, Apple’s newest iPhone operating system (iOS 9) searches through your email in order to connect the dots. For example, let’s say that you get an email from Bob Smith and the signature line gives his phone number. iOS 9 records this so that if a call comes in from that number and Bob isn’t in your contacts, Apple shows the number with text underneath that says “Maybe: Bob Smith”.

Apple was quick to point out that this automatic search is anonymous – not associated with your Apple ID, not shared with third parties, nor linked to your other Apple services, and you can turn it off at any time in your Settings.

Mining your data via Facebook’s facial recognition

Upload a photo with friends into Facebook and it will automatically recognize them and make suggestions for tagging them based on other photos of your friends.

When facial recognition first launched on Facebook in 2010, it automatically matched photos you uploaded and tagged your friends accordingly. This spooked so many users that Facebook removed the feature. They later brought it back, this time asking users to confirm the suggested tags first. They also included the ability to turn it off altogether for those who thought it was still too ‘Big Brother’. You can turn it off via Facebook Settings -> Timeline and Tagging -> “Who sees tag suggestions when photos that look like you are uploaded?”

Waze crowd-sourced data mining for traffic

Google purchased Waze in 2013 for $1.3 billion, and people wondered, “Why so much?” Quite simply: because of the data. Accepting the terms of the app when you install it means that, even when running in the background, the app sends Waze data on where you are and how fast you are driving. Waze had amassed a large enough user base to provide a constant stream of real-time traffic data. The users are both the source of how fast traffic is moving on any given road at any given time and the beneficiaries of knowing how fast everyone else is going on all other roads, with no need for cameras or special sensors on the freeway. This meant Google could use the real-time data to make better maps and traffic projections, and re-route you based on traffic and incidents other users had reported to Waze.

Here is a case where, if you had read the fine print of the app user agreement, you might have second-guessed your download. But like nearly everyone else, you probably didn’t read it, and you are now avoiding traffic because of it.

Un-connecting the dots

Sometimes Big Data will have connected the dots, but you’d like to undo the connection. A recent article in the New York Times gave examples of how people managed breakups on social media:

‘I knew that if we reset our status to “single” or “divorced,” it would send a message to all our friends, and I was really hoping to avoid a mass notification. So we decided to delete the relationship status category on our walls altogether. This way, it would disconnect our pages quietly. In addition, I told him I planned to unfriend him in order to avoid hurt feelings through seeing happy pictures on the news feed.’

As ‘Big Data’ connections become more prevalent, so too, luckily, do the tools that help undo them. Facebook’s “On This Day” feature allows you to filter out specific friends so that you aren’t shown memories of exes that you’d rather not see.

Here at Service Objects, we are constantly looking at connecting the disparate dots of information to make data as accurate and as up-to-date as possible. Whether scoring a lead the second a user fills out their first online form on your website or preventing fraudulent packages from being delivered to a scam artist’s temporary address, having the freshest, most accurate data allows companies to make the best decisions and avoid costly mistakes.

What is a Data Quality Scan and How Can it Help You?

Marketing Automation and Your Contact Records: A Five Part Series That Every Marketer Can’t Miss (Part 5)

Putting Your Contact Records to the Test!

Now that you’ve read our series on how the quality of contact record data impacts the performance of your marketing automation efforts, we are hopeful that you better understand the importance of correcting and cleaning up these records. The next step is learning how your contact records measure up. To help, Service Objects offers a complimentary Data Quality Scan, providing a snapshot of where your data stands in real time and where improvements can be made.

How does the Data Quality Scan work? It’s simple: we use a small sample of your data (up to 500 records) to find out not only where your data is “good”, but more to the point, what percentage of your data is bad and/or correctable. You’ll receive a data quality score for each of the five critical lead quality components:

  • Email address
  • Phone number
  • Name
  • Mailing address
  • Device

Check out a sample Data Quality Scan to see how each of these components are scored and the detailed information provided.

Once you get your results, you will have an opportunity to work with a Data Quality expert (yes, a real, live person) who will help you decide how best to correct and improve your contact records. From start to finish, think of them as your personal guide to improving your contact record data quality.

Leave Your Worries Behind

All in all, the Data Quality Scan is here to help marketers expose data quality problems within their companies, and based on industry research, the percentage of bad records is likely to be around 25%. It certainly wouldn’t hurt to give it a try and see for yourself, right? At the very least, ongoing data validation and correction services should become a priority within your marketing automation best practices.

And finally, before we sign off on this series, I wanted to leave you with this thought: When it comes to marketing, you have plenty to worry about. Constantly verifying and updating your marketing automation contact records should NOT be one of those worries. Implementing a data governance strategy around your contact records will ensure your records are always accurate, genuine, and up to date!

If you’re new to this series, start from the beginning! Read Part 1: Make Marketing Automation Work for You


Contact Data Governance in Action: It’s All About Validation & Verification

Marketing automation and your contact records: A five part series that every marketer can’t miss (part 4)

“Well-integrated and accurate customer data is one of the best assets marketers have at their disposal to effectively personalize and engage customers, drive conversion rates, boost loyalty and trust, and ultimately maximize sales.” (Vera Loftis, UK managing director of Bluewolf, a Salesforce consultancy)

Bottom line: you want the very best long-term outcome for your marketing campaigns, yes? The goal is to build a sustainable marketing automation platform, and in the world of data integrity, validation and verification are the keys to unlocking its success. And this really needs to happen at the beginning: at the point of data capture, and when migrating to a new marketing automation or CRM platform. Here’s the opportunity you’ve been waiting for!

Migrating data between platforms allows for a fresh, clean start. This is an ideal time for your Data Quality Gatekeeper to launch a preemptive strike against bad data (validate and verify). Essentially, they need to make sure every field being imported is accounted for BEFORE the data is migrated: mailing addresses, phone numbers, and email addresses can all be validated and brought up to date, and demographic and firmographic information can be attached to the corresponding contact records. And let’s not forget data formatting and purging duplicate records. This is the time for remediation.
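To make the gatekeeper idea concrete, here is a minimal sketch of a pre-import pass that validates, normalizes, and de-duplicates contact records before migration. The field names, the simplified email pattern, and the accept/reject logic are assumptions for illustration, not a real migration tool.

```python
# Illustrative pre-import gatekeeper: validate, normalize, and
# de-duplicate contact records before loading them into a new platform.
import re

# Deliberately simplified email shape check for the sketch.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def clean_for_import(records):
    """Return (records ready to import, records needing remediation)."""
    seen, ready, rejected = set(), [], []
    for rec in records:
        email = rec.get("email", "").strip().lower()   # normalize formatting
        if not EMAIL_RE.match(email):
            rejected.append(rec)                       # fix or discard first
            continue
        if email in seen:                              # purge duplicates
            continue
        seen.add(email)
        rec["email"] = email
        ready.append(rec)
    return ready, rejected

records = [
    {"name": "Ann", "email": "Ann@Example.com"},
    {"name": "Ann", "email": "ann@example.com "},  # duplicate after normalizing
    {"name": "Bob", "email": "not-an-email"},
]
ready, rejected = clean_for_import(records)
print(len(ready), len(rejected))
```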

Implementing the right tools

“To handle source data issues upfront, organizations need to employ a powerful, automated discovery and analysis tool that provides a detailed description of the data’s true content, structure, relationships, and quality. This kind of tool can reduce the time required for data analysis by as much as 90 percent over manual methods.” (Jeffrey Canter, Executive VP of Operations at Innovative Systems, Inc.)

Now that you’ve followed our recommendations and have your gatekeeping system (or data governance strategy) up and running, perhaps you’re thinking, “What’s the best way to do regular data quality check-ups and maintenance from here on out?” Honestly, you need a systematic solution you can trust and rely on, one that will continually provide THE most accurate, genuine, and up-to-date data for your company. AND you will want a program that, once it’s set up and running, lets you get on with other important jobs. Seriously, it should be as simple as “set it and forget it”.

Look no further! Service Objects provides a comprehensive set of solutions that will validate and verify your data as it’s being migrated or captured in real-time. Let’s take a look at just some of our Data On-Time Solutions (DOTS) you can choose from:

Lead Validation: A real-time lead validation service that verifies contact name, business name, email, address, phone, and device while providing an actionable lead score.

Email Validation: Verifies that the email addresses in your contact and lead database are deliverable at the mailbox level. This service corrects and standardizes addresses while assessing whether they are fraudulent, spam traps, disposable, or catch-alls.

Address Validation: Uses USPS® certified data to instantly validate, parse, correct, and append contact address data, including locational meta-data and the option for Delivery Point Validation (DPV).

NCOA LIVE: Keeps your contact mailing list accurate and up-to-date with data from the USPS® National Change-of-Address database (NCOALink).

Email Insight: Appends demographic data to email addresses from a proprietary consumer database with over 400 million contacts. Supplies you with valuable information such as the city, geolocation, age range, gender, and median income of the address owner.

GeoPhone: Gives you accurate reverse telephone number lookup of US and Canadian residential and business names and addresses.

Phone Append: Supplies you with the most accurate lookup of landline telephone numbers available.

This is quite a lot of information to digest for sure. So, how do you know “what” solutions will work best for YOUR platform? Depending on your individual needs, our free Data Quality Scan will determine where your contact data might be falling short. Then you can take steps to clean up your contact records for good.

Up next: Part 5: What is a Data Quality Scan and How Can it Help You?

Data Breakdown Happens. Know the Reasons “Why” and Protect it from the Start

Marketing automation and your contact records: A five part series that every marketer can’t miss (part 3)

Welcome back! The fact that you’re interested in improving the data quality within your marketing automation platform is apparent, so let’s look at how the data breakdown happens so we can build it back up. The goal is to find sustainable and trustworthy ways to correct and effectively manage data from here on out.

So who’s responsible for ensuring good data quality? Your IT department? If only it were that simple. The fact is, your whole TEAM is responsible, and it starts with a company culture of data quality and best practices. Senior managers must do their part to integrate and maintain solid data quality policies that are easy to follow and implement. Next, marketing managers must screen and clean up new lists before importing them. Finally, your sales team must practice diligence when entering customer data. And even if ALL of these groups commit to following these guidelines in the most stringent and cautious manner, you can bet that mistakes will still happen! We are, after all, human.

Now that we’ve identified the players, we can pinpoint what makes our data corrupt:

  • Inaccurate data: either info is entered incorrectly, or it becomes outdated
  • Duplicate data: multiple accounts are mistakenly set up for the same lead, often because companies do not centralize their data
  • Missing data: some fields are simply left empty
  • Non-conforming data: inconsistent field values, e.g., using various abbreviations for the same thing
  • Inappropriate data: data entered in wrong fields
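The categories above can be made concrete with a toy data-profiling pass. The field names, required-field list, and alias table below are illustrative assumptions, not a real profiling product:

```python
# Toy profiler that flags the record problems listed above:
# missing fields, non-conforming values, and duplicate accounts.

REQUIRED = ["name", "email", "state"]
# Hypothetical alias table: variants that should be the canonical "CA".
STATE_ALIASES = {"calif.": "CA", "california": "CA"}

def profile(records):
    """Return a list of (record index, issue type, field) tuples."""
    issues, seen_emails = [], set()
    for i, rec in enumerate(records):
        for field in REQUIRED:
            if not rec.get(field):
                issues.append((i, "missing", field))
        if rec.get("state", "").lower() in STATE_ALIASES:
            issues.append((i, "non-conforming", "state"))
        email = rec.get("email", "").lower()
        if email and email in seen_emails:
            issues.append((i, "duplicate", "email"))
        seen_emails.add(email)
    return issues

records = [
    {"name": "Ann", "email": "a@x.com", "state": "CA"},
    {"name": "", "email": "a@x.com", "state": "Calif."},  # three problems
]
print(profile(records))
```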

Would you agree that the four most common data fields in your lead forms are name, email, address, and phone? Statistics from Salesforce highlight just how difficult it is to keep these fields current with changing customer demographics. It’s overwhelming, especially when you consider the rate at which data is being captured. Fortunately, there IS a better way!

Turning strategy into action

We’ve discussed how even the best and most efficient internal processes cannot completely wipe your data clean of corruption. If only they could! But there are additional steps you can take to keep moving in the right direction. You can implement a data governance strategy, which “refers to the overall management of the availability, usability, integrity, and security of the data employed in an enterprise. A sound data governance approach includes a governing body or council, a defined set of procedures, and a plan to execute those procedures”. Here’s what you can do:

  • Conduct regularly scheduled data quality check-ups & maintenance scans: By working with an external source to automatically scan, identify, and update your data, you will save money and headaches in the long run. Send in the cavalry!
  • Bump up personal accountability: Consider training your sales reps and then monitoring their data input via their login info. Like Big Brother, but less invasive.
  • Hire data quality managers: They have oversight of all data input, monitoring, and management. Think of them as Data Quality Gatekeepers – like a mini task force.

Congratulations! You are well on your way to stopping the bad-data cycle, and the most important step in doing so is running regular data check-ups and continued maintenance. So how do you set your business up for success? Come back for the 4th part in our series, where we show you how to ensure your records are genuine, accurate, and up-to-date. It’s as easy as one…two…THREE!

Be sure to start from the beginning! Check out Part 1: Make Marketing Automation Work for You

If you are interested, Service Objects provides a free Data Quality Scan that will give you insight into the quality and accuracy of your contact data. Click here to get your free scan.

Up next: Part 4: Contact Data Governance in Action: It’s All About Validation & Verification


Data Quality and Political Advertising

One benefit or problem of being in the marketing business, depending on your inclinations, is that you become acutely aware of the marketing that goes on around you. In many ways, that awareness is a great thing. Every day you can see great, and not so great, marketing ideas, concepts, and implementations, and we learn from what others do. Now that marketing has become such a data-driven endeavor, the combination of ideas and data is the key to marketing and business success, especially in political advertising.

The pundits are predicting that between now and November, spending on political advertising will set new records, with some estimates reaching as high as $6.5 billion across TV, print, and digital media. The people who implement political advertising are becoming more sophisticated in their ability to target potential voters, thanks to the tools and techniques the rest of us marketers use every day.

Here’s where the problems start. Many people and organizations rely solely on email addresses for their marketing. Yet political processes such as elections are largely local affairs that rely on real people having real addresses. A great way to waste money is to throw advertising spend at the wrong people, and that would be all too easy to do here.

The connection to data quality starts to emerge. One factor in defining data quality is completeness. Whether data is correct or not is easy to understand; completeness is a little harder to grasp. In an election scenario, an email address alone may not be of much benefit. Knowing a street address, with some certainty that the address is correct, moves us in the right direction. Appending the demographics of the location to the record speeds the move toward valuable data, and being able to attach a score to measure the quality of a lead helps. Yes, for marketing purposes, a potential voter is a lead.

Those of us not in the election business can learn a few things from those that are. The marketing has to be done now. The election date isn’t moving. Getting things right the first time becomes more critical. Time for lots of A/B testing simply doesn’t exist. We can have great creative, but the effectiveness of our work diminishes quickly with poor lists. The need for data quality becomes more apparent due to the compressed timeframe.

If you are in marketing, keep your eyes and ears open in the next few months. What we hear and see from the political process is going to be interesting. The more that data quality plays a part, the more effective all that spend will be.

Where Does Bad Data Come From?

We talk a great deal about data quality, validating information, and the impact on our business. Do we ever stop and think where bad data comes from? It’s not like there is some bad part of town where bad data hangs out as in some B-movie. Bad data doesn’t spontaneously appear as some clouds part. It’s not delivered by some evil version of the stork. Bad data has to come from someplace, but where?

I like to put the sources of bad data into one of three categories: people, processes, and policies. It’s not that any of this happens intentionally. In the course of doing business, we make decisions or perform actions that impact data quality. If we understand the source, we can be better prepared to address the issues. Let’s look at the categories:

The first source of bad data is people. People do enter names like “Mickey Mouse” in a web form to download a piece of information. The resulting lead quality is now very low. If I’m a salesperson, I want to be selling so I may not be very diligent entering prospect information into a CRM system. In many instances, people just don’t know. How many of us know the full 9 digits of our home zip code? Could you properly format an address on a letter to France? How many different versions of a company name could be in the order entry system because the contact center people want to get the order booked? None of this is malicious, but it happens.

The second category, process, is a little more subtle. Two companies combine through a merger or acquisition. Those companies have different ERP systems, and chances are the data in the two systems isn’t consistent, so we now have a data quality problem in finding the common customer records. Even within a single organization, the people in accounts receivable may be treating data differently than the people in shipping. When a customer moves, the process to update the customer’s record may not get enough attention; the orders and invoices are now going to the wrong place, costing money and lowering customer satisfaction.

Policies can be external to an organization. Did you know that over 100 different postcode formats exist across the globe? In the US, we don’t even call them postcodes; we call them zip codes. Many countries don’t have postcodes at all. In countries like Japan, the format of the address changes depending on the language in which the address is written. The US includes states as a part of the address; most countries don’t. What happens to our data and our customers if we require a state and US-format zip code on a web form? You get the picture by now.
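A small sketch shows why a one-size-fits-all ZIP code rule breaks international data. The patterns below are deliberately simplified stand-ins for real postcode rules, and the country list is illustrative:

```python
# Country-aware postcode checking, illustrating why requiring a
# US-format zip code on a web form corrupts international addresses.
import re

POSTCODE_PATTERNS = {
    "US": r"^\d{5}(-\d{4})?$",                     # ZIP or ZIP+4
    "GB": r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$",   # simplified UK format
    "JP": r"^\d{3}-\d{4}$",                        # Japanese postal code
    "AE": None,  # some countries have no postcodes in general use
}

def postcode_ok(country, code):
    pattern = POSTCODE_PATTERNS.get(country)
    if pattern is None:          # no rule known: don't reject the record
        return True
    return re.match(pattern, code or "") is not None

print(postcode_ok("US", "93101"))
print(postcode_ok("JP", "100-0001"))
print(postcode_ok("GB", "93101"))   # a US-only rule would wrongly accept this shape
```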

Rather than bemoan the state of data quality, let’s be aware of the sources. When we build our ERP systems, install our marketing automation systems, and create our websites, we should think about what can happen. From that point, we can help the people who use these systems, and the policies and procedures around them, cope with all the issues. Improving data quality at the source has huge payoffs.

30% of the data in your marketing automation platform is likely incorrect – see how bad your data is with a free scan!

Cold Calling in an Election Year

This election year has already had its share of surprises and upsets, and it’s just getting started. Political parties and individual campaigns alike are scrambling to reach as many members of the community as possible. Networks of volunteers are cold calling voters while robocalling systems dial numbers at an astounding rate; one unattributed estimate puts the volume at around 50,000 calls per day from a single party’s headquarters alone.

That’s a lot of calls being made. Imagine 50,000 calls per day until November 8th.

That’s roughly 12 million phone calls, and that estimate is for just one political party.

As phone validation experts, we question how these massive lists have been prescreened to ensure compliance on a number of issues.

For example, robocalling systems are prohibited from dialing wireless phone numbers. With so many phone numbers having been ported from landline to mobile, this leaves a much smaller pool of potential voters accessible, at a time when political parties and campaigns are under serious pressure to get out the vote. By current estimates, the pool of active landline numbers in the United States has shrunk from 175 million to roughly 65 million.

Many wireless numbers are unidentifiable by contact name, even with Caller ID, unless that number has previously been registered via an opt-in site. To comply with the Telephone Consumer Protection Act of 1991, political telemarketers need to perform phone validation on their massive lists in order to track local number portability and identify wireless and VoIP phone numbers.

Compiling a massive list in such a short time would require access to lists which may contain numbers that have since been ported or disconnected. Thus, campaigns and their robodialers may be dialing numbers that they believe are landlines, yet are really mobile phones. They may also be wasting valuable time and resources dialing disconnected numbers. Phone validation software both identifies wireless and VoIP numbers and scrubs lists for disconnected numbers.
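As a sketch of the scrubbing step described above, the following separates dialable landlines from wireless, VoIP, and disconnected numbers. The `lookup_line_type` callable and the sample data are hypothetical stand-ins for a real phone validation API, not an actual client library:

```python
# Hypothetical scrub pass over a robodialer list: keep only numbers
# a validation lookup reports as active landlines; everything else
# is a TCPA risk (wireless/VoIP) or wasted effort (disconnected).

def scrub_dial_list(numbers, lookup_line_type):
    """Split numbers into dialable landlines vs. numbers to exclude."""
    dialable, excluded = [], []
    for number in numbers:
        info = lookup_line_type(number)  # e.g. {"type": "wireless", "active": True}
        if info["type"] == "landline" and info["active"]:
            dialable.append(number)
        else:
            excluded.append(number)
    return dialable, excluded

# Fake lookup results standing in for a real validation service:
fake_db = {
    "805-555-0100": {"type": "landline", "active": True},
    "805-555-0101": {"type": "wireless", "active": True},   # ported to mobile
    "805-555-0102": {"type": "landline", "active": False},  # disconnected
}
dialable, excluded = scrub_dial_list(fake_db, fake_db.get)
print(dialable)
```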

As the race to gain key electoral votes continues, many parties are feeling the pressure to ramp up efforts to swing votes in their favor. This sense of urgency is likely enticing political organizations to seek fresh contact leads, and knowing whether it is legal to contact someone thanks to real-time phone validation would be of utmost value. Compliance is essential, and easy to accomplish with our real-time phone validation API which instantly identifies whether a phone number belongs to a landline or wireless phone.

Leveraging SSD (Solid-State-Drive) Technology

Our company recently invested in SSD (solid-state-drive) arrays for our database servers, which allowed us to improve the speed of our services. As you likely know, it’s challenging to balance cost, reliability, speed and storage requirements for a business. While SSDs remain much more expensive than a performance hard disk drive of the same size (up to 8 times more expensive according to a recent EMC study), in our case, the performance throughput far outweighed the costs.

Considerations before investing in SSD


As we researched our database server upgrade options, we wanted to make sure that our investment would yield both speed and reliability. Below are a couple of considerations when moving from traditional HDDs to SSDs:

  • Reliability: SSDs have proven to be a reliable business storage solution, but transistors, capacitors, and other physical components can still fail. Firmware can also fail, and wayward electrons can cause real problems. As a whole, HDDs tend to fail more gracefully in that there may be more warning than a suddenly failed SSD. Fortunately, Enterprise SSDs are typically rated at twice the MTBF (mean-time-between-failures) compared to consumer SSDs, a reliability improvement that comes at an additional cost.
  • Application: SSDs may be overkill for many workloads. For example, file and print servers would certainly benefit from the superior I/O of an SSD storage array, but is it worth the cost? Would it make enough of a difference to justify the investment? On the other hand, utilizing that I/O performance for a customer-facing application or service would be most advantageous and likely yield a higher ROI. In our case, using SSDs for data validation databases is a suitable application that can make a real difference to our customers.

How SSDs have improved our services

Our data validation services rely on database queries to generate validation output. These database queries are purely read-only and benefit from the fastest possible access time and latency — both of which have been realized since moving our data validation databases to SSD.

SSDs eliminate the disk I/O bottleneck, resulting in significantly faster data validation results. A modern SSD boasts random data access times of 0.1 milliseconds or less whereas a mechanical HDD would take approximately 10-12 milliseconds or more. This is the difference in time that it takes to locate the data that needs to be validated, making SSDs over 100 times faster than HDDs. By eliminating the disk I/O bottleneck, our data validation services can take full advantage of the superior QPI/HT systems used by modern CPU and memory architectures.
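A quick back-of-envelope calculation, using the access times quoted above, shows how disk seeks dominate per-query latency. The number of seeks per query and the compute time are illustrative assumptions, not measurements of our services:

```python
# Rough per-query latency model when random access time dominates.
# Access times follow the article's figures (~10 ms HDD, ~0.1 ms SSD);
# seeks-per-query and compute time are made-up inputs for the sketch.

def query_time_ms(seeks_per_query, access_ms, compute_ms=0.05):
    """Estimated query time: random accesses plus a little CPU work."""
    return seeks_per_query * access_ms + compute_ms

hdd = query_time_ms(seeks_per_query=4, access_ms=10.0)
ssd = query_time_ms(seeks_per_query=4, access_ms=0.1)
print(f"HDD ~{hdd:.2f} ms, SSD ~{ssd:.2f} ms, ~{hdd / ssd:.0f}x faster")
```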

10 Data Analytic Tools To Help You Better Understand Your Marketing ROI

A global review of data-driven marketing by GlobalDMA and the Winterberry Group found in its survey of 3,000 marketing professionals that nearly all recognize the importance of data in advertising and customer experience efforts, with over 77% saying they’re confident in the practice and its prospects for future growth. The study also noted that spending on data-driven marketing grew for 63% of respondents in 2013, and 74% said that growth would continue this year.

Reliable data analysis is essential if marketing dollars are to be spent on initiatives delivering the highest ROI. If you’re a business owner questioning your own marketing ROI, there are a number of new analytics tools now at your disposal. To better understand your company’s marketing spend, check out the following roundup of ten analytics tools:

YouTube Channelytics

If YouTube is part of your audience outreach plans, check out YouTube Channelytics. Its helpful data analysis interface lets you do everything from checking your subscriber rates to understanding your top traffic sources and audience device preferences. Know if most of your views are coming via mobile devices, analyze your view counts, and even monitor view counts for specific time periods. What an awesome tool for monitoring your video marketing ROI, right?

TinyMetrics

If Instagram is an essential component of your content marketing strategy, consider adding TinyMetrics to data analysis plans. TinyMetrics offers a number of powerful tools including hashtag, audience, and time analytics. Know the best times to post, track follower counts, and maximize the potential of your audience engagement opportunities. TinyMetrics offers handy export capabilities, making it easy to share your visual marketing analytics with your team.

Whatagraph

If you’re overwhelmed by the volume of Google Analytics data at your disposal, consider incorporating Whatagraph into your data-diving quest. Whatagraph lets you create infographic reports from your Google Analytics data. Easily understand numerous data points including site visits, bounce rates, and top-performing site pages.
Esper

If you analyze how you are spending your time across multiple marketing platforms, you might develop a better understanding of how to improve your efficiency. Esper offers calendar analytics for both Google calendars and Microsoft Exchange calendars. Used by the likes of Uber and Dropbox, Esper lets you analyze your calendar by event categories and garner insightful data as to where your time might be better optimized. Create detailed data visualizations for a clear picture of your current time expenditures.
Gahint

Gahint will help you understand fluctuations in visits to your website. Available from KPI Watchdog, gahint uses your Google Analytics data to offer detailed visitor information based upon trigger events. Understand increases/decreases in organic traffic as well as social referral fluctuations.

ChangeAgain

For those who frequently A/B test their marketing efforts, ChangeAgain is an interesting option to explore. Fully integrated with Google Analytics, ChangeAgain lets you analyze everything from bounce rates to page views and session length. If you’re going to be A/B testing your marketing anyways, doesn’t it make sense to use the power of Google Analytics to better understand your test results?

Saydoc

If your company is utilizing email marketing for customer outreach, investigate Saydoc for document analytics. Saydoc offers data analytics on everything from time-on-page to real-time open rate information. You can use Saydoc to create time-limited, permission-based documents and incorporate electronic signatures into your email outreach efforts.


If you are making a concerted effort to connect with mobile-enabled consumers, consider integrating this QR code tool into your mobile marketing efforts. It lets you create QR codes for mobile-enabled consumers and helps you understand interactions with your QR codes thanks to built-in analytics. Create a QR code from a URL, text input, or even from your favorite social media interface. Their powerful platform even lets you create a mobile landing page from a QR code. Who knew QR code marketing was so easy?

Ziplr

If you want to optimize everything from your email marketing to your social media outreach, consider adding Ziplr to your growth marketing plans. Ziplr lets you create custom URLs for every link you share and analyze interactions with your links thanks to detailed analytics. Monitor interactions across multiple social networks and email marketing campaigns, understand which geographic locations are most fruitful for your brand, and which types of devices your audience is using. If you’re sharing links anyways, doesn’t it make sense to understand audience interactions with those URLs?


Service Objects

Bad data has far-reaching negative impacts on the costs and performance of your marketing and sales campaigns. With Service Objects data quality tools, we can identify and correct bad data in your marketing platform, resulting in significantly improved performance of your marketing and sales efforts. Service Objects’ data correction and enrichment services ensure that your marketing lists and contact records are accurate, genuine and up-to-date, allowing you to focus on converting and selling.


Integrating a number of data analysis tools into your growth plans not only helps you understand your audience, it also shows you where your marketing efforts could be better optimized. If data analysis reveals that a higher percentage of audience engagement is coming from email marketing versus social media marketing, you can tweak your outreach strategy to focus more resources on email campaigns.

Understanding which methods are producing the highest value for your business allows you to adjust your marketing efforts on an ongoing basis. With powerful data analytics tools at your disposal, you never have to worry about throwing good money after bad on outreach efforts that aren’t producing significant ROI for your business. Do you think you’ll be investigating any of the aforementioned analytics resources for your company?


The True Cost Of Bad Data In Your Marketing Automation

Inbound marketing and marketing automation platforms promise to make your marketing more effective, and they have the potential to live up to that promise. However, reality often tells a different story — especially when bad data plays a starring role.

Marketing automation platforms like Eloqua, Hubspot, Marketo, and iContact are great tools that can help you connect with your leads and customers. But they are just that, tools. The idea of marketing automation tools is promising, but poor execution and bad data will limit your success.

The cost of bad data

You pay for every contact residing in your marketing database. If your data quality is bad, you are wasting time and money. Data quality suffers for several reasons. Some data starts out clean before going bad due to address or phone number changes. Meanwhile, it’s not uncommon for users to enter bogus information into lead and contact forms. For example, 88% of online users have admitted to lying on online contact forms.

Bad email addresses mean your messages never arrive as intended. The same is true of bad postal addresses, except you’ve also wasted money on postage or shipping. And bad phone numbers waste your sales and marketing teams’ time on calls to bogus numbers. Improving the data accuracy within your marketing automation platform could save a ton of money.

How much money is at stake? More than you may realize. Applying Deming’s 1-10-100 rule, it costs $1 to prevent bad data, $10 to correct it, and $100 if it is left uncorrected. So if you had just 10 bad records, that would be $1,000 wasted. Chances are, you have far more than 10 bad records in your marketing automation software: approximately 10 to 25 percent of B2B contact records contain critical errors.

Moreover, using bad data has a cascading effect on the organization. Not only are you expending valuable resources to capture leads, but each lead, good or bad, takes up a “seat” in your marketing automation plan, and each seat costs money.

The cost to contact bad leads is real. Some of the more obvious costs include printing and postage for direct mail, as well as outbound calling, which averages about $1.25 per attempted call. Even email costs money, albeit not much (roughly $0.0025 per email), but this adds up over time if bad records are left uncorrected.
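To put rough numbers on this, here is the 1-10-100 arithmetic applied to a hypothetical list. The list size and bad-record rate are illustrative inputs, not measurements:

```python
# Deming's 1-10-100 rule applied to a made-up marketing database:
# each bad record left uncorrected is assumed to cost $100.

def bad_data_cost(total_records, bad_rate, cost_uncorrected=100.0):
    """Return (number of bad records, cost of leaving them uncorrected)."""
    bad = int(total_records * bad_rate)
    return bad, bad * cost_uncorrected

# 10,000 contacts with a 25% bad-record rate (the high end cited above).
bad, cost = bad_data_cost(total_records=10_000, bad_rate=0.25)
print(bad, cost)
```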

There’s more to data accuracy than cost savings alone

Looking beyond obvious costs, it is important to understand the cascading impact of bad data on other areas of your business. For example, even though you are using the latest and greatest real-time CRM or marketing platform, if the data is bad, your CSRs will begin to doubt the effectiveness of the platform. This can lead to a lack of confidence in your data, poor morale, and poor performance.

Another example is the impact on your marketing intelligence reports and decision making. Marketing to bad leads will result in “false-negative” data. Since these leads do not respond (because the data quality is bad), your marketing campaigns’ performance will be dragged down.

If you don’t like throwing money away, causing undue stress on the team, or making decisions based on bad data, improving the data accuracy of your marketing automation software can go a long way toward solving these problems. If that’s not compelling enough, consider this: clean records improve contact conversion rates by 25 percent.

Service Objects can help ensure that the promise of marketing automation becomes a reality in your inbound marketing strategy. Our data quality tools correct and improve the data in marketing automation platforms, resulting in better performance. Benefits include reduced cost per lead and cost per sale, more reliable performance data, increased contact rates, increased response rate, reduced cost to contact, and more sales.

Isn’t it time you banish bad data from your Marketing Automation Platform?




Geocoding Resolution – Ensuring Accuracy and Precision

When geocoding addresses, coordinate precision is not as important as coordinate accuracy. It is a common misconception to confuse high precision decimal degree coordinates with high accuracy. Precision is important, but having a long decimal coordinate for the wrong area could be damaging. It is more important to ensure that the coordinates point to the correct location for the given area. Accurately geocoding an address is very complex. If the address is at all ambiguous or not properly formatted then a geocoding system may incorrectly return a coordinate for a location on the wrong side of town or for a similar looking address in an entirely different state or region.

Some address geocoding systems will return decimal coordinates to the sixth decimal place or more; however, depending on your particular needs, that level of precision may prove unnecessary. The precision of most consumer-level GPS devices only goes up to the 5th decimal place anyway, which equates to “roughly” one meter of precision. It is “roughly” one meter because the ground distance covered by a degree varies with your distance from the equator: it is greatest at the equator and gradually shrinks as you move toward the poles. However, when dealing with coordinates at this level of precision and above, the difference is mostly negligible for address geocoding.

In the Decimal Degrees wiki page (link below), there is a table that covers the levels of precision for each decimal place in a decimal degree. Below is a similar looking table:


Decimal places    Degrees        Approximate distance (at the equator)
1                 0.1            11.1 km
2                 0.01           1.11 km
3                 0.001          111 m
4                 0.0001         11.1 m
5                 0.00001        1.11 m
6                 0.000001       11.1 cm
7                 0.0000001      1.11 cm
8                 0.00000001     1.11 mm

Looking at the table above, we see that a decimal coordinate with a level of precision past the 6th decimal place is entirely unnecessary for locating an address. That level of precision would only be necessary under very special circumstances and would require very specialized equipment to use. If a decimal coordinate goes past the 7th or 8th decimal place, then the coordinate was most likely calculated, and the true level of precision is unknown. So don’t let a decimal degree coordinate with a high level of precision fool you into thinking that it is more accurate. It is important to always thoroughly test any geocoding system to ensure that it meets your particular needs.
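The relationship between decimal places and ground distance can be computed directly. This sketch uses the approximate length of one degree of latitude (about 111.32 km); remember that longitude degrees shrink toward the poles:

```python
# Ground distance represented by one unit in the Nth decimal place
# of a latitude coordinate, using ~111.32 km per degree.

DEGREE_M = 111_320  # meters per degree of latitude, approximate

def decimal_place_meters(places):
    """Meters covered by one unit in the given decimal place."""
    return DEGREE_M / (10 ** places)

for places in range(1, 9):
    print(f"{places} decimal places ~ {decimal_place_meters(places):.5g} m")
```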

Reference: Wikipedia, Decimal Degrees.

The Definition of ‘Garbage’ in Email, and How to Get Rid of It

We all know garbage when we see it — or do we? In the case of Service Objects’ email validation, we have our own definition of garbage and we actively seek it out. We want to find garbage in email and warn you about it so that you can make the most informed decision possible about accepting or rejecting it.

How do we define garbage?

The dictionary defines garbage as follows:

  •  Wasted or spoiled food and other refuse, as from a kitchen or household.
  • A thing that is considered worthless or meaningless: a store full of overpriced garbage.
  • Computing unwanted data in a computer’s memory.

Okay, we’re off to a good start. It’s safe to say that garbage is essentially: trash, worthless or meaningless items, or unwanted data. When it comes to email garbage, our email validation service flags email addresses that contain what we consider garbage. Our developer guide currently explains that our garbage flag:

“Indicates if the email address is believed to contain ‘garbage-like’ keyboard strokes and/or ‘garbage-like’ characters.” 

Garbage-like keyboard strokes might include pure gibberish or typos gone to the extreme. Garbage-like characters could include symbols and characters not typically included in email addresses. For example, hyphens are commonly used in URLs, so an email address with a hyphenated domain wouldn't necessarily qualify as garbage. On the other hand, punctuation marks like exclamation points and commas are not used in URLs, so an email address such as joe@example, or joe@example! would probably be considered garbage.
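The character rules above can be sketched as a simple check. This is an illustrative rule of thumb, not Service Objects' production logic; the allowed character sets and the function name are assumptions, and real address syntax (RFC 5322) is far more permissive than what most mail servers accept:

```python
import re

# Characters most mail servers actually accept: a deliberately strict subset
# of what the email RFCs technically allow (illustrative only).
LOCAL_OK = re.compile(r"^[A-Za-z0-9._+-]+$")
DOMAIN_OK = re.compile(r"^[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def has_garbage_characters(address):
    """Flag addresses containing characters outside the conservative sets above."""
    local, sep, domain = address.partition("@")
    if not sep or not local or not domain:
        return True  # no @ sign, or an empty half
    return not (LOCAL_OK.match(local) and DOMAIN_OK.match(domain))

print(has_garbage_characters("joe@my-company.com"))  # False: hyphenated domain is fine
print(has_garbage_characters("joe@example,"))        # True: comma, and no TLD
print(has_garbage_characters("joe@example!"))        # True: exclamation point
```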

It’s not just about special characters


There’s a lot going on here behind the scenes, though. Many bots will use valid domains with randomly generated mailbox names. Though these addresses could theoretically be real, most often they look like complete gibberish or garbage.

Our DOTS Email Validation service checks for known names and dictionary words, various keystroke patterns, vowel and consonant patterns, and special characters that are syntactically valid but often rejected by most mail servers.
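As a toy illustration of the kind of pattern checks described above, the sketch below flags mailbox names with no vowels, long consonant runs, or keyboard walks. The thresholds and patterns here are invented for illustration and are far simpler than the service's actual analysis:

```python
import re

KEYBOARD_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def looks_like_gibberish(mailbox):
    """Toy heuristic: no vowels, long consonant runs, or a keyboard walk."""
    name = re.sub(r"[^a-z]", "", mailbox.lower())
    if not name:
        return True
    if not re.search(r"[aeiouy]", name):                # no vowels at all
        return True
    if re.search(r"[bcdfghjklmnpqrstvwxz]{5,}", name):  # 5+ consonants in a row
        return True
    if len(name) >= 5 and any(name in row for row in KEYBOARD_ROWS):
        return True                                     # e.g. "asdfg"
    return False

print(looks_like_gibberish("qwjkvxz"))     # True: no vowels
print(looks_like_gibberish("asdfgh"))      # True: keyboard walk
print(looks_like_gibberish("john.smith"))  # False
```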

Garbage detection is important. Not only is sending messages to bogus email accounts a waste of time, but it could also get you flagged by ISPs. If you have a lot of garbage emails in your database, you’ll also have a high bounce rate. ISPs may think you’re a spammer when you’re really a victim of bots that randomly generate email addresses on web forms.

Just because an email address is considered deliverable does not mean it is good. That’s why we run multiple integrity checks, including garbage email checks, as part of our email validation service. If an email is flagged as garbage, then it means you probably shouldn’t accept it.

Catch-all Domains Explained

Imagine launching an online business and associating your email address with your business domain. For example purposes, let’s say your domain is example.com and your name is John. Your email address would be john@example.com. Now what if someone entered jon@example.com? If you had a “catch-all” domain, you’d receive email messages sent to any mailbox at example.com, even if senders misspelled your name.

In fact, that was originally part of the allure of catch-all email addresses. With a catch-all domain, you could tell people to send email to anything at your designated domain such as: sales@, info@, bobbymcgee@, or mydogspot@. No matter what they entered in front of the @ sign, you’d still get the message without having to configure your server or do anything special.

The Downside of Catch-all Email Addresses

Catch-all email addresses were created to ensure that no email to the domain would be rejected and lost. Catch-all domains accept all email without rejection. Though catch-alls are useful for those concerned about missing important messages due to typos in the mailbox name, spammers soon took advantage of the opportunity before them. All they need is the domain name. They do not need to hunt for usernames, guess usernames, or scrape email addresses. They simply put whatever they want in front of the domain and send their messages, and those messages arrive as intended. As a result, catch-all boxes tend to get flooded with spam and become unusable.

How Service Objects Defines Catch-All Domains

Service Objects uses the term “Catch-All Domain” to refer to a domain that has its mail server(s) configured to not reject email addresses, even if they do not exist. Thus, if an email arrives at a mailbox that does not actually exist on a catch-all domain, it will not be rejected.

Keep in mind, however, that mail servers can be configured in various ways. Traditionally, a “catch-all” message is accepted and forwarded to the designated “catch-all” mailbox.

Mail servers can also be set up to delete incoming messages, or to bounce them back, when no recipient is found. Bounced emails do not necessarily bounce immediately: a mail server may accept a message to an unknown recipient initially and later bounce it back. We know, it’s confusing. Remember that rejecting an email and bouncing one back are not one and the same.

Catch-all Domain Practices

It is considered bad practice for a mail server to accept email addresses that do not exist and then bounce the messages back later. This practice was initially employed when spammers began mining mail servers for valid recipients, the thinking being that spammers who could not accurately mine a mail server would simply move on and leave it alone.

As you know, spammers are a creative bunch, and they quickly learned to manipulate this type of server behavior to their advantage. This practice also increases bandwidth usage due to both incoming spam and outgoing bounce messages.

A better approach is to reject nonexistent email outright so that no message is ever received, accepted, and then bounced back.

Identifying Catch-all Domains

Service Objects’ email validation service identifies catch-all domains, giving you a better idea of how your outgoing messages may be handled. For example, messages sent to a catch-all domain may arrive as intended, but they may get lost in a flood of spam, whereas a message sent to a real, individual mailbox is more likely to be seen and taken seriously.
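One common way to identify a catch-all domain (not necessarily Service Objects' method) is to probe the mail server with a mailbox name that almost certainly does not exist: if the server accepts it, the domain is treated as a catch-all. The sketch below shows only the probe address and the decision rule; the actual SMTP conversation is omitted:

```python
import secrets
import string

def make_probe_address(domain):
    """A random mailbox that is vanishingly unlikely to actually exist."""
    token = "".join(secrets.choice(string.ascii_lowercase + string.digits)
                    for _ in range(24))
    return f"{token}@{domain}"

def classify_domain(accepts_random_probe, accepts_known_good):
    """Decision rule applied to the results of two RCPT TO probes."""
    if accepts_random_probe:
        return "catch-all"          # nonexistent mailboxes are not rejected
    if accepts_known_good:
        return "normal"             # rejects unknowns, accepts real users
    return "rejecting-or-blocked"   # may be greylisting or blocking probes

# A server that accepted our random probe is treated as a catch-all:
print(classify_domain(accepts_random_probe=True, accepts_known_good=True))   # catch-all
print(classify_domain(accepts_random_probe=False, accepts_known_good=True))  # normal
```

Note that, as discussed above, a server that accepts everything and bounces later can still fool a single probe, which is one reason multiple integrity checks are needed.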

Embarking on a Data Adventure

Service Objects’ CEO Geoff Grow recently embarked on a data adventure during his fourth trip to Palau. Ever the data enthusiast, Geoff couldn’t help but wonder if the PO box numbers in that country would match the data we have on file.

We’ve always been confident in our data, yet there’s been this nagging curiosity about some of the world’s most remote places (and Palau is about as remote as you can get from our neck of the woods). What better way to put this matter to rest than to verify the data quality in person while visiting the island nation? With that in mind, Geoff added data verification to his already busy itinerary and set off on a mission to verify the postal information in Palau.

Palau is located in the Pacific Ocean between the Philippines and Australia. Up until 1994, when it gained its independence, Palau was part of the United Nations Trust Territory of the Pacific Islands, administered by the United States. During the Trust Territory era, the United States Postal Service (USPS) provided postal services to the island’s inhabitants.

The Palau Post Office is now an independent agency serving roughly 17,000 people. Though independent, the Palau Post Office continues to have a relationship with the USPS, receiving training, technical support, advice, and other assistance. It also complies with USPS policies for domestic and international mail and keeps its mailing rates consistent with those of the USPS.

In between snorkeling and sightseeing adventures, Geoff visited the local post office to document the available PO box numbers. With that important task out of the way, Geoff was able to enjoy the island’s delights.

After many sun-soaked days of snorkeling, kayaking, sightseeing, verifying data, checking out street addresses, and mingling with locals, Geoff returned to Santa Barbara. We welcomed him back and gathered around his computer to admire his digital photo album and find out if our confidence in our data quality was warranted.

We’re happy to report that his digital photo album contains a beautiful mix of snapshots of PO boxes and street signs along with stunning landscapes, sunsets, and typical tourist photos. After all, all work and no play would be sad. That said, he looked just as happy evaluating data quality as he did hanging out at the beach. If you know Geoff, you know he lives for data. To him, embarking on a data adventure is almost as thrilling as zip lining or scuba diving.

We checked the actual PO box addresses Geoff had documented while in Palau against the data that Service Objects has on file. A cheer erupted in our office when the comparison confirmed that Service Objects does, in fact, have all of the addresses correct for Palau.

Mission accomplished.

Then and Now: How Data Has Changed the Shopping Experience

Remember when Amazon first came along? That was 20 years ago, in 1995. That same year, eBay arrived — and shopping as we formerly knew it was about to change dramatically. PayPal, founded in 1998 and acquired by eBay in 2002, addressed concerns about online payments. Despite the Dot Com Bust and the Great Recession, online shopping is alive, well, and continuing to evolve. What’s shaping the online shopping experience today? Data.

How data impacts the shopping experience

Amazon is the perfect example of how data collection can impact the shopping experience. Let’s say that you’re searching for a new kitchen gadget, a pizza cutter, on Amazon. One of the pizza cutters you’re interested in looks good, and Amazon helpfully suggests a pizza pan and a pizza stone to go with your purchase. Yes, Amazon wants to sell more products to you, but these aren’t random upsells; they’re suggestions built upon real data and presented to you under the “Frequently Bought Together” section.

If you scroll down the page, you’ll also see “Customers Who Bought This Item Also Bought” and “Customers Who Viewed This Item Also Viewed” sections.

Meanwhile, whether you buy the pizza cutter or not, Amazon will keep that data on file. When you log in to Amazon in the future, expect to see related suggestions and recommendations. The more data Amazon collects about you, the more personalized your shopping experience becomes. You’ll also start to notice more relevant email offers.

Online retailers also use data to determine the ideal reorder point for items such as laundry detergent, shampoo, ink cartridges, and other consumables. Again, we can thank data for optimizing these recommendations.

Data also impacts pricing — potentially in real time. For example, algorithms can be used to set prices based on demand, inventory levels, and other factors.

Offline retailers use data, too. The last time you swiped your rewards card at the drugstore, your purchase was captured and associated with your account. The next time you shop at that particular chain, your receipt may contain coupons related to your past purchases.

How data impacts package delivery

The packages that ultimately arrive, whether ordered online, via a catalog, or over the phone, are also affected by data. This may start at the point of sale and continue on in conjunction with the shipping carrier. For example, an online retailer will likely use address validation to verify the accuracy of your address and correct any errors such as a misspelled street name or transposed ZIP code. From there, the closest fulfillment center will be selected to ensure a timely delivery. Once the carrier gets involved, data will be used to determine the most efficient and cost-effective delivery route. As the saying goes, they have it down to a science — in this case, data science.

What’s next?

Expect even more sophisticated data collection in the future. For example, some retailers are using location-based tracking and Web beacon technologies to track your movements in a store, detect your presence, and present you with instant offers. Omni-channel fulfillment is also hot, and data collection can help retailers to provide a seamless shopping experience across all of their channels. For example, when you check out at a department store, the clerk may be prompted to suggest a particular product based on the items you’ve previously looked at on the company’s online store.

We’ve come a long way since 1995. What will shopping look like 20 years from now?


API Explained in Fewer than 140 Characters (or thereabouts)

If you look up API on Wikipedia, you’ll find a lengthy explanation filled with computer terms and examples. In fact, the Wikipedia entry on APIs is more than 4,000 words long.

While we appreciate this detailed explanation, we were impressed by the bite-sized explanations we found on Quora and the web at large. Below are a few of our favorite explanations:

“An API is a programming language that allows two different applications to communicate, or interface, with each other. An API is used to enhance features and add functionality to one or both applications.” — Sprout Social

“… a series of rules. APIs allow an application to extract information from a service and use that information in their own application, or sometimes for data analysis.” — Hubspot

“An API is a set of commands that some application exposes so other applications can control it and extract information from it, e.g., the Twitter API allows programmers to create applications that interact with Twitter.” — Rafael Pinto, Neural Networks Researcher and Quora User

“You speak one language, and you are trying to work with someone else who speaks another language. This guy knows how to do things that are useful. An API is like a phrasebook. The phrasebook tells you the different commands you can give the other person to get them to do what you want, and it tells you how to interpret the responses he gives you.” — Dean Carpenter, Professional Programmer and Quora User

“Your TV remote control is an API. It is an interface that allows you to control your TV without needing to understand any of the underlying technologies. Notice that each remote control is different. Buttons are organised differently. Your TiVo remote control or your DVD/Bluray remote will operate your TV, but each one provides a different interface.” — Haoran Un, Quora User

“An API (“application program interface”) in the context of web applications and sites would be the facility for other software to interoperate with it as a client; use the site’s features, access its data, etc.

“A site without an API presents an interface which is convenient only for humans, but very inconvenient for machines. Using technology like REST, XML and JSON, an API does the opposite: presents an interface convenient for machines, but almost impossible for humans.” — Toby Thain, Quora User

“An API is a software-to-software interface, not a user interface. With APIs, applications talk to each other without any user knowledge or intervention.” — How Stuff Works

These bite-sized explanations should give you a better idea about what an API is, but how can one of our APIs help your business?

Our web APIs can enhance your existing software processes, allowing you to incorporate data validation into the mix. Our APIs tell your software how to interact with our software. API integration is easy, and the results are impressive. There’s no need to master yet another computer programming language or develop your own data validation processes, and no need to understand what’s going on behind the scenes.
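As a rough sketch of what "integration" means in practice, the snippet below composes a validation request URL. The endpoint and parameter names here are hypothetical placeholders, not the service's real interface; consult the developer guide for the actual paths and keys:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only;
# the real service paths and keys are documented in the developer guide.
BASE_URL = "https://api.example.com/EmailValidation/ValidateEmail"

def build_request_url(email, license_key):
    """Compose a GET request URL with properly encoded query parameters."""
    query = urlencode({"Email": email, "LicenseKey": license_key})
    return f"{BASE_URL}?{query}"

url = build_request_url("jane@example.com", "YOUR-KEY")
print(url)
# The response (typically JSON or XML) would then be parsed and its
# validation flags checked before accepting the address.
```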

Simply integrate our APIs into your existing system and begin validating data in real time. Sign up for a free trial key and find out just how easy our data validation APIs are to use.


Popular Myths about Big Data

Everyone’s talking about big data and data quality services. The hubbub isn’t likely to shrink any time soon because big data is only getting bigger and becoming increasingly important to rein in and manage. As with any topic receiving a great deal of attention, several myths have emerged. Ready for some myth-busting? 


Myth: With so much data, individual data quality flaws don’t matter

Ted Friedman, vice president and distinguished analyst at Gartner, debunked this myth at Gartner’s 2014 Business Intelligence & Analytics Summit in Munich, Germany. IT leaders believe that the huge volume of data that organizations now manage makes individual data quality flaws insignificant due to the “law of large numbers.” Their view is that individual data quality flaws don’t influence the overall outcome when the data is analyzed, because each flaw is only a tiny part of the mass of data in their organization. “In reality, although each individual flaw has a much smaller impact on the whole dataset than it did when there was less data, there are more flaws than before because there is more data,”1 said Friedman. Instead of ignoring minor flaws, use easy-to-use data quality tools to quickly correct or remove them.


Myth: Users want more data

This myth comes courtesy of Michel Guillet of Juice Analytics, who shared some myths in an Inc. article, “3 Big Myths about Big Data.” Comparing big data choices to grocery store shelves, Guillet illustrated how too many big data choices (i.e., metrics, chart choices, and so on) can quickly become overwhelming. Guillet suggests that when faced with uncertainty about data options, users may simply ask for all of it.

What he doesn’t say, however, is that too many choices often lead to indecision. For example, if you’re faced with, and overwhelmed by, too many different potato chip flavors plus traditional, low-fat, gluten-free, and baked options, name brands and store brands, you may grab one just to be done with it. Or, you might not choose one at all.  

Guillet says that users want guidance, not more uncertainty and that expressing an interest in more data is an indicator of uncertainty. What they really want? According to Guillet, “…they want the data presented so as to remove uncertainty, not just raise more questions. They won’t invest more than a few minutes using data to try to answer their questions.”

Myth: You’re selling customers their own data

Sure, customers may own their own data, but they don’t necessarily know how to extract meaningful information from that raw data. Guillet explains that you’re not selling them access to their own data. Rather, you’re selling algorithms, metrics, insights, benchmarks, visualizations, and other data quality services that increase the data’s value.

Myth: Everyone else has already adopted big data

Not necessarily. It may seem like everyone else has already adopted a big data solution, or is in the process of doing so. According to Gartner, interest in big data technologies and services is at a record high, with 73 percent of the organizations Gartner surveyed in 2014 investing or planning to invest in them. But most organizations are still in the very early stages of adoption: only 13 percent of those surveyed had actually deployed these solutions.1

While you may feel like you’re late to the big data and data quality services party, the party is just getting started! In fact, this is the perfect time to investigate your big data options. Data quality tools have been around long enough to be both innovative and yet mature enough to have had the bugs worked out of them. 

These myths continue to make their rounds, but rest assured: our data quality services take care of data quality so even the tiniest of flaws are removed or corrected. We provide expert guidance, helping you to get the most out of the data quality tools available to you. Moreover, it’s not too late to get started. Simply sign up for a free trial key and try our data quality services in a matter of minutes.


1 Gartner Newsroom, “Gartner Debunks Five of the Biggest Data Myths.”

The Anatomy of a Great Contact Record

Many of us work with databases containing row after row of contact records. We tend to think of these databases as a whole, oftentimes referring to a contact database as a “list.” While you may care deeply about the database’s quality and health overall, have you ever considered its health on a more cellular level? Let’s dig into data quality at the record level and figure out what a great contact record really is.

From a contact verification perspective, a great contact record contains five specific pieces of information. In all cases, this information must pass (or be corrected / flagged) several tests such as:  

  • Is it spelled correctly?  
  • Is it complete? 
  • Is it genuine? 
  • Is it current? 

The pieces of information contained within a great contact record are:

  • Name – Obviously, every contact record should have a name. However, do you have the contact’s full name? Is it real or is it bogus such as “John Hancock”? Is it vulgar? Is it the name of a celebrity? Is the name, as supplied, separated into individual data fields or is it a long string of text that’s difficult to work with? Name validation tools run various checks to weed out bogus contact information as well as parse names into their appropriate data fields such as: prefix, first name, middle name, last name, and suffix. In addition, name validation can suggest alternate spellings or related names.   
  • Address – Again, each contact record should have a valid shipping and/or billing address. USPS address validation standardizes and corrects addresses as well as provides you with additional insights such as whether the address is a post office box, an occupied residence, or an occupied business.    
  • Phone Number – Great contact records also include accurate phone numbers. However, it’s not always clear whether a supplied phone number is the contact’s home, business, or mobile phone. Reverse phone lookup and phone validation tools can be used to determine the validity of a given phone number, find out who it belongs to, identify the line type (such as landline, VOIP, or mobile phone), and more. Phone append tools are a type of contact verification that can find and append appropriate phone numbers for a contact record lacking this information.  
  • Email Address – Email addresses are also important for truly great contact records. However, it’s not uncommon for users to transpose letters or submit bogus email addresses in an attempt to keep spam at bay. Email verification can correct common domain misspellings and syntax errors as well as return detailed insights as to the address’s authenticity. 
  • IP Address – IP addresses provide geographic information related to a user’s Internet connection. This information is crucial in preventing fraud. For example, if an e-commerce customer claims to be in Los Angeles but the IP address reveals a Russian address, that’s a warning sign that the order could be fraudulent. IP address validation also identifies the use of public or anonymous proxy servers which are often used by criminals. 
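
As a rough illustration of the parsing step described under Name above, here is a deliberately naive sketch that splits a Western-style name string into labeled fields. Real name validation handles far more cases (compound surnames, cultural name orders, bogus-name dictionaries), and the prefix/suffix lists here are illustrative assumptions:

```python
PREFIXES = {"mr", "mrs", "ms", "dr", "prof"}
SUFFIXES = {"jr", "sr", "ii", "iii", "iv", "phd", "md"}

def parse_name(full_name):
    """Naive split of a Western-style name string into labeled fields."""
    parts = full_name.replace(",", " ").split()
    fields = {"prefix": "", "first": "", "middle": "", "last": "", "suffix": ""}
    if parts and parts[0].rstrip(".").lower() in PREFIXES:
        fields["prefix"] = parts.pop(0)
    if parts and parts[-1].rstrip(".").lower() in SUFFIXES:
        fields["suffix"] = parts.pop()
    if parts:
        fields["first"] = parts.pop(0)
    if parts:
        fields["last"] = parts.pop()
    fields["middle"] = " ".join(parts)  # whatever remains in the middle
    return fields

print(parse_name("Dr. John Q. Public Jr."))
# {'prefix': 'Dr.', 'first': 'John', 'middle': 'Q.', 'last': 'Public', 'suffix': 'Jr.'}
```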

From a business perspective, a great contact record contains all of the same basic information that our contact verification software validates. After all, you’ve gathered that information for specific purposes. A great contact record is one that has also been recently validated by appropriate contact verification tools. For example, if you have a mailing list, a great contact record is one that contains the required information and has recently undergone USPS address validation. 

Contact verification helps to ensure that your contact records are current, accurate, complete, and authentic — in other words, great!

5 Trending Data Buzzwords

Here at Service Objects, we love data almost as much as we love our dogs. Few things make us happier than exploring or talking about data. Turns out, we’re not the only ones obsessed with data. Check out the following data buzzwords and join the conversation.


Smart Data

In the last 30 days, “smart data” was mentioned on Twitter 9,234 times. What is it? You know what big data is, right? It’s massive, but it doesn’t always make sense. Denis Igin from NXCBlog explained smart data this way:

“…big data is what we know about consumer behavior, while smart data is how we discover the underlying rationale and predict repetition of such behavior… In short, smart data is adding advanced business intelligence (BI) on top of big data, in order to provide actionable insights.”

Business intelligence and data quality software help you make sense of data. Now that’s smart!

Data Warehousing

Another 1,000+ conversations on Twitter in the last month have centered on “data warehousing.” What is it? Wisegeek explains it best:

Data warehousing combines data from multiple, usually varied, sources into one comprehensive and easily manipulated database.

For example, Service Objects’ data quality services tap into our massive database which pulls data from various sources such as the US Postal Service, telephone databases, and GPS mapping databases to validate, standardize, and enhance data.

Dark Data

Topsy’s Social Analytics reveals nearly 1,700 Twitter mentions for “dark data” in the last month. According to Gartner’s IT Glossary, dark data is defined as:

“… the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing).”1

In other words, dark data is data that’s not being used. It’s often kept solely for compliance purposes. What if you could enhance that data and put it to good use? Using data quality software, for example, you could validate older, potentially obsolete, addresses and create a direct marketing campaign targeting former customers.

Big Analytics

With over 23,000 mentions in the last month, “big analytics” is definitely buzzworthy.

Big data analytics is the process of examining large data sets containing a variety of data types — i.e., big data — to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information.

Big analytics goes hand-in-hand with big data. When big analytics and data quality services work together, your data becomes much smarter and easier to work with.

The Internet of Things

We saved the biggest buzzword for last: “The Internet of Things.” Also commonly referred to as “IoT,” the Internet of Things refers to a dramatically more connected Internet. Remember when the only things connected to the Internet were computers and servers? How quaint is that? Look around your personal space. You may have a desktop scanner that scans documents directly to Dropbox or Evernote. Maybe you’re wearing a Fitbit, which transmits your fitness and health stats directly to the cloud. You may even have Internet-connected light bulbs or a video surveillance system. If you’re really fancy, your refrigerator connects, tracking UPC codes and notifying you when your milk is about to go sour. On a larger scale, industrial equipment, parking meters, weather stations, and more are connected.

How many mentions did it get in the last 30 days? Over 150,000 for “Internet of Things” and another 74,000 for “IoT.” Now that’s trending!

As you can imagine, all those IoT “things” are generating data, contributing to big data. Each of these data buzzwords is interrelated and reflects the need for solutions such as data quality services and analytics.

1 Gartner IT Glossary, “Dark Data.”

Data Quality, Not Just For Scientists

Scientists and analysts know the importance of data quality. In fact, they attend workshops focused on just that, such as the recent Data Infrastructure: The Importance of Quality and Integrity workshop held in late November of last year. While scientists, researchers, and analysts clearly recognize the value of data quality software, data cleansing tools aren’t just for research labs or scientific data. Businesses of all types can benefit from data quality improvements, particularly in contact information.

Which types of data are prime for improvement?

Businesses generate and use massive amounts of data, day in, day out. The sheer volume and diversity of data can quickly prove to be overwhelming. Take a deep breath and think about the data that your business works with the most. Some of the most common data types in need of improvement are: contact information, addresses (including mailing, delivery, and email addresses), sales tax data, and sales and marketing leads. Each of these data types can be improved through the use of data quality software. However, since most businesses gather contact information, we’ll focus on improving data quality for contact information today.

The problems with contact information

Let’s say you run a brick-and-mortar appliance business and collect your customers’ addresses at the point of sale. Your clerks simply ask your customers for their names, addresses, and phone numbers when ringing up their purchases and scheduling delivery. Easy enough, right? After all, your customers know where they live and there’s no reason for them to offer a fake address or phone number.

That said, what happens when the customer says she lives at 135 Limonite Street but the clerk types in 135 Lemon Street or transposes the street numbers? Suddenly you have a potentially costly problem. Your delivery drivers will end up going to the wrong location and could be hours late. Not only will they have wasted time and fuel, your customer won’t be happy.

Meanwhile, your resourceful drivers will have solved the problem by using their mobile phones to call the customer to get the correct address. However, will the drivers remember to correct the address in your point of sale system later? Probably not, which brings yet another problem: all of your subsequent mailings will go to the wrong address or be returned as undeliverable.

These same data quality problems occur when customers enter their contact information using self-service portals online. Mistakes — and autocorrect — happen. Not only that, people move all the time but rarely inform the companies that they do business with. Automated address verification solves all of these — and many more — problems.
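To see why automated matching catches this class of typo, here is a toy sketch that fuzzy-matches an entered street name against a reference list. Real address validation matches against full USPS data rather than a hard-coded list, and the street names and cutoff below are illustrative assumptions:

```python
from difflib import get_close_matches

# A tiny stand-in for a reference street database.
KNOWN_STREETS = ["Limonite Street", "Lemon Grove Avenue", "Main Street"]

def correct_street(entered):
    """Return the closest known street name, or None if nothing is close."""
    matches = get_close_matches(entered, KNOWN_STREETS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(correct_street("Lemon Street"))  # the clerk's typo from the example above
# Limonite Street
```

Pure string similarity like this is only the first step; production tools also validate the corrected candidate against deliverability data before accepting it.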

How data quality software improves contact information

Data cleansing tools exist for all kinds of contact information, including address verification, phone verification, reverse phone lookup, demographic information, and more. For example, Service Objects’ DOTS Address Validation, which has editions available for both the United States and Canada, is a real-time API that instantly compares inputted contact information with a huge database containing millions of current contact records. This data quality software is capable of detecting and correcting typos, standardizing address information against USPS® data, verifying deliverability, appending secondary suite information for business addresses, and adding missing postal information. With a response time as fast as 0.15 seconds and various implementation options, including a real-time API, PC-based software, FTP batch processing, and quick online lookups, these data cleansing tools quickly verify and correct contact information.

While scientists rely on quality data to further their research, businesses of all sizes can reduce costs, improve deliverability, improve customer satisfaction, and much more with data quality software. While your company likely works with a great deal of data, improving the quality of your contact information is an excellent place to start.


5 Data Quality Experts to Follow in 2015 

Is 2015 the year of data quality? It could be! Companies are beginning to recognize the benefits of data quality: improved customer service, better quality leads, improved deliverability, reduced costs, fraud prevention, productivity gains, and more. Below are five data quality experts who are helping to spread the word. Follow them on social media and kick off 2015 by educating yourself on data quality.

Ted Friedman, @ted_friedman

Ted Friedman is an IT industry analyst at Gartner, and has been for more than 15 years. He brings nearly 30 years of IT industry experience to the table, focusing on Information Management Strategy, Internet of Things, Information Governance, Data Integration, and Data Quality. Ted Friedman regularly tweets data quality factoids and statistics, definitions, tips, insights, and links to informative articles, videos, and reports. An expert in IT and data, he is also well-versed in the art of using social media effectively. He regularly tags his tweets with hashtags such as #DataQuality, #DataIntegration, and #DataGovernance. Join the conversation and learn more about data quality by following @ted_friedman.

Jim Harris, @ocdqblog

With a blog titled “Obsessive-Compulsive Data Quality,” it’s a safe bet that blogger Jim Harris is passionate about data quality. Jim Harris is a data quality expert who shares his expertise via consulting work, speaking engagements, and his Obsessive-Compulsive Data Quality blog. He is also a freelance writer who regularly contributes to publications such as the SAS Data Roundtable, Informatica Perspectives, the Actian Big Data and Analytics Blog, and more. What can you expect from Jim Harris? In-depth original articles, thoughtful insights, links to relevant information about data quality, and nuggets of wisdom such as this:

“In the era of big data, it’s not about being data-driven–because your organization has always been data-driven. It’s about what data your organization is being driven by–and whether that data is driving your organization to make better decisions.”

David Loshin, @davidloshin

David Loshin is the president of Knowledge Integrity, a firm that helps organizations institute Data Quality, Master Data Management, Data Standards, and Data Governance programs. In addition to providing consulting services, training, custom implementations, and data quality assessments, David Loshin is also a thought leader, programmer, and data expert extraordinaire. He is a prolific author as well, with numerous books, articles, and blog posts published. His books include Business Intelligence: The Savvy Manager’s Guide; Big Data Analytics: From Strategic Planning to Enterprise Integration with Tools, Techniques, NoSQL, and Graph; and Using Information to Develop a Culture of Customer Centricity.

Dylan Jones, @dataqualitypro

Based in the United Kingdom, Dylan Jones founded Data Quality Pro, a vibrant community of data quality professionals. According to Mr. Jones’ LinkedIn profile, Data Quality Pro is the “largest online resource dedicated entirely to Data Quality and Data Governance.”

Dylan Jones regularly writes on data quality topics, coordinates Data Quality Pro’s “virtual summit,” holds data quality webinars, and tweets for the company using the @dataqualitypro handle. He has been involved in data quality management since 1992.

Loraine Lawson, @LoraineLawson

Loraine Lawson is a technology writer and editor who often writes and tweets about data quality. Many of her articles appear on IT Business Edge. Recent articles include: Tackling the Unstructured Data in Big Data, Two Integration Trends That Could Challenge IT in 2015, and How to Make Data Quality a Habit in 2015. With a background in print journalism, access to industry experts, and an easy-reading writing style, Loraine Lawson helps readers make sense of highly technical topics. You can count on authoritative data quality articles, common sense analysis, and a sprinkle of fun when you follow @LoraineLawson.

Finally, keep up with us on Twitter @serviceobjects for all things #DataQuality!

Four TED Talks that Anyone Working With Data Should Watch

The era of big data is upon us, and it’s bringing us surprising insights. Remember the Icelandic volcano eruption that grounded flights a few years ago? Data shows us that the volcano emitted 150,000 tons of carbon dioxide while the grounded airplanes would have emitted more than twice that much had they flown. This particular insight was discussed in a TED Talk, one of four that anyone working with data should watch.

The beauty of data visualization by David McCandless



David McCandless describes the Icelandic volcanic eruption along with other data-driven facts beautifully in a 2010 TED Talk about data visualization. McCandless explains that data visualization can help to overcome information overload. By visualizing information, it becomes possible to “… see the patterns and connections that matter and then designing that information so it makes more sense, or it tells a story, or allows us to focus only on the information that’s important.” Plus, he said, it can “look really cool.” To illustrate his point that data visualization can indeed be beautiful, he presented several gorgeous examples.

What do we do with all this big data? by Susan Etlinger



This TED Talk was filmed in September 2014. Susan Etlinger presents the notion that data that makes you feel comfortable or successful is likely wrongly interpreted. She says it’s hard to move from counting things to understanding them, and that in order to understand data, we must ask hard questions. She wraps up the talk by saying that we have to treat critical thinking with respect, be inspired by some of the examples she presented, and, “like superheroes,” use our powers for good.

The best stats you’ve ever seen by Hans Rosling



Starting with a story about quizzing some of the top undergraduate students in Sweden on global health, and finding that statistically they knew less about it than chimpanzees, Hans Rosling takes viewers on a visual, stat-based journey around the world and through time. Rosling’s data challenges preconceived notions and will likely leave you feeling hopeful about the future.

Why smart statistics are the key to fighting crime by Anne Milgram



Anne Milgram, former attorney general of New Jersey, presents an inspirational talk about how data analytics and statistical analysis can be used to fight crime in the United States. After taking office, Milgram discovered that the local police department relied more on Post-it notes to fight crime than it did on data. Meanwhile, the Oakland A’s were using smart data to pick players that would help them win games. Why couldn’t the criminal justice system use data to improve public safety? The end result: a universal risk assessment tool that presents crime data in a meaningful way.

Each of these TED Talks runs about 15 to 20 minutes. Set aside an hour or so this weekend to watch each one. Prepare to get inspired about data!

Who’s Your CDQO (Chief Data Quality Officer)?

Your company likely has a CEO, COO, and CFO. It might even have a CRO and a CISO, but does it have a CDQO (Chief Data Quality Officer)? Just as executive, operational, financial, risk management, information security, and other vital functions must be appropriately managed, the same is true of data quality. Improving data quality can increase profits, reduce costs, improve customer satisfaction, and much more, yet few companies think to appoint a CDQO.

Taking ownership of data quality

Who should be responsible for data quality in your organization? That’s a tough question. Some might think IT should be in charge of data quality. After all, IT sets up the database and configures the systems used to access your company’s data. Others may suggest that the marketing department should take ownership of data quality. After all, marketing is in charge of mailing lists and lead management. Maybe it’s an administrative function or a project for customer service to tackle. While virtually every department uses and is affected by data, there’s a tendency to think of data quality as someone else’s job. Thus, no one is specifically in charge of it.

Why data quality matters

Not taking data quality seriously could prove to be a costly mistake. Data quality affects everything from employee productivity and customer satisfaction to your bottom line. For example, each time a mailer or shipper is returned as undeliverable as addressed, an employee must stop what he or she is doing and investigate. This may involve calling the customer back to get the correct address — and apologize for the shipping delay. If the returned item is “just a promotional mailer,” an employee might toss it in the trash rather than attempt to correct the address. Both scenarios cost the company money. In the second scenario, the company will lose money with each subsequent mailing unless someone steps up and validates that address.

Data quality isn’t just about cleaning up addresses and phone numbers. For example, if fraudulent orders are a problem for your company, the ability to verify data in real time improves data quality and alerts you to potential fraud. Data validation tools can correct misspelled addresses and typos as well as identify suspicious mismatches between a customer’s IP address, credit card, and other contact details.
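As a rough illustration of the mismatch checks described above, the hypothetical sketch below flags an order when the country implied by the buyer’s IP address differs from the billing country, or when the card’s bank identification number (BIN) maps to a prepaid card. The lookup tables are made-up stand-ins for the real geo-IP and BIN databases a validation service would consult.

```python
# Invented sample data: a real fraud screen queries live geo-IP and
# BIN databases instead of these toy dictionaries.
IP_COUNTRY = {"203.0.113.7": "RO", "198.51.100.2": "US"}  # fake geo-IP lookup
BIN_TYPE = {"411111": "credit", "400022": "prepaid"}      # fake BIN lookup

def fraud_signals(ip: str, billing_country: str, card_bin: str) -> list[str]:
    """Return a list of suspicious mismatches for this order (empty = clean)."""
    signals = []
    ip_country = IP_COUNTRY.get(ip)
    if ip_country and ip_country != billing_country:
        signals.append(f"IP country {ip_country} != billing country {billing_country}")
    if BIN_TYPE.get(card_bin) == "prepaid":
        signals.append("prepaid card BIN")
    return signals

print(fraud_signals("203.0.113.7", "US", "400022"))  # both signals fire
print(fraud_signals("198.51.100.2", "US", "411111")) # clean order: []
```

No single signal proves fraud; the value is in surfacing mismatches for review before the order ships.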

Not only does data quality impact employee productivity, customer service, and your bottom line, it can also affect decision making. Your executive team relies on data to make decisions. However, if that data is not accurate, genuine, or current, those decisions can’t possibly be the best decisions for your company.

Data quality impacts the entire organization, for better or for worse. Yet few CDQOs exist. It is essential to manage data quality at the executive level and then empower the entire team to take ownership of the quality of the data that they generate and/or encounter.

How to prioritize data quality in your organization

Start by putting one of your C-level executives in charge of data quality. From there, schedule a demo with Service Objects. We can help you improve data quality with our easy-to-use contact validation APIs.

Request Data Quality Demo

Six Features to Look for in a Data Validation Provider

You may have recently seen our whitepaper and Data Validation infographic, Hitting the Data Trifecta, on data quality excellence. It describes how data becomes high quality when it is genuine, accurate, and up-to-date.

A data validation provider can help you achieve data quality excellence, but how do you determine which provider is the right fit for your company?

Our complimentary one-pager, Six Features Key to Choosing a Data Validation Provider, can help you out! If you’re looking for data quality solutions to reduce fraud and waste in your organization, it’s time to read this – download now!


Infographic: Hit the Data Trifecta to Ensure Data Quality Excellence!

Last week we released our newest whitepaper – Hitting the Data Trifecta. To accompany it, we’ve created a guide to help you determine what type of contact validation tool you’ll need to help with different types of incomplete, incorrect or potentially fraudulent contact information.

With genuine, accurate and up-to-date contact data, you’re on the way to data quality excellence and hitting the Data Trifecta!

Data Quality Infographic

Introducing: The Data Trifecta

Data quality, or lack thereof, impacts businesses in many ways. Poor data quality, for example, wastes time, paper, and money. Poor data quality can also mean shipping delays, canceled orders, and fraudulent orders. Examples of poor data quality include: typos in addresses such as “Lem” Street instead of “Elm” Street, bogus names such as Donald Duck, missing or incorrect ZIP codes, transposed phone numbers, and IP addresses that don’t align to contact info.


In contrast, excellent data quality means that your business, marketing and operational efforts pay off. Your team won’t waste time calling wrong or non-existent phone numbers or sending expensive mailers to nowhere. In addition, ensuring data quality helps to catch potentially fraudulent orders before they are processed. Excellent data quality doesn’t happen by accident; you need to implement the right tools in order to hit the “Data Trifecta.”

Data Trifecta

What is the Data Trifecta?

The Data Trifecta contains three equally important data quality components: genuine, accurate, and up-to-date data.

Hitting the Data Trifecta is a worthy goal, and it’s easier to achieve than you may think. Learn more about the Data Trifecta and data validation by downloading our free whitepaper, Hitting the Data Trifecta, today.

Genuine, Authentic and Accurate Data… What’s the Difference?

At the heart of our data validation suite is our company’s mission: deliver the most genuine, authentic and accurate data possible to our customers. Often we’re asked to clarify the differences between “genuine, authentic and accurate” data.

This is how we explain the differences between these three very important data validation points:

Example 1 – Accurate Data

Mike Wilson visits your website and fills out a registration form. When Mike enters his address, he inadvertently types in “123 Mian Strt” instead of “123 Main Street.” This is an example of data that is NOT ACCURATE, even though Mike Wilson is a real person who meant to enter his valid information. Accuracy issues with mailing address data waste an exorbitant amount of time and money in postage and printing. With the most accurate data, companies reduce wasted mailing costs, paper, and resources.

Example 2 – Authentic Data

Mike Wilson visits your website and fills out a registration form. When he enters his address, he types in 123 Main Street, Los Angeles, CA. While the information and address that Mike entered are technically accurate, our web services confirm that the address is a large parking lot, and Mike’s IP address is in another country. Further, the bank identification number tells you that the card is a prepaid gift card, which may be a red flag as well. This Mike Wilson is NOT authentic. Verifying data authenticity is a critical step in detecting and preventing fraudulent data and orders from entering your systems.

Example 3 – Genuine Data

Mike Wilson visits your website and registers with the name Mickey Mouse. He enters the address 1313 Disneyland Railroad, Anaheim, CA. This is an accurate, real name and address; however, it is definitely NOT genuine. When people use bogus, vulgar, or celebrity names in lieu of their genuine information, those records have no place in your contact database.
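A minimal sketch of a genuineness screen along these lines: reject registrations whose name appears on a deny-list of known bogus or celebrity names. Real lead-validation services score many more signals than a simple list; this deny-list is illustrative only.

```python
# Illustrative deny-list; a real lead-validation service maintains a far
# larger dataset of bogus, vulgar, and celebrity names.
BOGUS_NAMES = {"mickey mouse", "donald duck", "spongebob squarepants"}

def is_genuine_name(name: str) -> bool:
    """Return False when the name matches a known bogus entry."""
    return name.strip().lower() not in BOGUS_NAMES

print(is_genuine_name("Mickey Mouse"))  # False: not genuine
print(is_genuine_name("Mike Wilson"))   # True
```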

Making sure that your data is genuine, authentic and accurate is one of the most vital components in data quality excellence.

If you have questions about data validation, feel free to call us; we’re happy to talk!

The Fight for Data Integrity

The importance of data quality to companies cannot be denied; but how many know how to distinguish high-quality data from poor data? Even if companies identify a data integrity issue, do they understand the inefficiencies and wasted resources that result from it? Oftentimes the individuals who recognize a data integrity problem are not the decision makers, resulting in a misunderstanding of how poor data quality impacts the bottom line.

Organizations need to become aware that every customer entry point is an opportunity for bad data to enter their systems. Proactive measures should be taken to keep contact records and databases clean.

The first step a company should take to improve data integrity is to identify in real-time the areas where bad data is entering the system. Poor data quality has a downstream effect on all systems it interacts with, so the best solution is to reduce and prevent bad data directly at the source.

Common Origins of Bad Data:

  • Online customers:  Customers may intentionally enter incorrect information into web forms.
  • Front-line employees:  Customer Care representatives or support personnel are pressed for time to help as many customers as possible.  This can lead to abbreviated or incorrect data being entered into the system.
  • Typing errors:  Fingers can get tangled on a keyboard; letters or numbers get transposed or misspelled.

Poor Data Quality

Taking action to improve the quality of data entering your database can lead to significant business gains and higher levels of customer satisfaction. One simple way to improve data integrity is to verify, correct, and append customer contact information in real time. This includes performing lead validation, address validation, phone number append, and email validation on contact records before they enter the database.
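One common way to wire such checks at the point of entry is to run each record through a chain of validators before it is written to the database. In the sketch below, the validator functions are trivial placeholders standing in for the real address, phone, and email services mentioned above.

```python
import re

# Placeholder checks: real validation services verify deliverability and
# line assignment, not just format.
def valid_email(r): return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", r["email"]))
def valid_phone(r): return len(re.sub(r"\D", "", r["phone"])) == 10  # US-style number
def valid_zip(r):   return bool(re.fullmatch(r"\d{5}(-\d{4})?", r["zip"]))

VALIDATORS = [valid_email, valid_phone, valid_zip]

def validate_record(record: dict) -> list[str]:
    """Return the names of the checks this record failed (empty = clean)."""
    return [v.__name__ for v in VALIDATORS if not v(record)]

rec = {"email": "mike@example.com", "phone": "(805) 555-1212", "zip": "9311"}
print(validate_record(rec))  # the four-digit ZIP fails the zip check
```

The chain pattern makes it easy to swap a placeholder for a real service call later without touching the entry-point code.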

Failing to put proper data integrity procedures in place today can lead to significant waste down the road. Companies need to take the time to protect their most important business asset: customer data.