Service Objects’ Blog

Thoughts on Data Quality and Contact Validation

Posts Tagged ‘Data Quality’

To Be Customer-Centric, You Have To Be Data-Centric

In today’s fast-paced world, customers have become more demanding than ever before. To be truly customer-centric, organizations need to build their business models on critical analysis of their customers, and that requires being data-centric.

Today, customers expect companies to be available 24/7 to resolve their queries, and to provide seamless information about their products and services. Falling short on either can have a serious impact on buying decisions. Customer-centric organizations need to adopt a data-centric approach not just for targeted customer interactions, but also to survive competition from peers who are already data-centric.

Customer-centric organizations need to go data-centric

Why?

Today, customers inquire a great deal before making a decision, and social media is the channel they use most. A satisfactory experience is not limited to prompt answers to customer queries alone: customers expect a good experience before making the purchase, during installation of the product or deployment of the service, and even after the sale. Thus, to retain a customer’s loyalty and drive steady profits, companies need to be customer-centric. And that can only happen if companies adopt a data-centric approach.

Big data comes in handy in developing a model that delivers an optimal customer experience. It helps build a profound understanding of the customer, such as what they like and value most and their lifetime value to the company. Moreover, every department in a data-centric organization can share the same view of the customer, which makes for efficient customer interactions. Companies like Amazon and Zappos are prime examples of customer-centric organizations that depend heavily on data to provide a unique experience to their customers, and it has clearly helped them come a long way.

How?

Companies can collect a lot of information that can help them become customer-centric. Here are some ways in which they can do so:

  • Keep a close eye on any new data that could help them stay competitive, such as product prices and the money invested in logistics and in-product promotion, and constantly monitor the data that reveals the drivers of these income sources.
  • Reach out to customers from different fields and with varying skill sets to derive as many insights as possible, helping build a better product.
  • Develop a full-scale data project that will help them collect data and apply it to make a good data strategy and develop a successful business case.

Today, there is no escaping customer expectations. Companies need to satisfy customers to earn repeat business. Customer satisfaction is the backbone of selling products and services, maintaining a steady flow of revenue, and ensuring business continuity. And for all of that to happen, companies need to gather and democratize as much information about their customers as possible.

Reprinted with permission from the author. View original post here.

Author: Naveen Joshi, Founder and CEO of Allerin

What Role Does Data Quality Play in the General Data Protection Regulation (GDPR)?

With the arrival of 2018, many businesses are realizing the deadline to comply with the General Data Protection Regulation (GDPR) is looming in the not too distant future. In fact, according to Forrester Research, 80% of firms affected by the GDPR will not be able to comply with the regulation by the May 2018 deadline. GDPR has many different components that a business will need to meet to achieve compliance; one of which is making sure contact data is kept accurate and up-to-date.

Understanding the important role that data quality plays in complying with GDPR, Service Objects will host a webinar to provide information to those looking to get started. This live, informational webinar, “The Role of Data Quality in the General Data Protection Regulation (GDPR),” will take place on January 24, 2018 at 9:00am.

The interactive event will feature data quality expert Tom Redman, President of Data Quality Solutions, who will provide an overview of the regulation recently enacted by the EU, and why US businesses need to pay attention. He will also discuss how engaging in data quality best practices not only makes it easier to achieve compliance, but also makes good business sense.

According to Geoff Grow, Founder and CEO of Service Objects, businesses can incur fines of up to €20 million or 4% of a company’s annual revenue for not complying with the GDPR. The recently enacted EU regulation requires that every effort be taken to keep contact data accurate and up-to-date. Service Objects’ data quality solutions can help businesses meet this requirement, as well as identify which contacts reside in the EU and therefore fall under the GDPR.

To register for this complimentary webinar, visit https://www.serviceobjects.com/gdpr-webinar.  All those who register will also receive a link to the recording after the event.

Salesforce Data Quality Tools Integration Series – Part 1 – Apex Insert Trigger

With Salesforce being the dominant platform in the Customer Relationship Management (CRM) universe, we are excited to demonstrate how our tools can improve the quality of your contact data in this robust system.  Salesforce is highly customizable, and one of the great things about it is that you don’t need to be a developer to create a rich user experience on the system.  Being a developer will help with understanding some of the coding concepts involved in some of the components, though. With all the data in your organization, don’t you want it to be as clean, accurate, and monetizable as possible? The Service Objects suite of validation API tools is here to help you achieve those goals. Our tools make your data better so you can gain deeper insights, make better decisions and achieve better results.

This blog will demonstrate various ways that a Salesforce administrator or developer can boost the quality of their data.  We will look into flows, triggers, classes, Visualforce components, Lightning components, and more as we demonstrate the ease of integration and the power of our tools in this series.

To get started, we will focus on demonstrating an Apex trigger with a future class.  I will show you the code, part by part, and discuss how it all fits together and why.  You will also learn how to create an Apex trigger, if you didn’t already know.

Apex triggers allow you to react to changes on a Salesforce object.  For our purposes, those objects will typically be ones that contain some kind of contact information: a name, address, email, phone, IP address, or various combinations of these.  In this example, we will use the Contact Salesforce object, specifically its mailing address fields, though the Account object would have made just as much sense to demo.  Service Objects has services for validating United States, Canadian and international addresses.  In this example, I am going to use our US address validation API.

We are going to design this trigger to work with both a single Contact insert and bulk Contact inserts.  This trigger will have two parts: the Apex trigger and an Apex future class.  We will use the trigger to gather all of the Contact Ids being inserted and the future class will process them with a web service call to our address validation API.

It is important to note a couple of things about the future method and why it is used.  First, using a future call tells the platform to wait and process the method at the next convenient time. Remember that Salesforce is a multi-tenant platform, meaning organizations on the platform share processing resources, among others.  With that in mind, the platform governs processing so that everyone gets the same experience and no single organization can monopolize system resources. Future calls typically get initiated very quickly, and while there are no guarantees on timing, you can be sure that the future method will run soon after it is called. Second, callouts to a web service cannot be executed within a trigger, so the future method acts as a proxy for that functionality.  There are plenty more details and ways that a web service can be called, and you can dive deep into the topic in the Salesforce documentation. If for no other reason, this pattern forces you to separate the web service call from your trigger, in turn exposing your future call to other code you may want to write.

Once we have finished the trigger and the future class, we will test the functionality, and then you will have some working code ready to deploy from your sandbox or development edition to your live org.  But wait, don’t forget to write those unit tests and get over 75% coverage…shoot for 100%.  If you don’t know what I am talking about with unit tests, I suggest you review the testing documentation on Salesforce.

The results of the future method will update mailing address fields and custom fields on the Contact object.  For this example, here are the fields you will need to create and their respective data types.  We will not need these until we get to the future method but it is a good idea to get them created and out of the way first.

Field name                     Internal Salesforce name     Type                   Service Objects field name
dotsAddrVal_DPVScore           AddrValDPVScore__c           Number(1, 0)           DPV
dotsAddrVal_DPVDescription     AddrValDPVDescription__c     Text(110)              DPVDesc
dotsAddrVal_DPVNotes           AddrValDPVNotes__c           Long Text Area(3000)   DPVNotesDesc
dotsAddrVal_IsResidential      AddrValIsResidential__c      Checkbox               IsResidential
dotsAddrVal_ErrorDescription   AddrValErrorDescription__c   Long Text Area(3000)   Desc
dotsAddrVal_ErrorType          AddrValErrorType__c          Text(35)               Type

The last thing we will need to set up before we get started is registering the Service Objects’ endpoint with Salesforce so that we can make the web service calls to the address validation API.  The page to add the URL to is called “Remote Site Settings” and can be found in Home->Settings->Security->Remote Site Settings. This is the entry I added for this blog:

[Remote Site Settings screenshot: a remote site named ServiceObjectsAV3 pointing to https://trial.serviceobjects.com]

Be aware that this URL will only work for trial keys.  With a live/production key, you will want to add entries for both ws.serviceobjects.com and wsbackup.serviceobjects.com; we’ll explain more about that later.  We named this one ServiceObjectsAV3, but you can name it whatever you want.

Let’s get started with the trigger code.  The first thing we need is to set up the standard signature of the call.

The trigger will act on the Contact object and will execute after an insert.  Next, we will loop through the inserted Contact records, pulling out all of the associated Contact Ids.  Here you can filter out contacts or implement other business logic before adding each Contact Id to the list of contacts to update.

Once we have gathered all the Ids, we will send them to the future method, which is expecting a list of Ids.

As you can see, this will work on one-off inserts or bulk inserts.  Since there is not much code to this trigger, I’ll show you the entire sample code for it here.
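
Since the original code screenshots are not reproduced here, the following is a minimal sketch of what the complete trigger could look like. It uses the TestTrigger and TestUtil names from the sample files linked at the end of this post; the validateAddresses method name is an assumption for illustration.

    trigger TestTrigger on Contact (after insert) {
        // Gather the Ids of every Contact in this insert (works for single and bulk inserts)
        List<Id> contactIds = new List<Id>();
        for (Contact c : Trigger.new) {
            // Filtering or other business logic could go here before queuing the Id
            contactIds.add(c.Id);
        }
        // Hand the Ids off to the future method, which makes the web service callout
        if (!contactIds.isEmpty()) {
            TestUtil.validateAddresses(contactIds);
        }
    }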

So that was painless.  Now let’s move to the future class, where we will see how easy it is to make the call to the web service as well.

Future methods need to be static and return void, since they do not return any values.  They also need to be decorated with the @future annotation with callout=true.
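
As a sketch, the decorated signature could look like this (again using the TestUtil class name from the sample files; the method name is illustrative):

    public class TestUtil {
        @future(callout=true)
        public static void validateAddresses(List<Id> contactIds) {
            // the web service callout and record updates go here
        }
    }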

It will be more efficient to update the newly inserted records all at once instead of one at a time, so we will store the results from our address validation web service in a new list of Contacts.

Based on the results from the service, we will update the mailing address on the Contact record and/or the DPV note descriptions or errors, as well as the Is Residential flag.  Some of these fields are standard on the Contact object, and some are the custom ones we created at the beginning of this project.  Here is a sample that initiates the loop over the Contact Ids passed into this method from the trigger, along with the SOQL call to retrieve their details.
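
A sketch of that loop and query, assuming the standard Contact mailing address fields:

    // Collect the updated records so they can be committed in a single DML call at the end
    List<Contact> updatedContacts = new List<Contact>();
    // Retrieve the mailing address details for each inserted Contact Id
    for (Contact c : [SELECT Id, MailingStreet, MailingCity, MailingState, MailingPostalCode
                      FROM Contact WHERE Id IN :contactIds]) {
        // ... the callout and response handling for each Contact go here (shown below)
    }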

In case you are wondering why we didn’t just create a list of Contacts in the trigger and send those in, instead of building up a list of Contact Ids: there is a limitation to @future calls.  You can only pass in primitive types such as Ids, strings, integers and so on.  So we went with a list of Ids, since in Salesforce an Id is its own primitive type.

The code, shown in the next screen shot, demonstrates our best practices for service failover to help ensure 100% uptime when making calls to the API.  Note that with a live production key for our API, the first trial.serviceobjects.com URL would need to be ws.serviceobjects.com, and the second one, inside the “if” statement, would need to be wsbackup.serviceobjects.com.

I left both of these as trial.serviceobjects.com because most people starting out will be testing their integration with a free trial key.  In the screen shot you will see that I have added the API license key to the call as “ws-72-XXXX-XXXX”.  This key is fictitious; you will need to replace it with your own key, matching the live/production or trial endpoint your key is associated with.  A best practice is to “hide” the key in a custom variable or custom configuration instead of exposing it here in the method.
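
Here is a rough sketch of the callout with failover, sitting inside the Contact loop shown earlier. Treat the GetBestMatchesJson path and parameter names as assumptions for illustration; check the developer guide for the exact request format. The important part is the pattern: call the primary endpoint, then retry against the backup if the call fails.

    // Build the query string from the Contact's mailing address (values must be URL-encoded)
    String query = '?Address=' + EncodingUtil.urlEncode(c.MailingStreet == null ? '' : c.MailingStreet, 'UTF-8')
        + '&City=' + EncodingUtil.urlEncode(c.MailingCity == null ? '' : c.MailingCity, 'UTF-8')
        + '&State=' + EncodingUtil.urlEncode(c.MailingState == null ? '' : c.MailingState, 'UTF-8')
        + '&PostalCode=' + EncodingUtil.urlEncode(c.MailingPostalCode == null ? '' : c.MailingPostalCode, 'UTF-8')
        + '&LicenseKey=ws-72-XXXX-XXXX'; // fictitious key: replace with your own, ideally hidden in custom configuration

    Http http = new Http();
    HttpRequest req = new HttpRequest();
    req.setMethod('GET');
    // With a live key, this endpoint would be ws.serviceobjects.com
    req.setEndpoint('https://trial.serviceobjects.com/AV3/api.svc/GetBestMatchesJson' + query);
    HttpResponse res;
    try {
        res = http.send(req);
    } catch (System.CalloutException e) {
        res = null;
    }
    // Failover: with a live key, the backup endpoint would be wsbackup.serviceobjects.com
    if (res == null || res.getStatusCode() != 200) {
        req.setEndpoint('https://trial.serviceobjects.com/AV3/api.svc/GetBestMatchesJson' + query);
        res = http.send(req);
    }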

Once we get a response back from the call to the API and everything is okay, we set up some variables and start parsing the response.  There are several ways to parse JSON, and definitely better ones than described here, but this is not an exercise in parsing JSON; it is an example of how to integrate.  In this example, we loop through the JSON looking for the field names we are interested in.  We are looking for:

  • Address1
  • City
  • State
  • Zip
  • DPV
  • DPVDesc
  • DPVNotesDesc
  • IsResidential
  • Type
  • Desc

But the service returns many other valuable fields, which you can learn about in our comprehensive developer guide found here, along with other helpful information.  Remember, if you do end up using more fields from the service and you want to display them or save them in your records, you will need to create corresponding custom fields for them.  The next screen shot shows part of the code that pulls the data from the response.
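
A simplified sketch of that parsing loop, using Apex’s streaming JSONParser (the variable names are illustrative):

    // Walk the JSON response and capture the fields we are interested in
    String addr1, city, state, zip, dpv, dpvDesc, dpvNotesDesc, isRes, errType, errDesc;
    JSONParser parser = JSON.createParser(res.getBody());
    while (parser.nextToken() != null) {
        if (parser.getCurrentToken() == JSONToken.FIELD_NAME) {
            String name = parser.getText();
            parser.nextToken(); // advance to this field's value
            if (name == 'Address1') { addr1 = parser.getText(); }
            else if (name == 'City') { city = parser.getText(); }
            else if (name == 'State') { state = parser.getText(); }
            else if (name == 'Zip') { zip = parser.getText(); }
            else if (name == 'DPV') { dpv = parser.getText(); }
            else if (name == 'DPVDesc') { dpvDesc = parser.getText(); }
            else if (name == 'DPVNotesDesc') { dpvNotesDesc = parser.getText(); }
            else if (name == 'IsResidential') { isRes = parser.getText(); }
            else if (name == 'Type') { errType = parser.getText(); }
            else if (name == 'Desc') { errDesc = parser.getText(); }
        }
    }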

In practice, you may want to base the decision to update the original address on more criteria, but in this example we base it on the DPV score alone.  You can find out more about the different DPV codes in the documentation.  When the DPV value is 1, the service has returned a valid mailing address.  Corrections to the address may have occurred, so it is best to update the address fields on the Contact record, and that is what we do here, just before adding the updated Contact to our new list of Contacts.
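
A sketch of that decision, writing the parsed response values into the standard address fields and the custom fields we created earlier:

    if (dpv == '1') {
        // DPV of 1 means a valid, deliverable address: apply any corrections the service made
        c.MailingStreet = addr1;
        c.MailingCity = city;
        c.MailingState = state;
        c.MailingPostalCode = zip;
    }
    // Record the validation details in our custom fields
    c.AddrValDPVScore__c = String.isBlank(dpv) ? null : Decimal.valueOf(dpv);
    c.AddrValDPVDescription__c = dpvDesc;
    c.AddrValDPVNotes__c = dpvNotesDesc;
    c.AddrValIsResidential__c = (isRes != null && isRes.equalsIgnoreCase('true'));
    c.AddrValErrorType__c = errType;
    c.AddrValErrorDescription__c = errDesc;
    updatedContacts.add(c);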

Once we have looped through the entire list of Ids that we sent into this method, we are ready to do the update in Salesforce.  Before this point, nothing has been saved.
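
That final step is a single DML statement on the list we have been building:

    // Commit all of the validated and corrected Contacts in one DML operation
    if (!updatedContacts.isEmpty()) {
        update updatedContacts;
    }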

And there you have it; go ahead and add some new contacts with addresses and test it out.  Over at the Contacts tab, I added a new contact and then refreshed the page to see the results.  I purposely made an error in the address so we can see more of the validation results.

The address we added is for our office, but with several errors in the street name, city and zip code.  Let’s see if our system gets it right.

The address validation API fixed the address and returned that the fixed address is the correct address to use based on the inputs.  Next, we will demonstrate a bad, non-salvageable address. You will see more than a few things wrong here.

There were so many problems that the address was not salvageable at all.

Let’s try one more, but this time, instead of a completely bad address, we will add one that is only partially bad: missing key parts.

The input address is still not deliverable as-is, but this time we got back details that can help with fixing the problems.

From the results, we can see that the address exists, but is perhaps missing a unit number or some other key element needed for full delivery by the USPS.

In conclusion, there are a few things to take into consideration when integrating data quality tools into Salesforce. First, you need to do more error checking.  These were simple examples to show how to integrate our service; the error checking was the bare minimum, and not something I would expect to see in a production environment. So, please…please, do more error checking. Second, don’t forget to write the unit tests and try to get 100% coverage.  Salesforce only requires 75% to be able to deploy your code, but we highly recommend striving for 100%; it is definitely attainable.  Salesforce has this requirement for several reasons, one being that when Salesforce updates their platform, they can run all the unit tests in all the organizations on the platform and ensure the updates are not going to break anyone’s system. It is just good practice anyway.  There is tons of documentation on Salesforce that will help you down the right path when it comes to testing. Lastly, we didn’t make any provision in the code for a contact being inserted without an address to validate, or without enough address components.  Clearly, you would want to add a check to see if you have a valid combination of data that will allow for an address validation against our API.  You will want to test for either of the following combinations, which represent the minimum required fields (a sketch of such a check follows the list).

  • Combination 1
    • Address
    • City
    • State
  • Combination 2
    • Address
    • Zip code
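
As mentioned above, here is a minimal sketch of such a check, suitable for the trigger loop where the Contact Ids are gathered:

    // Only queue the Contact for validation if it has a usable combination of inputs:
    // (address + city + state) or (address + zip code)
    Boolean hasCityState = String.isNotBlank(c.MailingStreet)
        && String.isNotBlank(c.MailingCity)
        && String.isNotBlank(c.MailingState);
    Boolean hasZip = String.isNotBlank(c.MailingStreet)
        && String.isNotBlank(c.MailingPostalCode);
    if (hasCityState || hasZip) {
        contactIds.add(c.Id);
    }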

You can find the sample code for this blog on our web site with the file names TestTrigger.trigger and TestUtil.cls at this link.

The Direct and Indirect Costs of Poor Data Quality

Imagine that your company runs an online ad for a product. But when your customer gets to your website, the product has actually been discontinued. And from then on, every time the customer surfs the web, they are constantly served with banner ads for this non-existent product.

This scenario really happens more often than you think. And it is a perfect example of marketing at its worst: spending your budget to lose customers, ruin your service reputation, and destroy your brand.

We often talk about data quality on this blog, but this time I would like to focus on the results of lack of data quality. In the case above, poor linkages between inventory data and marketing lead to a bad customer experience AND wasted marketing dollars. Much the same thing is true with our specialties of contact data and marketing leads: bad data leads to a wellspring of avoidable costs.

First there are tangible costs. Bad leads and incorrect customer addresses lead to specific, measurable outcomes. Wasting human effort. Throwing away precious marketing capital. Penalties for non-compliance with telephone and email contact laws. Re-shipment costs and inventory losses. According to one Harvard Business Review article, the real costs of poor data quality now exceed $3 trillion per year in the United States alone.

Then there is the part few people pay enough attention to: the indirect costs of poor data quality. In a recent piece for CustomerThink, data management expert Chirag Shivalker points to factors such as sales that never materialize, potential customers who turn away, and the subtle loss of repeat business. Whether it is a misdirected marketing piece, an unwanted pitch on someone’s cell phone, or a poor customer experience, some of the biggest consequences of poor data quality are never quantified – and, perhaps more importantly, never noticed.

Finally there is the fact that data, like your car, is a depreciating asset. Even the most skillfully crafted database will degrade over time. Contact information is particularly vulnerable to decay, with estimates showing that as much as 70% of it goes bad every year. A recent article from insideBIGDATA put the scope of this in very stark terms: each and every hour, over 500 business addresses, 800 phone numbers and 150 companies change – part of a growing ball of data that, per IDC, will swell to over 160 zettabytes (for the uninitiated, a zettabyte is one sextillion, or 10 to the 21st power, bytes). And the bill for validating and cleaning up this data can average $100-200K or more for the average organization. So an ongoing approach is needed to preserve the asset value of this data, as well as prevent the costs and negative consequences of simply letting it age.

A recent data brief from SiriusDecisions breaks down these costs of poor data quality into four key areas: hard costs, productivity costs, hidden costs, and opportunity costs. And it is not like these costs are exactly a surprise: according to Accenture, data inaccuracy and data inconsistency are the leading concerns of more than three-quarters of the executives they surveyed. Their solution? Better use of technology, combined with process improvements such as formal data governance efforts.

The people in our original example above probably had no idea what kind of lost business and negative brand image they were creating through poor data quality. And ironically, I would say let’s keep it that way. Why? Because it is a competitive advantage for people like you and me who pay attention to data quality – or better yet, build processes to automate it. If you are ready to get started, take a look at some of our solutions and let’s talk.

Service Objects’ Application Engineers: Helping You Get Up and Running From Day 1

At Service Objects, one of our Core Values is Customer Service Above All. As part of this commitment, our Application Engineers are always available to answer any technical questions from prospects and customers. Whether users are beginning their initial investigation or need help with integration and deployment, our Engineers are standing by. While we continually make our services as easy to integrate as possible, we’d like to touch on a few common topics that are particularly helpful for users just getting started.

Network Issues

Are you experiencing networking issues while making requests to our web services? It is very common for outbound requests to be limited by your firewall, and a simple rule update can often solve the issue. When matters extend beyond simple rule changes, we are more than happy to schedule a meeting between our networking team and yours to get to the root cause and solve the issue.

Understanding the Service Outputs

Another common question revolves around the service outputs: how they should look and how they can be interpreted. From a high level, it is easy to understand what the service can provide, but when it comes down to parsing the outputs, it can sometimes be a bit trickier. Luckily, there is documentation for every service and each of its operations. Our developer guides are the first place to check if you are having trouble understanding how individual fields can be interpreted and applied to your business logic. Every output has a description that provides insight into what that field means. Beyond the documentation, our Application Engineering team is available via multiple channels to answer your questions, including email, live chat, and phone.

Making the Move from Development to Production

Eventually, everyone who moves from being a trial user to a production user undergoes the same steps. Luckily for our customers, moving code from development to production is as easy as changing two items.

  • The first step is swapping out the trial license key for a production key.
  • The second step is to point your web service calls from our trial environment to our production environment. Our trial environment mirrors the exact outputs that you will find in production so no other code changes are necessary.

We understand that, even though we say it is easy, making the move to production can be daunting. That is why we are committed to providing your business with 24/7/365 technical support. We want the process to go as smoothly as possible and members of our team are standing by to help at a moment’s notice.

We have highlighted only a few broad cases that we have handled throughout our 16 years of providing genuine, accurate, and up-to-date data validation. Many technical questions are unique and our goal is to tackle them head on. If a question arises during your initial investigation, integration, move to production, or beyond, please don’t hesitate to contact us.

Baseball and Data Quality: America’s National Pastimes

By the time October rolls around, the top Major League baseball teams in the country are locked in combat, in the playoffs and then the World Series. And as teams take the field and managers sit in the dugout, everyone has one thing on their mind.

Data.

Honestly, I am not just using a cheap sports analogy here. Many people don’t realize that before my current career in data quality, I was a young pitcher with a 90+ MPH fastball. I eventually made it as far as the Triple-A level of the Pittsburgh Pirates organization. So I know a little bit about the game and how data plays into it. We really ARE thinking about data, almost every moment of the game.

One batter may have a history of struggling to hit a curve ball. Another has a good track record against left-handed pitching. Still another one tends to pull balls to the left when they are low in the strike zone. All of this has been captured as data. Have you noticed that position players shift their location for every new batter that comes to the plate? They are responding to data.

Long before there were even computers, baseball statisticians tracked everything about what happens in a game. Today, with real-time access to stats, and the ability to use data analytics tools against what is now a considerable pool of big data, baseball has become one of the world’s most data-driven sports. The game’s top managers are distinguished for what is on their laptops and tablets nowadays, every bit as much as for who is on their rosters.

And then there are the people watching the game who help pay for all of this – remember, baseball is fundamentally in the entertainment business. They are all about the data too.

A recent interview article with the CIO of the 2016 World Champion Chicago Cubs underscored how a successful baseball franchise leverages fan data at several levels: for example, tracking fan preferences for an optimal game experience, analyzing crowd flow to optimize the placement of concessions and restrooms, and preparing for a rush of merchandise orders in the wake of winning the World Series (although, as a lifelong Cubs fan, I realize that they’ve only had to do that once so far since 1908). For any major league team, every moment of the in-game experience – from how many hot dogs to prepare to the “walk up” music the organist plays when someone comes up to bat – is choreographed on the back of customer data.

Baseball has truly become a metaphor for how data has become one of the most valuable business assets for any organization – and for a competitive environment where data quality is now more important than ever. I couldn’t afford to pitch with bad data on opposing players, and you can’t afford to pursue bad marketing leads, ship products to wrong customer addresses, or accept fraudulent orders. Not if your competitors are paying closer attention to data quality than you are.

So, pun intended, here’s my pitch: look into the ROI of automating your own data quality, in areas such as marketing leads, contact data verification, fraud prevention, compliance, and more. Or better yet, leverage our demographic and contact enhancement databases for better and more profitable customer analytics. By engineering the best data quality tools right into your applications and processes, you can take your business results to a new level and knock it out of the park.

Testing Through Batches or Integration: At Service Objects, It’s Your Choice

More often than not, you have to buy something to really try it.  At Service Objects, we think it makes more sense to try before you buy.  We are confident that our service will exceed expectations and are happy to have prospects try our services before they spend any money on them.  We have been doing this from the day we opened our doors.  With Service Objects, you can sign up for a free trial key for any of our services and do all your testing before spending a single cent.  You can learn about the multiple ways to test drive our services from our blog, “Taking Service Objects for a Test Drive.” Today, however, I am focusing on batch testing and trial integration.

Having someone give their best explanations to convey purpose or functionality can be worthwhile but, as the saying goes, a picture is worth a thousand words.  If you want to know how our services work, the best way is simply to try them out for yourself.  With minimal effort, we can run a test batch for you and have it turned around within a couple of hours…even less time in most cases.  Another way we encourage prospects to test is by directly integrating our API services into their systems.  That way you see exactly how the services behave and get a better feel for our sub-second response times.  The difference between the two: a test batch shows you the results, while testing through integration demonstrates how the system behaves to deliver those results.

TESTING THROUGH BATCHES

Test batches are great.  They give you an opportunity to see the results from the service first hand, including all the different fields we return.  Our Account Executives are happy to review the results in detail, and you always have the support of the Applications Engineering team to help you along.  With test batches, you can quickly see that a lot of information is returned regardless of the service you are interested in.  Most find it is far more information than expected, and clients often find that the additional information helps them solve problems beyond their initial purpose.  Another aspect that becomes clearer is the meaning of the fields: you get to see them in their natural environment and gain a better understanding than strict academic definitions provide.  Lastly, it is important to see how your own data fares through the service, and it is far more powerful to show how your data can be improved than to just discuss it conceptually.  That is where our clients get really excited about our services.

TESTING THROUGH INTEGRATION

Testing through integration is a solid way to gain an understanding of how the service behaves and what it returns.  It is a great way to get a feel for the responses that come back and how long they take.  More importantly, you can identify and fix issues in your process long before you start paying for the service.  Plus, our support team is here to assist you through any part of the integration process.  Our services are built to be straightforward and simple to integrate, with most developers completing the work in a short period of time.  Regardless, we are always here to help.  Although we highly recommend prospects run their own records through the service, we also provide sample data to help you get started.  The important part is you have a chance to try the service in its environment before making a commitment.

Going forward with either of these approaches will quickly demonstrate how valuable our services are. Even more powerful is when you combine the two testing procedures with your own data for the best understanding of how they will behave together.

With all that said, if you’re still unsure how to best begin, just give us a call at 805-963-1700 or sign up for a free trial key and we’ll help you get started.

CASS and DPV: A Higher Standard for Address Accuracy

If you market to or serve people by mail, there are two acronyms you should get to know: CASS and DPV. Here is a quick summary of both of them:

  • CASS stands for the Coding Accuracy Support System™. As the name implies, its function is to support address verification software vendors with a measurable standard for accuracy. It also represents a very high bar set by the US Postal Service to ensure that address verification meets very strict quality standards.
  • DPV stands for Delivery Point Validation™. This is a further capability supported under CASS, making sure that an address is deliverable.

You may ask, “If an address is accurate, why do we have to check to make sure it is also deliverable?” The answer lies in the broader definition of what an address is – a placeholder for a residence or business that could receive mail. Not every address is, in fact, deliverable: for example, 45 Elm Street might be someone’s residence, while 47 Elm Street might currently be a vacant lot – or not exist at all. Another example is multi-unit dwellings that share an address: 100 State Street, Apartment 4 may be deliverable, while 100 State Street, Apartment 5 may not exist. So you want to ensure addressability AND deliverability for every address within your contact database.

Now, here is why you need to care about CASS and DPV in particular:

Rigorous. CASS certification is truly the data quality equivalent of Navy SEAL training. The first step is an optional (Stage I) test that lets developers run a sample address file for testing and debugging purposes. Next is Stage II, a blind 150,000-address test that only returns scores from USPS, not results. To obtain CASS certification, these scores must meet strict passing criteria ranging between 98.5% and 100% in specific categories.

Recurring. CASS certification is not a lifetime badge of honor. The USPS requires software providers to renew their certification every year, with a fresh round of testing required. Service Objects has not only been continuously CASS-certified for much of the past decade, but has also forged a unique partnership with USPS to update and refresh its CASS-certified address data every two weeks.

Reliable. DPV capabilities are based on the master list of delivery points registered with the USPS, which stores actual deliverable addresses in the form of an 11-digit code, incorporating data such as address, unit, and ZIP+4 codes. While the codes themselves can (and do) change frequently, the real key in address deliverability is having up-to-date access to current USPS data. Service Objects licenses DPV tools as an integral part of its address validation capabilities.

Our CASS-certified address engine and continuously updated USPS address data are two of the critical components behind our proprietary address database. Whether you run your addresses through our USPS address validation API in your application or use a convenient batch process, those addresses are instantly compared, validated, corrected, and/or appended to provide accurate results.

If you’ve read this far, it is probably clear that CASS certification and DPV capabilities are critically important for managing your contact data quality. So be sure to partner with a vendor that maintains continuous CASS certification with full support of DPV. Like Service Objects, of course. Contact us to learn what we can do for your contact addresses and marketing leads today!

Getting the Most Out of Data-Driven Marketing

How well do you know your prospects and customers?

This question lies at the heart of what we call data-driven marketing. Because the more you know about the people you contact, the better you can target your offerings. Nowadays smart marketers are increasingly taking advantage of data to get the most bang from their marketing budgets.

Suppose that you offer a deal on a new razor, and limit the audience to adult men. Or take people who already eat fish at your restaurant on Tuesdays, and promote a Friday fish fry. Or laser-target a new lifestyle product to the exact demographic group that is most likely to purchase it. All of these are examples where a little bit of data analytics can make a big difference in the success and response rate of a marketing campaign.

According to UK data marketing firm Jaywing, 95% of marketers surveyed personalize their offerings based on data, although less than half currently measure the ROI of these efforts, and less than 10% take advantage of full one-to-one cross-channel personalization. But these efforts are poised to keep growing, notes their Data Management Practice Director Inderjit Mund: “Data availability is growing exponentially. Adopting best practice data management is the only way marketers can maintain a competitive advantage.”

Of course, data-driven marketing can also go sideways. For example, bestselling business author and television host Carol Roth once found herself peppered with offers for baby merchandise – including an unsolicited package of baby formula – even though she is not the least bit pregnant. Her suspicion? Purchasing baby oil regularly from a major chain store, which she uses in the shower, made their data wonks mistakenly think that she was a new mother. Worse yet, this kind of targeted marketing also led the same chain to unwittingly tip off a father that his daughter was pregnant.

This really sums up the promise, and the peril, of using data to guide your marketing efforts. Do it wrong, and you not only waste marketing resources – you risk appearing inept, or worse, offending a poorly targeted segment of your market base. But when you do it right, you can dramatically improve the reach and efficiency of your marketing for a minimal cost.

This aligns very closely with our view of a marketing environment that is increasingly fueled by data. Among the best practices recommended by Jaywing for data-driven marketing, data quality is front and center with guidelines such as focusing on data management, having the right technology in place, and partnering with data experts. And they are not alone: according to a recent KPMG CEO survey, nearly half of respondents are concerned about the integrity of the data on which they base decisions.

There is a clear consensus nowadays that powering your marketing with data is no longer just an option. This starts with ensuring clean contact data, at the time of data entry and the time of use. Beyond that, smart firms leverage this contact data to gain customer insight in demographic areas such as location, census and socioeconomic data, to add fuel to their address or email-based marketing. With cost-effective tools that automate these processes inside or outside of your applications, the days of scattershot, data-blind marketing efforts are quickly becoming a thing of the past.

Service Objects is the industry leader in real-time contact validation services.

Service Objects has verified over 3 billion contact records for clients from various industries including retail, technology, government, communications, leisure, utilities, and finance. Since 2001, thousands of businesses and developers have used our APIs to validate transactions to reduce fraud, increase conversions, and enhance incoming leads, Web orders, and customer lists.