Posts Tagged ‘Data Quality’


A New Data Privacy Challenge for Europe – and Beyond

New privacy regulations in Europe have recently become a very hot topic again within the business community. And no, we aren’t talking about the recent GDPR law.

A new privacy initiative, known as the ePrivacy Regulation, deals with electronic communications. Technically a revision to the EU’s existing ePrivacy Directive or “cookie law,” and pending review by the European Union’s member states, it could go into effect as early as this year. And according to The New York Times, it is facing strong opposition from many technology giants, including Google, Facebook, Microsoft and others.

Data privacy meets the app generation

Among other things, the new ePrivacy Regulation requires explicit permission from consumers for applications to use tracking codes or collect data about their private communications, particularly through messaging services such as Skype and iMessage, as well as games and dating apps. Companies will have to disclose up front how they plan to use this personal data and, perhaps more importantly, must offer the same access to services whether permission is granted or not.

Ironically, this new law will also remove the previous directive’s incessant “cookie notices” by relying on browser tracking settings instead, even as it tightens the use of private data. This will be a mixed blessing for online services, because a simple default browser setting can now lock out the use of tracking cookies that many consumers routinely approved under the old pop-up notices. As part of their opposition to these new rules, trade groups are painting a picture of slashed revenues, fewer free services and curbs on innovation for trends such as the Internet of Things (IoT).

A longstanding saying about online services is that “when something is free, you are the product,” and this new initiative is one of the more visible efforts by consumers to push back and take control of the use of their information. And Europe isn’t alone in this kind of initiative – for example, the new California Consumer Privacy Act, slated for the late 2018 ballot, will also require companies to provide clear opt-out instructions for consumers who do not wish their data to be shared or sold.

The future: more than just European privacy laws

So what does this mean for you and your business? No one can precisely foretell the future of these regulations and others, but the trend over time is clear: consumer privacy legislation will continue to get tighter and tighter. And the days of unfettered access to the personal data of your customers and prospects are increasingly coming to an end. This means that data quality standards will continue to loom larger than ever for businesses, ranging from stricter process controls to maintaining accurate consumer contact information.

We frankly have always seen this trend as an opportunity. As with GDPR, regulations such as these have sprung from past excesses that lie at the intersection of interruptive marketing, big data and the loss of consumer privacy. Consumers are tired of endless spam and corporations knowing their every move, and legislators are responding. But more importantly, we believe these moves will ultimately lead businesses to offer more value and authenticity to their customers in return for a marketing relationship.

Freshly Squeezed…Never Frozen

Data gets stale over time. You rely on us to keep this data fresh, and we in turn rely on a host of others – including you! The information we serve you is the product of partnerships at many levels, and any data we mine or get from third party providers needs to be up-to-date.

This means that we rely on other organizations to keep their data current, but when you use our products, it is still our name on the door. Here at Service Objects, we use a three-step process to do our part in providing you with fresh data:

Who: We don’t make partnerships with just anyone.  Before we take on a new vendor, we fully vet them to be sure this partnership will meet our standards, now and in the future. To paraphrase the late President Reagan, we take a “trust but verify” approach to every organization we team up with.

What: We run tests to make sure that data is in fact how we expect it to be. This runs the gamut from simple format tests to ensuring that results are accurate and appropriate.

When: Some of the data we work with is updated in real time, while other data is updated daily, weekly, or monthly.  Depending on what type of data it is, we set up the most appropriate update schedule for the data we use.

At the same time, we realize this is a partnership between us and you – so to get the most out of our data and have the best results, we suggest re-checking some of your data points periodically, regardless of whether you are using our API or our batch processing system. Some of the more obvious reasons for this are that people move, phone numbers change, emails change, areas get redistricted, and so on. To maintain your data and keep it current, we recommend periodically revalidating it against our services.

Often businesses will implement our services to check data at the point of entry into their system, and also perform a one-time cleanse to create a baseline. This is all a good thing, especially when you make sure that data is going into your systems properly and is as clean as possible. However, it is important to remember that in 6-12 months some of this data will no longer be current. Going the extra step to create a periodic review of your data is a best practice and is strongly recommended.

We also suggest keeping some sort of time stamp associated with when a record was validated, so that when you have events such as a new email campaign and some records have not been validated for a long time – for example, 12 months or more – you can re-run those records through our service.  This way you will ensure that you are getting the most out of your campaign, and at the same time protect your reputation by reducing bounces.
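For illustration, here is a minimal JavaScript sketch of this kind of staleness check, assuming each record carries a hypothetical lastValidated timestamp (the field name and the 12-month cutoff are examples, not part of our services):

```javascript
// Hypothetical record shape: { email: "user@example.com", lastValidated: "2017-06-01" }
const TWELVE_MONTHS_MS = 365 * 24 * 60 * 60 * 1000;

// Returns the records that have not been validated in the last 12 months,
// so they can be re-run through a validation service before a campaign.
function staleRecords(records, now = Date.now()) {
  return records.filter(
    (record) => now - new Date(record.lastValidated).getTime() > TWELVE_MONTHS_MS
  );
}
```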

Finally, here is a pro tip to reduce your shipping costs: in our Address Validation service, we return an IsResidential indicator that identifies an address as being residential or not.  If this indicator changes, having the most recent results will help your business make the most cost-effective shipping decisions.

For both us and you, keeping your data fresh helps you get the most out of these powerful automation tools. In the end, there is no single verification interval we can recommend that will suit every business across the board, and there will be cases where it isn’t necessary to keep revalidating your data: the right intervals depend mostly on your application. But this is still an important factor to keep in mind as you design and evaluate your data quality process.

To learn more about how our data quality solutions can help your business, visit the Solutions section of our website.

Around the World with Data Privacy Laws

If you work with data, you have certainly heard by now about GDPR: the new European Union laws surrounding consumer data privacy that went into effect May 25, 2018. But how about PIPEDA, NDB, APPI, CCPA, and SHIELD?

These acronyms represent data privacy regulations in other countries (in these cases for Canada, Australia, Japan, California and New York respectively). Many are new or recently expanded, and all are examples of how your legal responsibilities to customers don’t stop with GDPR. More importantly, they represent an opportunity for you and your business to use data quality and 21st century marketing practices to differentiate yourself from your competition.

Data Protection and Privacy Laws Are Becoming Increasingly Popular

Let’s discuss some of these new regulations. According to authentication vendor Auth0, there is a wide range of reasons for their recent proliferation. First, the rollout of GDPR has implications for other countries, including whether personal data can flow to them from the EU – meaning that their data quality and protection regulations must align sufficiently with EU rules for them to be “whitelisted.” New laws now being adopted by other countries address issues such as breach notification, the use of genetic and biometric data, and the rights of individuals to stop their data from being sold.

Moreover, data privacy and security doesn’t stop with Europe and GDPR. Other countries are now starting to explore the rights of consumers in this new era of online information gathering and big data. For example, Japan and other countries now have additional regulations surrounding the use of personal information codes to identify data records, and there is increasing scrutiny on personal data that is gathered through means such as social media.

Contact Data Plays a Key Role in Compliance

Now, let’s talk about your contact data. It often isn’t ready for global data regulations, thanks to practices such as not gathering country information at the point of data entry, or onerous location data entry requirements (like putting “United States” at the end of a long pull-down menu of countries) that encourage false responses. Worse, existing contact data often has serious gaps or outright errors, and it goes bad very quickly: for example, nearly 20% of phone numbers and 35% of email addresses change every year.

Finally, let’s talk about you. In the face of a growing list of data privacy and security regulations, your job isn’t just to become GDPR-compliant. It is to build and maintain a best-practices approach to data quality, which in turn keeps you up to date with both today’s consumer data laws and tomorrow’s.

Data Quality Best Practices Are a Competitive Differentiator

Taking a step back from this flood of new regulations, we would also suggest that an ideal goal isn’t just compliance – it is to leverage today’s data quality environment as a competitive opportunity. Why do these new laws exist? Because of consumer demand. People are tired of interruptive broad-brush marketing, invasive spam, and unwanted telemarketing. When you build your own marketing strategy around better targeting, curated customer relationships, and respect for the consumer, your focus can shift from avoiding penalties to growing your brand and market share faster.

We can help with both of these objectives. For starters, we now offer our Country Detective service, which can process up to 500 contact records and append correct countries to them to help guide your compliance efforts. And for the longer term we offer a free Global Data Assessment, where our team will consult with you at no charge about strategies for data quality in today’s new regulatory and market environment. Interested? Contact us to get the ball rolling, and take the next step in your global market growth.

Instead of focusing on “cleaning dirty customer data,” organizations should focus on the connection between investments in data quality and customer service metrics.

Data Quality and Customer Experience

Once upon a time, customer service and support operations were viewed as the “complaint department” – a back-office function, a necessary evil, and above all a cost center whose role should be reduced as much as possible. These days, it has become increasingly clear that businesses must prioritize data quality. As Thomas Redman advised in a recent guest post, “Getting in front on data quality presents a terrific opportunity to improve business performance.”

While some organizations still have a break/fix mentality about customer support, the very best organizations now view their customer contact operations as the strategic voice of the customer – and leverage customer engagement as a strategic asset. Thanks to tools ranging from CRM to social media, many businesses manage their customer experience as closely as they manage their products and services.

The Strategic Role of Data Quality

This leads us to an important analogy about data quality. Like the “complaint department” days of customer service, many organizations still view data quality as little more than catching and fixing bad contact data. In reality, our experience with a base of nearly 2500 customers has taught us that data quality plays a very strategic role in areas like cost control, marketing reach, and brand reputation in the marketplace.

This worldview is still evolving slowly. For example, according to a 2017 CIO survey by Talend, data quality and data governance remain the biggest concerns of IT departments, at 33 and 37 percent respectively – and yet their top priorities reflect more trendy objectives such as big data and real-time analytics. And back in 2012 Forrester vice president Kate Leggett observed that data quality often remains the domain of the IT department, and data projects for customer service rarely get funded.

Meanwhile, data quality has also become an important component of customer experience. Leggett notes that instead of an IT-driven process of “cleaning dirty customer data,” organizations should reframe the conversation towards the impact of data quality on customer-facing functions, and understand the connection between investments in data quality and customer service metrics.

Here at Service Objects, we see three key areas in the link between data quality and customer experience:

Customer Engagement

When you have good data integrated with effective CRM, you have the ability to market appropriately and serve customers responsively. You can target your messages to the right people, react in real time to customer needs, and create systems that satisfy and delight the people you serve.

Service Failures

Mis-deliver a package because of a bad address, and you make a customer very unhappy. Do things like this even a small percentage of the time, and you gain a reputation as a company that doesn’t execute. Keep doing it and even many customers who haven’t been wronged yet will seek other options where possible, because of the “herd mentality” that builds around those who do complain publicly and on social media.

Strategic Visibility

Your customer data is an important asset that gives you the ability to analyze numerous aspects of your customer relationships and react appropriately. It holds the knowledge of everything from demographics to purchasing patterns, as well as their direct feedback through service and support. Having accurate customer data is central to leveraging this data strategically.

One heartening trend is that more organizations than ever now see the connection between data quality and their customer relationships. For example, one 2017 article in PharmExec.com cited a European customer data survey showing that nearly three-quarters of life sciences respondents feel that having a complete and real-time view of customers is a top priority – while only 40% are satisfied with how well they are doing this. We are seeing similar figures across other industries nowadays, and view this as a good sign that we are moving over time towards a smoother, more data-driven relationship between organizations and their customers.

Recognizing the vital role contact data quality plays in GDPR compliance, Service Objects is offering affected businesses a free data quality assessment.

Free Data Quality Assessment Helps Businesses Gauge GDPR Compliance Ahead of May Deadline

As the May 25, 2018, deadline looms, Service Objects, the leading provider of real-time global contact validation solutions, is offering a GDPR Data Quality Assessment to help companies evaluate whether they are prepared for the new set of privacy rules and regulations.

“Our goal is to help you get a better understanding of the role your client data plays in GDPR compliance,” says Geoff Grow, CEO and Founder, Service Objects. “With our free GDPR Data Quality Assessment, companies will receive an honest, third-party analysis of the accuracy of their contact records and customer database.”

Under the GDPR, personal data includes any information related to a natural person or ‘Data Subject’ that can be used to identify the person directly or indirectly. It can be anything from a name, a photo, an email address or bank details to posts on social networking websites, medical information, or a computer IP address.

Even if an organization is not based in the EU, it may still need to observe the rules and regulations of GDPR. That’s because the GDPR not only applies to businesses located in the EU but to any companies offering goods or services within the European Union. In addition, if a business monitors the behavior of any EU data subjects, including the processing and holding of personal data, the GDPR applies.

Recognizing the vital role contact data quality plays in GDPR compliance, Service Objects decided to offer a free data quality assessment to help those industries affected by the regulation measure the accuracy of their contact records and prepare for the May 2018 deadline.

The evaluation will include an analysis of up to 500 records, testing for accuracy across a set of inputs including name, phone, address, email, IP, and country. After the assessment is complete, a composite score will be provided, giving businesses an understanding of how close they are to being compliant with GDPR’s Article 5.

Article 5 of the GDPR requires organizations collecting and processing personal information of individuals within the European Union (EU) to ensure all current and future customer information is accurate and up-to-date. Not adhering to the rules and regulations of the GDPR can result in a fine of up to 4% of annual global turnover or €20 million (whichever is greater).

“To avoid the significant fines and penalties associated with the GDPR, businesses are required to make every effort to keep their contact data accurate and up-to-date,” Grow added. “Service Objects’ data quality solutions enable global businesses to fulfill the regulatory requirements of Article 5 and establish a basis for data quality best practices as part of a broader operational strategy.”

 

For more information on how to get started with your free GDPR Data Quality Assessment, please visit our website today.

When that data is incomplete, poorly defined, or wrong, there are immediate consequences: angry customers, wasted time, and difficult execution of strategy. Employing data quality best practices presents a terrific opportunity to improve business performance.

The Unmeasured Costs of Bad Customer and Prospect Data

Perhaps Thomas Redman’s most important recent article is “Seizing Opportunity in Data Quality.”  Sloan Management Review published it in November 2017, and it appears below.  Here he expands on the “unmeasured” and “unmeasurable” costs of bad data, particularly in the context of customer data, and why companies need to initiate data quality strategies.

Here is the article, reprinted in its entirety with permission from Sloan Management Review.

The cost of bad data is an astonishing 15% to 25% of revenue for most companies.

Getting in front on data quality presents a terrific opportunity to improve business performance. Better data means fewer mistakes, lower costs, better decisions, and better products. Further, I predict that many companies that don’t give data quality its due will struggle to survive in the business environment of the future.

Bad data is the norm. Every day, businesses send packages to customers, managers decide which candidate to hire, and executives make long-term plans based on data provided by others. When that data is incomplete, poorly defined, or wrong, there are immediate consequences: angry customers, wasted time, and added difficulties in the execution of strategy. You know the sound bites — “decisions are no better than the data on which they’re based” and “garbage in, garbage out.” But do you know the price tag to your organization?

Based on recent research by Experian plc, as well as by consultants James Price of Experience Matters and Martin Spratt of Clear Strategic IT Partners Pty. Ltd., we estimate the cost of bad data to be 15% to 25% of revenue for most companies (more on this research later). These costs come as people accommodate bad data by correcting errors, seeking confirmation from other sources, and dealing with the inevitable mistakes that follow.

Fewer errors mean lower costs, and the key to fewer errors lies in finding and eliminating their root causes. Fortunately, this is not too difficult in most cases. All told, we estimate that two-thirds of these costs can be identified and eliminated — permanently.

In the past, I could understand a company’s lack of attention to data quality because the business case seemed complex, disjointed, and incomplete. But recent work fills important gaps.

The case builds on four interrelated components: the current state of data quality, the immediate consequences of bad data, the associated costs, and the benefits of getting in front on data quality. Let’s consider each in turn.

Four Reasons to Pay Attention to Data Quality Now

The Current Level of Data Quality Is Extremely Low

A new study that I recently completed with Tadhg Nagle and Dave Sammon (both of Cork University Business School) looked at data quality levels in actual practice and shows just how terrible the situation is.


We had 75 executives identify the last 100 units of work their departments had done — essentially 100 data records — and then review that work’s quality. Only 3% of the collections fell within the “acceptable” range of error. Nearly 50% of newly created data records had critical errors.

Said differently, the vast majority of data is simply unacceptable, and much of it is atrocious. Unless you have hard evidence to the contrary, you must assume that your data is in similar shape.

Bad Data Has Immediate Consequences

Virtually everyone, at every level, agrees that high-quality data is critical to their work. Many people go to great lengths to check data, seeking confirmation from secondary sources and making corrections. These efforts constitute what I call “hidden data factories” and reflect a reactive approach to data quality. Accommodating bad data this way wastes time, is expensive, and doesn’t work well. Even worse, the underlying problems that created the bad data never go away.

One consequence is that knowledge workers waste up to 50% of their time dealing with mundane data quality issues. For data scientists, this number may go as high as 80%.

A second consequence is mistakes, errors in operations, bad decisions, bad analytics, and bad algorithms. Indeed, “big garbage in, big garbage out” is the new “garbage in, garbage out.”

Finally, bad data erodes trust. In fact, only 16% of managers fully trust the data they use to make important decisions.

Frankly, given the quality levels noted above, it is a wonder that anyone trusts any data.

When Totaled, the Business Costs Are Enormous

Obviously, the errors, wasted time, and lack of trust that are bred by bad data come at high costs.

Companies throw away 20% of their revenue dealing with data quality issues. This figure synthesizes estimates provided by Experian (worldwide, bad data cost companies 23% of revenue), Price of Experience Matters ($20,000/employee cost to bad data), and Spratt of Clear Strategic IT Partners (16% to 32% wasted effort dealing with data). The total cost to the U.S. economy: an estimated $3.1 trillion per year, according to IBM.

The costs to businesses of angry customers and bad decisions resulting from bad data are immeasurable — but enormous.

Finally, it is much more difficult to become data-driven when a company can’t depend on its data. In the data space, everything begins and ends with quality. You can’t expect to make much of a business selling or licensing bad data. You should not trust analytics if you don’t trust the data. And you can’t expect people to use data they don’t trust when making decisions.

Two-Thirds of These Costs Can Be Eliminated by Getting in Front on Data Quality

“Getting in front on data quality” stands in contrast to the reactive approach most companies take today. It involves attacking data quality proactively by searching out and eliminating the root causes of errors. To be clear, this is about management, not technology — data quality is a business problem, not an IT problem.

Companies that have invested in fixing the sources of poor data — including AT&T, Royal Dutch Shell, Chevron, and Morningstar — have found great success. They lead us to conclude that the root causes of 80% or more of errors can be eliminated; that up to two-thirds of the measurable costs can be permanently eliminated; and that trust improves as the data does.

Which Companies Should Be Addressing Data Quality?

While attacking data quality is important for all, it carries a special urgency for four kinds of companies and government agencies:

Those that must keep an eye on costs. Examples include retailers, especially those competing with Amazon.com Inc.; oil and gas companies, which have seen prices cut in half in the past four years; government agencies, tasked with doing more with less; and companies in health care, which simply must do a better job containing costs. Paring costs by purging the waste and hidden data factories created by bad data makes far more sense than indiscriminate layoffs — and strengthens a company in the process.

Those seeking to put their data to work. Companies include those that sell or license data, those seeking to monetize data, those deploying analytics more broadly, those experimenting with artificial intelligence, and those that want to digitize operations. Organizations can, of course, pursue such objectives using data loaded with errors, and many companies do. But the chances of success increase as the data improves.

Those unsure where primary responsibility for data should reside. Most businesspeople readily admit that data quality is a problem, but claim it is the province of IT. IT people also readily admit that data quality is an issue, but they claim it is the province of the business — and a sort of uneasy stasis results. It is time to put an end to this folly. Senior management must assign primary responsibility for data to the business.

Those who are simply sick and tired of making decisions using data they don’t trust. Better data means better decisions with less stress. Better data also frees up time to focus on the really important and complex decisions.

Next Steps for Senior Executives

In my experience, many executives find reasons to discount or even dismiss the bad news about bad data. Common refrains include, “The numbers seem too big, they can’t be right,” and “I’ve been in this business 20 years, and trust me, our data is as good as it can be,” and “It’s my job to make the best possible call even in the face of bad data.”

But I encourage each executive to think deeply about the implications of these statistics for his or her own company, department, or agency, and then develop a business case for tackling the problem. Senior executives must explore the implications of data quality given their own unique markets, capabilities, and challenges.

The first step is to connect the organization or department’s most important business objectives to data. Which decisions and activities and goals depend on what kinds of data?

The second step is to establish a data quality baseline. I find that many executives make this step overly complex. A simple process is to select one of the activities identified in the first step — such as setting up a customer account or delivering a product — and then do a quick quality review of the last 100 times the organization did that activity. I call this the Friday Afternoon Measurement because it can be done with a small team in an hour or two.

The third step is to estimate the consequences and their costs for bad data. Again, keep the focus narrow — managers who need to keep an eye on costs should concentrate on hidden data factories; those focusing on AI can concentrate on wasted time and the increased risk of failure; and so forth.

Finally, for the fourth step, estimate the benefits — cost savings, lower risk, better decisions — that your organization will reap if you can eliminate 80% of the most common errors. These form your targets going forward.

Chances are that after your organization sees the improvements generated by only the first few projects, it will find far more opportunity in data quality than it had thought possible. And if you move quickly, while bad data is still the norm, you may also find an unexpected opportunity to put some distance between yourself and your competitors.

______________________________________________________________________

Service Objects spoke with the author, Tom Redman, and he gave us an update on the Sloan Management article reprinted above, particularly as it relates to the subject of the costs associated with bad customer data.

Please focus first on the measurable costs of bad customer data.  Included are items such as the cost of the work Sales does to fix up bad prospect data it receives from Marketing, the costs of making good for a customer when Operations sends him or her the wrong stuff, and the cost of work needed to get the various systems which house customer data to “talk.”  These costs are enormous.  For all data, it amounts to roughly twenty percent of revenue.

But how about these costs:

  • The revenue lost when a prospect doesn’t get your flyer because you mailed it to the wrong address.
  • The revenue lost when a customer quits buying from you because fixing a billing problem was such a chore.
  • The additional revenue lost when he/she tells a friend about his or her experiences.

This list could go on and on.

Most items involve lost revenue and, unfortunately, we don’t know how to estimate “sales you would have made.”  But they do call to mind similar unmeasurable costs associated with poor manufacturing in the 1970s and 80s.  While expert opinion varied, a good first estimate was that the unmeasured costs roughly equaled the measured costs.

If the added costs in the Seizing Opportunity article above don’t scare you into action, add in a similar estimate for lost revenue.

The only recourse is to professionally manage the quality of prospect and customer data.  It is not hyperbole to note that such data are among a company’s most important assets and demand no less.

©2018, Data Quality Solutions

 

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Salesforce Data Quality Tools Integration Series – Part 4 – Lightning App

We are back now with the fourth blog in our Salesforce Data Quality Tools Integration Series. In previous blogs, we covered topics such as creating a plug-in that can be dropped on a flow, a trigger, and Apex classes. From these blogs, you can learn how Service Objects APIs will help improve your data quality and, in turn, the performance of your Salesforce instance. Like the VisualForce app demonstration, this demo shows how you’ll be able to extend our services for your own purposes in Salesforce’s Lightning framework. By the end of this blog, you’ll have all the code you’ll need to get started, so don’t worry about implementing this step by step.

This Lightning app is going to serve as a quick email contact input and validation tool. We will use separate custom objects and custom fields to make this app stand alone from your other objects in Salesforce. With that said, please note that everything I demonstrate is totally customizable. For the purposes here, that means you can adjust which objects and fields (standard or custom) you want to use. You will also be able to customize the call to our Email Validation API and insert your own business-specific logic. There are a lot of code files in this project, but do not be discouraged. It just means that the code is broken down into bite-size parts and abstracted to keep the logic and the UI separate.

First things first, we are going to start with some basic setup. Unlike VisualForce, before we can work with the Lightning framework, we will need to turn on My Domain in Salesforce. You need your own subdomain for your Salesforce org, and that is what My Domain does: it allows you to create a custom subdomain. You can find the settings under Company Settings when using the Lightning Experience. This link will take you through the details on setting it up. After you have activated it, you may need to wait several minutes before it is ready; Salesforce will email you when it is done.

The next part of the setup is the Service Objects endpoint. I am not going to go over it this time because it is covered in the first and second parts of this series, so if you need help setting it up or want a description of what it is, I would refer you to those blogs. If you have been following along from the first two blogs, you will have completed this part already.

In the VisualForce demo, we jumped right into creating the custom fields; this time, however, we first need to create the custom object that will house our custom fields. In the Object Manager, click on Create and select Custom Object. From there, you will be prompted to fill in several details of our new custom object. Here is what you will need to add:

  • Label: Email Contact
  • Plural Label: Email Contacts
  • Starts with vowel sound: Check
  • Record Name: Name
  • Launch New Custom Tab Wizard after saving this custom object: Check

That last check for launching the New Custom Tab Wizard will create a custom tab for you to be able to add/edit/delete during testing before we create the final home for our app.

Next, you will need to add the following custom fields to the newly created Email Contact object.  If you are customizing this for your own purposes, you will want to add these fields to the object you are working with.  If you want to map more of the fields that we return from our service, you’ll have to create the appropriate fields on the object if there isn’t an existing field at your disposal.  If you are using existing objects for these fields, you will want to take into consideration the names of the fields, to prevent conflicts or confusion in your system.

Field name | Internal Salesforce name | Type | Service Objects field name
Email | Email__c | Email (unique and required) | EmailAddress
Status | Status__c | Picklist: Review (default), Accepted, Rejected | None
Score | Score__c | Number (1,0) | Score
Notes | Notes__c | Text (255, default "None") | NotesDescription
Warnings | Warnings__c | Text (255, default "None") | WarningDescriptions
Errors | Errors__c | Text (255, default "None") | Type, Error.Description
Is Deliverable | Is_Deliverable__c | Picklist: Unknown (default), True, False | IsDeliverable
Is Catch All Domain | Is_Catch_All_Domain__c | Picklist: Unknown (default), True, False | IsCatchAllDomain
Is SMTP Server Good | Is_SMTP_Server_Good__c | Picklist: Unknown (default), True, False | IsSMTPServerGood
Is SMTP Mailbox Good | Is_SMTP_Mailbox_Good__c | Picklist: Unknown (default), True, False | IsSMTPMailBoxGood
Ok, so I didn’t promise some of this wasn’t going to be tedious, but at least it wasn’t hard.  This time around, we are adding more custom fields with default values and picklists.  Do not skip the default values; they are key in some of the ways the code works.

Now that we have that out of the way, let’s take a look at what we are going to build (I fudged some of the values so we could see all states of a validated email on the interface).

[Screenshot: the completed Email Contact app interface]

Below is a better view of the functionality.

[Screenshot: a closer view of the app’s functionality]

Simply put, we will be building out a UI that will take a name and an email as input, validate the email against Service Objects’ Email Validation API, and display the results to the screen. The validation portion will automatically evaluate the score of the email from the service and assign the appropriate status flag to the record. In our scenario, I decided to let all the emails with scores of 0 and 1 automatically be accepted, and all those with scores of 3 and 4 automatically be rejected. The emails that scored 2, I left in the review state so that the end user can make the final analysis and assign the appropriate status flag.
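To make that decision rule concrete, here is a minimal JavaScript sketch of the mapping (the function name is ours; the score-to-status rule is the one just described):

```javascript
// Map an Email Validation score (0-4) to the Status picklist values.
// Scores 0-1 are automatically accepted, 3-4 automatically rejected,
// and 2 is left in the Review state for a human decision.
function statusForScore(score) {
    if (score <= 1) { return "Accepted"; }
    if (score >= 3) { return "Rejected"; }
    return "Review";
}
```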

In order to figure out all the different components we are initially going to need, we will re-examine the layout from above.

[Screenshot: the layout with the five component parts highlighted]

Taking a closer look, there are five parts identified from the UI that need to be created.  There are a couple more parts beyond what I have highlighted in the screenshot, but I will get to those as we go.  At this point we have identified the following components are needed:

  • Email Contact Container: a place to put all the parts. Filename: AddEmailContactContainer.cmp
  • Email Contact Header: self explanatory. Filename: AddEmailContactHeader.cmp
  • Email Contact Form: somewhere to add information. Filename: AddEmailContactForm.cmp
  • Email Contact List: a place to add the results. Filename: AddEmailContactList.cmp
  • Email Contact Item: a single result. Filename: AddEmailContactItem.cmp

From there, the UI could be broken down into even smaller parts, but for this demo, this is as far as we will need to go.  Note, in retrospect, I would have used different filenames that don’t start with “Add” but what is done is done.  Next, I am going to go through each of these and briefly talk about the code.

I am actually going to start with a file I didn’t mention above: AddEmailContact.app, the file the app runs through. It is simply three lines of markup that declare which resources the page will use (slds for styling) and a single element, our app container: the AddEmailContactContainer component.

[Screenshot: the three-line AddEmailContact.app markup]

Well, that makes an easy transition to the next page, AddEmailContactContainer.cmp. The first thing you will notice is that we assigned the EmailContactUtil as the controller for this component. EmailContactUtil.apxc is the Apex class file that will act as the server side controller; in contrast, a file called AddEmailContactContainerController.js will serve as the client side controller for the component. Another important thing to note is the list after “implements.” I am not going to go into it here, but you will need this if you expect to drag and drop this component in the Lightning App Builder in the future. Be sure to set access to global as well.

[Screenshot: the AddEmailContactContainer.cmp component declaration]

This component takes in a list of EmailContact custom objects in the first attribute element and the list will be used to display the current EmailContact records.  After the attribute element, we have three handler elements named init, updateEmailContactItem and createNewEmailContactItem. Those handlers are used for initializing/updating the page (init) and registering two events that will occur later on in the other components.  This component will listen for creating a new EmailContactItem and updating an EmailContactItem.

[Screenshot: the attribute and handler elements in AddEmailContactContainer.cmp]

The init handler is important to mention now because it is how the app gets the list of EmailContacts to display.  The init handler calls the doInit method in the AddEmailContactContainerController.js page.  Asynchronously, the method creates an action to call the server side controller to pull back the needed records.  It makes sense that we are calling the server side controller since we need to be retrieving stored records.  So this method makes a call to the getEmailContacts method on the server side controller and creates a callback that will update our emailContactItems attribute on the main controller so that we can display the records found.

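The original screenshot of this controller is not reproduced here, but a minimal sketch of the doInit logic might look like the following (the method and attribute names follow the prose above; the exact original code may differ):

```javascript
({
    // Called by the init handler: asks the server side controller for the
    // stored Email Contact records, then puts them on the component so the
    // list can display them.
    doInit : function(component, event, helper) {
        var action = component.get("c.getEmailContacts");
        action.setCallback(this, function(response) {
            if (response.getState() === "SUCCESS") {
                component.set("v.emailContactItems", response.getReturnValue());
            }
        });
        $A.enqueueAction(action);
    }
})
```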

This process will be called on the page being loaded initially and whenever there is some sort of post back, which is useful since we want to display the latest list when new items are added.  And that’s it for the init functionality.

I mentioned the two controller files earlier, but there is one more file associated to the AddEmailContactContainer component, a helper file called AddEmailContactContainerHelper.js.  I will run through these associated files later.  The remainder of this markup page simply references the other components that this page is composed of: AddEmailContactHeader, AddEmailContactForm, and AddEmailContactList.  The AddEmailContactList component takes the input list from this page as a parameter as we will see coming up.

Next, I will quickly run through the header page. There really is not much to it: besides some out-of-the-box styling and an icon from Salesforce’s predefined set, it is not much more than a title, which makes sense for a simple header component.

[Screenshot: the AddEmailContactHeader.cmp markup]

The AddEmailContactForm is a simple form component. The parts of this page are the attribute that will hold the input data, the event registration, and the form input section itself. The attribute will be the variable that holds the input data and will be of the EmailContact type, even though we are only populating the Name and Email portions of the object.

[Screenshot: the attribute declaration in AddEmailContactForm.cmp]

Next, we register the event that this component will be firing off whenever the button is clicked. This is the same event that the AddEmailContactContainer component was set to listen for and to handle.

[Screenshot: the event registration in AddEmailContactForm.cmp]

Events need an event file, something that I would describe more as a signature for the event.  It’s a very basic file, so I am going to pop in and out of it quickly.  It is defined within the event element and has an attribute defined with a type, which is our EmailContact object, and a name.

[Screenshot: the event file markup]

Now back to the form file. The form section will have input areas for the Name and Email of the contact, followed by a button to submit the entry to the system. In this case, both inputs are required and will display errors when they are missing or if the email is not in the proper email format. This component also has a controller and a helper file, AddEmailContactFormController.js and AddEmailContactFormHelper.js respectively. Both files are there to do basic form validation, help set up and fire the button click event, and clear the form.

[Screenshot: the form input and button markup]

AddEmailContactFormController first checks whether the inputs to the form are basically valid. You’ll notice that the inputs in the form share the same aura:id, which allows us, at this point, to pull all of the inputs back at once in an array. If the inputs are valid, we pull the newEmailContactItem variable that was populated with the inputs and send it off to the helper.

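In place of the original screenshot, here is a rough sketch of that controller (the aura:id value and the method names are our stand-ins):

```javascript
({
    // Runs on the button click: validates every input sharing the same
    // aura:id, then hands the populated EmailContact off to the helper.
    handleAddContact : function(component, event, helper) {
        var fields = component.find("contactField");
        fields = Array.isArray(fields) ? fields : [fields];
        var allValid = fields.reduce(function(valid, field) {
            field.showHelpMessageIfInvalid();
            return valid && field.get("v.validity").valid;
        }, true);
        if (allValid) {
            var newEmailContactItem = component.get("v.newEmailContactItem");
            helper.fireNewContactEvent(component, newEmailContactItem);
        }
    }
})
```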

The helper file instantiates the event and sets the parameter for it, which, in this case, is the newEmailContactItem.  Then the event is fired, and the input attribute that fills the form is reset to make the form ready to take in more contacts from the user.

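Again in place of the screenshot, a sketch of that helper method (the event and parameter names are assumptions based on the description above):

```javascript
({
    // Fires the component event carrying the new contact, then clears the
    // form attribute so the user can enter another contact.
    fireNewContactEvent : function(component, newEmailContactItem) {
        var createEvent = component.getEvent("createNewEmailContactItem");
        createEvent.setParams({ "emailContactItem": newEmailContactItem });
        createEvent.fire();
        component.set("v.newEmailContactItem", { "Name": "", "Email__c": "" });
    }
})
```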

In order to continue with the flow of what is happening with the event, I am going to jump back to the main component and look at its controller and helper files. At this point, the user has entered data in the form, clicked the button, and the code has set up and fired the event. Now the AddEmailContactContainer takes over, because it has been listening for this event and is ready to handle it with the createNewEmailContactItem handler mentioned earlier. The action on the handler is to pass control to the handleAddEmailContactUpdate method in the client side controller, AddEmailContactContainerController, which pulls the EmailContact variable from the fired event and sends it off to the addEmailContact method in the helper file.

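A sketch of that handler, under the same naming assumptions:

```javascript
({
    // Catches the create event fired by the form and passes the new
    // record along to the helper for saving.
    handleAddEmailContactUpdate : function(component, event, helper) {
        var newItem = event.getParam("emailContactItem");
        helper.addEmailContact(component, newItem);
    }
})
```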

Now on to the AddEmailContactContainerHelper.js file. This app is going to both create email contacts and update them. Since updating and creating are very similar in our scenario, we will have both the update and create functions call one save function; much of the code is the same, so it makes sense to reuse it here. Since we are following the path of the create event, we will look at it from that perspective. The two methods we will run through are the addEmailContact method and the saveEmailContact method. The addEmailContact method immediately calls the saveEmailContact method, but it provides parameters indicating whether we are doing an update or a create, and it also supplies a callback function. The callback gets the list of new records and sends them to the AddEmailContactContainer after the new EmailContact record is created. The new Email Contact record itself is created in the server side controller, which is called from the saveEmailContact method.

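Here is a sketch of those two helper methods (parameter names are our guesses; the server side saveEmailContact call refers to the Apex controller described above):

```javascript
({
    // Create path: save the record, then use the callback to refresh the
    // displayed list once the server reports success.
    addEmailContact : function(component, newEmailContactItem) {
        this.saveEmailContact(component, newEmailContactItem, false,
            function(response) {
                if (response.getState() === "SUCCESS") {
                    component.set("v.emailContactItems", response.getReturnValue());
                }
            });
    },

    // Shared save routine: calls the server side controller, which runs
    // the email through the validation API and inserts or updates the record.
    saveEmailContact : function(component, emailContactItem, isUpdate, callback) {
        var action = component.get("c.saveEmailContact");
        action.setParams({ "emailContact": emailContactItem, "isUpdate": isUpdate });
        if (callback) {
            action.setCallback(this, callback);
        }
        $A.enqueueAction(action);
    }
})
```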

And that completes the code flow of creating a new Email Contact record.  Next, I am going to jump into the last portion of the code, and that starts with the AddEmailContactList component, where we will look at the displaying of the Email Contact items and the event to update the Status field when there is a button click.

The AddEmailContactList component is another simple and straightforward page. It handles displaying the list of Email Contacts sent to it through the emailContactItems attribute, iterating over that attribute and dynamically displaying an AddEmailContactItem for each record.

[Screenshot: the iteration markup in AddEmailContactList.cmp]

Moving to the AddEmailContactItem, we see that it is a little longer than the rest of the code, but taken in its parts, it is still pretty straightforward. So what are the parts? We see there is a graphic and a title, a couple of fields, some buttons, a status and, finally, some fine details. We could have broken this out into smaller components, but for this demo, this is enough.

[Screenshot: the AddEmailContactItem component]

In the code, two things happen before the header markup is added. First, just like the event we registered for the button that adds a new contact, we need to register an event for the buttons specific to each EmailContactItem on the page. The purpose of these buttons is to allow the user to accept or reject an email based on the information from the Service Objects Email Validation API operation. The buttons should not be visible unless the score returned from the service is 2. Scores of 0 are no-brainers: those are valid emails, and we can mark them as accepted without hesitation. A score of 1 has a very high certainty of being valid, so we also automatically set its status to accepted. Emails with a score of 4 are bad, so we automate these as well, this time setting the status to rejected; and 3’s are nearly certain to be bad, so those are also automatically marked as rejected. The grey area is emails that validate with a score of 2. This score represents unknown for various reasons, and we suggest some kind of manual review or inspection. For more information about the score field and deeper details about the service, you can check out our developer guides here. After we register the event, we set up the attribute element, creating a local variable that takes a single EmailContactItem. This should be familiar by now from some of the other pages we discussed.

[Screenshot: the event registration and attribute element in AddEmailContactItem.cmp]

Now we get to the graphic/icon and title code. Nothing too interesting here really, so I am going to skip over it. But I do want to mention that besides displaying our item icon and title, this code actually houses the item parts: all of the rest of the item’s components will go in here. If you want to change the icon, you can see a list of what is available at this link.

[Screenshot: the icon and title markup]

The next bit of code does several things.  First, it displays the contact name and the associated email.  Then, based on the score determined by Service Objects’ Email Validation operation, we will, in the markup, decide if we should display the buttons used for accepting or rejecting the email address.

[Screenshot: the markup that conditionally displays the accept and reject buttons]

Next, similar markup and logic are used to determine the background color on the status and score sections.

[Screenshot: the conditional background styling for the status and score sections]

There are several techniques to do this dynamic display instead of using the if/else markup elements but I just learned about them, and I wanted to give them a shot.  I would have probably used a different solution involving Javascript, and this code would have been much shorter.

The last part of this component is the Lightning accordion section. I added a two-part accordion so I had a place to display some details that came back from the Email Validation operation without consuming too much space on the screen. I have it default to opening the second section of the accordion first. The first part contains the warnings, notes and/or errors.

[Screenshot: the first accordion section markup]

The second part contains various other details, such as whether the email is deliverable, whether it is on a catch-all domain, and whether the SMTP mailbox and SMTP server are good. If you want to adjust the code to display more data returned from the Email Validation operation, I would recommend adding those fields to these sections or adding more accordion sections with more fields to display. This is the code for the second section.

[Screenshot: the second accordion section markup]

That does it for the markup pages. Now let’s follow the code through for when either one of the buttons is clicked. The rest of the functionality rests on the controller and helper of the AddEmailContactItem page, along with some of the pages we already went through for the first event we discussed.

The accept and reject buttons each have their own method they call when clicked. I could have used just one method here and checked the name or id of the calling event, but I thought this would keep things clear. These are really just pass-through methods that indicate whether the accept or reject button was clicked by passing true or false to the updateEmailContactItem method in the helper file.

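A sketch of those two pass-through methods (names are our stand-ins for the original code):

```javascript
({
    // Each button gets its own method; true means accept, false means reject.
    handleAccept : function(component, event, helper) {
        helper.updateEmailContactItem(component, true);
    },
    handleReject : function(component, event, helper) {
        helper.updateEmailContactItem(component, false);
    }
})
```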

In the AddEmailContactItemHelper.js file, the method updateEmailContactItem starts by setting the status field for the contact item based on the input parameter.


Once that is set, we create the event, fire it and hide the buttons from the screen.

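Putting those two steps together, here is a sketch of the helper method (the showButtons attribute is a hypothetical flag the markup would check):

```javascript
({
    // Sets the Status picklist from the button that was clicked, fires the
    // shared event so the container can save the change, and hides the buttons.
    updateEmailContactItem : function(component, accepted) {
        var item = component.get("v.emailContactItem");
        item.Status__c = accepted ? "Accepted" : "Rejected";
        component.set("v.emailContactItem", item);

        var updateEvent = component.getEvent("updateEmailContactItem");
        updateEvent.setParams({ "emailContactItem": item });
        updateEvent.fire();

        component.set("v.showButtons", false); // hypothetical flag checked by the markup
    }
})
```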

You may have noticed that we are not creating a new event file. Since the signatures for the create and update events are the same, we are able to reuse the one we already have. Now, like before, the AddEmailContactContainer is listening for the event, but this time it is listening for the update event. When the event is fired, the main controller will catch it and call the handleUpdateEmailContactUpdate method in the AddEmailContactContainerController. The handleUpdateEmailContactUpdate method pulls out the EmailContactItem that was passed in the event and sends it along to the updateEmailContact method in the helper.


Earlier, when we were running through the create functionality, we wrote one save method because we knew it could serve both the create and update purposes. Now we get to utilize what we set up earlier and piggyback off the save code. In the AddEmailContactContainerHelper, the method for updating, updateEmailContact, calls our saveEmailContact from before, but this time we are not sending along a callback method. The saveEmailContact method then sets up a couple of parameters and calls saveEmailContact on the server side controller, and that controller saves or updates the record.
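The update path then reduces to a one-liner; this sketch would live in the same helper object as the saveEmailContact sketch shown earlier:

```javascript
({
    // Update path: reuse the shared save routine, flagged as an update and
    // without a callback.
    updateEmailContact : function(component, emailContactItem) {
        this.saveEmailContact(component, emailContactItem, true, null);
    }
})
```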

Now that we have gone over all the code, you can see that none of this is terribly difficult to understand, but there are a lot of files for a simple app. We could have reduced the file count by putting all the helper code back up into the controller code, or by creating fewer components and grouping everything together. And though you can do all that, there are reasons I didn’t. One reason to break the application out this way is so that we can focus on smaller parts and make each page more basic. Another is that as you extend this solution to meet your own requirements, you may be adding a lot more code, and keeping it compartmentalized like this will help with code readability and maintenance. When you look at the files for this project, you will see several CSS files that we didn’t go over here; you can review those as you see fit. I also didn’t add much error checking, which you will certainly want to do, along with writing your own unit tests. The last thing you need to make sure of is to get a license key (live or trial) from us and add it to the EmailContactUtil.apxc file; otherwise you will not be able to validate email addresses against our API. If you get a trial key, you need only update the key in the code. However, if you get a live key, then you will want to update the endpoints in addition to the key: the first endpoint should be ws.serviceobjects.com and the second wsbackup.serviceobjects.com. In the example below, I changed the key to contain X’s instead of real values.


So, where do you go from here?  One thing you will likely want to do is customize the solution.  Maybe you want to incorporate some of the standard objects in Salesforce, or perhaps you want to use or create your own custom objects.  Maybe there is additional business logic you want to apply to the results of our Email Validation service.  After that, you will likely be adding this component to a Lightning app, or maybe you'll use the Email Validation code alone for something completely different.  As a side note, you can apply this demonstration to many of our other services, such as Address Validation, Address Detective, Lead Validation, Name Validation and more.  Looking at this solution, you should notice that though the part of the code that called our Email Validation API was very short, it had a huge impact on the utility of the app.  If you want specific code to help with this solution, or with applying it to our other services, simply reach out and we'll do our best to accommodate.

Address Suggestion with Validation: A Match Made in Heaven

In an ideal world, data entry would always be perfect. Sadly, it isn’t – human errors happen to end users and call center employees alike. And while we make a good living cleaning existing bad data here at Service Objects, we would still much rather see your downstream systems be as clean as possible.

To help with that, many organizations are getting an assist from Google, in the form of Autocomplete in their Places API.  If you set up your form properly and use their API, address suggestions appear in a dropdown to help your end users enter correct data into your system. That's great, isn't it?

It does sound great on the surface, but when you dig a little deeper there are two problems:

  • First, the Google Places API often does not suggest locations to the apartment or suite level of detail, even though a considerable segment of the population lives in apartments or does business on separate floors, suites or buildings.
  • Second, the locations the Google Places API suggests are often not mail deliverable. For instance, a business may have a physical location at one address and receive mail at a completely different address.  Or sometimes Google will simply approximate where an address should be.

For example, check this address out on Google Maps: 08 Kings Rd, Brighton BN1 1NS, UK.  It looks like a legitimate address, but as the street view shows, it does not seem to correspond to anything.

These issues can leave gaping holes in your data validation process.  So, what can you do? Contact us at Service Objects, because we have the perfect solution: our Address Suggestion form tool. When combined with the Google Places API you will have a powerful tool that will both save time and keep the data in your system clean and valid.

This form tool is a composite of the Google Places API and our Address Validation International service.  The process consists of the data entry portion, the Google Places API lookup, the Address Validation International service call, and finally the display of selectable results to the end user.

Let’s start by discussing the Google API Key, and then the form, and finally the methods required to make that Google Places API call.

The Google Places API requires a key to access it.  You can learn more about this API here.  Depending on your purposes, you may be able to get away with using only Google's free API key, but if you are going to be dealing with large volumes of data, you will need the premium API key.  That doesn't mean you can't get started with the free version: we in fact use it to put our demos together, and it works quite well.

When setting up your key with Google, remember to also turn on the Google Maps Javascript API, or else calls to the Places API will fail.  Also, pay particular attention to the part about restricting access with the API key.  When you have a premium key this will be very important because it will allow you to set the level at which you want the key to be locked down, so that others can’t simply look at your Javascript code and use your key elsewhere.

The form we need to create will look like a standard address data entry form, but with some important details to note.  First, let's look at the country select box: we recommend that this be the first selection the user makes. Choosing a country first benefits both you and the user, because it limits suggested places to that country, and also reduces the number of transactions against your Google Places API key.  Here is a link to how Google calculates its transaction totals.
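As a rough sketch, the initialization might look like the following (the element ids are assumptions; our demo reinitializes the Autocomplete when the country changes, and the lighter setComponentRestrictions call shown here is an equivalent alternative):

    // Initialize Autocomplete on the street address input, restricted to the
    // currently selected country (element ids are illustrative)
    var countrySelect = document.getElementById('country');
    var autocomplete = new google.maps.places.Autocomplete(
        document.getElementById('address1'),
        {
            types: ['address'],
            componentRestrictions: { country: countrySelect.value }
        }
    );

    // Re-restrict the suggestions whenever the user changes the country
    countrySelect.addEventListener('change', function () {
        autocomplete.setComponentRestrictions({ country: this.value });
    });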

Another important note is that we need an Apt/Suite data entry field.  As mentioned earlier, the Google Places API often does not return this level of resolution on an address, so we add this field so that the end user can provide that information.

How you display the rest of the fields is really up to you.  In our case, we display the parsed-out components of the selected address back into the rest of the address fields.  We keep all the address input fields editable so that the end user can make any final adjustments they want.
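Here is a sketch of that copy-back step, using the standard place_changed listener (the form field ids are assumptions):

    // When the user picks a suggestion, copy the parsed address components
    // back into the editable form fields
    autocomplete.addListener('place_changed', function () {
        var place = autocomplete.getPlace();
        var parts = {};
        (place.address_components || []).forEach(function (component) {
            parts[component.types[0]] = component.long_name;
        });
        document.getElementById('address1').value =
            ((parts.street_number || '') + ' ' + (parts.route || '')).trim();
        document.getElementById('city').value = parts.locality || parts.postal_town || '';
        document.getElementById('state').value = parts.administrative_area_level_1 || '';
        document.getElementById('postal').value = parts.postal_code || '';
    });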

The methods associated with this process boil down to a set of initializations that happen in two places: first, when a country is selected, and second, when the user clicks into the Address field.  For our purposes we default the country selection to the United States; when the country is changed, the Autocomplete is reinitialized for the selected country. And when a user clicks into the Address field, the initialization creates a so-called bias, i.e. Autocomplete returns results based on the location of your browser.  For this to work, the browser will ask the end user for permission to share their location with Google.  If the user does not permit this, the location bias is simply not applied.

This bias has a couple of interesting features.  For instance, you can change the code to supply a custom latitude and longitude instead of using the user's browser location.  In our example, the address suggestion does not use the user's current position when the selected country differs from the country the user is actually in.  But when the two match, the results returned by the Google Places API are prioritized by proximity to the user's location.  This means that if you are in Santa Barbara, CA and select the United States as the country, when you start typing a United States address you will first see matching addresses in Santa Barbara, and then results working outward from there.

You can customize the form bias to any particular location that you have a latitude and longitude for.  The ability to change this bias is very useful in that setting the proper bias will reduce the number of lookups against the Google Places API before finding an address match, and will also save manual typing time.
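One way to set such a bias, shown as a sketch below, is Google's documented pattern of wrapping a position in a Circle and handing its bounds to the Autocomplete; the fixed coordinates are purely illustrative:

    // Bias suggestions toward a location: by default the browser's own
    // position (requires user permission), or any fixed lat/lng you choose
    function applyBias(autocomplete, lat, lng, radiusMeters) {
        var circle = new google.maps.Circle({
            center: { lat: lat, lng: lng },
            radius: radiusMeters
        });
        autocomplete.setBounds(circle.getBounds());
    }

    document.getElementById('address1').addEventListener('focus', function () {
        if (navigator.geolocation) {
            navigator.geolocation.getCurrentPosition(function (position) {
                applyBias(autocomplete, position.coords.latitude,
                          position.coords.longitude, position.coords.accuracy);
            });
        }
        // Or bias toward a fixed location, e.g. Santa Barbara, CA:
        // applyBias(autocomplete, 34.4208, -119.6982, 5000);
    });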

Now let’s discuss the Address Validation International service API call, which consists of a key, the call to the service and setting up best practices for failover.

Let's start with the key.  You will need either a live or a free trial license key from us; the latter can be obtained here.  For this example, a trial key will work fine for exploring and tinkering with this integration.  One of the great things about using our service is that when you want to turn this into a live, production-ready solution, all you have to do is switch the trial key to the production key and switch the endpoint to the production URL, both of which can be done in minutes.

The call to the Address Validation International service will be made to either the trial or production endpoint, depending on the key you are using.  The details of the service and how to integrate with it can be found in our developer guides.  In the Javascript code, you will round up all the data in the fields populated by the address suggestion selection and send it off to the service for validation.  The code that manages the call to the Address Validation International service needs to execute on a back-end server.

We strongly discourage calling the service directly from Javascript, because doing so exposes your license key and allows someone to take it and use your transactions maliciously.  You can read more about those dangers here.  Also, here is a blog about how to make a call to another one of our services using a proxy.  The basic idea is that your Javascript will call a proxy method that contains your license key, essentially hiding it from the public.  This proxy method makes the final call to the Address Validation International service, gets the results and passes them back to the original Javascript caller.  In this situation, the best place to implement failover is in the proxy method.

So what is failover? From a developer's perspective, failover is simply a secondary data center to call in the unlikely event that one of our primary data centers goes down or does not respond in a timely manner.  Our developer guides can again help with this topic; there you will also find code snippets that demonstrate our best-practice failover.
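As a rough illustration of both ideas, a Node.js proxy with failover might look like the sketch below. The service path and parameter handling here are placeholders, so take the exact URLs and field names from the developer guides:

    // Minimal Node/Express proxy sketch with failover between data centers.
    // SERVICE_PATH is a placeholder; consult the developer guides for the
    // real path and parameter names.
    const express = require('express');
    const fetch = require('node-fetch');

    const app = express();
    app.use(express.json());

    const LICENSE_KEY = 'XXXX-XXXX-XXXX'; // stays server-side, hidden from the browser
    const SERVICE_PATH = '/AVI/api.svc/GetAddressInfo'; // placeholder path
    const ENDPOINTS = [
        'https://ws.serviceobjects.com',       // primary data center
        'https://wsbackup.serviceobjects.com'  // backup data center
    ];

    app.post('/validate-address', async (req, res) => {
        const params = new URLSearchParams({ ...req.body, LicenseKey: LICENSE_KEY });
        for (const base of ENDPOINTS) {
            try {
                const response = await fetch(base + SERVICE_PATH + '?' + params, {
                    timeout: 5000 // fail over if a data center is slow to respond
                });
                if (response.ok) {
                    return res.json(await response.json());
                }
            } catch (err) {
                // Network error or timeout: fall through to the backup endpoint
            }
        }
        res.status(502).json({ error: 'Address validation service unavailable' });
    });

    app.listen(3000);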

Once this call is set up, all that is left is evaluating the results and displaying the appropriate message back to the end user. While you can go through our developer guides to figure this out, the first important field to examine in the response from the Address Validation International service is the Status field. The table below shows the values you can expect it to return:

Address Status

  • Invalid: For addresses where postal and/or street level data is available, and the street was not found, the building number was bad, etc.
  • InvalidFormat: For addresses where postal data is unavailable and only the address format is known.
  • InvalidAmbiguous: For possibly valid addresses that may be correctable, but ultimately failed validation.
  • Valid: For addresses with postal level data that passed validation.
  • ValidInferred: For addresses where potentially far-reaching corrections/changes were made to infer the returned address.
  • ValidApproximate: For addresses where premise data is unavailable and interpolation was performed, such as Canadian addresses.
  • ValidFormat: For addresses where postal data is unavailable and only the address format is known.

Another important field is the ResolutionLevel, which can be one of three values: Premise, Street or City.  The values returned in these two fields will help you decide in your code exactly what to display back to the end user.  In our demo, we display the Status and ResolutionLevel to the end user along with the resulting address.  We then give the user a side-by-side view of that resulting address and the original address they entered, so the end user can make a decision based on everything we found. In the example case, we updated Scotland to Dunbartonshire and validated to the premise resolution level.
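As a simple illustration, decision logic along these lines could drive what you display (the field names follow the table above; verify the exact response shape against the developer guides):

    // Route on Status, then refine the message with ResolutionLevel
    function summarizeResult(result) {
        var validStatuses = ['Valid', 'ValidInferred', 'ValidApproximate', 'ValidFormat'];
        if (validStatuses.indexOf(result.Status) === -1) {
            return 'Validation failed (' + result.Status + '); ask the user to correct the address.';
        }
        if (result.ResolutionLevel === 'Premise') {
            return 'Validated to the premise level; safe to accept.';
        }
        return 'Validated to the ' + result.ResolutionLevel +
               ' level; show a side-by-side comparison so the user can confirm.';
    }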

There are many customizations that can be made to this demo, such as the example we mentioned earlier about setting up the bias.  Additionally, instead of using the Address Validation International service you could also create an implementation of this demo using our Address Validation US or our Address Validation Canada products.

Want to try this out for yourself? Just contact one of our Account Executives to get the code for this demo – we’ll be glad to help.

Service Objects Processes 3 Billion Transactions and Saves 1.6 Million Trees

In March 2018, Service Objects reached a major milestone in the company’s history by validating over 3 billion transactions. Since 2001, Service Objects has enabled businesses located around the world to validate their contacts’ name, location, phone, email address and device against hundreds of authoritative international data sources, all with the goal of reducing waste and fraud.

The fast-paced growth of the company, which has over 2,500 customers including Amazon, LendingTree, American Express, MasterCard, and Verizon, has been instrumental in reaching this record number of transactions. Processing over 3 billion transactions is of particular significance to Service Objects because corporate conservation is one of the reasons the company was founded. The positive impact that achieving this milestone has had on the environment includes:

  • 193 million total pounds of paper saved
  • 1.6 million trees saved
  • 6 million gallons of oil saved
  • 670 million gallons of water saved
  • 393 million kilowatt hours of energy saved
  • 295 thousand cubic yards of landfill space saved

According to Geoff Grow, Founder and CEO of Service Objects, the idea to start Service Objects came from his desire to apply math and big data to the problem of waste caused by incorrect contact information. Surpassing the 3 billion transaction mark translates into saving 193 million pounds of paper. Sixteen years ago, he would never have dreamed that Service Objects would be able to fulfill its commitment to corporate conservation in such a significant way.

Corporate conservation is embedded in the company’s culture, as evidenced by its commitment to recycling, using highly efficient virtualized servers, using sustainable wind power, purchasing sustainable office supplies, encouraging employees to bike to work and embracing a paperless philosophy. Every employee is conscious of how they can positively impact conservation efforts.

To read the entire Service Objects’ story, click here.