Service Objects’ Blog

Thoughts on Data Quality and Contact Validation

Posts Tagged ‘Big Data’

How secure is your ‘Data at Rest’?

In a world where millions of customer and contact records are routinely stolen, how do you keep your data safe? First, lock the door to your office. Now you're good, right? Oh wait, you are still connected to the internet. Disconnect from the internet. Now you're good, right? What if someone sneaks into the office and accesses your computer? Unplug your computer completely. You know what, while you are at it, pack your computer into some plain boxes to disguise it. Oh wait, this is getting crazy: it is not very practical, and it is still only somewhat secure.

The point is, as we try to determine what kind of security we need, we also have to find a balance between functionality and security. A lot of this depends on the type of data we are trying to protect. Is it financial, healthcare, or government related, or is it personal, like pictures from the last family camping trip? All of these have different requirements, and many of those requirements come from our clients. As a company dealing with such diverse clientele, Service Objects needs to be ready to handle data and keep it as secure as possible in all the different states that digital data can exist in.

So what are the states that digital data can exist in? There are several, and understanding them should be part of any data security strategy. For the most part, data exists in three states: Data in Motion/Transit, Data at Rest/Endpoint, and Data in Use, defined as follows:

Data in Motion/transit

“…meaning it moves through the network to the outside world via email, instant messaging, peer-to-peer (P2P), FTP, or other communication mechanisms.” – http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf

Data at Rest/Endpoint

“data at rest, meaning it resides in files systems, distributed desktops and large centralized data stores, databases, or other storage centers” – http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf

“data at the endpoint, meaning it resides at network endpoints such as laptops, USB devices, external drives, CD/DVDs, archived tapes, MP3 players, iPhones, or other highly mobile devices” – http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf

Data in Use

“Data in use is an information technology term referring to active data which is stored in a non-persistent digital state typically in computer random access memory (RAM), CPU caches, or CPU registers. Data in use is used as a complement to the terms data in transit and data at rest which together define the three states of digital data.” – https://en.wikipedia.org/wiki/Data_in_use

The focus of this discussion is how Service Objects balances functionality and security for our clients' data at rest in our automated batch processing. Our automated batch process consists of this basic flow:

  • Our client transfers a file to a file structure in our systems using our secure ftp. [This is an example of Data in Motion/Transit]
  • The file waits momentarily before an automated process picks it up. [This is an example of Data at Rest]
  • Once our system detects a new file: [The data is now in the state of Data in Use]
    • It opens and processes the file.
    • The results are written into an output file and saved to our secure ftp location.
  • Input and output files remain in the secure ftp location until the client retrieves them. [Data at Rest]
  • The client retrieves the output file. [Data in Motion/Transit]
    • The client can immediately delete all, some, or none of the files, as their needs dictate.
  • Five days after processing, if any files remain, the automated system encrypts them (minimum 256-bit encryption) and moves them off the secure ftp to another secure location. No non-encrypted version remains.  [Data at Rest and Data in Motion/Transit]
    • This delay gives clients time to retrieve the results.
  • 30 days after processing, the encrypted version is completely purged.
    • This provides a last chance, in the event of an error or emergency, to retrieve the data.
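To make the retention rules above concrete, here is a minimal sketch (in Python) of what such a sweep could look like. This is illustrative only, not our production code: the directory paths, the key handling, and the choice of AES-256-GCM are assumptions; only the five- and 30-day logic mirrors the flow described above.

```python
# Illustrative retention sweep (not production code). Paths, key handling,
# and per-client thresholds are assumptions for the sake of the example.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

SFTP_DIR = "/sftp/client_batches"          # hypothetical secure ftp drop folder
ARCHIVE_DIR = "/secure/encrypted_archive"  # hypothetical encrypted store
ENCRYPT_AFTER_DAYS = 5    # grace period before files are encrypted and moved
PURGE_AFTER_DAYS = 30     # final window before encrypted copies are purged
KEY = AESGCM.generate_key(bit_length=256)  # in practice, loaded from a key vault

def age_in_days(path):
    return (time.time() - os.path.getmtime(path)) / 86400

def encrypt_and_move(src, dst, key):
    """Encrypt src with AES-256-GCM, write it to dst, and remove the plaintext."""
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    with open(src, "rb") as f:
        ciphertext = aesgcm.encrypt(nonce, f.read(), None)
    with open(dst, "wb") as f:
        f.write(nonce + ciphertext)
    os.remove(src)  # no plaintext copy remains on the secure ftp

def retention_sweep():
    # Encrypt and relocate files that have sat past the grace period.
    for name in os.listdir(SFTP_DIR):
        src = os.path.join(SFTP_DIR, name)
        if age_in_days(src) >= ENCRYPT_AFTER_DAYS:
            encrypt_and_move(src, os.path.join(ARCHIVE_DIR, name + ".enc"), KEY)
    # Purge encrypted copies once the last-chance window has passed.
    for name in os.listdir(ARCHIVE_DIR):
        path = os.path.join(ARCHIVE_DIR, name)
        if age_in_days(path) >= PURGE_AFTER_DAYS:
            os.remove(path)
```

In practice, the two thresholds would simply be read from each client's configuration, which is how the customizable grace periods described below fit in.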

We encrypt files five days after processing, but what is the strategy for keeping the files secure before that five-day window expires? First off, we determined that the five- and 30-day rules strike the best balance between functionality and security. But we also added flexibility to this.

If clients always picked up their files as soon as they were completed, we really wouldn't need to think much about security while the files sat on the secure ftp. But this is real life: people get busy, have long weekends, go on vacation, or simply forget. Whatever the reason, Service Objects can't always immediately encrypt and move the data; if we did, clients would become frustrated trying to coordinate the retrieval of their data. So we built in the five- and 30-day rule, but we also added the ability to change these grace periods and customize them to our clients' needs. This doesn't prevent anyone from purging their data sooner than any predefined threshold, and in fact, we encourage it.

When we are setting up the automated batch process for a client, we look at the type of data coming in, and if appropriate, we suggest to the client that they may want to send the file to us encrypted. For many companies this is standard practice.  Whenever we see any data that could be deemed sensitive, we let our client know.

When it is established that files need to be encrypted at rest, we use industry-standard encryption and decryption methods. When a file comes in and processing begins, the data is now in use, so the file is decrypted. After processing, any decrypted file is purged; what remains are the encrypted versions of the input and output files.
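As a rough sketch of that decrypt-process-purge cycle, the fragment below decrypts the input, runs the batch work on the plaintext in memory, and writes only an encrypted output back to disk. Again, AES-256-GCM and the process_records() placeholder are stand-ins for whatever industry-standard method and validation logic are actually used.

```python
# Simplified decrypt-process-purge cycle (illustrative assumptions only).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_file(enc_path, key):
    """Read an encrypted file (nonce + ciphertext) and return the plaintext bytes."""
    aesgcm = AESGCM(key)
    with open(enc_path, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

def process_records(data: bytes) -> bytes:
    """Placeholder for the actual batch validation of the input records."""
    return data

def run_batch(enc_input_path, enc_output_path, key):
    plaintext = decrypt_file(enc_input_path, key)   # Data in Use
    results = process_records(plaintext)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    with open(enc_output_path, "wb") as f:          # only an encrypted output is written
        f.write(nonce + aesgcm.encrypt(nonce, results, None))
    # The decrypted data lives only in memory, so nothing unencrypted
    # remains on disk once the job finishes.
```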

Not all clients require this level of security, but Service Objects treats all data the same: with the utmost care and the highest level of security that is reasonable. We simply take no chances and always encourage strong data security.

Big Data – Applied to Day to Day Life

With so much data being constantly collected, it’s easy to get lost in how all of it is applied in our real lives. Let’s take a quick look at a few examples starting with one that most of us encounter daily.

Online Forms
One of the most common and easiest-to-understand examples we come across daily is completing online forms. When we complete an online form, our contact record data points, like name, email, phone, and address, are individually verified and corrected in real time to ensure each piece of data is genuine, accurate, and up to date. Not only does this verification process help companies mitigate fraud, it also ensures that the submitted data is correct. That confidence in data accuracy allows for streamlined online purchases and efficient deliveries to us, the customers. Having our accurate information in the company's database also helps streamline customer service should there be a discrepancy with the purchase or follow-up questions about the product. The company can easily pull up our information with any of the data points initially provided (name, email, phone, address, and more) to start resolving the issue faster than ever (at least where companies are dedicated to good customer service).
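For developers, the mechanics look roughly like the sketch below: each field is sent to a validation service as the form is submitted, and a verified or corrected value comes back in real time. The endpoint URL, parameters, and response fields here are hypothetical placeholders, not the actual Service Objects API.

```python
# Hypothetical real-time form validation flow (placeholder endpoint and fields).
import requests

VALIDATION_ENDPOINT = "https://api.example.com/validate"  # placeholder URL

def validate_field(field_type, value, api_key):
    """Ask a validation service whether a single form field is genuine and correct."""
    resp = requests.get(
        VALIDATION_ENDPOINT,
        params={"type": field_type, "value": value, "key": api_key},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"valid": true, "corrected": "..."}

def validate_form(form, api_key):
    # Validate each contact data point individually, as described above.
    return {
        field: validate_field(field, form[field], api_key)
        for field in ("name", "email", "phone", "address")
    }
```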

For the most part we are all familiar with business scenarios like the one described above. Let's shift to India and New Orleans for a couple of new examples of how cities are applying data to improve the day-to-day lives of citizens.

Addressing the Unaddressed in India
According to the U.S. Census Bureau, India is the second most populated country in the world, with 1,281,935,911 people. With such a large population, there is a shortage of affordable housing in many developed cities, leading to about 37 million households residing in unofficial housing areas referred to as slums. Being “unofficial” housing areas means they are neither mapped nor addressed, leaving residents with very little in the way of identification. However, the Community Foundation of Ireland (a Dublin-based non-profit organization) and the Hope Foundation recently began working together to provide each home in Kolkata’s Chetla slum with its very first form of address, consisting of a nine-digit unique ID. Besides overcoming obvious challenges, like giving someone directions to your home or finally being able to receive mail, the implementation of addresses has given residents the ability to open bank accounts and access social benefits. Having addresses has also helped officials identify needs within the slum, including healthcare and education.

Smoke Detectors in New Orleans
A recent article from The Wall Street Journal, The Rise of the Smart City, highlights how cities closer to home have started using data to bring about city-wide enhancements. New Orleans, in particular, is ensuring that high-risk properties are provided with smoke detectors. Although the fire department has been distributing smoke detectors for years, residents were required to request them. To change this, the city’s Office of Performance and Accountability used Census Bureau surveys and other data, along with advanced machine-learning techniques, to create a map that helps the fire department better target areas more susceptible to deaths caused by fire. With the application of big data, more homes are being supplied with smoke detectors, increasing safety for entire neighborhoods and the city as a whole.

FIRE RISK | By combining census data with additional data points, New Orleans mapped the combined risk of missing smoke alarms and fire deaths, helping officials target distribution of smoke detectors. PHOTO: CITY OF NEW ORLEANS/OPA

While these are merely a few examples of how data is applied to our day-to-day lives around the world, I hope they help make “Big Data” a bit more relatable. Let us know if we can answer any questions about how data solutions can be applied to help your company as well.

Celebrating Earth Day

April 22 marks the annual celebration of Earth Day, a day of environmental awareness that is now approaching its first half century. Founded by US Senator Gaylord Nelson in 1970 as a nationwide teach-in on the environment, Earth Day is now the largest secular observance in the world, celebrated by over a billion people.

Earth Day has a special meaning here in our hometown of Santa Barbara, California. It was a massive 1969 oil spill off our coast that first led Senator Nelson to propose a day of public awareness and political action. Both were sorely needed back then: the first Earth Day came at a time when there was no US Environmental Protection Agency, environmental groups such as Greenpeace and the Natural Resources Defense Council were in their infancy, and pollution was simply a fact of life for many people.

If you visit our hometown today, you will find the spirit of Earth Day to be alive and well. We love our beaches and the outdoors, this area boasts over 50 local environmental organizations, and our city recently approved a master plan for bicycles that recognizes the importance of clean human-powered transportation. And in general, the level of environmental and conservation awareness here is part of the culture of this beautiful place.

It also has a special meaning for us here at Service Objects. Our founder and CEO Geoff Grow, an ardent environmentalist, started this company from an explicit desire to apply mathematics to the problem of wasted resources from incorrect and duplicate mailings. Today, our concern for the environment is codified as one of the company’s four core values, which reads as follows:

“Corporate Conservation – In addition to preventing about 300 tons of paper from landing in landfills each month with our Address Validation APIs, we practice what we preach: we recycle, use highly efficient virtualized servers, and use sustainable office supplies. Every employee is conscious of how they can positively impact our conservation efforts.”

Today, as Earth Day nears the end of its fifth decade, and Service Objects marks over 15 years in business, our own contributions to the environment have continued to grow. Here are just a few of the numbers behind the impact of our data validation products – so far, we have saved:

  • Over 85 thousand tons of paper
  • A million and a half trees
  • 32 million gallons of oil
  • More than half a billion gallons of water
  • Close to 50 million pounds of air pollution
  • A quarter of a million cubic yards of landfill space
  • 346 million kWh of energy

All of this is an outgrowth of more than two and a half billion transactions validated – and counting! (If you are ever curious about how we are doing in the future, just check the main page of our website: there is a real-time clock with the latest totals there.) And we are always looking for ways to continue making lives better through data validation tools.

We hope you, too, will join us in celebrating Earth Day. And the best way possible to do this is to examine the impact of your own business and community on the environment, and take positive steps to make the earth a better place. Even small changes can create a big impact over time. The original Earth Day was the catalyst for a movement that has made a real difference in our world – and by working together, there is much more good to come!

Medical Data is Bigger than You May Think

What do medical centers have in common with businesses like Uber, Travelocity, or Amazon? They have a treasure trove of data, that’s what! The quality of that data and what’s done with it can help organizations work more efficiently, more profitably, and more competitively. More importantly for medical centers, data quality can lead to even better quality care.

Here’s just a brief sampling of the types of data a typical hospital, clinic, or medical center generates:

Patient contact information
Medical records with health histories
Insurance records
Payment information
Geographic data for determining “Prime Distance” and “Drive Time Standards”
Employee and payroll data
Ambulance response times
Vaccination data
Patient satisfaction data

Within each of these categories, there may be massive amounts of sub-data, too. For example, medical billing relies on tens of thousands of medical codes. For a single patient, several addresses may be collected, such as the patient’s home and mailing addresses, the insurance company’s billing address, the employer’s address, and so forth.
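As a small illustration (with entirely hypothetical field names), a single patient record might carry all of those addresses at once, each of which needs to be validated on its own:

```python
# Hypothetical patient record showing why one patient means several addresses.
from dataclasses import dataclass, field

@dataclass
class Address:
    street: str
    city: str
    state: str
    postal_code: str

@dataclass
class PatientRecord:
    name: str
    home_address: Address
    mailing_address: Address
    insurer_billing_address: Address
    employer_address: Address
    other_addresses: list = field(default_factory=list)  # prior residences, etc.
```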

This data must be collected, validated for accuracy, and managed, all in compliance with rigorous privacy and security regulations. Plus, it’s not just big data, it’s important data. A simple transposed number in an address can mean the difference between getting paid promptly or not at all. A pharmaceutical mix-up could mean the difference between life and death.

With so much important data, it’s easy to get overwhelmed. Who’s responsible? How is data quality ensured? How is it managed? Several roles can be involved:

Data stewards – Develop data governance policies and procedures.
Data owners – Generate the data and implement the policies and procedures.
Business users –  Analyze and make use of the data.
Data managers – Information systems managers and developers who implement and manage the tools needed to capture, validate, and analyze the data.

Defining a data quality vision, assembling a data team, and investing in appropriate technology is a must. With the right team and data validation tools in place, medical centers and any organization can get serious about data and data quality.

How Can Data Quality Lead to Quality Care?

Having the most accurate, authoritative, and up-to-date information for patients can positively impact organizations in many ways. For example, when patients move, they don’t always think to inform their doctors, labs, hospitals, or radiology centers. With a real-time address validation API, not only can you instantly validate a patient’s address for billing and marketing purposes, but you can also confirm that the patient still lives within the insurance company’s “prime distance” radius before treatment begins.
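As a rough sketch of how such a check might work once an address has been validated and geocoded, the distance to the nearest in-network facility can be compared against the insurer’s radius. The coordinates, the 30-mile radius, and the function names below are illustrative assumptions, not part of any specific service.

```python
# Illustrative "prime distance" check using a great-circle distance.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))  # 3956 miles ~ Earth's radius

def within_prime_distance(patient_latlon, facility_latlon, radius_miles=30):
    return haversine_miles(*patient_latlon, *facility_latlon) <= radius_miles

# Example: a geocoded patient address vs. the nearest in-network facility.
print(within_prime_distance((34.42, -119.70), (34.44, -119.74)))  # True
```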

Accurate address and demographic data can trim mailing costs and improve patient satisfaction with appropriate timing and personalization. Meanwhile, aggregated health data could be analyzed to look at health outcomes or reach out to patients proactively based on trends or health histories. Just as online retailers recommend products based on past purchases or purchases by customers like you, medical providers can use big data to recommend screenings based on health factors or demographic trends.

Developing a data quality initiative is a major, but worthwhile, undertaking for all types of organizations — and you don’t have to figure it all out on your own. Contact Service Objects today to learn more about our data validation tools.

Data Monetization: Leveraging Your Data as an Asset

Everyone knows that Michael Dell built a giant computer business from scratch in a college dorm room. Less well known is how he got started: by selling newspaper subscriptions in his hometown of Houston.

You see, most newspaper salespeople took lists of prospects and started cold-calling them. Most weren’t interested. In his biography, Dell describes using a different strategy: he found out who had recently married or purchased a house from public records – both groups that were much more likely to want new newspaper subscriptions – and pitched to them. He was so successful that he eventually surprised his parents by driving off to college in a new BMW.

This is an example of data monetization – the use of data as a revenue source to improve your bottom line. Dell used an example of indirect data monetization, where data makes your sales process or other operations more effective. There is also direct data monetization, where you profit directly from the sale of your data, or the intelligence attached to it.

Data monetization has become big business nowadays. According to Strategy&, PwC’s consulting arm, the market for commercializing data is projected to grow to US $300 billion annually in the financial services sector alone, while business intelligence analyst Jeff Morris predicts a US $5 billion-plus market for retail data analytics by 2020. Even Michael Dell, clearly remembering his newspaper-selling days, is now predicting that data analytics will be the next trillion-dollar market.

This growth market is clearly being driven by massive growth in data sources themselves, ranging from social media to the Internet of Things (IoT) – there is now income and insight to be gained out of everything from Facebook posts to remote sensing devices. But for most businesses, the first and easiest source of data monetization lies in their contact and CRM data.

Understanding the behaviors and preferences of customers, prospects and stakeholders is the key to indirect data monetization (such as targeted offers and better response rates), and sometimes direct data monetization (such as selling contact lists or analytical insight). In both cases, your success lives or dies on data quality. Here’s why:

  • Bad data makes your insights worthless. For example, if you are analyzing the purchasing behavior of your prospects, and many of them entered false names or contact information to obtain free information, then what “Donald Duck” does may have little bearing on data from qualified purchasers.
  • The reputational cost of inaccurate data goes up substantially when you attempt to monetize it – for example, imagine sending offers of repeat business to new prospects, or vice versa.
  • As big data gets bigger, the human and financial costs of responding to inaccurate information rise proportionately.
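As a simple illustration of the first point, even a rudimentary screen keeps the most obviously bogus records out of an analysis. The fake-name list and email check below are illustrative only; a real validation service examines far more signals than this.

```python
# Minimal sketch of filtering obviously bogus contact rows before analysis.
import re

KNOWN_FAKE_NAMES = {"donald duck", "mickey mouse", "test test"}
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_genuine(record):
    name = record.get("name", "").strip().lower()
    email = record.get("email", "")
    if name in KNOWN_FAKE_NAMES:
        return False
    if not EMAIL_RE.match(email):
        return False
    return True

records = [
    {"name": "Donald Duck", "email": "dd@example.com"},
    {"name": "Jane Smith", "email": "jane.smith@example.com"},
]
clean = [r for r in records if looks_genuine(r)]  # keeps only Jane Smith
```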

Information Builders CIO Rado Kotorov puts it very succinctly: “Data monetization projects can only be successful if the data at hand is cleansed and ready for analysis.” This underscores the importance of using inexpensive, automated data verification and validation tools as part of your system. With the right partner, data monetization can become an important part of both your revenue stream and your brand – as you become known as a business that gives more customers what they want, more often.

Marketers and Data Scientists Improving Data Quality and Marketing Results Together

In the era of big data, marketing professionals have added basic data analysis to their toolboxes. However, the data they’re dealing with often requires significantly deeper analysis, and data quality (Is it Accurate? Current? Authentic?) is a huge concern. Thus, data scientists and marketers are more often working side by side to improve campaign efficiencies and results.

What is a Data Scientist?

Harvard Business Review called the data scientist profession “the sexiest job of the 21st century” and described the role of data scientist as “a hybrid of data hacker, analyst, communicator, and trusted adviser.”

The term data scientist itself is relatively new, and many data scientists lack what we might call a data science degree. Rather, they may have a background in business, statistics, math, economics, or analytics. Data scientists understand business, patterns, and numbers. They tend to enjoy looking at diverse sets of data in search of similarities, differences, trends, and other discoveries. The ability to understand and communicate their discoveries makes data scientists a valuable addition to any marketing team.

Data scientists are in demand and command high salaries. In fact, Robert Half Technology’s 2017 Salary Guides suggest that data scientists will see a 6.5 percent bump in pay compared to 2016 (and their average starting salary range is already an impressive $116,000 to $163,500).

Why are Marketers Working with Data Scientists?

Marketers must deal with massive amounts of data and are increasingly concerned about data quality. They recognize that there’s likely valuable information buried within the data, yet making those discoveries requires time, expertise, and tools — each of which pulls them away from their other important tasks. Likewise, even the tried-and-true act of sending direct mail to the masses can benefit from a data scientist who can both dig into the demographic requirements and ensure data quality by cross-referencing address data against USPS databases.

In short, marketers need those data hackers, analysts, communicators, and trusted advisers in order to make sense of the data and ensure data quality.

A Look at the Marketer – Data Scientist Relationship

As with any collaboration, marketers and data scientists occasionally have differences. They come from different academic backgrounds, and have different perspectives. A marketer, for example, is highly creative whereas a data scientist is more accustomed to analyzing data.

However, when sharing a common goal and understanding their roles in achieving it, marketers and data scientists can forge a worthwhile partnership that positively impacts business success.

We all know that you’re only as good as your data, making data quality a top shared concern between marketers and data scientists alike. Using tools such as data validation APIs, data scientists ensure that the information marketers have is as accurate, authoritative, and up to date as possible. Whether pinpointing geographical trends or validating addresses prior to a massive direct mail campaign, the collaboration between marketers and data scientists leads to increased campaign efficiencies, results, and, ultimately, increased revenue for the company as a whole.

The Role of a Chief Data Officer

According to a recent article in Information Management, nearly two-thirds of CIOs want to hire Chief Data Officers (CDOs) over the next year. Why is this dramatic transformation taking place, and what does it mean for you and your organization?

More than anything, the rise of the CDO recognizes the growing role of data as a strategic corporate asset. Decades ago, organizations were focused on automating specific functions within their individual silos. Later, enterprise-level computing like CRM and ERP helped them reap the benefits of data interoperability. And today, trends such as big data and data mining have brought the strategic value of data front and center.

This means that the need is greater than ever for a central, C-level resource who has both a policy-making and advocacy role for an organization’s data. This role generally encompasses data standards, data governance, and the oversight of data metrics. A CDO’s responsibilities can be as specific as naming conventions and standards for common data, and as broad as overseeing enterprise data management and business intelligence software. They are ultimately accountable for maximizing the ROI of an organization’s data assets.

A key part of this role is oversight of data quality. Bad data represents a tangible cost across the organization, including wasted marketing efforts, misdirected product shipments, reduced customer satisfaction, and fraud, tax and compliance issues, among other factors. More important, without a consistent infrastructure for data quality, the many potential sources of bad data can fall through the cracks without insight or accountability. It is an exact analogy to how quality assurance strategies have evolved for manufacturing, software or other areas.

A recent report from the Gartner Group underscored the uphill battle that data quality efforts still face in most organizations: while those surveyed believed that data quality issues were costing each of them US $9.7 million annually on average, most are still seeking justification to address data quality as a priority. Moreover, Gartner concludes that many current efforts to remediate data quality simply encourage line-of-business staff to abandon their own data responsibilities. Their recommendations include making a business case for data quality, linking data quality and business metrics, and above all shifting the mindset of data quality practitioners from being “doers” to being facilitators.

This, in turn, is helping fuel the rise of the central CDO – a role that serves as both a policymaker and an evangelist. In the former role, their job is to create an infrastructure for data quality and deploy it across the entire organization. In the latter role, they must educate their organizations about the ROI of a consistent, measurable approach to data, as well as the real costs and competitive disadvantage of not having one – particularly as more and more organizations add formal C-level responsibility for data to their boardrooms.

Service Objects has long focused on this transition by creating interoperable tools that automate the process of contact data verification, for functions ranging from address and email validation to quantitative lead scoring. We help organizations make data quality a seamless part of their infrastructure, using API and web-based interfaces that tap into global databases of contact information. These efforts have quickly gained acceptance in the marketplace: last year alone, CIO Review named us as one of the 20 most promising API solution providers. And nowadays, in this new era of the Chief Data Officer, our goal as a solutions provider is to support their mission of overseeing data quality.

The Importance of Data Accuracy in Machine Learning

Imagine that someone calls your contact center – and before they even get to “Hello,” you know what they might be calling about, how frustrated they might be, and what additional products and services they might be interested in purchasing.

This is just one of the many promises of machine learning: a form of artificial intelligence (AI) that learns from the data itself, rather than from explicit programming. In the contact center example above, machine learning uses inputs ranging from CRM data to voice analysis to add predictive logic to your customer interactions. (One firm, in fact, cites call center sales efforts improving by over a third after implementing machine learning software.)

Machine learning applications nowadays range from image recognition to predictive analytics. One example of the latter happens every time you log into Facebook: by analyzing your interactions, it makes more intelligent choices about which of your hundreds of friends – and what sponsored content – ends up on your newsfeed. And a recent Forbes article predicts a wealth of new and specialized applications, including helping ships to avoid hitting whales, automating granting employee access credentials, and predicting who is at risk for hospital readmission – before they even leave the hospital the first time!

The common thread between most machine learning applications is deep learning, often fueled by high-speed cloud computing and big data. The data itself is the star of the process: for example, a computer can often learn to play games like an expert, without programming a strategy beforehand, by generating enough moves by trial-and-error to find patterns and create rules. This mimics the way the human brain itself often learns to process information, whether it is learning to walk around in a dark living room at night or finding something in the garage.

Since machine learning is fed by large amounts of data, its benefits can quickly fall apart when this data isn’t accurate. A humorous example of this was when a major department store chain decided (incorrectly) that CNBC host Carol Roth was pregnant – to the point where she was receiving samples of baby formula and other products – and Google targeted her as an older man. Multiply examples like this by the amount of bad data in many contact databases, and the principle of “garbage in, garbage out” can quickly lead to serious costs, particularly with larger datasets.
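A quick, synthetic illustration of that principle: train the same simple model twice, once on clean labels and once after corrupting 30 percent of them, and the corrupted version typically scores noticeably worse on held-out data. The dataset and model here are generic stand-ins, not a real machine learning pipeline.

```python
# Synthetic "garbage in, garbage out" demonstration with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30     # corrupt 30% of the training labels
noisy[flip] = 1 - noisy[flip]
noisy_acc = LogisticRegression(max_iter=1000).fit(X_train, noisy).score(X_test, y_test)

print(f"clean labels: {clean_acc:.2f}, corrupted labels: {noisy_acc:.2f}")
```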

Putting some numbers to this issue, statistics from IT data quality firm Blazent show that while over two-thirds of senior-level IT staff intend to make use of machine learning, 60 percent lack confidence in the quality of their data – and 45 percent of their organizations simply react to data errors as they occur. This is not only costly but in many cases totally unnecessary: modern data quality management tools are readily available, and going without them is too often a matter of inertia or lack of ownership rather than ROI.

Truly unlocking the potential of machine learning will require a marriage between the promise of its applications and the practicalities of data quality. Like most marriages, this will involve good communication and clearly defined responsibilities, within a larger framework of good data governance. Done well, machine learning technology promises to represent another very important step in the process of leveraging your data as an asset.

The Role of a Data Steward

If you have ever dined at a *really* fine restaurant, it may have featured a wine steward: a person formally trained and certified to oversee every aspect of the restaurant’s wine collection. A sommelier, as they are known, not only tastes wines before serving them but sets policy for wine acquisition and its pairings with food, among other responsibilities. Training for this role may involve as much as a two-year college degree.

This is a good metaphor for a growing role in technology and business organizations – that of a data steward. Unlike a database administrator, who takes functional responsibility for repositories of data, a data steward has a broader role encompassing policies, procedures, and data quality. In a very real sense, a data steward is responsible for managing the overall value and long-term sustainability of an organization’s data assets.

According to Dataversity, the key role of a data steward is that they own an organization’s data. This links to the historical definition of a steward, from the Middle Ages – one who oversees the affairs of someone’s estate. This means that an effective data steward needs a broad background including areas like programming and database skills, data modeling and warehousing expertise, and above all good communications skills and business visibility. In larger organizations, Gartner sees this role as becoming increasingly formalized as a C-level position title, either as Chief Data Officer or incorporated as part of another C-level IT officer’s responsibilities.

One of the key advantages of having a formal data steward is that someone is accountable for your data quality. Too often, even in large organizations, this job falls to no one. Frequently, individual stakeholders are responsible for data entry or data usage, and strategically addressing bad data would demand bandwidth their jobs don’t allow. This is an example of the tragedy of the commons, where no one takes responsibility for the common good, and the organization ultimately incurs costs in time, missed marketing opportunities, or poor customer relations by living with subpar data quality.

Another advantage of a data steward is that someone is tasked with evaluating and acquiring the right infrastructure for optimizing the value of your data. For example, automated tools exist that not only flag or correct contact data for accuracy, but also enhance its value by appending publicly available information such as phone numbers or geographic locations. Other tools help control fraud and waste by screening your contact data against numerous criteria and then assigning a quantitative lead score. Ironically, these tools are often inexpensive and make everyone’s life easier, but having a data steward can prevent a situation where implementing them is no one’s responsibility.
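To give a feel for the quantitative lead scoring mentioned above, here is a hedged, rule-based sketch. The fields, weights, and threshold are invented for illustration; real scoring services combine many more signals than these.

```python
# Illustrative rule-based lead score (hypothetical fields and weights).
def score_lead(contact):
    score = 0
    if contact.get("address_validated"):
        score += 40
    if contact.get("email_deliverable"):
        score += 30
    if contact.get("phone_matches_name"):
        score += 20
    if contact.get("geo_appended"):   # e.g. a geocode appended from public data
        score += 10
    return score  # 0-100; higher suggests a more trustworthy lead

lead = {"address_validated": True, "email_deliverable": True, "phone_matches_name": False}
print(score_lead(lead))  # 70
```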

Looking at a formal role of data stewardship in your own organization is a sign that you take data seriously as an asset, and can start making smart moves to protect and expand its value. It helps you think strategically about your data, and teach everyone to be accountable for their role in it. This, in turn, can become the key to leveraging your organization’s data as a competitive advantage.

Data Quality and the Environment

Service Objects recently celebrated our 15th year in business, and it made me reflect on something that is important to me and is an underappreciated reason for improving your data quality: protecting our environmental resources.

Lots of companies talk about protecting the environment. Hotels ask you to re-use your towels, workplaces encourage you to recycle, and restaurants sometimes forego that automatic glass of ice water on your table. Good for them – it saves them all money as well as conserving resources. But our perspective is somewhat different because environmental conservation is one of the key reasons I founded this company in 2001.

Ever since I was a young man, I’ve been an avid outdoorsman who has felt a very strong connection to the natural world we inhabit. So one of the things I couldn’t help but notice was how much mislabeled direct mail showed up at my doorstep, as well as those of my friends. Some companies might even send three copies of the same thick catalog, addressed to different spellings of my name. Add in misdirected mail that never arrives, poor demographic targeting, and constant changes in workplace addresses, and you have a huge – and preventable – waste of resources.

As a mathematician and an engineer by training, thinking through the mathematics of how better data quality could affect this massive waste stream was a large part of the genesis of Service Objects. We discovered that the numbers involved were truly staggering. And we discovered that simple, automated filters driven by sophisticated database technology could make a huge difference in these figures.

Since then, our products have made a real difference. Over the past 15 years, our commitment to reducing waste associated with bad address data has saved over 1.2 million trees, and prevented over 150 million pounds of paper from winding up in landfills. We have also saved 520 million gallons of water and prevented 44 million pounds of air pollution. More important, these savings are driven by a growing enterprise that has now validated over two and a half billion contact records for over 2400 customers.

As a company, our concern for the environment goes far beyond the services we provide to customers. We encourage our staff to ride their bicycles to work instead of driving their cars, use sustainable office supplies, and keep a sharp eye on our own resource usage. Corporate conservation is one of the four core values of our company’s culture. The result is a team I am proud of, with a shared vision and sense of purpose.

There are many great business reasons for using Service Objects’ data quality products, including cost savings, fraud prevention, more effective marketing, and improved customer loyalty. But to me personally, using a smaller footprint of the Earth’s resources is the core that underlies all of these benefits. It is a true example of doing well by doing good.

For any business – particularly those who do direct marketing, distribute print media or ship tangible products, among many others – improving your data quality with us can make a real difference to both your bottom line AND our planet’s resources. We are proud to play a part in protecting the environment, and look forward to serving you for the next 15 years and beyond.

Service Objects is the industry leader in real-time contact validation services.

Service Objects has verified over 2.5 billion contact records for clients from various industries including retail, technology, government, communications, leisure, utilities, and finance. Since 2001, thousands of businesses and developers have used our APIs to validate transactions to reduce fraud, increase conversions, and enhance incoming leads, Web orders, and customer lists.