Posts Tagged ‘Data Security’

TLS and Email Security: An Overview

Many people don’t realize that when you send an email, its contents are often unencrypted – and in turn, vulnerable to being intercepted and read by others. This may be fine if you are sending recipes or weekend plans to your friends, but many businesses want a more secure solution for communicating with their clients, prospects and other stakeholders. Moreover, a number of well-publicized email hacking incidents over the past few years have put email security in the spotlight.

Thankfully there are numerous solutions that can be put to use to protect your emails. This article looks at how one common solution, the TLS protocol, can be used as part of your email privacy and security efforts.

What is TLS?

Transport Layer Security, or TLS for short, is a network security protocol implemented across most major web browsers and many email servers. It is the successor to Secure Sockets Layer (SSL), a now-deprecated approach used from the earliest days of the Internet to secure web traffic.

What is the advantage of TLS? It is an easy, seamless way to send secure emails WITHOUT making the recipient do anything. Many email security solutions are “walled gardens” requiring action on the part of the recipient to get at your email. But when you enable TLS encryption for your outgoing emails – and the recipients are set up to receive TLS-encrypted emails, which is the case for approximately 80% of emails sent today – emails are automatically encrypted until they are opened and read by the recipient.

Originally developed by Netscape engineers, TLS has evolved considerably since its first specification in the late 1990s, with its latest 1.3 version now in the process of rolling out. It is maintained as a public standard through the Internet Engineering Task Force standards body via its RFC (Request for Comments) process. Most browsers and mail servers currently support at least its current 1.2 level of functionality, considered a minimum requirement for effective data security nowadays.

Putting TLS to work

TLS encryption is normally a function of your outbound email platform: for example, this article describes how TLS encryption is used with Microsoft’s Exchange Server platform for business.

Since TLS encryption requires the cooperation of both the sending and receiving mail servers, there are basically two ways to implement it with your outgoing emails: so-called “opportunistic” versus “forced” or “mandated” TLS.

In the case of opportunistic TLS, the recipient’s server is checked for TLS capabilities, and if there is a match, the message is sent encrypted – otherwise, it is sent unencrypted. Be aware that in the case of opportunistic TLS, there is no guarantee that the message will be encrypted.

With forced TLS, the message is not delivered unless TLS is supported.
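The difference between the two modes can be sketched with Python’s standard smtplib; the hostname, policy names, and error handling here are illustrative, not part of any particular mail platform:

```python
import smtplib

def delivery_action(server_supports_tls: bool, policy: str) -> str:
    """Decide what happens to an outgoing message under each TLS policy."""
    if server_supports_tls:
        return "send-encrypted"
    if policy == "forced":
        return "hold"            # forced TLS: never send in the clear
    return "send-plaintext"      # opportunistic TLS: fall back silently

def send_message(host: str, msg: str, sender: str, rcpt: str,
                 policy: str = "forced") -> None:
    """Apply the chosen policy when talking to the recipient's server."""
    with smtplib.SMTP(host, 25, timeout=30) as smtp:
        smtp.ehlo()
        action = delivery_action(smtp.has_extn("starttls"), policy)
        if action == "hold":
            raise RuntimeError(f"{host} does not offer STARTTLS; message held")
        if action == "send-encrypted":
            smtp.starttls()      # upgrade the connection to TLS
            smtp.ehlo()          # re-identify over the encrypted channel
        smtp.sendmail(sender, rcpt, msg)
```

The key point: the only difference between the two modes is what happens when the receiving server does not advertise STARTTLS.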

The National Institute of Standards and Technology (NIST), a government standards body, publishes guidelines for the use of Transport Layer Security in encrypting data “in motion” between systems. Note that there may also be compliance implications for the security of data “at rest,” e.g. once it is resident on the recipient’s system.

How we can help

TLS only encrypts emails when BOTH the sender and the recipient are using TLS. Thankfully, there is a tool for checking this: our DOTS Email Validation product returns a Note Code value of 16 in cases where the recipient supports email encryption via TLS. This allows you to choose whether or not to send encrypted emails to this recipient.
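As a sketch, a response from the service could be checked for Note Code 16 like this; the `NoteCodes` field name and response shape are assumptions for illustration – consult the DOTS Email Validation documentation for the actual format:

```python
def recipient_supports_tls(validation_response: dict) -> bool:
    """Return True when a Note Code of 16 is present in the response.

    Splitting on commas avoids falsely matching "16" inside other
    codes; the "NoteCodes" field name is an assumption.
    """
    codes = validation_response.get("NoteCodes", "")
    return "16" in codes.split(",")

# Mocked response for illustration only:
resp = {"EmailAddress": "user@example.com", "NoteCodes": "2,16"}
if recipient_supports_tls(resp):
    print("Recipient supports TLS; safe to rely on encrypted delivery")
```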

Note that TLS verification alone may not suffice for high-security or compliance applications: for example, a positive TLS reading from Email Validation may mean that the receiver’s email front end (such as their spam filter) uses TLS, but it does not guarantee that an email remains encrypted all the way to the recipient – nor that the data remains encrypted when it is “at rest.”

So for some mission-critical applications – such as HIPAA compliance or sensitive financial data – you may need to consider more bulletproof solutions such as a secure email portal, a dedicated encryption service, or verification of end-to-end encryption for specific recipients (such as communications between two banks).

That said, many organizations do not need to go to the expense of a dedicated encryption solution, or cannot afford to put roadblocks such as a dedicated portal between their emails and their customers – particularly for applications such as sales and marketing. If this is the case for your business, TLS encryption can represent an easy, real-time way to keep your outgoing email as secure as your recipients will allow. And with our Email Validation product, TLS verification comes bundled as part of a unified strategy to help ensure the quality of your email contact data.

Data Exposure: Choose Data Validation Firms Carefully

It isn’t often that our data validation industry makes it into the mainstream media. But this week, it was rocked by a story in Wired Magazine, in which two security researchers discovered that an email validation firm (site currently taken down) had exposed an unprotected, publicly accessible database containing over 800 million email addresses – together with personal and business information for some of them.

Many details are still sketchy at this point. The firm claimed that this was an internal database containing no client records, and has since gone dark. And thankfully these records did not appear to contain sensitive information such as financial data or passwords. But we are well aware that incidents like these might raise concerns for businesses who employ third parties to process their contact data assets.

The good news is that situations like this are NOT at all representative of how established vendors like Service Objects do business. Done properly, data validation services are extremely secure, and can strongly enhance the data quality and security of your business. In this article we wanted to share what to look for in a data validation partner, along with many of our best practices.

What to look for in a data validation company

Reputation. We are putting this one first, because reputation matters as much as all other factors combined. Look at how long a company has been in business, how many customers it serves, and who its marquee clients are. A little Googling will serve you well here: information abounds, so see what people are saying about the company. Negative comments are a concern, of course, and sometimes NO comments can be even more concerning.

(P.S. Glad you asked. We’ve been in business since 2001, and serve over 2500 clients including Amazon, Microsoft, Sony, and every major credit card issuer. Review site gives us a tremendously high 4.6/5.0 rating across 600 customer reviews, and you will find our CEO and others featured prominently in the industry trade press.)

Data security. Simply put, we do not store customer data. Only our clients can see their own data. For the time it takes for us to verify it, it is encrypted using a high-level (https) protocol. And when the verification is completed, the data is immediately expunged. We feel you should never, ever use a data validation service that stores your unencrypted contact data in a way that is vulnerable to prying eyes.

In addition to this, we employ bank-grade security measures, including secure 24/7/365 data centers, which feature multi-layer perimeter security with hourly scans and modern firewalls, penetration testing and hardened Windows servers. More details on our data security can be found here.

Reliability. Contact data validation is often performed in real time, and is frequently mission-critical to a company’s marketing, sales or customer contact activities. This is why we offer one of the industry’s only financially backed service level agreements, with a minimum goal of 99.999% availability of our services.

Customer impact. The above Wired article discusses how email addresses are sometimes validated by sending them test emails, essentially spamming them. We use a very different, non-invasive technology to validate email deliverability, based on ping testing, as described further in this recent blog. This provides accurate results without impacting your customers.

Resources. People often say that the value of any company rests in how well they invest in their products and services. We were founded by developers, for developers, and both our technical and support teams are very proud of their expertise. Above all, we make it a point to be there for our customers: we are available 24/7 if needed, make it a public policy to respond within 90 minutes, and garner rave reviews from our clients.

Feel secure with the right data validation partner

No one ever wants to end up as a news story. But incidents like this serve as a good reminder of what to look for when you entrust your valuable contact data to a third party. We are proud of our track record of safety and security going back nearly two decades, which has helped position us as leaders in this industry. Service Objects knows that the more educated businesses are about the specifics of safe and reliable data quality, the better. If you would like to learn more, we are happy to discuss our security measures and our data validation products; please contact us for more details.


Data Privacy and Security: The Next Big Thing for the US?

Unless you’ve been living under a rock for the past couple of years, you know that data privacy and security laws have become a big thing worldwide. Between Europe’s GDPR regulation, Canada’s PIPEDA laws and others, consumers’ rights over their own personal data became one of the biggest issues of 2018 for CIOs and CDOs who do business internationally. But what about here in the United States?

Now we have some numbers behind public opinions on this issue, thanks to a recent survey from software giant SAS. The results show that many of the same concerns that led to regulations such as GDPR are top-of-mind among Americans, and should inform the way data professionals look at their contact data assets in 2019 and beyond.

What the survey says

In July 2018, SAS surveyed over 500 adult US consumers from a variety of socioeconomic levels about their opinions on data privacy. Here are some of the key conclusions from this survey:

People are concerned. Nearly three-quarters of respondents are more concerned about data privacy than they were a few years ago, with more than two-thirds also feeling their data is less secure. The biggest areas of concern? Identity theft, fraud, and personal data being used or sold without consent.

They want more regulation. 67% of respondents felt that government should do more to protect data privacy, while fully 83% would like the right to tell an organization not to share or sell their personal information. A large majority would also like the right to know how their data is being used, and to whom it is being sold.

Consumers are more savvy about privacy. Roughly two-thirds of respondents (66 percent) acknowledge that primary responsibility for their data security rests with them, and a majority are able to do things like changing privacy settings. Notably, close to a third of people have reduced their social media usage and online shopping over these concerns.

Trust must be earned. Trust in organizations to keep personal data secure varies widely, from highs of 46-47% for healthcare and banking organizations to roughly 15% for travel companies and social media.

Age matters. Older consumers value privacy more than younger ones and are less willing to provide personal information in return for something (36% for Baby Boomers versus 45% for Millennials). However, this does not mean that young consumers live in a post-privacy world, with 66% of Millennials expressing concern over the security of their personal data.

What this means for data privacy – and for you

One important take-away from this study is that, whether or not we have a US version of GDPR some day – a direction favored by these survey results – the trend is clearly toward increasing consumer concerns over data privacy and security over time. This means that data professionals need to prepare for the very real possibility of increased regulation and compliance issues on the horizon.

These survey results also mean that even in the absence of regulation, your organization’s data policies can have a very real and tangible impact on brand image and consumer trust, which in turn affect your bottom line. The fact that some people are reducing their social media use and online shopping, for example, should be a warning for everyone to start paying more attention to data privacy and security concerns.

Finally, these results are another sign that more than ever, businesses need to get serious about contact data quality in 2019. Tools from Service Objects such as address, email and phone validation can help ensure that your contact data assets are accurate, and prevent unsolicited marketing contacts to mistaken or bogus entities – and in the process, give you higher quality leads and contacts.

Want to learn more? Contact us to speak with one of our knowledgeable product experts about improving your data quality in the new year.

GDPR Compliance: Is Your Business Ready?

If you conduct business in Europe, May 2018 will be an important date. This is when the planned introduction of the European Union’s General Data Protection Regulation (GDPR) is scheduled to take effect.

GDPR represents a sweeping set of privacy regulations that impact your use of personal data from European citizens. If you conduct business with people from Europe – whether they are your customers, employees, or job prospects – GDPR affects you as well. It will require you to have policies in place to protect people’s personal data, as well as require notification when this data has been breached. And penalties for violations will be extremely stiff, up to the greater of 20 million Euros or 4% of your gross turnover.

GDPR starts with the definition of “personal data.” This is an extremely broad net: a recent article from Software Development magazine notes that the European Commission’s guidelines include both obvious data such as name, address or email, and associated data ranging from bank accounts to photos and social media posts. Even the IP address a European is using on their computer is considered part of this personal data.

Much like the HIPAA requirements on electronic health care data in the United States, GDPR will require organizations to safeguard the personal data they collect and store in the course of doing business. At one level, this will involve technology such as encrypted data storage, password protection, and other approaches, along with policies and procedures for protecting this data. At another level, it obligates you to inform European consumers about your privacy policies, gain explicit consent to collect and use their personal data and provide them with the ability to control or opt-out of data collection. And in the event personal data is compromised, you need a plan for reaching people affected by the breach.

Each of these levels has important areas where data quality and GDPR compliance efforts intersect. Some of the questions businesses will have to ask themselves include:

  • Do we have accurate contact information for people we do business with in Europe?
  • Is there a notification procedure in place for our privacy and data policies, including opting out of data collection or making changes to personal data?
  • If a breach notification were necessary, do we have the means to quickly reach all affected parties?
  • How do we handle changes to contact information? What if a person in your database moves, changes jobs, or gets a new email address?

This means that your GDPR and data quality strategies will need to be closely linked. Tools such as international address verification, lead validation and name validation can help make sure data is complete and correct as it enters your system, and stays correct when it is needed later. As a recent article in Information Management points out, the key to GDPR compliance lies in proactively analyzing your data and performing a thorough risk assessment long before an actual privacy issue arises.

The European Union has long been on the vanguard of consumer protection legislation, and the new GDPR regulations are the latest in an effort to level the playing field between big data and the individual rights of its citizens. They have a global reach, whether you do business in Europe or serve Europeans from elsewhere. At a broader level, GDPR is part of a new reality that businesses will soon need to work with, one that is part of a larger trend toward increasing privacy regulations.

May 2018 is coming soon – is your business ready?

How secure is your ‘Data at Rest’?

In a world where millions of customer and contact records are commonly stolen, how do you keep your data safe? First, lock the door to your office. Now you’re good, right? Oh wait, you are still connected to the internet. Disconnect from the internet. Now you’re good, right? What if someone sneaks into the office and accesses your computer? Unplug your computer completely. You know what, while you are at it, pack your computer into some plain boxes to disguise it. Oh wait, this is crazy, not very practical and only somewhat secure.

The point is, as we try to determine what kind of security we need, we also have to find a balance between functionality and security. A lot of this depends on the type of data we are trying to protect. Is it financial, healthcare or government related, or is it personal, like pictures from the last family camping trip? All of these will have different requirements and many of them are our clients’ requirements. As a company dealing with such diverse clientele, Service Objects needs to be ready to handle data and keep it as secure as possible, in all the different states that digital data can exist.

So what are the states that digital data can exist in? Understanding them should inform any data security strategy. For the most part, data exists in three states – data in motion/transit, data at rest/endpoint, and data in use – defined as:

Data in motion/transit

“…meaning it moves through the network to the outside world via email, instant messaging, peer-to-peer (P2P), FTP, or other communication mechanisms.” –

Data at rest/endpoint

“data at rest, meaning it resides in files systems, distributed desktops and large centralized data stores, databases, or other storage centers” –

“data at the endpoint, meaning it resides at network endpoints such as laptops, USB devices, external drives, CD/DVDs, archived tapes, MP3 players, iPhones, or other highly mobile devices” –

Data in use

“Data in use is an information technology term referring to active data which is stored in a non-persistent digital state typically in computer random access memory (RAM), CPU caches, or CPU registers. Data in use is used as a complement to the terms data in transit and data at rest which together define the three states of digital data.” –

How Service Objects balances functionality and security with respect to our clients’ data, which is at rest in our automated batch processing, is the focus of this discussion. Our automated batch process consists of this basic flow:

  • Our client transfers a file to a file structure in our systems using our secure ftp. [This is an example of Data in motion/transit]
  • The file waits momentarily before an automated process picks it up. [This is an example of Data at rest]
  • Once our system detects a new file; [The data is now in the state of Data in use]
    • It opens and processes the file.
    • The results are written into an output file and saved to our secure ftp location.
  • Input and output files remain in the secure ftp location until client retrieves them. [Data at rest]
  • Client retrieves the output file. [Data in motion/transit]
    • Client can immediately choose to delete all, some or no files as per their needs.
  • Five days after processing, if any files exist, the automated system encrypts (minimum 256 bit encryption) the files and moves them off of the secure ftp to another secure location. Any non-encrypted version is no longer present. [Data at rest and data in motion/transit]
    • This delay gives clients time to retrieve the results.
  • 30 days after processing, the encrypted version is completely purged.
    • This provides a last chance, in the event of an error or emergency, to retrieve the data.
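The five- and 30-day lifecycle above can be modeled as a small state function; this is an illustrative sketch, not our production code, and both windows are adjustable per client:

```python
from datetime import datetime, timedelta

# Default retention windows from the flow above.
ENCRYPT_AFTER = timedelta(days=5)
PURGE_AFTER = timedelta(days=30)

def retention_state(processed_at: datetime, now: datetime) -> str:
    """Where a batch file stands in its lifecycle at time `now`."""
    age = now - processed_at
    if age >= PURGE_AFTER:
        return "purged"                # encrypted copy completely removed
    if age >= ENCRYPT_AFTER:
        return "encrypted-archive"     # moved off the secure ftp, encrypted
    return "awaiting-pickup"           # sitting on the secure ftp
```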

We encrypt files five days after processing, but what is the strategy for keeping the files secure prior to the five-day expiration? First off, we determined that the five- and 30-day rules were the best balance between functionality and security. But we also added flexibility to this.

If clients always picked up their files right when they were completed, we really wouldn’t need to think too much about security as the files sat on the secure ftp. But this is real life: people get busy, take long weekends, go on vacation, or simply forget. Whatever the reason, Service Objects couldn’t immediately encrypt and move the data – if we did, clients would become frustrated trying to coordinate the retrieval of their data. So we built in the five- and 30-day rules, but we also added the ability to change these grace periods and customize them to our clients’ needs. This doesn’t prevent anyone from purging their data sooner than any predefined threshold; in fact, we encourage it.

When we are setting up the automated batch process for a client, we look at the type of data coming in, and if appropriate, we suggest to the client that they may want to send the file to us encrypted. For many companies this is standard practice. Whenever we see any data that could be deemed sensitive, we let our client know.

When it is established that files need to be encrypted at rest, we use industry standard encryption/decryption methods. When a file comes in and processing begins, the data is now in use, so the file is decrypted. After processing, any decrypted file is purged and what remains is the encrypted version of the input and output files.

Not all clients are concerned or require this level of security but Service Objects treats all data the same, with the utmost care and the highest levels of security reasonable. We simply take no chances and always encourage strong data security.

The 2018 European Data Protection Regulation – Is Your Organization Prepared?

The General Data Protection Regulation (GDPR) is a regulation intended to strengthen and unify data protection for all individuals within the European Union (EU). It also addresses the export of personal data outside the EU. The primary objectives of the GDPR are to give citizens and residents back control of their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.

According to research firm Gartner, Inc., this regulation will have a global impact when it goes into effect on May 25, 2018.  Gartner predicts that by the end of 2018, more than 50 percent of companies affected by the GDPR will not be in full compliance with its requirements.

To avoid being part of the 50 percent that may not be in compliance one year from now, organizations should start planning today. Gartner recommends organizations focus on five high-priority changes to help organizations to get up to speed:

    1. Determine Your Role Under the GDPR
      Any organization that decides on why and how personal data is processed is essentially a “data controller.” The GDPR therefore applies not only to businesses in the European Union, but also to any organization outside the EU that processes personal data to offer goods and services to the EU, or to monitor the behavior of data subjects within the EU.
    2. Appoint a Data Protection Officer
      Many organizations are required to appoint a data protection officer (DPO). This is especially important when the organization is a public body, is processing operations requiring regular and systematic monitoring, or has large-scale processing activities.
    3. Demonstrate Accountability in All Processing Activities
      Very few organizations have identified every single process where personal data is involved. Going forward, purpose limitation, data quality and data relevance should be decided on when starting a new processing activity as this will help to maintain compliance in future personal data processing activities. Organizations must demonstrate an accountable ground posture and transparency in all decisions regarding personal data processing activities. It is important to note that accountability under the GDPR requires proper data subject consent acquisition and registration. Prechecked boxes and implied consent will be largely in the past.
    4. Check Cross-Border Data Flows
      As of today, data transfers to any of the 28 EU member states, as well as 11 other countries, are still allowed, although the consequences of Brexit are still unknown. Outside of the EU, organizations processing personal data on EU residents should select the appropriate mechanism to ensure compliance with the GDPR.
    5. Prepare for Data Subjects Exercising Their Rights
      Data subjects have extended rights under the GDPR, including the right to be forgotten, to data portability and to be informed (e.g., in case of a data breach).

Having poor quality data has several impacts on an organization and could hinder your efforts to being in compliance. Visit Service Objects’ website to see how our global data quality solutions can help you ensure your contact data is as genuine, accurate and up-to-date as possible.

Maintaining a Good Email Sender Reputation

What are honeypot email addresses?

A honeypot is a type of spamtrap. It is an email address that is created with the intention of identifying potential spammers. The email address is often hidden from human eyes and is generally only detectable to web crawlers. The address is never used to send out email and it is for the most part hidden, thus it should never receive any legitimate email. This means that any email it receives is unsolicited and is considered to be spam. Consequently, any user who continues to submit email to a honeypot will likely have their email, IP address and domain flagged as spam. It is highly recommended to never send email to a honeypot, otherwise you risk ruining your email sender reputation and you may end up on a blacklist.
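As an illustration of how a honeypot might hide from human eyes, here is a toy scan for addresses that sit inside hidden markup; the marker list is a deliberate simplification, and real honeypots use many other techniques:

```python
import re

# Markers that commonly hide content from human readers while leaving
# it visible to a crawler; illustrative, not exhaustive.
HIDDEN_MARKERS = ("display:none", "visibility:hidden")

EMAIL_RE = re.compile(r"[\w.+-]+@[\w.-]+\.\w+")

def find_suspect_addresses(html: str) -> list[str]:
    """Collect addresses found on lines containing hidden markup."""
    suspects = []
    for line in html.splitlines():
        if any(marker in line for marker in HIDDEN_MARKERS):
            suspects.extend(EMAIL_RE.findall(line))
    return suspects
```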

Spamtraps typically show up in lists where the email addresses were gathered from web crawlers. In general, these types of lists cannot be trusted and should be avoided as they are often of low quality.

Service Objects participates in and uses several “White Hat” communities and services, some of which are focused on identifying spamtraps. We use these resources to help identify known and active spamtraps. It is common practice for a spamtrap to be hidden from human eyes and only be visible in the page source where a bot would be able to scrape it, but it is important to note that not all emails from a page scrape are honeypot spamtraps. A false positive could unfortunately lead to an unwarranted email rejection. Many legitimate emails are unfortunately exposed on business sites, job profiles, Twitter, business listings and other random pages. So it is not uncommon to see a legitimate email get marked as a potential spamtrap by a competitor.

Not all spamtraps are honeypots

While the honeypot may be the most commonly known type of spamtrap, it is not the only type around. Some of you may not be old enough to remember, but there was a time when businesses would configure their mail servers to accept any email address, even if the mailbox did not exist, for fear that a message would be lost due to a typo or misspelling. Messages to non-existent email addresses would be delivered to a catch-all box as long as the domain was correctly spelled. However, it did not take long for these mailboxes to become flooded with spam. As a result, some mail server administrators started to use catch-alls as a way to identify potential spammers: an admin could treat the sender of any mail that ended up in this folder as a spammer and block them, the reasoning being that only spammers, and no legitimate senders, would end up in the catch-all box. This made catch-alls one of the first spamtraps. The reasoning is flawed but still in practice today. Nowadays it is more common for admins to use firewalls that act as catch-alls to try to catch and prevent spammers.

Some spamtraps can be created and hidden in the source code of a website so that only a crawler would pick it up, some can be created from recycled email addresses or created specifically with the intention of planting them in mailing lists. Regardless of how a spamtrap is created it is clear that if you have one in your mailing list and you continue to send mail to it, that you will risk ruining your sender’s reputation.

Keeping senders honest

The reality is that not all honeypot spamtraps can be 100% identified. Doing so would greatly diminish their value in keeping legitimate email senders honest.

It is very important that a sender or marketer follows their regional laws and best practices, such as tracking which emails are received, opened or bounced back. For example, some legitimate emails can still result in a hard or permanent bounce back. This may happen when an email is an alias or role that is connected to a group of users: in these cases, the email itself is not rejected, but one of the emails within the group is. Which brings up another point: role-based email addresses are often not eligible for solicitation, since they are commonly tied to positions and not any one particular person who would have opted in. That is why the DOTS Email Validation service also has a flag for identifying potential role-based addresses.

Overall, it is up to the sender or marketer to ensure that they keep track of their mailing lists and that they always follow best practices. They should never purchase unqualified lists, and they should only be soliciting users who have opted in. If an email address is bouncing back with a permanent rejection, then they should remove it from the mailing list. If an email address that is bouncing back is not in your mailing list, then it is likely connected to a role- or group-based email that should also be removed.
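A minimal list-hygiene pass following these practices might look like this; the role-prefix list is illustrative and far from exhaustive:

```python
# Common role prefixes; illustrative, not exhaustive.
ROLE_PREFIXES = {"admin", "info", "sales", "support", "postmaster", "noreply"}

def clean_mailing_list(subscribers, hard_bounced):
    """Drop hard-bounced and role-based addresses from a mailing list."""
    bounced = set(hard_bounced)
    kept = []
    for addr in subscribers:
        if addr in bounced:
            continue                   # permanent rejection: remove it
        local_part = addr.split("@", 1)[0].lower()
        if local_part in ROLE_PREFIXES:
            continue                   # role account: no individual opt-in
        kept.append(addr)
    return kept
```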

To stay on top of potential spamtraps, marketers should also keep track of subscriber engagement. If a subscriber has never been engaged or is no longer engaged, but email messages are not bouncing back, then it is possible that the email may be a spamtrap. If an email address was bouncing back before but is not anymore, then it may have been recycled as a spamtrap.

Remember that by following the laws and best practices of your region, you greatly reduce the risk of ruining your sender reputation, which will help ensure that your marketing campaigns reach as many subscribers as possible.

We Won’t Let Storm Stella Affect Your Data Quality

A macro-scale cyclone referred to as a Nor’easter is forecasted to develop along the East Coast starting tonight and estimated to continue throughout Tuesday. In addition to typical storm preparations, have you ensured your data is also ready for Storm Stella?

Although we cannot assist you directly with storm preparations (water bottles, canned foods, batteries, candles, backup generators, blankets, etc.), we will always ensure the integrity and reliability of our web services. Since 2001, we’ve been committed to providing a high level of uptime during all types of conditions, including storms – even Nor’easters. All of this comes down to redundancy, resiliency, compliance, geographic load balancing, strong data security, and 24/7 monitoring, contributing to our 99.999% availability of service offerings with one of the industry’s only financially backed service level agreements. We take great pride in our system performance and are the only public web-service provider confident enough to openly publish our performance reports.

To ensure you are fully prepared for this storm in particular, it is important to note that our primary and backup data centers are in separate geographic locations. If an emergency occurs, you can re-point your application from our production data center to our backup data center.

The failover data center is designed to increase the availability of our web services in the event of a network or routing issue. Our primary data center hostname is: and our backup data center hostname is

You can also abstract the actual hostname into a configuration file, in order to simplify the process of changing hostnames in an emergency. Even in the case where your application handles failover logic properly, an easy-to-change hostname would allow your application to bypass a downed data center completely, and process transactions more quickly.

For most clients, simply updating their application to use our backup data center hostname should immediately restore connectivity. Your existing DOTS license key is already permitted to use our backup data center and no further actions should be needed.

Many of our clients with mission-critical business applications configure failover in their applications this way. We are available 24/7 to help with best practices and recommendations if you need any assistance before, during or after the storm!

The Importance of Encryption

The information age has brought with it both convenience and risk. Consumers, for example, love the convenience of shopping online, yet they certainly don’t want their personal and sensitive information (like credit card numbers) to be revealed to unauthorized parties. Businesses have the responsibility, and in many cases, the legal obligation, to mitigate this risk and protect sensitive information from prying eyes. This is largely done through encryption.

What is encryption?

In simple terms, encryption is the process of taking human-readable information and translating it into an unreadable form. The information is protected by an encryption algorithm, and only authorized parties can translate it back into a human-readable form.

As a consumer, you’ve likely encountered basic HTTPS encryption while doing business online. You know to look for HTTPS (instead of HTTP) and the padlock symbol in the address bar. With HTTPS encryption, the website and web server have been authenticated and a secure, two-way connection has been established. Transactions made using HTTPS encryption are shielded from man-in-the-middle attacks, tampering, and eavesdropping.

Encryption typically uses “keys” to lock and unlock the data. For example, with symmetric key encryption, the sender and receiver use a common key, known only to them, to encrypt and decrypt the data. Thus, if a cybercriminal were to intercept the information, the payload would be gibberish. Since the cybercriminal doesn’t have the means to decrypt the data, it remains safe despite the breach.
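To make the shared-key idea concrete, here is a deliberately toy illustration of symmetric encryption in Python, using XOR with a shared key. This is for illustration only; real systems use vetted ciphers such as AES, and this sketch offers no actual security:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Encrypting and decrypting are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                       # shared secret known only to both parties
ciphertext = xor_cipher(b"4111 1111 1111 1111", key)  # gibberish to an interceptor
plaintext = xor_cipher(ciphertext, key)    # applying the key again recovers the data
```

Without the key, the intercepted ciphertext reveals nothing useful; with it, the original bytes come back exactly.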

Why should you care?

Sensitive client data should be handled with the utmost care. This means that the companies that handle sensitive client data should be well informed about the best security practices, including end-to-end encryption.

Service Objects offers specialized services focused on data validation. The data that is sent to our services for validation usually comes from our clients’ customers. For example, let’s imagine a fictional Service Objects client called Medical Insurance Inc., a medical insurance company. As Medical Insurance Inc. collects information on their customers, prospects, and leads, they want to confirm that the data is valid. In order to validate the data, they must send the sensitive information over to one of the Service Objects’ web services. If Medical Insurance Inc. doesn’t use encryption, the data being transferred is at risk of being snooped on by a malicious third party. A simple man-in-the-middle attack could allow direct access to sensitive information that should not be exposed to anyone outside of Medical Insurance Inc. The risk of exposing sensitive data can be easily negated by any of the following recommended best practices.

What do we currently support/recommend using?

We currently support Pretty Good Privacy (PGP) encryption on incoming and outgoing list processing orders. End-to-end encryption is made possible by PGP’s hybrid cryptography, which blends private and public key encryption to help ensure your data is not exposed to anyone but the authorized parties.

For standard API calls, we highly recommend using the HTTPS protocol. Over HTTPS, the connection to the site is encrypted and authenticated using a strong protocol (SSL/TLS), a strong key exchange (RSA), and a strong cipher (AES-256). By using HTTPS to make your web service calls, you can rest assured that any sensitive client data is well guarded.
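One simple safeguard is to refuse any endpoint that is not HTTPS before a call is ever made. The sketch below shows the idea; the endpoint URL is hypothetical, not an actual service address:

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Refuse to send sensitive data over an unencrypted connection."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing non-HTTPS endpoint: {url}")
    return url

# Hypothetical endpoint -- substitute your provider's documented URL
endpoint = require_https("https://example.com/validate?email=test%40example.com")
# The actual request (urllib, requests, etc.) then travels over TLS:
# response = urllib.request.urlopen(endpoint)
```

A guard like this catches an accidental `http://` URL in a config file before any sensitive data leaves your network.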

3 Things to Consider When Signing a Cloud Computing Contract

Cloud computing entails a paradigm shift from in-house processing and storage of data to a model where data travels over the Internet to and from one or more externally located and managed data centers.

It is typically recommended that a Cloud Computing Contract:

  • Codifies the specific parameters and minimum levels required for each element of the service you are signing up for, as well as remedies for failure to meet those requirements.
  • Affirms your institution’s ownership of its data stored on the service provider’s system, and specifies your rights to get it back.
  • Details the system infrastructure and security standards to be maintained by the service provider, along with your rights to audit their compliance.
  • Specifies your rights and cost to continue and discontinue using the service.

In addition to the basic elements of the Contract listed above, here are three important points to consider before signing your Cloud Computing Contract.

1. Infrastructure & security

The virtual nature of cloud computing makes it easy to forget that the service is dependent upon a physical data center. All cloud computing vendors are not created equal. You should verify the specific infrastructure and security obligations and practices (business continuity, encryption, firewalls, physical security, etc.) that a vendor claims to have in place and codify them in the contract.

2. Disaster recovery & business continuity

To protect your institution, the contract should state the provider’s minimum disaster recovery and business continuity mechanisms, processes, and responsibilities to provide the ongoing level of uninterrupted service required.

3. Data processing & storage

  • Ownership of data: Since an institution’s data will reside on a cloud computing company’s infrastructure, it is important that the contract clearly affirm the institution’s ownership of that data.
  • Disposition of data: To avoid vendor lock-in, it is important for an institution to know in advance how it will switch to a different solution once the relationship with the existing cloud computing service provider ends.
  • Data breaches: The contract should cover the cloud service provider’s obligations in the event that the institution’s data is accessed inappropriately. The repercussions of such a data breach vary according to the type of data, so know what type of data you’ll be storing in the cloud before negotiating this clause. Of equal importance to the breach notification process, the service provider should be contractually obligated to provide indemnification should the institution’s data be accessed inappropriately.
  • Location of data: A variety of legal issues can arise if an institution’s data resides in a cloud computing provider’s data center in another country. Different countries, and in some cases even different states, have different laws pertaining to data. One of the key questions with cloud computing is: which law applies to my institution’s data, the law where I’m located or the law where my data is located?
  • Legal/Government requests for access to data: The contract should specify the cloud provider’s obligations to an institution should any of the institution’s data become the subject of a subpoena or other legal or governmental request for access.

The Cloud Computing Contract is for the benefit of both the consumer and the provider. While it can be highly technical and detailed, the Contract ultimately establishes the partnership between the parties, and following these steps should help mitigate potential problems.

8 Tips to Build a Successful Service Level Agreement

A Service Level Agreement (SLA) makes use of the knowledge of enterprise capacity demands, peak periods, and standard usage baselines to compose the enforceable and measurable outsourcing agreement between vendor and client. As such, an effective SLA will reflect goals for greater performance and capacity, productivity, flexibility, availability, and standardization.

The SLA should set the stage for meeting or surpassing business and technology service levels while identifying any gaps currently being experienced in the achievement of service levels.

SLAs capture the business objectives and define how success will be measured, and are ideally structured to evolve with the customer’s foreseeable needs. The right approach results in agreements distinguished by clear, simple language, a tight focus on business objectives, and attention to the dynamic nature of the business, so that evolving needs will be met.

1. Both the Client and Vendor Must Structure the SLA

Structuring an SLA is an important, multiple-step process involving both the client and the vendor. In order to successfully meet business objectives, SLA best practices dictate that the vendor and client collaborate to conduct a detailed assessment of the client’s existing applications suite, new IT initiatives, internal processes, and currently delivered baseline service levels.


2. Analyze Technical Goals & Constraints

The best way to start analyzing technical goals and constraints is to brainstorm or research technical goals and requirements. Technical goals include availability levels, throughput, jitter, delay, response time, scalability requirements, new feature introductions, new application introductions, security, manageability, and even cost. Then prioritize the goals, or lower expectations to a level that still meets business requirements.

For example, you might have an availability level of 99.999% or 5 minutes of downtime per year. There are numerous constraints to achieving this goal, such as single points of failure in hardware, mean time to repair (MTTR), broken hardware in remote locations, carrier reliability, proactive fault detection capabilities, high change rates, and current network capacity limitations. As a result, you may adjust the goal to a more achievable level.
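As a quick sanity check on such targets, an availability percentage converts directly into allowed downtime per year:

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Convert an availability target into allowed downtime per year."""
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability_pct / 100)

# "Five nines" (99.999%) allows roughly 5.3 minutes of downtime per year;
# 99.9% allows roughly 8.8 hours.
for target in (99.999, 99.99, 99.9):
    print(target, round(downtime_minutes_per_year(target), 1))
```

Seeing the targets in minutes makes it easier to judge whether a goal is achievable given MTTR, carrier reliability, and the other constraints above.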

3. Determine the Availability Budget

An availability budget is the expected theoretical availability of the network between two defined points. Accurate theoretical information is useful in several ways, including:

  • The organization can use this as a goal for internal availability and deviations can be quickly defined and remedied.
  • The information can be used by network planners in determining the availability of the system to help ensure the design will meet business requirements.

Factors that contribute to non-availability or outage time include hardware failure, software failure, power and environmental issues, link or carrier failure, network design, human error, or lack of process. You should closely evaluate each of these parameters when evaluating the overall availability budget for the network.
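When the components between the two defined points sit in series, the availability budget is simply the product of their individual availabilities, since the path is up only if every component is up. A sketch, with hypothetical component figures:

```python
from math import prod

def end_to_end_availability(component_availabilities: list) -> float:
    """Theoretical availability of a path whose components are in series:
    the path is up only when all components are up."""
    return prod(component_availabilities)

# Hypothetical budget: hardware, software, power/environment, carrier link
path = [0.9999, 0.9995, 0.9998, 0.999]
print(round(end_to_end_availability(path) * 100, 3))
```

Note how the weakest component dominates: a single 99.9% link pulls the whole path below 99.9%, no matter how reliable the rest of the chain is.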

4. Application Profiles

Application profiles help the networking organization understand and define network service level requirements for individual applications. This helps to ensure that the network supports individual application requirements and network services overall.

Business applications may include e-mail, file transfer, Web browsing, medical imaging, or manufacturing. System applications may include software distribution, user authentication, network backup, and network management.

The goal of the application profile is to understand business requirements for the application, business criticality, and network requirements such as bandwidth, delay, and jitter. In addition, the networking organization should understand the impact of network downtime.

5. Availability and Performance Standards

Availability and performance standards set the service expectations for the organization. These may be defined for different areas of the network or specific applications. Performance may also be defined in terms of round-trip delay, jitter, maximum throughput, bandwidth commitments, and overall scalability. In addition to setting the service expectations, the organization should also take care to define each of the service standards so that user and IT groups working with networking fully understand the service standard and how it relates to their application or server administration requirements.

6. Metrics and Monitoring

Service level definitions by themselves are worthless unless the organization collects metrics and monitors success. Measuring the service level determines whether the organization is meeting objectives, and also identifies the root cause of availability or performance issues.

7. Customer Business Needs and Goals

Try to understand the cost of downtime for the customer’s service. Estimate in terms of lost productivity, revenue, and customer goodwill. The SLA developer should also understand the business goals and growth of the organization in order to accommodate network upgrades, workload, and budgeting.

8. Performance Indicator Metrics

Metrics are simply tools that allow network managers to manage service level consistency and make improvements according to business requirements. Unfortunately, many organizations do not collect availability, performance, and other metrics, citing concerns about accuracy, cost, network overhead, and available resources. These factors can impact the ability to measure service levels, but the organization should stay focused on the overall goals of managing and improving service levels.

In summary, service level management allows an organization to move from a reactive support model to a proactive support model where network availability and performance levels are determined by business requirements, not by the latest set of problems. The process helps create an environment of continuous service level improvement and increased business competitiveness.

Leveraging SSD (Solid-State Drive) Technology

Our company recently invested in SSD (solid-state drive) arrays for our database servers, which allowed us to improve the speed of our services. As you likely know, it’s challenging to balance cost, reliability, speed and storage requirements for a business. While SSDs remain much more expensive than a performance hard disk drive of the same size (up to 8 times more expensive according to a recent EMC study), in our case, the performance throughput far outweighed the costs.

Considerations before investing in SSD


As we researched our database server upgrade options, we wanted to make sure that our investment would yield both speed and reliability. Below are a couple of considerations when moving from traditional HDDs to SSDs:

  • Reliability: SSDs have proven to be a reliable business storage solution, but transistors, capacitors, and other physical components can still fail. Firmware can also fail, and wayward electrons can cause real problems. As a whole, HDDs tend to fail more gracefully in that there may be more warning than a suddenly failed SSD. Fortunately, Enterprise SSDs are typically rated at twice the MTBF (mean-time-between-failures) compared to consumer SSDs, a reliability improvement that comes at an additional cost.
  • Application: SSDs may be overkill for many workloads. For example, file and print servers would certainly benefit from the superior I/O of an SSD storage array, but is it worth the cost? Would it make enough of a difference to justify the investment? On the other hand, utilizing that I/O performance for a customer-facing application or service would be most advantageous and likely yield a higher ROI. In our case, using SSDs for data validation databases is a suitable application that can make a real difference to our customers.

How SSDs have improved our services

Our data validation services rely on database queries to generate validation output. These database queries are purely read-only and benefit from the fastest possible access time and latency — both of which have been realized since moving our data validation databases to SSD.

SSDs eliminate the disk I/O bottleneck, resulting in significantly faster data validation results. A modern SSD boasts random data access times of 0.1 milliseconds or less, whereas a mechanical HDD takes approximately 10-12 milliseconds or more. This is the time it takes simply to locate the data that needs to be validated, making SSDs roughly 100 times faster than HDDs at random access. By eliminating the disk I/O bottleneck, our data validation services can take full advantage of the superior QPI/HT systems used by modern CPU and memory architectures.
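The arithmetic behind that difference is straightforward; the batch size below is a hypothetical example, not a measurement from our systems:

```python
def total_seek_time_seconds(lookups: int, access_ms: float) -> float:
    """Time spent just locating data for a batch of random reads."""
    return lookups * access_ms / 1000

batch = 100_000  # random-access lookups in a hypothetical validation batch
hdd = total_seek_time_seconds(batch, 10.0)  # ~10 ms per mechanical seek
ssd = total_seek_time_seconds(batch, 0.1)   # ~0.1 ms per SSD access
print(hdd, ssd)  # the SSD spends ~100x less time locating data
```

For this batch the HDD spends over 16 minutes on seeks alone, versus 10 seconds for the SSD, before a single byte of validation data is even read.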

Celebrate Data Privacy Day with 4 Insider Tricks to Help Manage Your Data Security

Here’s a list of tricks you can use to help keep identity thieves from stealing your personal data, without reading the 48-page fine print that shows up with every smartphone OS upgrade.

1. Protect the “Fab 4” with Obfuscation:

Opening a credit line generally requires just four things: name (last, first, middle initial), DOB, SSN and address. Safeguarding these is paramount. They can be obfuscated – made deliberately unclear – which is what you want whenever they must appear in public.

Of course, your name is hard to hide, but nicknames or shortened unofficial versions are good to consider – for example, using one for eBay shipping purchases and another for Amazon, and so on.

With your DOB, try to refrain from showing your birthday online, including on Facebook. If you must, then change your birth date to a different day than the one on file with the credit agencies. It’s OK if your Facebook friends wish you a happy birthday three days early.

Don’t give out your Social Security Number except when absolutely necessary. Many companies and forms ask for it because it is an easy identifier, when in fact it is seldom required by law. So you can ‘accidentally’ type yours in with the last two digits set to your birth year.

2. The Unique Address Trick

This is how you find out who’s selling you out. When you sign up for a frequent flier program, insurance, a credit card, rewards programs, a Girl Scouts cookie order form, etc., create a unique identifier in the second line of your address. For example:

John Wayne
123 Bourbon Street
Attn Delta-FreqFlierPrgm
New Orleans, LA 70116

The USPS doesn’t care what you put in that line. In fact, the USPS doesn’t even recognize a second address line as part of a properly formatted address. It is meant simply for personal sorting after the mail arrives, so when you get the Geico or Capital One offer in the mail, you’ll know who sold them your address, because it will be right there on the Attn: line.

Hint: you can do the same thing with Gmail by appending a + tag to your username.
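A minimal sketch of the Gmail version of the trick (so-called plus-addressing); the address shown is made up:

```python
def tag_address(email: str, tag: str) -> str:
    """Add a sorting tag to a Gmail-style address using '+'.
    Mail still reaches the base inbox; the tag reveals who leaked the address."""
    local, domain = email.split("@")
    return f"{local}+{tag}@{domain}"

print(tag_address("john.wayne@gmail.com", "deltafreqflier"))
```

Gmail delivers mail for the tagged address to the base inbox, so you can hand out a different tag to each service and filter (or blame) accordingly. Note that some sign-up forms reject the + character.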

3. Tiered Passwords

It’s hard to remember a different password for every website, so create levels of passwords, or incorporate the name of the site to make the password unique to each site. You can keep 3-4 different passwords of increasing complexity, using the most complex one on the most sensitive sites, like online banking.

Most Complex: Banks, Credit Cards, Paypal, AND the email accounts that are associated with them for password resets.
Complex: Online ordering platforms with stored credit cards (Amazon, Ebay, airlines etc.)
Less Complex: Facebook, Twitter, LinkedIn, etc. Sites of importance but easily fixed without monetary loss.
Least Complex: Online trials, rewards programs and sweepstakes, Starbucks app, and the like.

*Be sure to change all passwords every few months while keeping the underlying tiers of complexity.

4. Revamp Password Challenge Questions

If you’re worth it, a criminal can likely figure out your mother’s maiden name through public records and genealogy sites. As for your first car, based on your date of birth plus 15 years, one can probably narrow the field down to about 40 models. So take the opportunity to use those challenge questions and come up with something harder to figure out. For example, change “Ford Escort” to something like “RedandWhiteFordEscort.”

Remember, it may be easier for a thief to hack your email address and then request a password reset with your bank, so keep that secure too!

Today, Service Objects is reflecting on our data security, and we hope you do too. We are proud to be one of several hundred organizations collaborating to generate awareness about the importance of respecting privacy, safeguarding data, and enabling trust.


Tips for Referencing a Web Service from Behind a Firewall

It’s not unusual for network administrators to lock down their server environments for security reasons and restrict inbound and outbound network activity. Basically, nothing can come in or go out without permission. As such, if your application requires an HTTP connection to call an external web service, then your network admin will most likely need to create a firewall rule to allow access to the service provider so that communication between your application and the web service may occur.

Most firewall rules are created to whitelist ports on a specific IP address. While opening up a port for a particular IP address will allow communication between the two endpoints to occur, most RESTful web services will make use of several IP addresses that point to geographically different data centers to help ensure maximum uptime and availability. So if your service provider has multiple IP addresses available then be sure to whitelist all of them in your firewall. Not only should you include all available IP addresses in your firewall rules, but you also need to make sure that your application utilizes proper failover code to use another IP address in the event that one responds slowly or becomes unavailable.

It is also recommended that you never hardcode a reference endpoint such as a domain or IP address. In the event of an unexpected network-related failure, a hardcoded endpoint will leave you vulnerable, with no choice but to update your code. Depending on the complexity of your code and your deployment procedure, this could lead to more downtime than necessary. Instead, it is better practice to save your service endpoints in an editable configuration location, such as a database or config file. Using an easily accessible, editable location means you can quickly switch to another service endpoint if the primary endpoint becomes unavailable.

Depending on how your failover code is written, using an external configuration location can also save your application from attempting requests to an unresponsive location. If your application always attempts a call to the primary location before failing over, then it must wait for the primary location to fail before calling the secondary location. Most default timeouts are around 30 seconds, so your application may be forced to wait 30 seconds before switching to a secondary location. With an editable configuration source, you can easily swap out the bad location for a good one and save your application from future failures.
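The pattern described above can be sketched as follows. The endpoint URLs and configuration shape are hypothetical, and the HTTP call itself is injected so you can plug in whatever client and timeout settings you actually use:

```python
import json

# Hypothetical config file contents: an ordered list of service endpoints,
# primary first, loaded from an editable location rather than hardcoded
CONFIG = json.loads(
    '{"endpoints": ["https://primary.example.com", "https://backup.example.com"]}'
)

def call_with_failover(endpoints, fetch):
    """Try each endpoint in order; 'fetch' is whatever HTTP call you use
    (urllib, requests, ...) configured with a short timeout."""
    last_error = None
    for url in endpoints:
        try:
            return fetch(url)
        except Exception as exc:  # in practice, narrow to timeout/connection errors
            last_error = exc
    raise RuntimeError("All endpoints failed") from last_error
```

Because the endpoint list lives in configuration, removing a downed data center is a config edit, not a code change and redeployment.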

Overall, here are some basic tips for referencing a web service from a production application:

  • Do not hardcode your reference endpoints.
  • Do not reference by an IP address unless you are restricted behind a firewall. Otherwise always use the fully qualified domain name.
  • If you are behind a restricted firewall then be sure to include all IP Address endpoints if more than one is available.
  • Be sure to include failover code to make use of the available endpoints in the event that one or more may become unavailable.

Follow the above tips to help take full advantage of what your RESTful service provider has to offer and to also help ensure that you are doing everything you can to keep your application running smoothly.

How To Achieve An SLA Of 99.995% Uptime

Any business offering Web services needs to be concerned with uptime. After all, if the service goes down, it becomes unusable. Here at Service Objects, that’s unthinkable! We’ve built resiliency and data security into our systems to ensure the integrity and reliability of our Web services. Here’s a peek behind the scenes:


Multiple data centers provide redundancy by using redundant components, systems, subsystems, or facilities to counter inevitable failures or disruptions. Hardware WILL fail eventually, but should that happen, the redundant element will take over and continue supporting services to the user base. Users of a resilient system may never know that a disruption has occurred.

We house our servers and devices in professional data centers. This allows us to access economies of scale, advanced infrastructure, greater bandwidth, lower latency, and specialist services and systems. It also delivers a high level of 24/7 security, redundancy, and a whole host of additional advantages.

For example, our servers operate in a virtualized environment, each utilizing multiple power supplies and redundant storage arrays. Our firewalls and load-balancing appliances are configured in pairs, leveraging proven high-availability protocols that allow for instantaneous failover. Internet connections are configured using the HSRP redundancy protocol, which ensures there is no single point of failure that could render services unavailable.


Compliance is an important benefit of professional data centers. In today’s business climate, data often falls under government or industry protection and retention regulations such as SSAE 16 standards, the Health Insurance Portability and Accountability Act, and the Payment Card Industry Data Security Standard. Compliance is challenging without dedicated staff and resources.

With the third party data center model, you can take advantage of the data center’s existing compliance and audit capabilities without having to invest in technology, dedicated staff, or training.

Geographic load balancing

Another key factor for ensuring uptime has to do with geographic load balancing and fail-over design. Geographic load balancing involves directing web traffic to different servers or data centers based on users’ geographic locations. This can optimize performance, allow for the delivery of custom content to users in a specific region, or provide additional fail-over capabilities.

Using geographic load balancing also reduces latency by routing requests to the closest data center. For example, a customer operating in Los Angeles, California would be routed to a San Jose, California data center, while a customer in Miami, Florida would be routed to a data center located in New Jersey.
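In simplified form, geographic routing is just a mapping from the requester’s location to the nearest data center. The hostnames and region assignments below are illustrative only; real services typically implement this at the DNS or load-balancer layer:

```python
# Hypothetical region-to-data-center map
DATA_CENTERS = {"west": "sanjose.example.com", "east": "newark.example.com"}
WEST_STATES = {"CA", "OR", "WA", "NV", "AZ", "ID", "UT"}

def nearest_data_center(state: str) -> str:
    """Route a request to the geographically closer data center."""
    region = "west" if state in WEST_STATES else "east"
    return DATA_CENTERS[region]

print(nearest_data_center("CA"), nearest_data_center("FL"))
```

The Los Angeles customer lands on the San Jose data center and the Miami customer on the New Jersey one, mirroring the example above; either data center can still serve all traffic if the other fails.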

Data security and management

Feel secure knowing that your data is protected and safe within the walls of a continually monitored data center. We’ve invested in “bank grade” security. Several of our data centers are guarded by five layers of security, including retinal scanners.

All systems are constantly monitored and actively managed by our data center providers — both from a data security and a performance perspective. In addition, we operate our own in-house alerting and monitoring suites.

Ensuring a high level of uptime comes down to: redundancy and resiliency, compliance, geographic load balancing, great data security, and 24/7 monitoring. All of these factors are equally important and contribute to our 99.995 percent uptime results — guaranteed!