Service Objects’ Blog

Thoughts on Data Quality and Contact Validation

Posts Tagged ‘Data Security’

How secure is your ‘Data at Rest’?

In a world where millions of customer and contact records are commonly stolen, how do you keep your data safe? First, lock the door to your office. Now you’re good, right? Oh wait, you are still connected to the internet. Disconnect from the internet. Now you’re good, right? What if someone sneaks into the office and accesses your computer? Unplug your computer completely. You know what, while you are at it, pack your computer into some plain boxes to disguise it. Oh wait, this is getting crazy: it’s not very practical, and it’s still only somewhat secure.

The point is, as we try to determine what kind of security we need, we also have to find a balance between functionality and security. A lot of this depends on the type of data we are trying to protect. Is it financial, healthcare, or government related, or is it personal, like pictures from the last family camping trip? Each of these has different requirements, and many of those requirements come from our clients. As a company dealing with such diverse clientele, Service Objects needs to be ready to handle data and keep it as secure as possible in all the different states that digital data can exist in.

So what are the states that digital data can exist in? There are several, and understanding them is an important part of determining a data security strategy. For the most part, data exists in three states: Data in Motion/Transit, Data at Rest/Endpoint, and Data in Use, defined as follows:

Data in Motion/transit

“…meaning it moves through the network to the outside world via email, instant messaging, peer-to-peer (P2P), FTP, or other communication mechanisms.” – http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf

Data at Rest/Endpoint

“data at rest, meaning it resides in files systems, distributed desktops and large centralized data stores, databases, or other storage centers” – http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf

“data at the endpoint, meaning it resides at network endpoints such as laptops, USB devices, external drives, CD/DVDs, archived tapes, MP3 players, iPhones, or other highly mobile devices” – http://csrc.nist.gov/groups/SNS/rbac/documents/data-loss.pdf

Data in Use

“Data in use is an information technology term referring to active data which is stored in a non-persistent digital state typically in computer random access memory (RAM), CPU caches, or CPU registers. Data in use is used as a complement to the terms data in transit and data at rest which together define the three states of digital data.” – https://en.wikipedia.org/wiki/Data_in_use

How Service Objects balances functionality and security with respect to our clients’ data at rest in our automated batch processing is the focus of this discussion. Our automated batch process consists of this basic flow (a rough sketch of the retention automation follows the list):

  • Our client transfers a file to a file structure in our systems using our secure FTP. [This is an example of Data in Motion/Transit]
  • The file waits momentarily before an automated process picks it up. [This is an example of Data at Rest]
  • Once our system detects a new file: [The data is now in the state of Data in Use]
    • It opens and processes the file.
    • The results are written into an output file and saved to our secure FTP location.
  • Input and output files remain in the secure FTP location until the client retrieves them. [Data at Rest]
  • The client retrieves the output file. [Data in Motion/Transit]
    • The client can immediately delete all, some, or none of the files, as their needs dictate.
  • Five days after processing, if any files remain, the automated system encrypts them (minimum 256-bit encryption) and moves them off of the secure FTP location to another secure location. No non-encrypted version remains. [Data at Rest and Data in Motion/Transit]
    • This delay gives clients time to retrieve the results.
  • 30 days after processing, the encrypted version is completely purged.
    • This provides a last chance, in the event of an error or emergency, to retrieve the data.
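To make the retention schedule above concrete, here is a minimal sketch of how such an automated pass could be implemented in Python. This is illustrative only, not our production code: the directory paths and the encrypt_and_archive helper are hypothetical placeholders, and the thresholds match the default five- and 30-day rules described in this post.

    import time
    from pathlib import Path

    # Hypothetical locations; real paths and tooling will differ.
    SFTP_DIR = Path("/sftp/client_files")           # where input/output files wait for pickup
    ARCHIVE_DIR = Path("/secure/encrypted_archive")

    ENCRYPT_AFTER_DAYS = 5    # grace period for clients to retrieve results
    PURGE_AFTER_DAYS = 30     # last-chance window for the encrypted copies

    def age_in_days(path: Path) -> float:
        return (time.time() - path.stat().st_mtime) / 86400

    def encrypt_and_archive(path: Path, dest_dir: Path) -> None:
        """Placeholder for strong (256-bit or better) file encryption plus a move
        to the secure archive; the plaintext original is removed afterward."""
        raise NotImplementedError

    def run_retention_pass() -> None:
        # 1. Encrypt and move anything older than the grace period.
        for f in SFTP_DIR.iterdir():
            if f.is_file() and age_in_days(f) >= ENCRYPT_AFTER_DAYS:
                encrypt_and_archive(f, ARCHIVE_DIR)

        # 2. Purge encrypted copies once the last-chance window has passed.
        for f in ARCHIVE_DIR.iterdir():
            if f.is_file() and age_in_days(f) >= PURGE_AFTER_DAYS:
                f.unlink()

    if __name__ == "__main__":
        run_retention_pass()  # typically run on a schedule, e.g. a daily cron job

The grace periods are constants here, but as described below, they can be adjusted per client.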

We encrypt files five days after processing, but what is the strategy for keeping the files secure before that five-day window expires? First off, we determined that the five- and 30-day rules struck the best balance between functionality and security. But we also added flexibility to this.

If clients always picked up their files as soon as they were completed, we really wouldn’t need to think much about security while the files sat on the secure FTP. But this is real life: people get busy, take long weekends, go on vacation, or simply forget. Whatever the reason, Service Objects can’t immediately encrypt and move the data; if we did, clients would become frustrated trying to coordinate the retrieval of their data. So we built in the five- and 30-day rules, but we also added the ability to change these grace periods and customize them to our clients’ needs. This doesn’t prevent anyone from purging their data sooner than any predefined threshold, and in fact, we encourage it.

When we are setting up the automated batch process for a client, we look at the type of data coming in, and if appropriate, we suggest to the client that they may want to send the file to us encrypted. For many companies this is standard practice.  Whenever we see any data that could be deemed sensitive, we let our client know.

When it is established that files need to be encrypted at rest, we use industry-standard encryption and decryption methods. When a file comes in and processing begins, the data is now in use, so the file is decrypted. After processing, any decrypted file is purged, and what remains is the encrypted version of the input and output files.

Not all clients are concerned about or require this level of security, but Service Objects treats all data the same: with the utmost care and the highest reasonable level of security. We simply take no chances and always encourage strong data security.

The 2018 European Data Protection Regulation – Is Your Organization Prepared?

The General Data Protection Regulation (GDPR) is a regulation intended to strengthen and unify data protection for all individuals within the European Union (EU). It also addresses the export of personal data outside the EU. The primary objectives of the GDPR are to give citizens and residents back control of their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.

According to research firm Gartner, Inc., this regulation will have a global impact when it goes into effect on May 25, 2018.  Gartner predicts that by the end of 2018, more than 50 percent of companies affected by the GDPR will not be in full compliance with its requirements.

To avoid being part of the 50 percent that may not be in compliance one year from now, organizations should start planning today. Gartner recommends organizations focus on five high-priority changes to get up to speed:

    1. Determine Your Role Under the GDPR
      Any organization that decides why and how personal data is processed is essentially a “data controller.” The GDPR therefore applies not only to businesses in the European Union, but also to all organizations outside the EU that process personal data to offer goods and services to the EU, or that monitor the behavior of data subjects within the EU.
    2. Appoint a Data Protection Officer
      Many organizations are required to appoint a data protection officer (DPO). This is especially important when the organization is a public body, is processing operations requiring regular and systematic monitoring, or has large-scale processing activities.
    3. Demonstrate Accountability in All Processing Activities
      Very few organizations have identified every single process where personal data is involved. Going forward, purpose limitation, data quality and data relevance should be decided when starting a new processing activity, as this will help maintain compliance in future personal data processing activities. Organizations must demonstrate accountability and transparency in all decisions regarding personal data processing activities. It is important to note that accountability under the GDPR requires proper data subject consent acquisition and registration. Prechecked boxes and implied consent will be largely a thing of the past.
    4. Check Cross-Border Data Flows
      As of today, data transfers to any of the 28 EU member states, as well as 11 other countries, are still allowed, although the consequences of Brexit are still unknown. Outside of the EU, organizations processing personal data on EU residents should select the appropriate mechanism to ensure compliance with the GDPR.
    5. Prepare for Data Subjects Exercising Their Rights
      Data subjects have extended rights under the GDPR, including the right to be forgotten, to data portability and to be informed (e.g., in case of a data breach).

Poor quality data has several impacts on an organization and could hinder your compliance efforts. Visit Service Objects’ website to see how our global data quality solutions can help you ensure your contact data is as genuine, accurate and up-to-date as possible.

Maintaining a Good Email Sender Reputation

What are Honeypot Email Addresses?

A honeypot is a type of spamtrap. It is an email address that is created with the intention of identifying potential spammers. The email address is often hidden from human eyes and is generally only detectable to web crawlers. The address is never used to send out email and is for the most part hidden, so it should never receive any legitimate email. This means that any email it receives is unsolicited and is considered to be spam. Consequently, any user who continues to submit email to a honeypot will likely have their email, IP address and domain flagged as spam. It is highly recommended to never send email to a honeypot; otherwise, you risk ruining your email sender reputation and may end up on a blacklist.

Spamtraps typically show up in lists where the email addresses were gathered from web crawlers. In general, these types of lists cannot be trusted and should be avoided as they are often of low quality.

Service Objects participates in and uses several “White Hat” communities and services, some of which are focused on identifying spamtraps. We use these resources to help identify known and active spamtraps. It is common practice for a spamtrap to be hidden from human eyes and only be visible in the page source where a bot would be able to scrape it, but it is important to note that not all emails from a page scrape are honeypot spamtraps. A false positive could unfortunately lead to an unwarranted email rejection. Many legitimate emails are unfortunately exposed on business sites, job profiles, Twitter, business listings and other random pages. So it is not uncommon to see a legitimate email get marked as a potential spamtrap by a competitor.

 

Not all Spamtraps are Honeypots

While the honeypot may be the most commonly known type of spamtrap, it is not the only type around. Some of you may not be old enough to remember, but there was a time when businesses would configure their mail servers to accept any email address, even if the mailbox did not exist, for fear that a message would be lost due to a typo or misspelling. Messages to non-existent email addresses would be delivered to a catch-all box as long as the domain was correctly spelled. However, it did not take long for these mailboxes to become flooded with spam. As a result, some mail server administrators started to use catch-alls as a way to identify potential spammers. A mail server admin could treat the sender of any mail that ended up in this folder as a spammer and block them, the reasoning being that only spammers, and no legitimate senders, would end up in the catch-all box. This made catch-alls one of the first spamtraps. The reasoning is flawed but still in practice today. Nowadays it is more common for admins to use firewalls that act as catch-alls to try to catch and prevent spammers.

Some spamtraps can be created and hidden in the source code of a website so that only a crawler would pick them up; some can be created from recycled email addresses or created specifically with the intention of planting them in mailing lists. Regardless of how a spamtrap is created, it is clear that if you have one in your mailing list and you continue to send mail to it, you risk ruining your sender reputation.

Keeping Senders Honest

The reality is that not all honeypot spamtraps can be identified with 100% certainty. Doing so would greatly diminish their value in keeping legitimate email senders honest.

It is very important that a sender or marketer follows their regional laws and best practices, such as tracking which emails are received, opened or bounced back. For example, some legitimate emails can still result in a hard or permanent bounce back. This may happen when an email is an alias or role that is connected to a group of users. In these cases, the email itself is not rejected, but one of the emails within the group is. This brings up another point: role-based email addresses are often not eligible for solicitation, since they are commonly tied to positions rather than to any one particular person who would have opted in. That is why the DOTS Email Validation service also has a flag for identifying potential role-based addresses.

Overall, it is up to the sender or marketer to ensure that they keep track of their mailing lists and that they always follow best practices. They should never purchase unqualified lists, and they should only be soliciting users who have opted in. If an email address is bouncing back with a permanent rejection, then they should remove it from the mailing list. If the email address that is being bounced back is not in your mailing list, then it is likely connected to a role- or group-based email that should also be removed.

To stay on top of potential spamtraps, marketers should also keep track of subscriber engagement. If a subscriber has never engaged or is no longer engaged but email messages are not bouncing back, then it is possible that the email is a spamtrap. If an email address was bouncing back before but is not anymore, then it may have been recycled as a spamtrap.
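As a rough sketch of the kind of list hygiene described above (the data structures and role-prefix list are hypothetical, not any specific product’s API), the logic might look like this in Python:

    from dataclasses import dataclass

    # Common role-based prefixes that usually have no single opted-in owner.
    ROLE_PREFIXES = {"admin", "info", "sales", "support", "postmaster", "webmaster"}

    @dataclass
    class Subscriber:
        email: str
        hard_bounced: bool   # permanent rejection seen on a previous send
        ever_engaged: bool   # has opened or clicked at least once
        opted_in: bool       # explicit consent on record

    def is_role_address(email: str) -> bool:
        return email.split("@", 1)[0].lower() in ROLE_PREFIXES

    def clean_list(subscribers):
        keep, review = [], []
        for s in subscribers:
            if s.hard_bounced or is_role_address(s.email) or not s.opted_in:
                continue              # drop: bounced, role-based, or no consent
            if not s.ever_engaged:
                review.append(s)      # possible spamtrap: never engaged, never bounced
            else:
                keep.append(s)
        return keep, review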

Remember that by following the laws and best practices of your region you greatly reduce the risk of ruining your sender reputation, which will help ensure that your marketing campaigns reach as many subscribers as possible.

We Won’t Let Storm Stella Affect Your Data Quality

A macro-scale cyclone referred to as a Nor’easter is forecast to develop along the East Coast starting tonight and expected to continue throughout Tuesday. In addition to typical storm preparations, have you ensured your data is also ready for Storm Stella?

Although we cannot assist you directly with storm preparations (water bottles, canned foods, batteries, candles, backup generators, blankets, etc.), we will always ensure the integrity and reliability of our Web services. Since 2001, we’ve been committed to providing a high level of uptime during all types of conditions, including storms, even Nor’easters. It all comes down to redundancy, resiliency, compliance, geographic load balancing, strong data security, and 24/7 monitoring, all of which contribute to 99.999% availability across our service offerings, backed by one of the industry’s only financially backed service level agreements. We take great pride in our system performance and are the only public web-service provider confident enough to openly publish our performance reports.

To ensure you are fully prepared for this storm in particular, it is important to note that our primary and backup data centers are in separate geographic locations. If an emergency occurs, you can re-point your application from our production data center to our backup data center.

The failover data center is designed to increase the availability of our web services in the event of a network or routing issue. Our primary data center hostname is ws.serviceobjects.com, and our backup data center hostname is wsbackup.serviceobjects.com.

You can also abstract the actual hostname into a configuration file, in order to simplify the process of changing hostnames in an emergency. Even in the case where your application handles failover logic properly, an easy-to-change hostname would allow your application to bypass a downed data center completely, and process transactions more quickly.

For most clients, simply updating their application to use our backup data center hostname should immediately restore connectivity. Your existing DOTS license key is already permitted to use our backup data center, and no further action should be needed.
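If you want your application to handle the switch automatically, a minimal failover sketch in Python might look like the following. The hostnames are the ones listed above; the request path and parameter names are placeholders, since the exact operation and inputs depend on which DOTS service you are calling.

    import requests

    # Keep these in a config file rather than hardcoding them in your application.
    ENDPOINTS = [
        "https://ws.serviceobjects.com",        # primary data center
        "https://wsbackup.serviceobjects.com",  # backup data center
    ]

    def call_with_failover(path, params, timeout=5.0):
        last_error = None
        for host in ENDPOINTS:
            try:
                resp = requests.get(f"{host}/{path.lstrip('/')}", params=params, timeout=timeout)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as err:
                last_error = err        # try the next data center
        raise RuntimeError(f"All data centers unreachable: {last_error}")

    # Hypothetical usage; the real path and parameter names come from your service's documentation.
    # result = call_with_failover("SomeService/api.svc/JSON/SomeOperation",
    #                             {"LicenseKey": "YOUR_KEY", "Input": "value"})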

Many of our clients with mission critical business applications take this action of configuring for failover in their application. We are available 24/7 to help with best practices and recommendations if you need any assistance before, during or after the storm!

The Importance of Encryption

The information age has brought with it both convenience and risk. Consumers, for example, love the convenience of shopping online, yet they certainly don’t want their personal and sensitive information (like credit card numbers) to be revealed to unauthorized parties. Businesses have the responsibility, and in many cases, the legal obligation, to mitigate this risk and protect sensitive information from prying eyes. This is largely done through encryption.

What is Encryption?

In simple terms, encryption is the process of taking human readable information and translating it into an unreadable form. The information is protected by an encryption algorithm that can only be translated back into a human readable form by authorized parties.

As a consumer, you’ve likely encountered basic HTTPS encryption while doing business online. You know to look for HTTPS (instead of HTTP) and the padlock symbol in the address bar. With HTTPS encryption, the website and web server have been authenticated and a secure, two-way connection has been established. Transactions made using HTTPS encryption are shielded from man-in-the-middle attacks, tampering, and eavesdropping.

Encryption typically uses “keys” to unlock the data. For example, with symmetric key encryption, the sender and receiver use a common key known only to them to decrypt the data. Thus, if a cybercriminal were to intercept the information, the payload would be gibberish. Since the cybercriminal doesn’t have the means to decrypt the data, it’s safe and sound despite the breach.
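To make the idea concrete, here is a minimal sketch of symmetric encryption in Python using the widely used cryptography package. The library choice and the sample data are purely illustrative; the point is simply that both parties share one key, and without it the intercepted payload is unreadable.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Shared secret known only to sender and receiver (32 bytes = AES-256).
    key = AESGCM.generate_key(bit_length=256)

    def encrypt(plaintext: bytes, key: bytes) -> bytes:
        nonce = os.urandom(12)                        # unique per message
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt(blob: bytes, key: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    secret = b"4111 1111 1111 1111"                   # e.g., a credit card number
    blob = encrypt(secret, key)                       # gibberish to anyone without the key
    assert decrypt(blob, key) == secret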

Why Should You Care?

Sensitive client data should be handled with the utmost care. This means that the companies that handle the sensitive client data should be well informed about the best security practices, including end-to-end encryption.

Service Objects offers specialized services focused on data validation. The data that is sent to our services for validation usually comes from our clients’ customers. For example, let’s imagine a fictional Service Objects client called Medical Insurance Inc., a medical insurance company. As Medical Insurance Inc. collects information on their customers, prospects, and leads, they want to confirm that the data is valid. In order to validate the data, they must send the sensitive information over to one of Service Objects’ web services. If Medical Insurance Inc. doesn’t use encryption, the data being transferred is at risk of being snooped on by a malicious third party. A simple man-in-the-middle attack could allow direct access to sensitive information that should not be exposed to anyone outside of Medical Insurance Inc. The risk of exposing sensitive data can be easily negated by any of the following recommended best practices.

What Do We Currently Support/Recommend Using?

We currently support Pretty Good Privacy (PGP) encryption on incoming and outgoing list processing orders. End-to-end encryption is made possible by PGP’s hybrid-type cryptography, which uses a blend of private and public key encryption to help ensure your data is not exposed to anyone but the authorized parties.
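As an illustration only (not a description of our exact tooling), a client could encrypt a list before uploading it with GnuPG, for example through the python-gnupg wrapper. The recipient ID and file names below are hypothetical; in practice you would use the public key provided for your list processing order.

    import gnupg

    gpg = gnupg.GPG()  # assumes GnuPG is installed and the recipient's public key is imported

    RECIPIENT = "listprocessing@example.com"  # hypothetical key ID / email of the receiving party

    with open("contacts.csv", "rb") as f:
        result = gpg.encrypt_file(f, recipients=[RECIPIENT], output="contacts.csv.gpg")

    if not result.ok:
        raise RuntimeError(f"PGP encryption failed: {result.status}")

    # contacts.csv.gpg can now be transferred over secure FTP; only the holder of the
    # matching private key can decrypt it.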

For standard API calls, we highly recommend using the HTTPS protocol. Over HTTPS, the connection to the site will be encrypted and authenticated using a strong protocol (SSL/TLS), a strong key exchange (RSA), and a strong cipher (AES-256). By using HTTPS to make your web service calls, you can rest assured that any sensitive client data is well guarded.

3 Things to Consider When Signing a Cloud Computing Contract

Cloud computing entails a paradigm shift from in-house processing and storage of data to a model where data travels over the Internet to and from one or more externally located and managed data centers.

It is typically recommended that a Cloud Computing Contract:

  • Codifies the specific parameters and minimum levels required for each element of the service you are signing up for, as well as remedies for failure to meet those requirements.
  • Affirms your institution’s ownership of its data stored on the service provider’s system, and specifies your rights to get it back.
  • Details the system infrastructure and security standards to be maintained by the service provider, along with your rights to audit their compliance.
  • Specifies your rights and cost to continue and discontinue using the service.

In addition to the basic elements of the Contract listed above, here are three important points to consider before signing your Cloud Computing Contract.

1. Infrastructure & Security

The virtual nature of cloud computing makes it easy to forget that the service is dependent upon a physical data center. Not all cloud computing vendors are created equal. You should verify the specific infrastructure and security obligations and practices (business continuity, encryption, firewalls, physical security, etc.) that a vendor claims to have in place and codify them in the contract.

2. Disaster Recovery & Business Continuity

To protect your institution, the contract should state the provider’s minimum disaster recovery and business continuity mechanisms, processes, and responsibilities to provide the ongoing level of uninterrupted service required.

3. Data Processing & Storage

  • Ownership of data: Since an institution’s data will reside on a cloud computing company’s infrastructure, it is important that the contract clearly affirm the institution’s ownership of that data.
  • Disposition of data: To avoid vendor lock-in, it is important for an institution to know in advance how it will switch to a different solution once the relationship with the existing cloud computing service provider ends.
  • Data breaches: The contract should cover the cloud service provider’s obligations in the event that the institution’s data is accessed inappropriately. The repercussions of such a data breach vary according to the type of data, so know what type of data you’ll be storing in the cloud before negotiating this clause. Of equal importance to the breach notification process, the service provider should be contractually obligated to provide indemnification should the institution’s data be accessed inappropriately.
  • Location of data: A variety of legal issues can arise if an institution’s data resides in a cloud computing provider’s data center in another country. Different countries, and in some cases even different states, have different laws pertaining to data. One of the key questions with cloud computing is: which law applies to my institution’s data, the law where I’m located or the law where my data is located?
  • Legal/Government requests for access to data: The contract should specify the cloud provider’s obligations to an institution should any of the institution’s data become the subject of a subpoena or other legal or governmental request for access.

The Cloud Computing Contract is for the benefit of both the consumer and the provider. While it can be highly technical and detailed, the Contract will ultimately establish the partnership between the parties, and following these steps should help mitigate any potential problems.

8 Tips to Build a Successful Service Level Agreement

A Service Level Agreement (SLA) makes use of the knowledge of enterprise capacity demands, peak periods, and standard usage baselines to compose the enforceable and measurable outsourcing agreement between vendor and client. As such, an effective SLA will reflect goals for greater performance and capacity, productivity, flexibility, availability, and standardization.

The SLA should set the stage for meeting or surpassing business and technology service levels while identifying any gaps currently being experienced in the achievement of service levels.

SLAs capture the business objectives and define how success will be measured, and are ideally structured to evolve with the customer’s foreseeable needs. The right approach results in agreements that are distinguished by clear, simple language and a tight focus on business objectives, and that consider the dynamic nature of the business to ensure evolving needs will be met.

1. Both the Client and Vendor Must Structure the SLA

Structuring an SLA is an important, multiple-step process involving both the client and the vendor. In order to successfully meet business objectives, SLA best practices dictate that the vendor and client collaborate to conduct a detailed assessment of the client’s existing applications suite, new IT initiatives, internal processes, and currently delivered baseline service levels.

2. Analyze Technical Goals & Constraints

Start by brainstorming or researching technical goals and requirements. Technical goals include availability levels, throughput, jitter, delay, response time, scalability requirements, new feature introductions, new application introductions, security, manageability, and even cost. Then prioritize the goals, or lower expectations to levels that still meet business requirements.

For example, you might have an availability level of 99.999% or 5 minutes of downtime per year. There are numerous constraints to achieving this goal, such as single points of failure in hardware, mean time to repair (MTTR), broken hardware in remote locations, carrier reliability, proactive fault detection capabilities, high change rates, and current network capacity limitations. As a result, you may adjust the goal to a more achievable level.
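For concreteness, here is a quick back-of-the-envelope calculation using standard availability arithmetic: the downtime allowed by a given availability target, and the end-to-end availability when several components must all be up (their availabilities multiply). The component figures in the example are made up.

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    def downtime_minutes_per_year(availability: float) -> float:
        return (1.0 - availability) * MINUTES_PER_YEAR

    print(downtime_minutes_per_year(0.99999))  # ~5.3 minutes/year for "five nines"
    print(downtime_minutes_per_year(0.999))    # ~525.6 minutes (~8.8 hours)/year

    def serial_availability(*components: float) -> float:
        """End-to-end availability when every component in the chain must be up."""
        result = 1.0
        for a in components:
            result *= a
        return result

    # e.g., a 99.99% WAN link, a 99.95% router, and a 99.9% server in series:
    print(serial_availability(0.9999, 0.9995, 0.999))  # ~0.9984, i.e. roughly 14 hours of downtime/year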

3. Determine the Availability Budget

An availability budget is the expected theoretical availability of the network between two defined points. Accurate theoretical information is useful in several ways, including:

  • The organization can use this as a goal for internal availability and deviations can be quickly defined and remedied.
  • The information can be used by network planners in determining the availability of the system to help ensure the design will meet business requirements.

Factors that contribute to non-availability or outage time include hardware failure, software failure, power and environmental issues, link or carrier failure, network design, human error, or lack of process. You should closely evaluate each of these parameters when evaluating the overall availability budget for the network.

4. Application Profiles

Application profiles help the networking organization understand and define network service level requirements for individual applications. This helps to ensure that the network supports individual application requirements and network services overall.

Business applications may include e-mail, file transfer, Web browsing, medical imaging, or manufacturing. System applications may include software distribution, user authentication, network backup, and network management.

The goal of the application profile is to understand business requirements for the application, business criticality, and network requirements such as bandwidth, delay, and jitter. In addition, the networking organization should understand the impact of network downtime.

5. Availability and Performance Standards

Availability and performance standards set the service expectations for the organization. These may be defined for different areas of the network or specific applications. Performance may also be defined in terms of round-trip delay, jitter, maximum throughput, bandwidth commitments, and overall scalability. In addition to setting the service expectations, the organization should also take care to define each of the service standards so that user and IT groups working with networking fully understand the service standard and how it relates to their application or server administration requirements.

6. Metrics and Monitoring

Service level definitions by themselves are worthless unless the organization collects metrics and monitors success. Measuring the service level determines whether the organization is meeting objectives, and also identifies the root cause of availability or performance issues.

7. Customer Business Needs and Goals

Try to understand the cost of downtime for the customer’s service. Estimate in terms of lost productivity, revenue, and customer goodwill. The SLA developer should also understand the business goals and growth of the organization in order to accommodate network upgrades, workload, and budgeting.

8. Performance Indicator Metrics

Metrics are simply tools that allow network managers to manage service level consistency and to make improvements according to business requirements. Unfortunately, many organizations do not collect availability, performance, and other metrics, citing concerns about accuracy, cost, network overhead, and available resources. These factors can impact the ability to measure service levels, but the organization should focus on the overall goals to manage and improve service levels.

In summary, service level management allows an organization to move from a reactive support model to a proactive support model where network availability and performance levels are determined by business requirements, not by the latest set of problems. The process helps create an environment of continuous service level improvement and increased business competitiveness.

Leveraging SSD (Solid-State-Drive) Technology

Our company recently invested in SSD (solid-state-drive) arrays for our database servers, which allowed us to improve the speed of our services. As you likely know, it’s challenging to balance cost, reliability, speed and storage requirements for a business. While SSDs remain much more expensive than a performance hard disk drive of the same size (up to 8 times more expensive according to a recent EMC study), in our case, the performance throughput far outweighed the costs.

Considerations Before Investing in SSD

As we researched our database server upgrade options, we wanted to make sure that our investment would yield both speed and reliability. Below are a couple of considerations when moving from traditional HDDs to SSDs:

  • Reliability: SSDs have proven to be a reliable business storage solution, but transistors, capacitors, and other physical components can still fail. Firmware can also fail, and wayward electrons can cause real problems. As a whole, HDDs tend to fail more gracefully in that there may be more warning than a suddenly failed SSD. Fortunately, Enterprise SSDs are typically rated at twice the MTBF (mean-time-between-failures) compared to consumer SSDs, a reliability improvement that comes at an additional cost.
  • Application: SSDs may be overkill for many workloads. For example, file and print servers would certainly benefit from the superior I/O of an SSD storage array, but is it worth the cost? Would it make enough of a difference to justify the investment? On the other hand, utilizing that I/O performance for a customer-facing application or service would be most advantageous and likely yield a higher ROI. In our case, using SSDs for data validation databases is a suitable application that can make a real difference to our customers.

How SSDs Have Improved Our Services

Our data validation services rely on database queries to generate validation output. These database queries are purely read-only and benefit from the fastest possible access time and latency — both of which have been realized since moving our data validation databases to SSD.

SSDs eliminate the disk I/O bottleneck, resulting in significantly faster data validation results. A modern SSD boasts random data access times of 0.1 milliseconds or less whereas a mechanical HDD would take approximately 10-12 milliseconds or more. This is the difference in time that it takes to locate the data that needs to be validated, making SSDs over 100 times faster than HDDs. By eliminating the disk I/O bottleneck, our data validation services can take full advantage of the superior QPI/HT systems used by modern CPU and memory architectures.
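As a rough back-of-the-envelope illustration using the access times above (the lookup count is hypothetical, and real query times also depend on caching and query complexity):

    HDD_SEEK_MS = 10.0   # ~10-12 ms random access on a mechanical drive
    SSD_SEEK_MS = 0.1    # ~0.1 ms or less on a modern SSD

    lookups = 1_000_000  # hypothetical number of random index lookups in a batch job

    hdd_hours = lookups * HDD_SEEK_MS / 1000 / 3600
    ssd_hours = lookups * SSD_SEEK_MS / 1000 / 3600

    print(f"HDD: {hdd_hours:.2f} h, SSD: {ssd_hours:.2f} h, speedup: {HDD_SEEK_MS / SSD_SEEK_MS:.0f}x")
    # HDD: 2.78 h, SSD: 0.03 h, speedup: 100x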

Celebrate Data Privacy Day with 4 Insider Tricks to Help Manage Your Data Security

Here’s a list of tricks you can use to help keep identity thieves from stealing your personal data, without reading the 48-page fine-print legalese that shows up with every smartphone OS upgrade.

1. Protect the “Fab 4” with Obfuscation:

Opening a credit line generally requires just 4 things: Name (last, first, middle initial), DOB, SSN and Address. So safeguarding these is paramount. They can be obfuscated – made unclear – which is what you want when they are exposed to the general public.

Of course, Name is hard to hide, but nicknames or shorter unofficial versions are good to consider. For example, use one for eBay shipping purchases and another for Amazon, and so on.

With your DOB, try to refrain from showing your birthday online, including on Facebook, but if you must then change your birth date to a different day than the one on file with credit agencies. It’s ok if your Facebook friends wish you Happy Birthday 3 days early.

Don’t give out your Social Security Number except when absolutely necessary. Many companies and forms ask for it, but do so because it is an easy identifier when in fact it is seldom required by law. So you can ‘accidentally’ type yours in with the last two digits set to your birth year.

2. The Unique Address Trick

This is how you find out who’s selling you out. When you sign up for a frequent flier program, insurance, a credit card, rewards programs, the Girl Scouts cookie order form, etc., create a unique identifier in the 2nd line of your address. For example:

John Wayne
123 Bourbon Street
Attn Delta-FreqFlierPrgm
New Orleans, LA 70116

The USPS doesn’t care what you put in that line. In fact, the USPS doesn’t even recognize a second address line as part of a properly formatted address. It is meant simply for personal sorting after the mail arrives, so when you get the Geico or Capital One offer in the mail, you’ll know who sold them your address because it will be right there on the Attn: line.

Hint: you can do the same thing with Gmail by adding a + tag to your username, for example johnwayne+delta@gmail.com.

3. Tiered Passwords

It’s hard to remember a different password for every website, so create levels of passwords or incorporate the name of the site to make the password unique to every site. You can keep 3-4 different passwords of increasing complexity, using the most complex one on the most sensitive sites, like online banking.

Most Complex: Banks, Credit Cards, Paypal, AND the email accounts that are associated with them for password resets.
Complex: Online ordering platforms with stored credit cards (Amazon, Ebay, airlines etc.)
Less Complex: Facebook, Twitter, LinkedIn, etc. Sites of importance but easily fixed without monetary loss.
Least Complex: Online trials, rewards programs and sweepstakes, Starbucks app, and the like.

*Be sure to change all passwords once every few months while keeping the underlying tiers of complexity.

4. Revamp Password Challenge Questions

If you’re worth the effort, a criminal can likely figure out your mother’s maiden name by going to sites like ancestry.com. As for your first car, based on your date of birth plus 15 years, one can probably narrow the field down to about 40 models, so take the opportunity to use those challenge questions and come up with something harder to figure out. For example, change “Ford Escort” to something like “RedandWhiteFordEscort.”

Remember, it may be easier for a thief to hack your email address and then request a password reset with your bank, so keep that secure too!

Today, Service Objects is reflecting on our data security, and we hope you do too. We are proud to be one of several hundred organizations collaborating to generate awareness about the importance of respecting privacy, safeguarding data, and enabling trust.

 

Tips for Referencing a Web Service from Behind a Firewall

It’s not unusual for network administrators to lock down their server environments for security reasons and restrict inbound and outbound network activity. Basically, nothing can come in or go out without permission. As such, if your application requires an HTTP connection to call an external web service, then your network admin will most likely need to create a firewall rule to allow access to the service provider so that communication between your application and the web service may occur.

Most firewall rules are created to whitelist ports on a specific IP address. While opening up a port for a particular IP address will allow communication between the two endpoints to occur, most RESTful web services will make use of several IP addresses that point to geographically different data centers to help ensure maximum uptime and availability. So if your service provider has multiple IP addresses available then be sure to whitelist all of them in your firewall. Not only should you include all available IP addresses in your firewall rules, but you also need to make sure that your application utilizes proper failover code to use another IP address in the event that one responds slowly or becomes unavailable.

It is also recommended that you never hardcode a reference endpoint such as a domain or IP address. In the event of an unexpected network-related failure, a hardcoded endpoint will leave you vulnerable, with no choice but to update your code. Depending on the complexity of your code and your deployment procedure, this could lead to more downtime than necessary. Instead, it is considered better practice to use an editable configuration location, such as a database or config file, to save your service endpoints. Using an easily accessible, editable location means that you can quickly switch to another service endpoint in the event that the primary endpoint is unavailable.

Depending on how your failover code is written, using an external configuration location can also save your application from attempting a request to an unresponsive location. If your application always attempts a call to a primary location before failing over, then it must first wait for the primary location to fail before attempting a call to the secondary location. Most default timeouts are around 30 seconds, so your application may be forced to wait 30 seconds before switching to a secondary location. With an editable configuration source, you can easily swap out the bad location for a good one and save your application from further failures.
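A small sketch of that pattern follows, assuming a simple JSON config file (the file name, structure, and example addresses are hypothetical). Because the endpoints are read from the file at call time, swapping out a bad endpoint is a config edit rather than a code change, and the short timeout keeps a slow endpoint from stalling the application.

    import json
    import requests

    CONFIG_PATH = "endpoints.json"  # e.g. {"endpoints": ["https://203.0.113.10", "https://203.0.113.20"]}

    def load_endpoints(path=CONFIG_PATH):
        with open(path) as f:
            return json.load(f)["endpoints"]

    def call_service(path, params, timeout=3.0):
        for base in load_endpoints():     # re-read the config on every call
            try:
                resp = requests.get(f"{base}/{path.lstrip('/')}", params=params, timeout=timeout)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                continue                  # move on to the next whitelisted endpoint
        raise RuntimeError("No configured endpoint responded in time")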

Overall, here are some basic tips for referencing a web service from a production application:

  • Do not hardcode your reference endpoints.
  • Do not reference by an IP address unless you are restricted behind a firewall. Otherwise always use the fully qualified domain name.
  • If you are behind a restricted firewall then be sure to include all IP Address endpoints if more than one is available.
  • Be sure to include failover code to make use of the available endpoints in the event that one or more may become unavailable.

Follow the above tips to help take full advantage of what your RESTful service provider has to offer and to also help ensure that you are doing everything you can to keep your application running smoothly.

Service Objects is the industry leader in real-time contact validation services.

Service Objects has verified over 2.5 billion contact records for clients from various industries including retail, technology, government, communications, leisure, utilities, and finance. Since 2001, thousands of businesses and developers have used our APIs to validate transactions to reduce fraud, increase conversions, and enhance incoming leads, Web orders, and customer lists.