Posts Tagged ‘Tech Support’

Service Objects’ Top 5 Technical Blogs

Customer Service Excellence is one of Service Objects’ core values, which we support in a number of ways, including creating a variety of technical content. Our engineers regularly contribute to our blog to help organizations implement data quality solutions and stay on top of trends. Many of our blogs continue to attract attention long after publication.

Here are five of our most popular technical articles to date; click through to read more.

Geocoding Resolution – Ensuring Accuracy and Precision

When geocoding addresses, coordinate precision is not as important as coordinate accuracy. A common misconception is to equate high-precision decimal degree coordinates with high accuracy. Precision is important, but a long decimal coordinate for the wrong area can be damaging. It is more important to ensure that the coordinates point to the correct location for the given area. Accurately geocoding an address is very complex. If the address is at all ambiguous or not properly formatted, a geocoding system may incorrectly return a coordinate for a location on the wrong side of town, or for a similar-looking address in an entirely different state or region. Read More
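To make the distinction concrete, here is a small sketch (all coordinates here are invented for illustration) comparing a six-decimal coordinate on the wrong side of town against a two-decimal coordinate near the true rooftop:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# True rooftop location (illustrative values only)
true_lat, true_lon = 34.420830, -119.698190

# Six decimal places of "precision", but geocoded to the wrong side of town
precise_but_wrong = (34.442817, -119.651304)

# Only two decimal places (~1 km of precision), but near the right place
coarse_but_accurate = (34.42, -119.70)

err_wrong = haversine_km(true_lat, true_lon, *precise_but_wrong)
err_coarse = haversine_km(true_lat, true_lon, *coarse_but_accurate)
print(f"precise-but-wrong error:   {err_wrong:.2f} km")
print(f"coarse-but-accurate error: {err_coarse:.2f} km")
```

The "precise" coordinate is several kilometers off, while the coarse one lands within a couple hundred meters of the truth.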

How to Identify Incorporated and Unincorporated Places in the United States

The US Census Bureau uses the term “place” to refer to an area associated with a concentrated population, such as a municipality, city, town, village or community. These statistical areas have a defined boundary and they may or may not have a legal administration that performs some level of government function. The US Census Bureau uses class codes to classify different types of places and areas. The Bureau currently lists 70 different codes; however, all places are either a legally incorporated place or a Census Designated Place. Read More

Looking Beyond Simple Blacklists to Identify Malicious IP Addresses

Using a blacklist to block malicious users and bots that would cause you aggravation and harm is one of the oldest and most common methods around (according to Wikipedia, the first DNS-based blacklist was introduced in 1997). Various types of blacklists are available: they exist for IP addresses, domains, email addresses and user names. Most of the time these lists concentrate on identifying known spammers. Others serve a more specific purpose, such as IP lists that identify known proxies, Tor exit nodes and VPNs, or email lists of known honeypots and disposable domains. Read More
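Real-world blacklists such as DNSBLs are queried over DNS rather than held in memory, but the core idea is simple membership testing. A minimal sketch, using reserved documentation IP ranges as stand-ins for real offenders:

```python
import ipaddress

# Illustrative blocklist built from reserved documentation ranges (TEST-NET).
BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),   # a misbehaving subnet
    ipaddress.ip_network("198.51.100.7/32"),  # a single known-bad host
]

def is_blocked(ip):
    """Return True if the address falls inside any blocklisted network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKLIST)

print(is_blocked("203.0.113.42"))  # inside the /24
print(is_blocked("192.0.2.1"))     # not listed
```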

Catch-all Domains Explained

Imagine launching an online business and associating your email address with your business domain. For example purposes, let’s say your domain is XYZ.com and your name is John. Your email address would be john@XYZ.com. Now what if someone entered jon@XYZ.com? If you had a “catch-all” domain, you’d receive email messages sent to ____@XYZ.com — even if senders misspelled your name. In fact, that was originally part of the allure of catch-all email addresses. With a catch-all domain, you could tell people to send email to anything at your designated domain such as: sales@, info@, bobbymcgee@, or mydogspot@. No matter what they entered in front of the @ sign, you’d still get the message without having to configure your server or do anything special. Read More
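The routing behavior can be sketched as a toy model (inbox names and addresses are of course invented):

```python
def deliver(address, mailboxes, catch_all=None):
    """Return the inbox an incoming message lands in, or None if rejected.

    Toy model of server-side routing: a catch-all inbox receives mail for
    every local part that has no dedicated mailbox of its own.
    """
    local = address.split("@", 1)[0].lower()
    if local in mailboxes:
        return mailboxes[local]
    return catch_all  # None means the server bounces the message

mailboxes = {"john": "john's inbox"}

# Without a catch-all, the misspelled address bounces...
print(deliver("jon@XYZ.com", mailboxes))
# ...with one, it still reaches John.
print(deliver("jon@XYZ.com", mailboxes, catch_all="john's inbox"))
```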

Can Google Maps be Used to Validate Addresses?

In November of 2016, Google started rolling out updates to more clearly distinguish their Geocoding and Places APIs, both of which are a part of the Google Maps API suite. The Places API was introduced in March 2015 as a way for users to search for places in general and not just addresses. Until recently the Geocoding API functioned similarly to Places in that it also accepted incomplete and ambiguous queries to explore locations, but now it is focusing more on returning better geocoding matches for complete and unambiguous postal addresses. Do these changes mean that Google Maps and its Geocoding API can finally be used as an address validation service? Read More

Our most popular blogs have one thing in common: they offer insight to help your team leverage data quality to enhance your business practices. View all of our blog content or reach out to let us know what you’d like to see more of.

Live Chat Means Real People at Service Objects

Have you noticed that more of us are talking with inanimate objects lately?

When we ask Siri for directions on our smartphone, command Alexa to order more toilet paper, or tell Google Home to play Ed Sheeran’s latest album on repeat, we’re becoming part of a trend: engaging bots that seem almost-but-not-quite human.

We understand the attraction businesses have to bots. They are inexpensive, ubiquitous, and great for getting the weather in Phoenix or a link to a specific web page. But bots are making their way into more and more areas of our lives, including sales and customer service. For example, according to this article automated chatbots are the future of technical support, and there is even an entire magazine devoted to them.

This is where we draw the line, however. When you contact Service Objects, you will never deal with a disembodied piece of artificial intelligence. If you visit our website during business hours, for example, you will be greeted with a chat screen staffed by a real, live human being.

Live technical support: Someone’s always home

Our highly rated technical support is 100% live people too. Call us during regular business hours, and we’ll normally get someone on the line with you within 15 minutes – sooner if your issue is urgent. And when problems strike off-hours, production customers can reach a member of our Quick Response Team 24 hours a day, seven days a week.

Why we like the human touch

Nothing against automation – that is our business, after all – but here is why we insist on using live people for our sales and support:

  • First of all, we are not typical salespeople. Sure, we enjoy people purchasing our products as much as anyone. But to us, you are not another data point in our sales funnel – we want to get to know you, learn about your specific needs and challenges, and brainstorm unique and cost-saving solutions. With no sales pressure whatsoever.
  • Second, support is the lifeblood of what we do. Our products are classified as services, and we take seriously what that word means: being of service. A large part of our reputation revolves around providing industry-leading customer support for our products, and in our view this starts with having access to live, knowledgeable people.
  • Third, bots are only human (pun intended). Stories about misinterpreted automated queries abound online, while Siri reportedly struggles with everything from decimal points to Texas accents. There is even a joke going around that when Amazon.com purchased the Whole Foods Market grocery chain, it was due to a misinterpreted Alexa command given by their CEO Jeff Bezos. Bot errors may be funny, but not when they happen to our customers.

Data quality and real people: A good combination

We’ll still keep using bots for things like talking to our GPS, or asking Siri when Harrison Ford was born. But when data quality is your business, and you are selling mission-critical products with a 99.999% uptime record, we believe that only real people will do. We personally feel the same way about sales and service as we do about food and recreation – we prefer natural to artificial. And if you work with us, we think you will too. We look forward to hearing from you – feel free to contact us or give us a call at 800.694.6269.

 

Troubleshooting API Connection Issues

You know what it is like when that new phone doesn’t get a dial tone, or your router isn’t getting an internet signal. In much the same way, connectivity issues are an annoying but common part of implementing web services in your application software. Whether it is navigating your system’s firewalls, moving from development to production environments, or simply dealing with calling out to an API, few things are as frustrating as troubleshooting a piece of code that should be working but isn’t.

We are here to help. The Applications Engineering team here at Service Objects understands the full range of connection issues, and after 15-plus years in business we’ve seen it all. Between the sample code we’ve written and the support we provide our customers, chances are that we can help you get your application up and running as quickly as possible with the Service Objects API of your choice. But first, let’s look at some of the most common issues we’ve seen, and how you can troubleshoot them.

Monitoring Tools

Tools like Wireshark and Fiddler can be invaluable when debugging a connection to a web service. They let you see the different network connections your system is making and inspect exactly what is being sent to our services. Often these tools can highlight malformed requests, blocks by your firewall, or other odd behavior that is getting in the way of receiving valuable data from one of our APIs.

Firewalls, IP Addresses and Connection Issues

Another problem we see quite frequently is a connection failure when moving a website or application from development to production. If this happens and you experience issues connecting to our services, a couple of quick checks can point you in the right direction to debug your application.

First, check if your firewall needs to have IP addresses whitelisted. If it does, reach out to us at support@serviceobjects.com and we will be happy to provide the most up-to-date set of IP addresses that our services utilize. Still having a tough time connecting to our services? Use a command prompt on the server in question to ensure you can ping our primary (ws.serviceobjects.com) and backup (wsbackup.serviceobjects.com) endpoints. After that, you can perform a traceroute to our endpoints to determine if there is any packet loss between your system and ours.
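Beyond ping and traceroute, a quick TCP-level probe of the HTTPS port can tell you whether a firewall is silently dropping the connection your application will actually make. A minimal sketch (the connect parameter is injectable purely so the logic can be exercised without a network):

```python
import socket

def can_reach(host, port=443, timeout=3, connect=socket.create_connection):
    """TCP-level reachability probe: one step up from ping, since it also
    exercises the port your HTTPS calls will actually use."""
    try:
        connect((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

# e.g. can_reach("ws.serviceobjects.com") and can_reach("wsbackup.serviceobjects.com")
```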

Failover and Service Downtime

A frequent question we receive about connectivity is what to do if the Service Objects API is unreachable.

First, understand that this should be an extremely rare event. Our SLA guarantees “five 9s” availability, meaning that our services will be available 99.999% of the time. This equates to roughly five minutes of service downtime per year, or around 26 seconds per month. We have several data centers around the country to provide redundancy and ensure that your data keeps getting validated even if our primary servers are down.
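The arithmetic behind those figures is easy to check:

```python
uptime = 0.99999  # "five 9s"
downtime_fraction = 1 - uptime

minutes_per_year = 365 * 24 * 60          # 525,600 minutes
seconds_per_month = minutes_per_year * 60 / 12

downtime_min_per_year = downtime_fraction * minutes_per_year
downtime_sec_per_month = downtime_fraction * seconds_per_month

print(f"{downtime_min_per_year:.2f} minutes/year")    # ≈ 5.26
print(f"{downtime_sec_per_month:.1f} seconds/month")  # ≈ 26.3
```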

In addition, we recommend that you implement a failover call to our web services. This means that if our primary endpoint at ws.serviceobjects.com is down or behaving unexpectedly, your application should call our backup endpoint at wsbackup.serviceobjects.com.
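A minimal sketch of this failover pattern, shown in Python (the AV3 operation path and parameter names here are illustrative assumptions, not the exact API signature):

```python
import urllib.error
import urllib.parse
import urllib.request

PRIMARY = "https://ws.serviceobjects.com"
BACKUP = "https://wsbackup.serviceobjects.com"

def call_with_failover(path, params, fetch=None, timeout=5):
    """Try the primary endpoint first; on any network error, retry the backup.

    `fetch` is injectable for testing; by default it performs a real HTTP GET.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read().decode("utf-8")

    query = urllib.parse.urlencode(params)
    last_error = None
    for base in (PRIMARY, BACKUP):
        try:
            return fetch(f"{base}{path}?{query}")
        except OSError as exc:  # urllib's URLError is an OSError subclass
            last_error = exc
    raise ConnectionError(f"Both endpoints failed: {last_error}")
```

Some integrations also treat service-level error responses, not just transport errors, as a reason to try the backup endpoint.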

Contacting Service Objects Tech Support

Finally, whether you are just beginning to troubleshoot or are at the end of your rope, don’t hesitate to reach out to us! We can assist you in tracking down the issues you are encountering. It always helps if you can provide any relevant information about the problem, such as the error message received, the inputs that are causing the error, or the time the issue started. This information will help us isolate whether the problem lies with our services or on your end.

Whatever your connectivity issues are, Service Objects support is here to help you. We are always happy to speak with you by phone, do a live screen sharing session with you, share tips or troubleshooting steps by email, or help in any way we can. Don’t hesitate to reach out to us anytime, and best of success with your connections!

Types of Integrations

Searching for the proper tool to fit your business needs can be a daunting task. At Service Objects, ease of integration is engineered in as part of each of our products, ranging from seamless API interfaces to list processing services that work directly on your data files. This article discusses each of our integration strategies in detail, to simplify your research process and to help pinpoint the type of integration that will best suit your needs.

Service Objects products are created as web services. This means that any programming language that can make a web service request can make use of our services – from programming languages like PHP, Java, C#, Ruby, Python, Cold Fusion and many more, to CRM systems such as Salesforce, Marketo, Hubspot and beyond. Nearly all major languages and platforms can make use of Service Objects’ web services.

Below we discuss the most common types of integrations we see from our clients. And if you have a platform that isn’t listed below and would like more information on how it could tie in with our services, please reach out to us – we are happy to provide tips, sample code and plug-ins, and to recommend best practices and procedures.

API integration

This is our most popular option for real-time validations and it allows our capabilities to be integrated directly into your software. Our services can be called via web requests either by HTTP GET or SOAP/POST, and the service response can be delivered in XML or JSON format. These protocols and output formats generally allow enough flexibility to meet your needs. We also offer a web service description language (WSDL) file that can be consumed to auto generate the necessary methods and classes to call our various web services. If you have a specific language in mind, please check out our Sample Code page – chances are we have sample code already written for your needs.
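As a quick illustration of the HTTP GET flavor (the operation URL and response field names below are placeholders, not the exact schema of any particular service), note that inputs should always be URL-encoded:

```python
import json
import urllib.parse

# Hypothetical operation URL and inputs - check each service's documentation
# for the real signature; this only illustrates the GET/JSON mechanics.
base = "https://ws.serviceobjects.com/EV3/api.svc/json/ValidateEmail"
params = {"EmailAddress": "jane.doe@example.com", "LicenseKey": "YOUR-KEY"}

# urlencode handles reserved characters such as "@" and spaces in user input.
url = f"{base}?{urllib.parse.urlencode(params)}"
print(url)

# Parsing a canned JSON response (field names are again illustrative):
raw = '{"Score": 0, "IsCatchAllDomain": "false"}'
result = json.loads(raw)
print(result["Score"])
```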

List Processing

List processing involves sending us a list of your data to be validated. We take this list and process it through the appropriate web service and then return the results, appended to each record in your file. From there you can take the data, apply your business logic, and save it to your database.
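Conceptually, the round trip looks like this sketch, with a trivial stub standing in for the real validation service and illustrative column names:

```python
import csv
import io

def validate_email_stub(email):
    """Stand-in for a real validation call; returns fields to append."""
    ok = "@" in email and "." in email.rsplit("@", 1)[-1]
    return {"IsValid": str(ok).lower(), "Note": "" if ok else "malformed address"}

def process_list(in_file, out_file):
    """Read a CSV, validate each record, and append the results as new columns."""
    reader = csv.DictReader(in_file)
    writer = csv.DictWriter(out_file, fieldnames=reader.fieldnames + ["IsValid", "Note"])
    writer.writeheader()
    for row in reader:
        row.update(validate_email_stub(row["Email"]))
        writer.writerow(row)

src = io.StringIO("Name,Email\nJane,jane@example.com\nJon,jon@bad\n")
dst = io.StringIO()
process_list(src, dst)
print(dst.getvalue())
```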

This type of process is often the best approach for cleaning up existing data in bulk: a large export is generally easier than integrating via the API and pushing existing records through one at a time. However, depending on the resources you have available, both the API and list processing are completely viable options, and we have a number of clients that use both in concert.

We offer two convenient solutions for list processing: single batch runs for one-time processing or automated batches. Let’s look at the differences between them:

Single batch runs. A single batch run is one of the simplest ways to have your data processed. You send us a comma-separated value (CSV) file and we’ll run it against our services, append the data, and return it to you. It is perfect for cleaning up existing data. Many clients run a single one-time batch process to clean up their existing data and then implement a real-time solution into their product, giving them the best of both worlds: clean existing data together with a process to ensure that incoming data is the highest quality possible.

Automated List Processing. Your data can be processed securely and automatically by uploading the data file to our secure FTPS server. Once uploaded, our system will recognize the new list to process and get to work. The input file will be parsed, run through a web service, and the results will be appended to the original file. It is nearly identical to the one-time processing service that we offer, with the added benefit that you can upload files at your convenience to be processed automatically.
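An upload to a secure FTPS server can be scripted in a few lines. This sketch uses a hypothetical host and credentials, and an injectable session factory purely so the flow can be exercised without a server:

```python
from ftplib import FTP_TLS

def upload_list(host, user, password, local_path, remote_name, ftp_factory=FTP_TLS):
    """Upload a CSV to a secure FTPS server for automated processing.

    `ftp_factory` is injectable for testing; by default it opens a real FTPS session.
    """
    ftps = ftp_factory(host)
    try:
        ftps.login(user, password)
        ftps.prot_p()  # switch the data channel to TLS as well
        with open(local_path, "rb") as fh:
            ftps.storbinary(f"STOR {remote_name}", fh)
    finally:
        ftps.quit()
```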

CRM integration

If you currently use one of the major customer relationship management (CRM) or marketing automation software platforms like Salesforce, Marketo, Hubspot, or others, chances are that our services integrate with it, and we likely have sample code or plug-ins for them. Each platform has its own level of customizability, but they almost universally offer some variation on a plugin, API, or exposed interface to integrate with. Contact us to learn more about integrating our capabilities with your specific platform.

Whether you develop an API interface for your current software, use batch list processing, or integrate our capabilities with your CRM or marketing automation platform, Service Objects is with you every step of the way with support, sample code, tutorials, and the experience that comes with serving nearly 2500 customers. Get in touch with us today and see how easy it can be to integrate best-in-class data quality with your own applications environment.

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Salesforce Data Quality Tools Integration Series – Part 3 – VisualForce

Welcome to our third installment in our Salesforce Data Quality Tools Integration Series. In the first two parts, we covered creating a plug-in that could be dropped on a flow and a trigger. Today, we are going to jump into creating a VisualForce app that you’ll be able to extend for your purposes. At the end, you will have all the code you’ll need to get started, so don’t worry about implementing this step by step as I have it laid out in this blog.

The goal of this app is to display a table of contacts that can be selected to have their emails validated. We will add a filter to the table to better target certain emails for validation. We will also display a few charts that provide a good overview of the state of the emails in the system. These charts will refresh according to the filter selected for the table of contacts.

As always, we are going to start with some basic setup, then look at what the final VisualForce page will look like, and after that we’ll run through the code. During this walk-through, it should be clear where there are opportunities for customizing this solution.

The very first thing you need to start with is setting up the Service Objects endpoint. I am not going to go over it this time because I go over it in the first and second parts of this series. So, if you need help setting up your endpoint or a description of what this is, please check out the previous blogs. If you have been following along from the first two blogs, then you have already completed this part. Once you have set up your endpoint, you will need to add the following custom fields to the Contact object. If you are customizing this for your own purposes, you will want to add these fields to the object you are working with. If you want to map more of the fields that our service returns, you’ll have to create the appropriate fields on the object, if there isn’t an existing field already at your disposal.

Field name              | Internal Salesforce name | Type        | Service Objects field name
Email Catch All         | EmailCatchAll__c         | Text(20)    | IsCatchAllDomain
Email Score             | Email_Score__c           | Number(2,0) | Score
Email Score Name        | EmailScoreName__c        | Text(20)    | DPVNotesDesc
Email Top Level Domain  | EmailTopLevelDomain__c   | Text(50)    | TopLevelDomainDescription

Here is a view of the table with the filter and the Validate button.

Next is a screen shot of a couple of the charts.

As you can see, it is a pretty simple example that can be tailored to whatever custom solution you are looking for.

For this walk-through, we will be creating three files: one for the markup, one for the controller and one for the web service call.

MARKUP

Starting with the markup page – the actual VisualForce page – we will create a file called EmailValidation.vfp. In the first element, the apex:page element, we define which controller we want associated with the page. We will make a custom controller for this app, “ContactEmailValidationController”, which is the name of the class that we will build later in the controller section. After we establish the controller, we override some styles so that the headers in the charts stand out properly.

Next, we create the page block that will house the components of the page inside a form. The four main components are the filter, the table of contacts, the validate button(s) and the charts. You will more than likely want to implement a paging system for the table so you can page through your contact records, but I do not go into that here.

The filter section is basic:

Throughout the code you will see instances of values that come in the form of {! [Some variable name]}. That is simply a reference to a value in our custom controller. In the case of the filter, we have two of those instances, one for the filterId and one for Items. In this case, the filterId is telling the select dropdown list which item is selected. When the page first loads, nothing is selected, so the filterId is empty or null, which will render the default table view. You can certainly set this to have some other default value. The Items variable simply holds all the possible select options for the dropdown list. Items is populated in the controller, which we will take a look at later. Since we want the charts and the table to refresh when the filter is changed, we set the reRender attribute on the actionSupport element to target contacts_list. This is the id of the overarching page block section. Any markup outside of the contacts_list page block section will not be refreshed. The underlying code does an ajax call to refresh just a part of the page, which can be handy when you don’t want the whole page to reload.

In the next section of the markup, we setup the table of contacts and the columns that we will display back to the user. I have collapsed much of the code here so we can first focus on the apex:pageBlockTable element.

There are two things to notice in the page block table element. First, the contacts variable holds all the contacts coming back from the controller. Later, you will see a method on the controller called getContacts, which is specifically named that way to sync up with this contacts variable. For example, if the variable were called people, then the controller would need a method called getPeople to retrieve those records. The second thing to notice is the value cont for var. This value is the container for each of the contacts in the contacts variable; the page block table element behaves much like a traditional for-each loop. On a side note, there are many ways we could have created this table. For instance, we could have used a repeater or a couple of other elements to display the contacts to the user.

Next, we are going to look at the way we setup the columns and we’ll start with the checkbox column, since it is unique to the rest of them.

This column consists of an apex:facet and a checkbox input. The facet implements its own checkbox input as well. We use the facet to customize the header of the column with a checkbox. Salesforce defines a facet as “A placeholder for content that’s rendered in a specific part of the parent component, such as the header or footer of an <apex:dataTable>.” The checkbox in the header acts as the select all/select none functionality for the column. The checkboxes in the rows are populated with a contact id so we can track which contacts were selected. The onclick functions in both input elements reference JavaScript functions that we will discuss in more detail shortly. Simply put, those functions manage the storage of selected rows.

The columns highlighted with red boxes are your standard output columns; the columns outside the red boxes will need their header labels updated to be more readable.

Earlier we created the custom fields to house some of the values coming back from the call to the email validation service. Creating custom fields can at times force labels that are more “technical” than displayable – one obvious reason is so new custom fields do not conflict with existing fields or any future fields. Keeping that in mind, we do not want to use the labels of a couple of our custom fields in the header, so we will update them using a facet, as we did earlier for the checkbox header, but this time without a checkbox.

In this simple example, we are updating the header text, but the values in the column will still be pulled from the cont.EmailTopLevelDomain__c variable. And that is really it for the columns, pretty straightforward. And easy to extend, with little effort you can alter this example to display any of the columns you want in the table, as long as you have access to them from the controller.

In the next section, we will focus on the pie charts. The sample code will have a chart for Email Scores, Catch All Domains and Top Level Domains. The code becomes redundant, so I will only demonstrate one of them here. With that said, you can add any chart you want that focuses on the particular situation you are solving for. The overarching chart container is a page block element that I titled “Email Details”. This will house the three charts.

Each pie chart is wrapped in a page block section with its own unique title. In the apex:chart element, we see the variable EmailScorePieData. That links up to the getEmailScorePieData method on the controller, which pulls in a list of wedgeName and count combinations that we can see referenced in the apex:pieSeries element.

Next, we’ll jump into the JavaScript portion of the client code. The JavaScript on the markup page handles the checkboxes and compiles a list of ids based on checked/unchecked boxes. I used the code from this source on the internet; my only contribution was to rename some variables to better reflect what is going on. As you can see, there are functions for selecting or deselecting one or all checkboxes at a time.

I am not going to take a deep dive into this code, since reading through it should illustrate what is going on there. At this point, all you need to know is that it compiles a list of Contact ids into the ContactIdBuilder variable based on which boxes have been checked as I mentioned earlier.

I skipped over the last part of the markup above because it is easier to follow once you have seen the connection to the ContactIdBuilder variable.

When the Validate button is clicked, this part of the code takes the id list stored in ContactIdBuilder and assigns it to the returnString hidden input element. After the value is assigned, the ValidateCheckedEmails method on the controller is called. The returnString value maps to the returnString variable in the controller, which we will see shortly. And that is it – that’s all for the user-facing part of the code.

CONTROLLER

The controller code consists of two main parts: first, getting data from Salesforce and displaying it on the screen; second, validating the rows selected in the user interface.

Based on the filter selected in the user interface, the getContacts method returns a list of contacts. The main thing you need to do here is make sure you are pulling back all the fields that the user interface needs to work with, taking into account both the visible and hidden fields – for example, the contact id that sits behind the checkbox column.

The method, getItems, retrieves all the filter options for the dropdown list in the user interface. In this method, we put together a list of hardcoded options and then some dynamic options that will allow us to filter on each company in the list.

The three methods that get the data for the charts return a list of EmailData records, which are simply key/value pairs: the key is the name of the pie wedge and the value is the count, or size, of the wedge. You can copy or modify the pie methods here to suit your own purposes. The more stats you want to present to the user, the more fields you’ll need to retain from the call to our email validation web service. Some of the other fields you may be interested in adding here are the warning notes, the notes descriptions and/or the SMTP flags (server and mailbox level). There are many other fields that our email validation service returns, and you can look at them in more depth here.

The last part of the controller to go over is the call to the method that does the email validate request. The method ValidateCheckedEmails pulls all the contact ids from the returnString variable and sets them up for processing in the CallEV3ByIdList method of the EmailValidationUtil class.

WEB SERVICE CALL

The EmailValidationUtil.apxc is the last file left to discuss. This file does the actual request to the email validation web service. This is the part of the code that you can customize the most; from what you decide to process to what is returned by the service. It is also a good place for any additional logic you may want to implement.

This code should seem very familiar if you have read the previous parts of this blog series; it is set up in a very similar way. Just as with the other examples in this series, we demonstrate best practices when it comes to implementing failover. The inputs to the service are EmailAddress, AllowCorrections, Timeout and LicenseKey.

In our example, the email address comes from the contact records selected in the user interface, and the rest of the inputs are hardcoded (but they don’t have to be). AllowCorrections accepts true or false: the service will attempt to correct an email address if set to true; otherwise, the email address is left unaltered. Here, I hardcoded it to true, but you may want it to be false or use some other business logic to make that determination. The Timeout value specifies how long the service is allowed to wait for all real-time network-level checks to finish; these consist primarily of DNS and SMTP level verification. Timeout is in milliseconds, with a minimum value of 200 ms; I have hardcoded it to 2000 ms. For the LicenseKey, you will want to either hardcode it into the call (depending on the access people have to the code) or create a custom object and/or a custom field for the license key that you can lock down with user permissions available only to the administrator.

Before I wrap up, I want to mention writing tests to cover the code. This example is complete but expects you to customize parts of it, so I have not provided any test code. You will want to do that – it will ensure that even as Salesforce updates their system or you make changes to your organization, everything continues to work as expected.

In conclusion, VisualForce pages are mostly associated with the old Salesforce Classic UI, but they can be created so that they continue to work in the new Lightning Experience. In a future blog, I will demonstrate how to create a Lightning app that incorporates our validation services. Service Objects has validation services for all kinds of solutions, making Salesforce a perfect platform to demonstrate our services on.


Salesforce Data Quality Tools Integration Series – Part 2 – Validation Plug-ins in Flows

Welcome back to our blog series where we demonstrate the various ways you can achieve high data quality in Salesforce through the use of Service Objects’ validation tools.

In the first part of this series, we showed how to create a trigger and an Apex future class to handle calls to our web service API. That blog described a common way to call our services from code in Salesforce, but there are simpler ways to do this.

In this blog, we step back and demonstrate an integration that requires little more than drag and drop: implementing a plug-in on a flow. After all, why write code if you don’t have to? When we are done, you will be able to create a flow, drop our plug-in anywhere in your process, and wire it up.

We are going to start with some basic setup, and then step through the code. What code? You are probably wondering, “Why do we need to look at the code if we don’t need to write any?” Well, if you ever want to implement additional business logic or tailor the plug-in, you will be equipped to do just that.

After we review the code, we will jump into creating a sample flow that allows a user to enter either a US address or Canadian address and process that data through our US and Canada validation APIs and then display those results back to the screen.

Before we get started, we need to do some setup: registering the Service Objects endpoint with Salesforce so that we can make web service calls to the address validation APIs. We went over this in the previous blog, but it is worth repeating here.

The page where we add the URL is called “Remote Site Settings” and can be found in Home->Settings->Security->Remote Site Settings. This is the line I added for this blog.

Be aware that this will only work for trial keys. With a live/production key you will want to add URLs for ws.serviceobjects.com and wsbackup.serviceobjects.com. As a best practice, add both endpoints with your live key to take advantage of our failover capabilities. We named this one ServiceObjectsAV3 because we are reusing it from the previous blog, but you can name it whatever you want.

No custom fields are needed to get this example to work, but you will likely want to create the same fields as in the previous blog, or similar ones, seen here.

This section shows the structure of the plug-in class for US address validation, which we are naming AddressValidationUSA3, along with the signatures of the invoke and describe methods. The invoke method contains the business logic and the call to our web service API. The describe method sets up the input and output variables of the plug-in that end users will be able to connect to in a flow.

The describe method also lets you supply definitions and descriptions of the plug-in’s variables, which appear in the flow interface. The definitions used here matter because they can save a lot of time for the end user developing a flow, so I would resist skipping them to save time. The following is just a snippet of the code.

There really isn’t much else to the describe method; most of the business logic happens in the invoke method. There, we gather the inputs to the plug-in and do some initial formatting to make sure we pass valid characters in the call to our API. In gathering the inputs, we make sure to use the same names we used in the describe method.

Since we will be making a path-parameter call to the API, we want to account for anything that could break the URL, such as missing parameters. With our API, a missing parameter will simply break the call, but with other APIs it could change the meaning of the call and return unexpected results. To guard against this, we simply replace any missing parameters with a space character. Just as in the previous blog, there are minimum field requirements before it even makes sense to call the operation. The operation we are using is GetBestMatches, and these are the requirements.

  • Combination 1
    • Address
    • City
    • State
  • Combination 2
    • Address
    • Zip code

If you do not have one of these combinations of inputs, then it is best to add code to avoid the call completely, since there is no way to validate an address otherwise. By “avoid the call,” we mean avoid hitting the plug-in at all, since it would not be necessary; a decision component in the flow can handle this.
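As a sketch of that pre-call check, here is the combination logic in Java (the helper names are hypothetical; in practice the check would live in Apex or in a flow decision component):

```java
// Illustrative sketch: check the minimum input combinations for GetBestMatches
// before calling the plug-in at all.
public class AddressInputCheck {

    private static boolean present(String s) {
        return s != null && !s.trim().isEmpty();
    }

    // Combination 1: Address + City + State; Combination 2: Address + Zip code.
    public static boolean worthCalling(String address, String city, String state, String zip) {
        boolean combo1 = present(address) && present(city) && present(state);
        boolean combo2 = present(address) && present(zip);
        return combo1 || combo2;
    }

    public static void main(String[] args) {
        System.out.println(worthCalling("27 E Cota St", "Santa Barbara", "CA", null)); // true
        System.out.println(worthCalling("27 E Cota St", null, null, null));            // false
    }
}
```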

In an effort to simplify the code, I pushed the logic for calling our API into a method called CallServiceObjectsAPI, which should make things easier to read. We pass this method the input parameters that we cleaned up.

Below, I show how to set up the HttpRequest/HttpResponse objects and the request URL, followed by some basic error checking. First, I check whether there was an error with the API call and/or the result from it. Any other errors are caught by the general exception handler, which at that point I would assume indicates a connectivity issue. You can try to catch more specific exceptions on your own, but what I have here works in a general fashion. When an error or exception occurs on the call to our API, we also demonstrate our standard failover best practice by calling another endpoint. Since this blog walks through a trial-key scenario, the failover here is not true failover: it fails over to the same trial.serviceobjects.com endpoint. With a live key, you would use ws.serviceobjects.com for the first call and wsbackup.serviceobjects.com for the failover call.
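The failover pattern itself is simple enough to sketch independently of Apex. In this hypothetical Java version the HTTP transport is stubbed out as a function, so only the retry logic is shown; with a production key the primary would be ws.serviceobjects.com and the backup wsbackup.serviceobjects.com:

```java
import java.util.function.Function;

// Illustrative sketch of the failover pattern: try the primary endpoint and,
// on an error response or exception, retry the same request against the backup.
public class FailoverSketch {

    public static String callWithFailover(String primaryUrl, String backupUrl,
                                          Function<String, String> transport) {
        try {
            String response = transport.apply(primaryUrl);
            // Treat a null body or an error payload as a failed call.
            if (response != null && !response.contains("\"Error\"")) {
                return response;
            }
        } catch (RuntimeException e) {
            // Likely a connectivity issue; fall through to the backup endpoint.
        }
        return transport.apply(backupUrl);
    }

    public static void main(String[] args) {
        // Simulate a primary outage: the primary returns null, the backup answers.
        String result = callWithFailover(
            "https://ws.serviceobjects.com/...", "https://wsbackup.serviceobjects.com/...",
            url -> url.startsWith("https://ws.") ? null : "{\"Addresses\":[]}");
        System.out.println(result);
    }
}
```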

Another thing you may have noticed in the code above is that the input parameters to the API call are URL encoded, with “+” switched to “%20”. Certain characters are not allowed in a URL or a path-parameter call, and the built-in Apex function urlEncode cleans that up; one side effect of the encoding, however, is that it replaces spaces with “+” symbols. While “+” is an accepted encoding for a space in a query string, path-parameter calls still have issues with it, so we replace each “+” with “%20”, which is correctly interpreted as a space in the end. The method returns a string response from the web service; based on the call made, it is more precisely a JSON string response, which we save in the variable ServiceObjectsResult.
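Java’s URLEncoder happens to behave like Apex’s urlEncode here, encoding spaces as “+”, so the same fix can be sketched directly (the empty-parameter substitution from earlier is folded in; the class name is illustrative):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Illustrative sketch: encode a value for a path-parameter call, swapping the
// "+" that URLEncoder emits for spaces with "%20".
public class PathParamEncoder {

    public static String encodePathParam(String value) {
        if (value == null || value.trim().isEmpty()) {
            value = " "; // substitute a space so a missing parameter can't break the URL
        }
        try {
            return URLEncoder.encode(value, "UTF-8").replace("+", "%20");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(encodePathParam("27 E Cota St #500")); // 27%20E%20Cota%20St%20%23500
    }
}
```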

The first thing we do after we get the response from the method is deserialize it into a Map object so we can start processing the result. Here is the rest of the code.

This section of the code checks the type of response that was returned: an address, an error, or a network error. Based on those variations, we populate the corresponding output values in a Map variable called “result”, mapping the outputs from the service to the outputs declared in the describe method. Those values are the output values of the plug-in and are directly available in the flow. Anywhere in this method would be an appropriate place to add your own specialized business logic.
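Here is a hypothetical Java sketch of that branching, using representative field names (Address1, DPV, and an Error object with a Desc field) rather than the full response schema:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the branching after deserialization: the parsed
// response is either an address result or an error, and we copy the fields
// we care about into the "result" map that the flow sees as plug-in outputs.
public class ResponseMapper {

    public static Map<String, Object> toPluginOutputs(Map<String, Object> parsed) {
        Map<String, Object> result = new LinkedHashMap<>();
        if (parsed.containsKey("Error")) {
            @SuppressWarnings("unchecked")
            Map<String, Object> error = (Map<String, Object>) parsed.get("Error");
            result.put("ErrorDescription", error.get("Desc"));
        } else {
            result.put("Address1", parsed.get("Address1"));
            result.put("DPV", parsed.get("DPV"));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> parsed = new HashMap<>();
        parsed.put("Address1", "27 E Cota St Ste 500");
        parsed.put("DPV", "1");
        System.out.println(toPluginOutputs(parsed));
    }
}
```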

Now that we have gone over the code, we are ready to jump in and show an example of our plug-in in a flow. For this example, I also created a Canadian address validation plug-in to make it a little more interesting, but any of the services we offer would make for an appropriate and powerful example.

As I mentioned at the outset of this blog, I will demonstrate a flow where the end user is presented with a data entry screen and can enter either a US or a Canadian address. From there, we wire it up to either the US or the Canadian address validation plug-in, and finally display the results on the screen. This flow is more an example of how to wire up the plug-ins than of building input and output screens; while working with screens is not out of the question, a more realistic flow would manipulate Contact, Account or custom objects without an interface.

Start by creating a new flow in the Process Automation section (I am just going to open the one I already had saved). The first step is dragging on the Screen object; be sure to mark it as the starting interface by setting the green down arrow on the object. A flow cannot be saved without a designated starting point.

Inside this interface you set up the input fields you want to collect from the user. In this example, we made the Address 1 field required, and at the bottom we added a radio button selection for the desired country, defaulting to USA.

Once we have the inputs from the user, we need a way to route the variables to either US or Canadian address validation. For that we can use the Decision logic tool, configured to look at the country field and decide which way processing should continue.

The actual logic simply goes down the US address validation path if USA is found; otherwise, it assumes the input is a Canadian address.

Now we are ready to drop our US and Canadian address validation plug-ins onto the canvas. On the left, in the Tools area, you can find the plug-ins under the names you gave them when you created them.

You will need to drag them onto the flow canvas one by one and set them up individually: mapping the inputs from the user data entry to the plug-in inputs, and mapping the outputs to variables you create or to object variables in the system. This can take some time depending on how many of the address validation outputs you want or need to use.

When you are done with that part, you will wire them up in the flow to the decision tool you added earlier as shown below.

In this last part, we will setup two output screens, one for the US address validation results and one for the Canadian address validation results. This time instead of adding text boxes to the interface, we just add a display object for each field we want to show.

After wiring the last screens, the completed flow will look like this.

From here you can save the flow and add layout options on other screens that give you access to executing it, schedule the flow to run at a specific time (not useful in our example), or run it directly from this interface by clicking Run, which is what we will do. We’ll start with a US address and then view the results on the output screen. In this example, you can see there are several issues with the address: the street name is incomplete, “unit” is used instead of “suite”, the city name is misspelled, and even the postal code is wrong.

Upon validation, we see that the system was able to correct the address and returned a DPV score of 1, meaning the result is a perfect, deliverable address. The DPV score is one of the most important fields in the output to pay attention to, as it indicates the validity level of the address. Other information in the response will give you an idea of what was changed or whether there were any errors, and you also have access to the fragments of the address so you can process the response at a more granular level. More details about the fields can be found here.

In the last example, we will use a Canadian address. In this case the only thing wrong with the address is the postal code, so we’ll see how the system handles that.

Sure enough, the address validated and the postal code was corrected. In the US address validation service, the DPV score and error result indicated the validity of an address; in the Canadian validation service, we only need to look at the error fields. Empty or blank error fields mean the address is good and can receive mail.
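A sketch of that difference in Java (the error field names here are generic placeholders, not the service’s exact output names):

```java
// Illustrative sketch of how the two plug-ins' outputs are interpreted:
// the US service signals validity with a DPV score of 1, while the Canadian
// service signals it with empty or blank error fields.
public class ValidityCheck {

    public static boolean usAddressIsGood(String dpvScore) {
        return "1".equals(dpvScore);
    }

    public static boolean canadianAddressIsGood(String errorType, String errorDesc) {
        return isBlank(errorType) && isBlank(errorDesc);
    }

    private static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(usAddressIsGood("1"));            // true
        System.out.println(canadianAddressIsGood("", null)); // true
    }
}
```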

In conclusion, we learned how to create a plug-in and then use it in a flow. Though you do not need to know how the plug-ins were built in order to use them, knowing the details is very helpful when your business logic requires a more tailored solution, and as this demonstration shows, adding additional code does not take much effort. These plug-ins let you get up and running very quickly without needing to know how to code. As I mentioned earlier, the flow created here is certainly one use case, but more often than not I would imagine Salesforce administrators creating flows that work on their existing objects, such as Contact, Account or other custom objects.

A quick note on the license key: you will want to add your own license key to the code. You can get one free here for US address validation and here for Canadian address validation (each service requires a different license key).

The last thing I want to mention about the license key is that it is good practice not to hardcode it. I suggest creating a custom object in Salesforce with a key field, then restricting permissions on that field so it is not viewable by just anyone; this helps protect your key from unwanted usage or theft. At this point, we have the code for these two address validation plug-ins, and Service Objects will continue to build out more for our other services and operations. With that said, if there is one you would like to request, please let us know by filling out the form here and describing the plug-in you are looking for.


Salesforce Trigger Integration – Video Tutorial

Here at Service Objects, we are dedicated to helping our clients integrate our data quality services as quickly as possible. One of the ways we help is educating our clients on the best ways to integrate our services with whatever application they may be using. One such application where our tools are simple to implement is Salesforce.

Salesforce is, among other things, a powerful, extensible and customizable CRM. One of the advantages of Salesforce’s extensibility is that users can set up triggers to make external API calls. This is great for Service Objects’ customers, as it allows API calls to any of our DOTS web services and helps ensure their contact data in Salesforce is corrected and verified.

In the video below, we will demonstrate how to set up a trigger that will call our DOTS Address Validation 3 service whenever a contact is added to our list of contacts. See full transcript below.

 

 

Hello, and welcome to Service Objects video tutorial series. For today’s tutorial we’ll be setting up a trigger and a class in Salesforce that will call out to our DOTS Address Validation 3 web service. If you don’t already know, Salesforce is an extremely powerful, extensible and customizable CRM. One of the great things that we like about Salesforce here at Service Objects is the ability to call out to APIs so that the data going into your CRM can be validated and verified before it gets entered. This means that you can call out to any of our APIs from Salesforce. You can use this video as an overview of how to integrate any of our services, but for this specific example, we’ll be using DOTS Address Validation 3.

To participate in this tutorial, you will need the following: a Service Objects web service key (either a trial key or a production key), which you can sign up for free at www.serviceobjects.com; a developer account in Salesforce; and a working knowledge of Salesforce and Apex, the native programming language inside Salesforce. We will go ahead and get started.

To start off, one of the first things we’ll need to do is add the Service Objects endpoint into the list of allowed endpoints that Salesforce is allowed to contact within your developer platform. To do this, you can navigate here and type in remote site settings, or remote, and the remote site settings field will pop up. Here, you’ll see a list of all the websites that your Salesforce platform is allowed to contact. In my account here you can see I have ws.serviceobjects.com and wsbackup.serviceobjects.com. To add a new site, you’ll go and select new remote site. Give an appropriate name, and you will type in the URL here. You can see for this example I’m going to type in trial.serviceobjects.com which will only work if you have a trial license key. If you have a production key, you want to add ws.serviceobjects.com and wsbackup.serviceobjects.com as those will be the two primary URLs that you will be hitting with your production Service Objects account.

This trial.serviceobjects.com URL will only work with trial license keys. Click save and new, or just save. If we go back to our remote site settings, you can see that trial.serviceobjects.com was successfully added. Now that we have successfully added the Service Objects endpoint, we’ll want to add some custom fields on our Contact object to hold some of the values returned by our DOTS Address Validation 3 web service. To do that, we’ll scroll down and go to customize. In our example we’re using the Contact object, but you can add custom fields to whatever object is most appropriate for your application, and we’ll select add custom field to contacts. Once we are here, we will scroll down to the contact custom fields and relationships. You can see here I have several custom fields already defined, mostly DPV information and error information, which our class will parse out from the Address Validation 3 response.

We’ll add another field here for the sake of example. For this field we’re going to add the Is Residential flag that comes back from the Address Validation 3 service. For this we’ll select text, select next, and here we’re going to go ahead and enter an appropriate field name, which I have in my clipboard. We’re going to call it DotsAddrVal_IsResidential. If you hover over this little “i,” it will say this is the label to be used on displays, page layouts, reports, and list views; in other words, a friendly display label. You’ll want to name it something that works well in your workflow, but for our example we’re just going to name it this.

For length, we’re going to do length of 15, and for the field name we’re just going to call it AddrValIsResidential. This is the internal field name here. When you’re calling an internal field name, you’ll have to add a double underscore and C in the Apex class. We’ll see an example of that in the next piece of code that we’re going to add. We’ll select next. You’ll select the appropriate field level security here. Next again, and go ahead and click save. To add the actual code that will call out to our Address Validation 3 web service, we’ll scroll down here, go to develop Apex classes. I have already added the class to my developer console, but just for the sake of example, I’ll go ahead and delete it and re-add it. I already have the code in a text editor, so I’m just going to copy and paste that, and just go over the code and explain some key points of it.

Now that I have my code copied and pasted in, I’ll walk through some key elements of it. In the sample code that we have, we have some extra commented-out information here that gives you some resources like the product page and the developer guide. You can download this sample code along with this tutorial so you don’t have to pause the video and type it out. The first thing we do is instantiate some of the HTTP request objects in this WSByID method. We’ll pull back the contact that’s just been added, and so we’ll pull back all these fields: mailing street, mailing city, postal code, and state, as well as the custom DPV and error information fields that we’ve entered into Salesforce. To call an internal custom field that you’ve created in Salesforce, you’ll need to add this double underscore C at the end of it. We can see that we’ve done that here and in every other place where we reference these fields in the code.

Here, you can see we set the endpoint of the request to the trial URL endpoint, and this will point to the GetBestMatches JSON operation, so this will return a JSON formatted output. We’ve URL encoded all of the address information here, as you can see with this EncodingUtil.urlEncode. We’ll encode it to the UTF-8 standard. Another thing to note here is that you’ll have to put in your license key in this field here. Right now we just have it as a generic WS72 XXX, etc, but you’ll want to put in your specific license key. Here, we’ll send a request to the service, and if the response back is null, then that means there was something wrong with the primary endpoint, so we’ll come back here and check out our backup endpoint. For this example, it’s pointing to the same URL, the same trial endpoint. If you have a production key, you will want to point this primary URL to ws.serviceobjects.com, and this backup URL to wsbackup.serviceobjects.com. You’ll want to be sure to change both of the license keys to whatever your license key is.

After that failover configuration, we’ll see here we check the status code. If it’s equal to 200, we’ll go into processing the response from the service. We create some internal address fields here, and we’ll initialize the error response here to none, which would indicate that no error was returned from the service. What this does is it traverses through the JSON response of the service, and it finds the appropriate field. In this case, if it finds address1, it will set our initial address field to the address1 that was returned from the service. That will be the standardized and validated address information that is returned. We do that with all the fields that are pertinent to us: the DPV and DPV description, DPV notes description, as well as the IsResidential and error fields down here.

Here, you can see if we get a DPV score equal to 1. That indicates that the address is mailable, it is deliverable, and it is considered good by the USPS. This is the else-statement for the 200 code check here. If the 200 code wasn’t right, then we’ll set the error description to this generic error message. At the end of this, we’ll update the list of contacts, so we’ll go ahead and click save. Now that we have our TestUtil class made here, we’ll go ahead and scroll down, select Apex triggers. To add a new trigger, we’ll select developer console, select file, new, trigger. For a name, we’ll simply call it Test Trigger.

We’ll go down here and select the contact object. We have the little bit of code right here. I have the actual code in a text editor that will call the service, so I’ll just copy that in. Now that I have this copied, you can see here that whenever a contact is added, or before it’s inserted rather, it will call the class that we made which was called WS by ID, and it will send the contact to it. To save this, just simply go to file and save. Hit refresh. We can see we now have a test trigger here. Now, to add a contact and to test out our new trigger, we’ll simply go up here, select contacts. In recent contacts, you can see here we don’t have any, so let’s go ahead and add one. We’ll add in a fake person by the name of Jane Doe. Go down here to the mailing street information, and we’ll enter in an address. For this example, we’re just going to use our Service Objects office address. We’ll put some typos in there so you can see the standardization and validation that the Service Objects web service does.

We’ll do 27 East Coat. That’s suite number 500. We’ll do Sant Barb for Santa Barbara, and CA and 93101. We’ll go ahead and save the contact. You can see here that we still have the old values, and that’s because Salesforce doesn’t immediately call the outside APIs; it queues them up a little bit. But if we go and select Jane Doe again, we can see that now we have a standardized address. In our DPV description, we have a message that indicates, “Yes, this record is a valid mailing address.” For the DPV score, we get a score of one. The “Is Residential” flag says false, meaning this is a business address. Again here, we see the validated address, the USPS standardized version of the address, which is 27 East Cota Street, Suite 500, as well as the validated city and zip-plus-four information.

This concludes our tutorial for how to add a trigger and a class that will call out to our Service Objects web service. If you have any questions or any requests to other tutorials, please feel free to let us know at support@serviceobjects.com. We’ll be happy to accommodate.

 

The Difference Between Customer Experience And User Experience

There are a lot of buzzwords thrown around in the customer sphere, but two of the big ones relate to experiences: customer and user. Although CX and UX are different and unique, they must work together for a company to have success.

User experience deals with customers’ interaction with a product, website, or app. It is measured in things like abandonment rate, error rate, and clicks to completion. Essentially, if a product or technology is difficult to use or navigate, it has a poor user experience.

Customer experience, on the other hand, focuses on the general experience a customer has with a company. It tends to exist higher in the clouds and can involve a number of interactions. It is measured by net promoter score, customer loyalty, and customer satisfaction.

Both customer experience and user experience are incredibly important and can’t truly exist and thrive without each other. If a website or mobile app has a bad layout and is cumbersome to navigate, it will be difficult for customers to find what they need and can lead to frustration. If customers can’t easily open the mobile app from an email on their phone, they likely won’t purchase your product. Likewise, if the product layout is clunky, customers likely won’t recommend it to a friend no matter how innovative it is. User experience is a huge part of customer experience and needs to play a major role when thinking like a customer.

Although UX and CX are different, they need to work closely together to truly be successful. Customer experience representatives should be working alongside product engineers to make sure everything works together. By taking themselves through the entire customer journey, they can see how each role plays into a customer’s overall satisfaction with the product and the company. The ultimate goal is a website or product that beautifully meshes the required elements of navigation and ease with the extra features that will help the brand stand out with customers.

When thinking about customer experience, user experience definitely shouldn’t be left behind. Make both unique features an essential part of your customer plan to build a brand that customers love all around.

Reprinted with permission from the author. View original post here.

Author Bio: Blake Morgan is a customer experience futurist, author of More Is More, and keynote speaker.

Go farther and create knock-your-socks-off customer experiences in your organization by enrolling in her new Customer Experience School.

DOTS Address Validation vs. Google Maps: What’s the Difference?

Many of us use Google Maps to quickly verify that a location exists or to get an idea of what that location looks like. However, there is a common misconception that it will validate that the address found is correct and deliverable. So although Google Maps is an extremely powerful lookup tool, it will not validate addresses, nor does it include the robust features and support included with our DOTS Address Validation-US service. To jumpstart your understanding and dispel some common misconceptions, let’s explore some of the differences between our Address Validation service and Google Maps.

What does DOTS Address Validation do?

Although Service Objects can verify and validate many contact data points such as name, phone and email, our specialty is address validation. For us, addresses consist of business names, address fields, cities, states, and postal codes. Our USPS CASS Certified address validation service is designed to improve internal business mail processes and delivery rates by standardizing contact records against USPS data.

It’s all in the documentation

Our Developer Guide is a great place to start for an in-depth breakdown of the service and response features for Address Validation. It is extremely useful while integrating and can be used as a reference guide as well when learning more about the information each output field conveys.

24/7 Support when your business needs it most

With the amount of information provided in the results, it is common to have questions along the road to understanding each of the outputs. Our team is here to help you in this process and provide 24/7 technical support. We can be reached by phone (805-963-1700), email and even live chat on our website. “Best Practice” and “Step by Step Tutorial” blogs are also posted on a regular basis.

Deliverability is key

One of the biggest misconceptions about Google Maps and Address Validation is the ability to determine DELIVERABILITY. Beyond correcting and standardizing an address, our advanced algorithms and wide-reaching data sources allow us to determine if an address is deemed deliverable by the United States Postal Service. The service response will contain a Delivery Point Validation (DPV) indicator of 1-4 that can be used based on specific business logic. A DPV score of 1 indicates a perfectly deliverable address whereas a score of 2-4 indicates missing or incorrect inputs in the address field. The corrected address, component fields, and extra information such as the DPV indicator, residential delivery indicator (RDI), vacancy flags and more will be included and can be leveraged in your workflow.
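As a sketch of that business logic, here is a hypothetical Java routine branching on the DPV indicator (the message strings are illustrative, not service output):

```java
// Illustrative sketch: branching business logic on the DPV indicator
// (1 = deliverable; 2-4 = something in the input is missing or incorrect).
public class DpvRouting {

    public static String describe(int dpv) {
        switch (dpv) {
            case 1:  return "Deliverable: safe to mail as returned";
            case 2:
            case 3:
            case 4:  return "Not deliverable as entered: review missing/incorrect inputs";
            default: return "Unexpected DPV value";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(1));
        System.out.println(describe(3));
    }
}
```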

More importantly, the locations that Google Maps marks aren’t necessarily mail-deliverable. There is a lot of leniency within the Google algorithms that allows for guesswork. Although Google can put a pin on the map for a given input address, that does not mean a postal carrier will deliver mail to that location. Conversely, if DOTS Address Validation marks a location as invalid, you can be sure you are getting genuine and accurate information.

When is Google Maps useful for address lookup?

With all of that said, Google Maps should not be discounted in its ability to investigate a location. If the image data was captured recently it can be used to understand why our service marked an address the way it did. A prime example of this is an address marked as having a “street number out of range.” By checking Google Maps data and cross-referencing our service response, more light can sometimes be shed about that address location.

While you can use Google Maps to potentially confirm if a location exists, it is imperative to use robust validation tools like DOTS Address Validation to ensure any mail your business sends can actually be delivered, saving time and money.

 

If you have any questions about validating, verifying or appending addresses, or any other contact data points including name, phone, email and device, feel free to contact us.

Leverage Service Objects’ Industry Expertise to Reach Your Data Quality Goals

At Service Objects, we are fully committed to our customers’ success, which is a main reason why over 90% of our business comes from repeat customers. With over 16 years of experience in contact validation, we have accumulated a broad base of industry expertise, created numerous best practices, and are considered thought leaders in global data validation.

It is because of this knowledge that some of our customers turn to us when they lack the internal resources to carry out their data quality project. Whether it is assistance in implementing a data quality initiative, asking for customization to our products to meet specific business needs or help integrating our solutions into Marketing or Sales Automation platforms, Service Objects’ Professional Services can assist your business in achieving optimal results on your project in a quick and efficient manner.

Here are just three of the ways we can help:

Integration Programming and Code Support

If your team is overwhelmed or lacks the technical resources to integrate data quality solutions into your existing systems, Service Objects can step in and quickly get your project moving. We provide your team with the technical knowledge, support, and best practices needed to implement your chosen solution in a timely fashion and within your budget.

CRM or Marketing Automation Platform Integration

We have created cloud connectors for the leading sales and marketing platforms and have developed extensive knowledge on how these systems work with our data quality solutions. We enable your organization to implement best practices, allowing your business to verify, correct and append contact data at the point of entry. The result is your contact database contains records that are as genuine, accurate and up-to-date as they can possibly be.

Custom Services

Our engineers have years of experience creating, implementing and supporting data quality services in many different programming languages. As a result, we can customize our existing services to solve a challenge that is specific to your business. Our proactive support services team will work with your technical team to refine, test and implement the custom service to work for your business’ specifications.

These are just some of the ways we can help. For more information about how you can leverage our industry expertise and technical knowledge, contact us.


Service Objects ColdFusion Integration Tutorial

As part of our commitment to making our data quality solutions easy to integrate, our Application Engineering team has developed a series of tutorials on how to integrate our services.  The series highlights various programming languages, with this tutorial exploring the “how-to’s” of applying our services using ColdFusion.

ColdFusion is a scripting language that has been around since 1995. It was created to make development of CGI scripts easier and faster.  ColdFusion has unique aspects, including its native ColdFusion Markup Language (CFML for short), which allows HTML-style tags for programming. Like most things in the tech world, it can draw a lot of polarized opinions, where some are ardent supporters, and others, less than enthusiastic fans. If you fall in the supporter camp and want to learn how to call a web service with ColdFusion, that is where our experts can step in and help.

To get started you will need a ColdFusion IDE (we’re using ColdFusion Builder 3) and a Service Objects’ License key. We’re using one for DOTS Lead Validation but you can follow along with your service of choice.

Project Setup

The first step is to launch your IDE and select an appropriate workspace for your project. Next, we will create a new project.

Select next for a blank template and then click next again.  On the following screen give your project an appropriate name and click finish.

Congratulations! You created a brand new ColdFusion project. Now it’s time to add some code. For starters, we’ll want to add a form and elements to initialize our form inputs so that we can create a sample page to input data to send to our web service. This likely won’t be what you will want to do in a live environment, but this is for demonstration purposes.

The DOTS Lead Validation service that we’re using has quite a few inputs so this may take a while. Once you are finished it should look like the following:

Making the Web Service Call

The next bit of code that we will add is to make the actual HTTP GET call to the Service Objects’ web service. Let’s use the CFML tags to make the actual web service call.

After the code makes the call to the trial.serviceobjects.com endpoint, we perform a failover check in the code. This failover check and the try catch blocks that it is nested in will help ensure that your integration of our web service will continue to work uninterrupted in the event that the primary web service is unavailable or not responding correctly.

The primary endpoint should be pointing to ws.serviceobjects.com and the backup endpoint should be pointed to wsbackup.serviceobjects.com.

Displaying the Results

Now that you have successfully called the web service, you will obviously want to do something with the results. For demonstration purposes we will simply display the results to the user.  You can use the code snippet below to display.

If you are having trouble figuring out how a particular output is mapped in the ColdFusion response, then you can use the <cfdump var=""> tag to dump the outputs onto the screen. This should allow for easy troubleshooting.

Now that our CFML is all set up, let’s see an example input and output from the service. Below is sample lead information that you might encounter:

And here is some of the response that DOTS Lead Validation will return:

The DOTS Lead Validation service can return a multitude of information about your lead.  To download a trial key for any of our 23 contact validation solutions, please visit https://www.serviceobjects.com/products

P.S.  Here is the full ColdFusion script page in case you need it to get up and running.

 

Service Objects’ Application Engineers: Helping You Get Up and Running From Day 1

At Service Objects, one of our Core Values is Customer Service Above All. As part of this commitment, our Application Engineers are always available to answer any technical questions from prospects and customers. Whether users are beginning their initial investigation or need help with integration and deployment, our Engineers are standing by. While we continually make our services as easy to integrate as possible, we’d like to touch on a few common topics that are particularly helpful for users just getting started.

Network Issues

Are you experiencing networking issues while making requests to our web services? This is a very common problem: outbound requests are often limited by your firewall, and a simple rule update can solve the issue. When matters extend beyond simple rule changes, we are more than happy to schedule a meeting between our networking team and yours to get to the root cause and solve the issue.

Understanding the Service Outputs

Another common question revolves around the service outputs, such as how they should look and how they can be interpreted. From a high level, it is easy to understand what the service can provide, but when it comes down to parsing the outputs, it can sometimes be a bit trickier. Luckily there are sets of documentation for every service and each of their operations. Our developer guides are the first place to check if you are having trouble understanding how individual fields can be interpreted and applied to your business logic. Every output has a description that provides insight into what that field means. Beyond the documentation, our Application Engineering team is available via multiple channels to answer your questions, including email, live chat, and phone.

Making the Move from Development to Production

Eventually everyone who moves from being a trial user to a production user undergoes the same steps. Luckily for our customers, moving code from development to production is as easy as changing two items.

  • The first step is swapping out your trial license key for a production key.
  • The second step is to point your web service calls from our trial environment to our production environment. Our trial environment mirrors the exact outputs that you will find in production so no other code changes are necessary.
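The two-item move above can be sketched as a small configuration switch. The hosts follow the trial and production endpoints named elsewhere on this page; the keys and service path are placeholders.

```python
# Environments for the trial-to-production move: only the host and the
# license key change. Key values and the path are placeholders.
ENVIRONMENTS = {
    "development": {
        "host": "https://trial.serviceobjects.com",
        "license_key": "YOUR-TRIAL-KEY",
    },
    "production": {
        "host": "https://ws.serviceobjects.com",
        "license_key": "YOUR-PRODUCTION-KEY",
    },
}

def build_url(env: str, path: str) -> str:
    # Outputs mirror each other between environments, so nothing else
    # in the calling code needs to change.
    cfg = ENVIRONMENTS[env]
    return f"{cfg['host']}{path}?LicenseKey={cfg['license_key']}"

print(build_url("production", "/AV3/api.svc/GetBestMatchesJson"))
```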

We understand that, even though we say it is easy, making the move to production can be daunting. That is why we are committed to providing your business with 24/7/365 technical support. We want the process to go as smoothly as possible and members of our team are standing by to help at a moment’s notice.

We have highlighted only a few broad cases that we have handled throughout our 16 years of providing genuine, accurate, and up-to-date data validation. Many technical questions are unique and our goal is to tackle them head on. If a question arises during your initial investigation, integration, move to production, or beyond, please don’t hesitate to contact us.

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

API Integration: Where We Stand

Application programming interfaces, or APIs, continue to be one of the hottest trends in application development, growing in usage by nearly 800% between 2010 and 2016 according to a 2017 survey from API integration vendor Cloud Elements. Understandably, this growth is fueling an increased demand for API integration, in areas ranging from standardized protocols to authentication and security.

API integration is a subject near and dear to our hearts at Service Objects, given how many of our clients integrate our data quality capabilities into their application environments. Using these survey results as a base, let’s look at where we stand on key API integration issues.

Web service communications protocols

This year’s survey results bring to mind the old song, “A Little Bit of Soap” – because even though the web services arena has become dominated by representational state transfer (REST) interfaces, used by 83% of respondents, a substantial 15% still use the legacy Simple Object Access Protocol (SOAP) – a figure corroborated by the experiences of our own integrators.

This is why Service Objects supports both REST and SOAP across most if not all of our services. We want our APIs to be flexible enough for all needs, we want them to work for a broad spectrum of clients, and we want the client to be able to choose what they want, whether it is SOAP or REST, XML or JSON.  And there are valid arguments for both in our environment.

SOAP is widely viewed as more cumbersome to implement than REST; however, tools like C# in Visual Studio can do most of SOAP’s hard work for you. Conversely, REST, being URL HTTP GET focused, does carry a higher risk of creating broken requests if care is not taken.  Addresses, a key component in many of our services, often contain URL-breaking special characters.  SOAP inherently protects these values, while a REST GET call does not encode them automatically and can create broken URLs. For many clients, it is less about preference and more about the tools available.
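To illustrate the point about URL-breaking characters, here is a quick Python sketch using the standard library. The endpoint shown is a placeholder.

```python
from urllib.parse import quote

# Addresses routinely contain "#", "&", and spaces, all of which break a
# raw GET URL: "#" truncates the request and "&" splits the query string.
address = "123 Main St #4 & Rear"

# Placeholder endpoint for illustration only.
raw_url = "https://example.com/api?Address=" + address          # broken
safe_url = "https://example.com/api?Address=" + quote(address)  # percent-encoded
print(safe_url)
```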

Webhooks: The new kid on the block

Webhooks are the new approach that everyone wants, but few have implemented yet. Based on posting messages to a URL in response to an event, they represent a straightforward and modular alternative to polling for data. Citing figures from Wufoo, the survey notes that over 80% of developers would prefer this approach to polling. We agree that webhooks are an important trend for the future, and we have already created custom ones for several leading marketing automation platforms, with more in the works.

Ease of integration

In a world where both applications and interfaces continue to proliferate, there is growing pressure toward easier integration between tools: using figures cited from SmartBear’s State of the APIs Report 2016, Cloud Elements notes that this is a key issue for a substantial 39% of respondents.

This is a primary motivation for us as well, because Service Objects’ entire business model revolves around having easy-to-integrate APIs that a client can get up and running rapidly. We address this issue on two fronts. The first is through tools and education: we create sample code for all major languages, how-to documents, videos and blogs, design reference guides and webhooks for various CRM and marketing automation platforms. The second is a focus on rapid onboarding, using multiple methods for clients to connect with us (including API, batch, DataTumbler, and lookups) to allow easy access while APIs are being integrated.

Security and Authentication

We mentioned above that ease of integration was a key issue among survey respondents – however, this was their second-biggest concern. Their first? Security and authentication. Although there is a move toward multi-factor and delegated authentication strategies, we use API keys as our primary security.

Why? The nature of Service Objects’ applications lends itself well to using API keys for security because no client data is stored. Each transaction is “one and done” in our system: once our APIs perform validation on the provided data, it is immediately purged. And of course, Service Objects supports and promotes SSL over HTTPS for even greater protection.  In the worst-case scenario, a fraudster who gained someone’s key could run transactions on that person’s behalf, but they would never have access to the client’s data and certainly would not be able to connect the dots between the client and their data.

Overall, there are two clear trends in the API world – explosive growth, and increasing moves toward unified interfaces and ease of implementation. And for the business community, this latter trend can’t come soon enough. In the meantime, you can count on Service Objects to stay on top of the rapidly evolving API environment.

Testing Through Batches or Integration: At Service Objects, It’s Your Choice

More often than not, you have to buy something to really try it.  At Service Objects, we think it makes more sense to try before you buy.  We are confident that our service will exceed expectations and are happy to have prospects try our services before they spend any money on them.  We have been doing this from the day we opened our doors.  With Service Objects, you can sign up for a free trial key for any of our services and do all your testing before spending a single cent.  You can learn about the multiple ways to test drive our services from our blog, “Taking Service Objects for a Test Drive.” Today, however, I am focusing on batch testing and trial integration.

Having someone go through their best explanations to convey purpose or functionality can be worthwhile but, as the saying goes, a picture is worth a thousand words.  If you want to know how our services work, the best way to see them is simply try them out for yourself.  With minimal effort, we can run a test batch for you and have it turned around within a couple hours…even less time in most cases.  Another way we encourage prospects to test is by directly integrating our API services into their systems.  That way you see exactly how the services behave and get a better feel for our sub-second response times.  The difference between a test batch and testing through direct integration is the test batch will show the results and the test through integration will demonstrate how the system behaved to deliver results.

TESTING THROUGH BATCHES

Test batches are great.  They give you an opportunity to see the results from the service first hand, including all the different fields we return.  Our Account Executives are happy to review the results in detail and you always have the support of the Applications Engineering team to help you along.  With test batches, you can quickly see that a lot of information is returned regardless of the service you are interested in.  Most find it is far more information than expected, and often clients find that the additional information helps them solve other problems beyond their initial purpose.  Another aspect that becomes clearer is the meaning of the fields: you get to see the fields in their natural environment and obtain a better understanding than the strict academic definitions provide.  Lastly, it is important to see how your own data fares through the service, and it is far more powerful to show how your data can be improved rather than just discussing it conceptually.  That is where our clients get really excited about our services.

TESTING THROUGH INTEGRATION

Testing through integration is a solid way to gain an understanding of how the service behaves and its results.  It is a great way to get a feel for the responses that come back and how long it takes.  More importantly, you can identify and fix issues in your process long before you start paying for the service.  Plus, our support team is here to assist you through any part of the integration process.  Our services are built to be straightforward and simple to integrate, with most developers completing them in a short period of time.  Regardless, we are always here to help.  Although we highly recommend prospects run their own records through the service, we also provide sample data to help you get started.  The important part is you have a chance to try the service in its environment before making a commitment.

Going forward with either of these approaches will quickly demonstrate how valuable our services are. Even more powerful is when you combine the two testing procedures with your own data for the best understanding of how they will behave together.

With all that said, if you’re still unsure how to best begin, just give us a call at 805-963-1700 or sign up for a free trial key and we’ll help you get started.


Introducing Service Objects New Open API

Service Objects is committed to constantly improving the experience our clients and prospective clients have with our data quality solutions. This desire to ensure a great experience has led us to revamp and redesign our lookup pages. These pages are easy to use and give all the information necessary for integrating and using our API in your application. This blog presents some of the key features.

Sample Inputs

One request we often receive is a quick sample lookup that will show our customers and prospects what to expect when calling our API. We are implementing just that in our new lookup pages.

In the example below, we are using our Lead Validation International lookup page. If the “Good Lead” or “Bad Lead” link is selected, sample inputs will be filled into the appropriate fields. For this example we’ve selected “Good Lead.”

We implemented this option so that users can get a quick idea of what types of inputs our services accept and what type of outputs the service will return. The form simply needs a license or trial key and it will return the validated data.

All Operations and Methods

Another benefit of these new pages is that they concisely and easily display all the methods available for an API along with all the potential HTTP methods that can be used to interact with the service.

If you want a JSON or XML response, select the appropriate GET operation and you will have everything you need to make a successful request to the service. If you want to make a POST request to the service, simply select the post operation and it will detail all that you need to have your data validated in your method of choice.
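As a sketch of how the JSON and XML GET operations differ only by operation name, here is a small Python helper. The operation names and host are illustrative, modeled on the JSON-suffix pattern described here, and are not guaranteed to match any specific service.

```python
from urllib.parse import urlencode

def build_request(response_format: str, params: dict) -> str:
    # JSON operations conventionally carry a "Json" suffix; the bare
    # operation name returns XML. Host and path are placeholders.
    operation = "GetBestMatchesJson" if response_format == "json" else "GetBestMatches"
    return ("https://trial.serviceobjects.com/AV3/api.svc/"
            + operation + "?" + urlencode(params))

print(build_request("json", {"Address": "27 E Cota St", "LicenseKey": "YOUR-KEY"}))
```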

Detailed Requests and Responses

Arguably the most important pieces of information for a developer looking to integrate with an API are how to make a request to the service and what type of response to expect. These new lookup pages provide that information in a very accessible way, as shown below.

 

After making a sample request, you will see the URL used to fetch the validated data, the actual response from the web service, and the response headers that the service provides. These are all vital pieces of information that will have you up and running in no time. The new pages also list what type of response object will be returned from the service. This can be seen below the response body and headers.

Additional Resources

The page also offers up extra pieces of information that will assist with integration. The link to our developer guides, WSDL (for SOAP integrations) and host paths can be found on the page as well. These resources will help you have your application up and running as quickly as possible.

Feel free to sign up for a Service Objects trial key to test with our new lookup pages!

Getting Started with Service Objects

Service Objects has worked hard to make testing our APIs as simple as possible, and this in-depth guide to getting started will have you prepped for whenever you are ready. To get the ball rolling, simply fill out the “Free API Trial Key” form for the service you are interested in testing. This form is located on the right side of each of our product pages.

If you are an Engineer/Programmer and it’s your first time signing up, you will receive an email confirming your registration.  Shortly after, you will receive your Welcome email with the Trial Key and testing information. The Welcome email can be broken down into four main parts: the sample code downloads section, our detailed developer guides, sample input data downloads, and the service’s endpoint. All this information will help you get started testing quickly and smoothly.

Sample Code – We have made it our mission to provide sample code in a majority of the most widely used programming languages. This includes Ruby on Rails, Java, Python, NodeJS, C#, and many others. If your desired programming language is missing from our repository, please feel free to reach out to us. We are more than happy to provide integration advice and impart our best practices and procedures.

Within each set of sample code you will find our recommended methods of obfuscating your license key, setting request timeouts, response/error handling, and failover logic. Applying these methodologies to your code will help to ensure security and service up time.

Developer Guide – As the name implies, this is where developers (and others) can go to get into the nitty gritty of the service. This is where you can find detailed explanations for each of the inputs and outputs. The fastest way to understand the service outputs is to approach the developer guide with a clear understanding of your business logic. With your goal in mind you can make note of the various note codes, description codes, scores, and other outputs then handle the service response accordingly.

Sample Input Data – Need a data set to test with? We provide input files with records that match the operation’s input parameters. Running these records will result in varying service responses. These responses can be used to gain an understanding of what will be returned by the service and how the fields can be leveraged to fit your business’s needs.

Service Endpoint – The Service Objects DOTS web services allow you to make both GET and SOAP/POST requests. By clicking on the service path link in your welcome email you will be directed to the main service landing page for the particular service you signed up for. From there you can click on your preferred operation, plug in data, add your license key and click invoke. These service landing pages act as both a quick lookup tool as well as an informative page that shows the various methods of calling the service. The query string and path parameter endpoints are described on these pages.  If you prefer to consume a file and have all your classes and clients auto-generated we also provide a WSDL.

Additionally, if you prefer to have us run the results for you, you can also upload your list (up to 500 records) and we will send the results back to you.

Now that you’ve read how easy it is getting started with Service Objects’ APIs, we look forward to assisting with your data needs!


Follow This Checklist to Ensure a Smooth API Integration

There can be a lot of “i’s” to dot and “t’s” to cross when integrating with any API.  Here at Service Objects, we certainly recognize there can be a lot on the to-do list when starting an integration project. Integrating with our APIs is pretty straightforward, but we have developed a quick checklist that will ensure it is as easy as possible to follow our best practices.

Failover, Failover, Failover

Service Objects prides itself on having 99.999% server uptime. However, in the unlikely event that we do experience an issue with one of our servers, implementing a failover configuration is arguably the most important aspect of integrating with any of our APIs. Proper failover configuration will ensure that your application continues to operate unhindered in an event that the primary Service Objects web server is unavailable or not responding as expected. Below is an example (using C# syntax) of proper failover configuration.

The example above is for our DOTS Address Validation 3 – US service, but this scenario will be relatively similar for our other services. The main thing to note is that the primary call is pointed towards ws.serviceobjects.com and the backup call within the catch statement is pointed to wsbackup.serviceobjects.com.  In the event that the primary web server is unresponsive, producing strange errors or behaving abnormally, then the backup URL will be called and your application will continue to function as expected.  Another important item to note is that proper failover will check the web service for an error response with a TypeCode of 3. This indicates that a fatal error has occurred in the web service and that the backup URL should be called. If you are using one of our older services, then the error object that service will return may be different (there will be only “Number” and “Desc” fields present in the Error object) and you will need to check for a Number value of 4 to indicate a fatal error.
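For readers not working in C#, the same failover pattern can be sketched in Python. The fetch function is injected here so the logic is testable; real code would issue an HTTP GET against each URL and parse the JSON response. The service path is a placeholder.

```python
# Primary and backup hosts as described above; the path is illustrative.
PRIMARY = "https://ws.serviceobjects.com/AV3/api.svc/GetBestMatchesJson"
BACKUP = "https://wsbackup.serviceobjects.com/AV3/api.svc/GetBestMatchesJson"

def call_with_failover(fetch, primary=PRIMARY, backup=BACKUP):
    """fetch(url) -> parsed response dict; injected so the logic is testable."""
    try:
        response = fetch(primary)
        error = response.get("Error") or {}
        # A TypeCode of 3 signals a fatal service error: fail over.
        if error.get("TypeCode") == "3":
            return fetch(backup)
        return response
    except Exception:
        # Network failure or malformed response: also fail over to the backup.
        return fetch(backup)
```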

URL Encoding

Properly encoding the URL you are using is also an item you will want to place on your to-do list for integration. If you are using a path parameter to access our services, then you’ll need to use what’s called RFC 3986 encoding to encode your URLs. If you are using query string parameters to hit our services, then you can use RFC 3986 encoding or the older RFC 2396 encoding. What do both of those RFC standards mean? In short, if you are using a query string URL, spaces can acceptably be replaced with “+” in the URL. If you’re using a path parameter URL, then spaces have to be encoded with the hex equivalent %20. Using the RFC 3986 standard is generally the safer bet, since it is the newer and more widely accepted of the two.
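In Python's standard library, the two behaviors map directly onto `quote` and `quote_plus`:

```python
from urllib.parse import quote, quote_plus

address = "27 E Cota St Ste 500"
path_param = quote(address)        # RFC 3986 style: spaces become %20
query_param = quote_plus(address)  # query-string style: spaces become +
print(path_param)   # 27%20E%20Cota%20St%20Ste%20500
print(query_param)  # 27+E+Cota+St+Ste+500
```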

Logging (we’re not talking cutting down trees)

We also highly recommend implementing some code that will log the requests and responses that our services provide. For the sake of customer privacy, we do not log information on our end. Logging can be a big help when troubleshooting your code. It can also be a big help to us if you ever need technical support. Since we do not log customer requests, it is very helpful for us to have the exact inputs or URL used when contacting our services. This will allow us to provide the stellar customer support that comes with integrating with a Service Objects’ API. If you do run into any issues, please send us your request and response as this will help us get to the bottom of whatever issue you are encountering as quickly as possible.

Null or Nothing Strings

Our output structure for most of our services is consistent and won’t change without notice.  For the most part, our structure will stay the same and we’ll return the elements in our output structure as blank strings as opposed to null objects. That being said, we still highly recommend that your application performs null checks before using the response or any of the nested elements from our service. Even though the output structure for our services is very consistent, appropriately null checking our response can save you and your application a lot of headaches if something unexpected occurs.
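One way to make such null checks painless is a small helper that walks the response defensively; the field names below are illustrative, not the service's exact schema.

```python
def safe_get(response, *keys, default=""):
    """Walk nested dict keys, returning default on any missing or null step."""
    current = response
    for key in keys:
        if not isinstance(current, dict):
            return default
        current = current.get(key)
        if current is None:
            return default
    return current

# A response where a nested element came back null instead of a dict.
resp = {"Addresses": None}
print(safe_get(resp, "Addresses", "Address1"))  # "" rather than an AttributeError
```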

URLs, IP Addresses and Whitelisting

Some clients or prospects will ask us if they can access our web services by IP address or whitelist the IP address in their firewall. Well, you certainly can, but we highly recommend whitelisting or hitting our web service by the domain URL. Most modern firewalls will support whitelisting domains by names and we highly recommend utilizing this. The reason being that IP addresses can and will change. We have a lot of backups and redundancy set up in our web services that will go unnoticed if you are accessing the service via a domain. If you absolutely need to hit our service by IP address or whitelist them like that, please reach out to our support team and we will be happy to make recommendations on best practices and provide you the information you will need.

Use Cases

If you are curious to know if you are understanding the results correctly or want to know if you are using the right operation to get the functionality you want, our developer guides can help provide more clarity about certain inputs and outputs from the service of choice.

Conclusion

As discussed, integrating with any API can bring up a lot of questions. If this list didn’t cover your questions or particular use case, please feel free to send your requests or questions to support@serviceobjects.com and we will be happy to help you get up and running with any of our 24 data validation APIs.

Service Objects’ Average Response Time Ranks Higher Than Google…

When comparing SaaS providers, one of the key metrics that is often measured is the service response time. That response time, or latency, is a succinct measurement of the approximate time it will take for a response to be returned for a given query. Often, the major challenge in reducing response time is determining which service component is adding latency. At Service Objects, we are continually scrutinizing application optimization, network congestion, and monitoring real-time, real-world API calls to ensure our SaaS response times are second to none.

Our goal is to exceed availability and response times in our industry as a SaaS provider. We’ve invested in bank-grade infrastructure and security, with data centers operating throughout the US. All of our databases are operating on the latest flash storage technology, returning query responses in less than 0.1s, and we are constantly enhancing and expanding our web server application pools. Bundle all of that with robust VMware clusters, multiple layers of network redundancy, and one of the industry’s only financially backed service level agreements of 99.999% uptime. The result: we don’t just achieve industry-standard availability and response times, we’ve raised the bar.

Third-party monitoring providers have ranked many of our DOTS Web Services average response times within the same echelon as leading tech companies, such as Apple and Google. In many cases, we are better than some of the biggest and well-known technology companies. Just how fast are we? If you are connecting from Los Angeles, our DOTS Address Validation service hosted in San Jose, CA boasts an incredible 0.089s response time. If your business is connecting from New York, we have you covered, with a lightning fast average response time of 0.27s from our New Jersey data center.

Service Objects recognizes how important it is to our customers to have little to no downtime. We are so committed to achieving this goal that we made Outstanding Network Performance one of our Core Values. We are continually monitoring our servers and measuring our response times, and as the graphic below illustrates, the results speak for themselves.

 

What Has Changed in Customer Service?

Every week, I’m asked, “What is changing in customer service?” The expected answer is that I’ll talk about all the new ways customer service and support is conducted – and I do. There are self-service solutions that include robust frequently asked questions and video. There is social media customer service with multiple channels like Facebook and Twitter. And there is AI (Artificial Intelligence) that the experts – myself included – say will potentially change everything. Yes, there is a lot that is changing about how we deliver customer service, so I’m about to make a bold statement. If you look at what customer service is, it is the same as it was fifty years ago. And it will be the same fifty years from now. Customer service is just a customer needing help, having a question answered or a problem resolved. And, in the end, the customer is happy. That’s it. When it comes to the customer’s expectations, they are the same. In other words:

Nothing has changed in customer service!

Okay, maybe it’s better said a different way. When it comes to the outcome of a customer service experience, the customer’s expectations haven’t changed. They just want to be taken care of.

That said, there are different ways to reach the outcome. What has changed is the way we go about delivering service. We’ve figured out how to do it faster – and even better. Back “in the day,” which wasn’t that long ago – maybe just twenty or so years ago – there were typically just two ways that customer service was provided: in person and over the phone. Then technology kicked in and we started making service and support better and more efficient.

For example, for those choosing to focus on the phone for support, there is now a solution that lets customers know how long they have to wait on hold. And sometimes customers are given the option of being called back at a more convenient time if they don’t have time to wait. We now have many other channels through which customers can connect with us. Beyond the phone, there is email, chat, social media channels and more.

So, as you are thinking about implementing a new customer service solution, adding AI to support your customers and agents, or deciding which tools you want to use, remember this:

The customer’s expectations haven’t changed. They just want to be taken care of, regardless of how you go about it. It starts with someone needing help, dealing with a problem, upset about something or just wanting to have a question answered. It ends with that person walking away knowing they made the right decision to do business with you. How you get from the beginning to the end is not nearly as important as how they feel when they walk away, hang up the phone or turn off their computer.

It’s really the same as it’s always been.

Reprinted from LinkedIn with permission from the author. View original post here.

Shep Hyken is a customer service expert, keynote speaker and New York Times bestselling business author. For information, visit www.hyken.com. For information on The Customer Focus™ customer service training programs, go to www.thecustomerfocus.com. Follow on Twitter: @Hyken


C# Integration Tutorial Using DOTS Email Validation

Watch this video and hear Service Objects’ Application Engineer, Dylan, present a 22-minute step-by-step tutorial on how to integrate an API using C#. To participate in this tutorial, you will need the following:

  1. A basic knowledge of C# and object-oriented programming.
  2. Visual Studio or some other IDE.
  3. Any DOTS Validation Product Key. You can get free trial keys at www.serviceobjects.com.

In this tutorial, we have selected the DOTS Email Validation web service. This service performs real-time checks on email addresses to determine if they are genuine, accurate and up-to-date. The service performs over 50 tests on an email address to determine whether or not it can receive email. If you are interested in a different service, you can still follow along with your service of choice. The process will be the same, but the inputs, outputs, and objects we deal with in the integration video will differ slightly.

Enjoy.

Why Providing Feedback is Important to Improving Software

Developing user-friendly software and an amazing user experience requires listening to users. As software developers, we rely on user feedback to continuously improve our data validation APIs. As a user, you may not feel compelled to provide feedback to software developers, but if you value a great experience, your role is an essential one.

Examples of User Feedback Collection Mechanisms

You’ve likely encountered user feedback collection features in various forms. For example, Microsoft Office prompts you to click a happy or sad face to share suggestions. This simple menu is available from the File tab, allowing you to tell Microsoft what you like, what you dislike, and what you suggest.

You may also find that user feedback is naturally baked into installed software, accessible via the Help menu.

If you’re a .NET developer, you’re probably familiar with the Visual Studio Experience Improvement Program. The little helper icon in the top right of the program has become the ubiquitous symbol for feedback, client support, and general help desk tasks. With just a click of a button, users can instantly share their experiences with the software.

What about when an application crashes? You’ll often be prompted to send a crash/bug report to the developer. These reports may even contain hardware/software configurations, usage patterns, and diagnostic information helpful to developers — and all you need to do is click a button.

These are but a few of the many ways that modern applications send information back to the software company. Obtaining user feedback as well as any crash/bug report information is crucial to the development of a piece of software or service. This information helps software developers isolate where and why problems occurred, leading to product updates.

User Feedback Challenges: Privacy Concerns

But what about “Big Brother” and other potential snoops? With the various means of providing feedback and the different collection schemes (opt-in or automatic), privacy concerns are valid. With these data collection tools baked into the software, it is hard to know how much information is actually being sent back to the company. It could range from a harmless crash/bug report or diagnostic information to controversial GPS breadcrumb data.

Many people don’t want other entities collecting data on them or analyzing their usage patterns. While not all software is intentionally spying on you, it would be nice to know exactly what is collected. More often than not, it’s unclear what’s collected and how it’s used. This lack of transparency concerning data collection inevitably leads to unease, which is why many users opt not to participate in “Experience Improvement Programs” and other data collection schemes.

 

Another challenge for developers is that not all companies have software installed on clients’ devices, making data collection difficult even when users are willing to opt in. For example, the normal avenues for collecting data such as hardware/software configurations are not seamlessly integrated with web-based technologies such as web services or certain SaaS offerings. Many companies struggle with this and must use other means of getting user feedback.

 

Despite privacy concerns and a lack of openness, the bottom line is that user feedback is valuable. When utilized properly, it can be used to fix existing problems in the software and lead to new features. The reason subsequent versions of software are so much better than version 1.0 is directly related to user feedback.

How Service Objects Gathers User Feedback

Service Objects does not collect data on clients, so the privacy concerns discussed above do not apply. Potentially sensitive data processed through our services is not monitored or collected. This is a highly sought-after data validation “feature” for our clients, but at the same time, it presents a challenge for us in gathering detailed user feedback.

We offer several ways for our customers to provide feedback: You can connect with us via phone (805-963-1700), email, and via support tickets.

Any user feedback we receive is taken very seriously and can lead to bug fixes, updates, and even new services and operations. A great example is the FindAddressLines operation in DOTS Address Validation 3. The operation was initially created to help a particular client and has since been used to great effect to clean up messy address data.

If you have any feedback you would like to share to help us improve our data validation services, we encourage you to reach out to us at any time.

Demonstrating JavaScript and NuGet with DOTS Address Validation 3

In this demonstration, I am going to show you how to implement our DOTS Address Validation 3 web service with JavaScript and no ASP.NET markup tags using Visual Studio. We are going to create a form for address inputs, similar to what you would see on a checkout page, which will call our API to validate the address. Although we could call the web service directly from JavaScript, doing so would expose our license key. So we will use an aspx.cs page to handle the call to the service API, keeping our license key safe from prying eyes. I will also show you how to add the NuGet version of DOTS Address Validation 3 to the project so that you are always using our best practices and speeding up your integration.

First, create a new empty web site, which I am going to call cSharpJavascriptAjax. Then add a new HTML page called Addressvalidation.html.

Next, we are going to add the markup for the form. Add the following code between the body tags.

This form uses a table structure for its layout. In practice you will likely want to build the layout with divs, which is more appropriate. Here is what the page looks like in the browser.

To speed things up a bit, I am going to include jQuery in the header so we can utilize some of the short cut functions. Of course, all of this can be done in pure JavaScript.

For the action taken when the Complete button is clicked, we will create the submit function. This will take in all the values from the form, call the web service and then display the response on the screen. I added an alert and left the URL blank, which allows for a quick test to make sure what we have is working. Later we will update the failure and success sections as well; for now, they will simply display an alert for testing.

First, let’s make sure all the inputs are reaching the submit function. I am going to display an alert with the DataValue variable so we can see what we have.


Good, now that we know that is working, we need to make the call to the web service. As mentioned before, we could call the service directly, but doing so from JavaScript would make the license key visible. So we are going to leverage ASP.NET for security by creating an empty aspx page and adding a method to the aspx.cs page to handle the call. Let’s add the aspx/aspx.cs page to the project and call it ServiceObjectsAPIHandler.

Here is our empty aspx.cs page, where we will be adding our function. It will go below the Page_Load method.

Let’s call the method CallDOTSAddressValidation3_GBM. There are two things to notice about the signature of this method. We decorate the method with the WebMethod attribute so that it can be exposed as a web service method. And we declare it as static because static methods are stateless: no instance of the class needs to be created.

To test things out, for now we will just take one string input and return it to the Ajax call where we will display it. In order to do this, we will need to adjust our DataValue call in the script on the html page.

We will also need to add the URL and method to the Ajax call.
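The code screenshots from the original post are not reproduced here, but the Ajax setup can be sketched in plain JavaScript. The page and method names below are the ones created in this walk-through; everything else is illustrative:

```javascript
// Build the request descriptor for the aspx WebMethod call.
// ASP.NET page methods expect a POST with a JSON body and a
// "PageName.aspx/MethodName" style URL; the names here are the
// ones used in this walk-through (adjust them to your project).
function buildPageMethodRequest(address) {
  return {
    type: "POST",
    url: "ServiceObjectsAPIHandler.aspx/CallDOTSAddressValidation3_GBM",
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    data: JSON.stringify(address)
  };
}

const request = buildPageMethodRequest({
  Address1: "27 E Cota St", City: "Santa Barbara", State: "CA", Zip: "93101"
});
console.log(request.url); // prints "ServiceObjectsAPIHandler.aspx/CallDOTSAddressValidation3_GBM"
// In the page itself this descriptor would be passed to $.ajax(request).
```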

I also adjusted the alerts so we can see the success or fail.

Great! That worked! Now, let’s write the code in the aspx.cs page to make the call to the service. We could grab some sample code from the web site and walk through it, but we already have documentation and tutorials on how to do that. Instead, I am going to take a shortcut and grab what I need from NuGet. We offer NuGet packages for most of our services; they include best practices such as failover configuration and speed up integration time.

To do this, we will need to open up the NuGet Package Manager in Visual Studio under the Tools menu option. When browsing for our services, you can type “Serviceobjects”. You should see a list of available Service Objects NuGet packages to choose from.

Select DOTSAddressValidation3US; you will be prompted to select the project you want to install it under. Select the project and then click Install.

Once the installation is complete, a ReadMe file will automatically open that will contain a lot of information about the service. You will also notice a few things were added to your project. First, a DLL for DOTSAddressValidation3US was added to the bin folder and was referenced in your project.

Next, an AV3LicenseKey app setting was added to your web.config file with the value “WSXX-XXXX-XXXX”. You will need to replace this value with the key you received from Service Objects.

Also in the web.config you will see several endpoints were added. These will be used in the pre-packaged code to determine how to call the service based on your key.

Now that we have the NuGet package added to the project, we can use it. Next is the code that will use the DOTSAddressValidation3US DLL by loading it up with the input parameters, making the call to the service, then working with the response and finally sending the data back out to our JavaScript.

First, we get the license key from the web.config file.

Then we make the call to the service. You will notice that we throw in a Boolean parameter at the end of the call to GetBestMatches for live or trial. This is an indicator that will tell the underlying process which endpoints to use. A mismatch between your license key and this Boolean flag will cause the call to the service to fail.

After we make the call to the service, we will process the response and send the data back to the JavaScript. If there is an error, we will return the error details. Otherwise, we will return a serialized version of the response object. Note that you can also just deal with the response completely here or completely in JavaScript. I mixed it up so you can see a bit of both.

Now we will turn our attention back to the JavaScript and update our submit function. All we really have to deal with now is the response from the aspx method call. Here is the portion of the Ajax call implementing that.

Success or failure here does not mean that the address is good or bad. It is simply the success or failure of the call to the aspx web method. On failure, we display an alert stating something went wrong. On success, we examine the response value, determine whether the address was good and display the appropriate response. Here is the whole submit function.

 

Here is a good address.

 

And here is a bad address.

 

In the submit function, we did the most basic check to see if an address was good or not.

But many more data points from the response can be used to draw any number of conclusions about an address. One thing you will notice is that AddressResponse.Addresses is an array. Checking whether multiple addresses are returned can be valuable, because in certain situations more than one address may be returned. One example is when an East and a West street address are equally valid matches for an input address. In that case you may want to display both addresses and let the user determine which to use. You may also want to evaluate the individual address fragments or the corrections that were made to the address.
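A sketch of that kind of check (the Addresses array and Error object follow the walk-through above; field names beyond those, and the function itself, are illustrative):

```javascript
// Inspect a parsed service response and decide how to proceed.
function interpretResponse(response) {
  if (response.Error) {
    // The service returned an error object rather than addresses.
    return { status: "error", detail: response.Error };
  }
  if (response.Addresses && response.Addresses.length > 1) {
    // e.g. an East and a West street address both match the input;
    // surface all candidates and let the user pick.
    return { status: "ambiguous", candidates: response.Addresses };
  }
  return { status: "valid", address: response.Addresses[0] };
}

console.log(interpretResponse({ Addresses: [{ Address1: "27 E Cota St" }] }).status); // prints "valid"
```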

The data points associated with the DOTS Address Validation 3 service can be found on our development guide page for the service.

The following is commented out code that I added to the submit method for easy access to the various output data points.

And here is the same thing but for the Error object in the aspx page.

I hope this walk-through was helpful in demonstrating how to implement our DOTS Address Validation 3 web service with JavaScript and no ASP.NET markup tags using Visual Studio. If you have any questions, please feel free to contact Service Objects for support.

Best Practices for List Processing

List processing is one of the many options Service Objects offers for validating your data. This option is ideal for validating large sets of existing data when you’d rather not set up an API call or would simply prefer us to process the data quickly and securely. There is good reason to have us process your list: we have high standards for security and will treat a file with the utmost care.

As part of our list processing service, we offer PGP encryption for files, SFTP file transfers, and encryption to keep your data private and secure. We also have internal applications that allow us to process large lists of data quickly and easily. We have processed lists ranging from tens of thousands of records to upwards of 15 million records. Simply put, we consider ourselves experts at processing lists, and we’ll help ensure that your data gets the best possible return available from our services.

That said, a few steps can help guarantee that your data is processed efficiently. For the best list processing experience and the best data available, we recommend following these best practices.

CSV Preparation

Our system processes CSV files. We will convert any file to the CSV format prior to list processing. If you want to deliver a CSV file to us directly, keep the following CSV preparation best practices in mind:

Processing international data – If you have a list of international data that needs to be processed, make sure the file has the right encoding. For example, if the original data is in an Excel spreadsheet, converting it to CSV can destroy foreign characters in your file. When processing a list of US addresses this may not be an issue, but if you are processing an international set of addresses through our DOTS Address Validation International service, it could seriously impact your file. One workaround is to save the file as Unicode text through Excel and then set the encoding to UTF-8 with BOM through a text editor. Another option is to send us the Excel file with the foreign characters preserved and we will convert it to CSV with the proper encoding.

Preventing commas from creating unwanted columns – Encapsulating a field containing commas inside quotation marks will prevent any stray commas from offsetting the columns in your CSV file. This ensures that the right data is processed when our applications parse through the CSV file.
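A small helper along these lines handles the quoting. It follows the common CSV convention of wrapping the field in quotes and doubling any embedded quotes; it is an illustration, not code from our processing system:

```javascript
// Quote a CSV field so embedded commas (and quotes) don't split columns.
function csvEscape(field) {
  if (/[",\n]/.test(field)) {
    // Wrap in quotes and double any embedded quotes, per CSV convention.
    return '"' + field.replace(/"/g, '""') + '"';
  }
  return field;
}

const row = ["Acme, Inc.", "123 Main St", "Springfield"].map(csvEscape).join(",");
console.log(row); // prints "Acme, Inc.",123 Main St,Springfield
```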

Use Multiple Files for Large Lists

When processing a list with multiple millions of records, breaking the file into multiple files of about 1 million records each helps our system more easily process the list while also allowing for a faster review of the results.
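Splitting a list this way is a simple slicing exercise. A sketch, with the chunk size scaled down for the demo (in practice you would use roughly 1,000,000):

```javascript
// Split a large record list into chunks, one per output file.
function chunkRecords(records, chunkSize = 1000000) {
  const chunks = [];
  for (let i = 0; i < records.length; i += chunkSize) {
    chunks.push(records.slice(i, i + chunkSize));
  }
  return chunks;
}

// 2,500 records with a chunk size of 1,000 yields files of 1000, 1000 and 500.
const demo = chunkRecords([...Array(2500).keys()], 1000);
console.log(demo.length, demo[2].length); // prints "3 500"
```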

Including a unique ID for each of the records in your list helps when updating your business application with the validated data.

Configure the Inputs for the Service of Choice

Matching your input data to ours can speed up list processing time. For example, some lists parse address line 1 data into separate fields (i.e., 123 N Main St W would have separate columns for 123, N, Main, St, and W). DOTS Address Validation 3 currently has inputs for BusinessName, Address1, Address2, City, State and Zip.  While we can certainly manipulate the data as needed, preformatting the data for our validation service can improve both list processing time and the turnaround time for updating your system with freshly validated data.

These best practices will help ensure a fast and smooth list processing experience. If you have a file you need cleansed, validated or enhanced, feel free to upload it here.

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

NCOA Integration Tutorial

The reality about any set of residential customer data is that, given enough time, addresses and the people living at them are bound to change. Occasionally, businesses and organizations can rely on the customer to notify them of a change of address, but when people move, this often falls by the wayside on the list of priorities.

For cases like these, accessing the USPS National Change of Address database can provide a helpful solution to ensure that mail gets delivered to the correct person. The USPS maintains a large data set of address forwarding notifications, and with the DOTS NCOA Live service, this information is right at your fingertips.

Our DOTS NCOA Live service is a bit different from the rest of our products. Most of our other products process validation requests one at a time. NCOA is different in that, to start a request, a minimum list of 100 addresses must be sent to the service; from there, anywhere from 1 to 500 records can be processed at a time. To show you how it works, we’ve put together a quick step-by-step tutorial.
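Those batching rules can be sketched as follows (JavaScript purely for illustration; the tutorial itself uses C#, and the function name is hypothetical):

```javascript
// Sketch of the NCOA batching rules described above: a job needs at
// least 100 addresses to open, and each request carries 1-500 records.
function planNcoaBatches(addresses, batchSize = 500) {
  if (addresses.length < 100) {
    throw new Error("NCOA jobs require a minimum list of 100 addresses");
  }
  const batches = [];
  for (let i = 0; i < addresses.length; i += batchSize) {
    batches.push(addresses.slice(i, i + batchSize));
  }
  return batches;
}

// 1,200 addresses become three requests of 500, 500 and 200 records.
console.log(planNcoaBatches(new Array(1200).fill({})).length); // prints "3"
```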

What You Will Need

  • C# experience
  • Visual Studio (VS 2015 is used for this example)
  • A DOTS NCOA Live license key. A free trial key can be obtained here.

Project Creation and Setup

To get started, launch Visual Studio and select File->New->Project. From here you can choose whatever project type meets your needs. For this example, we’ll be creating a very basic console application, so if you want to follow along step by step, you can choose the same project details as shown below.

Select OK and wait for Visual Studio to finish creating the project. Once that is done, right-click the project, select Add and then select Service Reference. Here, we’ll enter the URL to the WSDL so that Visual Studio will create all the necessary classes, objects and methods that we’ll use when interacting with the DOTS NCOA Live service. To do this successfully, add the necessary information into the pop-up screen as shown in the screenshot below.


Select OK. Now that the service reference is successfully set, open up the App.Config. Below is a screenshot of the App.Config that has been modified.

We’ve added the appSettings section, and within it we’ve added two key-value pairs. The first is the license key field where you will enter your key. Storing the license key in the app or web config file makes transitioning from a trial to a live environment easier: when you are ready to use a production key, changing it in the app.config is simpler than changing a hard-coded license key.

We’ve also added the path to a CSV file that will contain the address and name information we will send to the NCOA service. You may not want to read in a CSV in your application, but the process of building the input elements for the service will be similar. For this example, we’re just going to put the file in the bin folder of our project, but you can use any path you want.

We’ve also increased the maximum buffer size in the httpBinding, since we’ll be sending a list of 100 addresses to the DOTS NCOA Live service.

Lastly, we’ve changed the name of the original endpoint to “PrimaryClient,” made a copy of the endpoint, and changed its name to “BackupClient.” Currently, both of these endpoints point to the trial Service Objects environment, but when a production key is purchased, the PrimaryClient URL should point to http://ws.serviceobjects.com/nl/ncoalive.asmx and the BackupClient should point to http://wsbackup.serviceobjects.com/nl/ncoalive.asmx.

Calling the DOTS NCOA Live web service

The first thing we’ll do is declare two static strings outside the Main method: one for the input file, and one for the license key that we placed in the app.config. Inside the Main method, we’ll instantiate an NCOAAddressResponse object called response and set it to null. We’ll also create a string called jobID and set it to null as well. This jobID will be passed as a parameter to our NCOA service call; a JobID can be seen as a unique identifier for all the records that are run against the service.

Now we’ll create the following method that will read our input file.

This method returns a List of NCOAAddress objects containing all the inputs we need to send to the service. In my particular file the fields are as follows: Name, Address, Address2, City, State, Zip. Your code will need to be modified to read the specific structure of your input file. This code reads in the file, loops through each line and adds the appropriate values to the fields of the NCOAAddress object. After each line is successfully read, we add the individual NCOAAddress object to the list called inputAddresses, and we return that list once the code has finished looping through the file.

Now we’ll insert a try-catch block into the Main method. Within it, we’ll create a List of NCOAAddress objects and call the readInputFile method to fill it. We’ll also make a JobID with today’s date appended to the end of it. You will likely want to customize your JobID to fit your business application. Jobs close on Sunday at 11:55 PM, so that is also something to take into consideration when designing your code.

Failover Configuration

Now that we have all our inputs set up, we can call the NCOA web service. First we’ll create another try-catch block to make the web service calls in. We’ll also create an instance of the DOTSNCOALibraryClient and pass in the string “PrimaryClient” as a parameter. This ensures that our first call to the NCOA service points to the primary URL defined in the app.config. Next we’ll make the call to the web service using the library client and set the result equal to our response object.

After we get a response back from the service, we’ll perform our failover check. We check whether the response is null or an error TypeCode of 3 was returned from the service; an error TypeCode of 3 indicates that a fatal error occurred with the service. If either of these criteria is met, we throw an exception that will be caught by the catch block we created. Within this catch block we set the library client object to a new instance with the “BackupClient” string passed to it, ensuring that we call the backup endpoint. The code should look like the following.
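The same pattern can be expressed compactly in JavaScript for illustration (the tutorial’s actual code is C#; the stubbed endpoints below stand in for the PrimaryClient and BackupClient calls, and the TypeCode value is treated as a string here):

```javascript
// Generic sketch of the primary/backup failover pattern: a missing
// response or a fatal error TypeCode of "3" triggers a retry against
// the backup endpoint.
async function callWithFailover(callPrimary, callBackup) {
  try {
    const response = await callPrimary();
    if (!response || (response.Error && response.Error.TypeCode === "3")) {
      throw new Error("fatal service error");
    }
    return response;
  } catch (e) {
    // Primary failed or returned a fatal error: retry against backup.
    return callBackup();
  }
}

// Demo with stubbed endpoints: primary reports a fatal error, backup succeeds.
callWithFailover(
  async () => ({ Error: { TypeCode: "3" } }),
  async () => ({ Addresses: [{ Address1: "27 E Cota St" }] })
).then(r => console.log(r.Addresses.length)); // prints "1"
```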

This failover logic will ensure that your application stays up and running in the event that a Service Objects web service is unresponsive or in the event that it is not behaving as expected.

Processing the Response

Now that we have successfully called the service, we’ll need to do something with the response. In this tutorial, we’ll take the results from the service and download them as a CSV into our bin folder.

To do this, we will call and create a method called processResponse that will take a NCOAAddressResponse as an input. This method will take the outputs from the service and build a DataTable that we will use to eventually turn into a CSV. This can be done as shown in the following screen shot.

 

 

Now that our output data table has been created, we’ll call and create two methods that will loop through our DataTable, convert it to a CSV string, and then write that CSV to the bin folder. The code to perform this is shown below.

More information on all the elements in the response object can be found on our developer guide for the NCOA service.


Now our code is ready to run and the service is ready to test.

We’re always available for assistance, so be sure to contact us with any questions about integration for NCOA or any of our other services.

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Taking Service Objects for a Test Drive

You’ve found Service Objects and you’ve read about our services, so now you want to take a test drive. There are several ways to do just that, and as I go through the options you will also get a pretty good picture of how you can integrate our services into your applications or processes. Testing can be a tedious task, delving into the unknown of a third-party process, but we make it easy for you to jump right in by giving you several ways to test: our Quick Lookup tool, the DataTumbler, batch processing and our real-time APIs.

Quick Lookup Tool
The Quick Lookup tool is used to test one-off examples on our website. It is as simple as navigating to the Quick Lookup page, selecting the particular service you are interested in, filling out the fields and clicking Submit. You’ll receive a real-time response from the service containing the results of your test.

Since our services often offer multiple operations, the Quick Lookup pages will tell you which operation is being used for the particular service in the form. If there are other operations you are interested in testing, we have you covered there as well, with links to the other operations.

DataTumbler
The DataTumbler is a PC-based desktop application you can download from our site to run tests on multiple records. If you have ever used Excel, this application will be easy for you to drive. It works like a spreadsheet where you can paste multiple records for processing in real time.


Here are the basic steps: choose your service, choose your operation, drop your data in and click Validate. Choosing the service and desired operation is important because it often changes the input columns needed for the records to process properly. In the screenshot above you can see that there are 5 input columns, designated by the yellowish cell background. Here we have inputs for Address, Address2, City, State and Zip. If your particular purposes do not require Address2, for instance, that column can be removed by clicking the “Customize Input Columns” button and removing it from the input. You can do the same thing for the output columns, in that case via the “Customize Output Columns” popup. The output columns are designated by the cells with the greenish background.

You can also add columns that are not predefined by the application by right-clicking a column and selecting “Insert Column”. This is handy for situations where you want additional data to stay together with your test, like a unique identifier or other data unrelated to testing the service.

Once one of the validation buttons at the bottom is pressed, the DataTumbler will make requests to our services in real-time and populate the output columns as the data is being processed.

To get the application please call Customer Support at 1.800.694.6269 or access Live Chat and we will get you started.

Batch Processing
Batch processing is another way you can test drive our services.  When you have a file ready to go for testing you can simply hand it over to us.  We will process it and send back the results to you along with a summary.

This is one of the preferred ways to test drive our services, for several reasons:

  • We can see the data you send us firsthand and give you feedback about what the file looks like with respect to items like formatting.
  • By seeing all the fields of data we can quickly recommend the most beneficial service.
  • It gives us an opportunity to see the results with you and go over any interesting data points from a data validation/cleansing expert point of view.

All that is needed is a test file containing the records you want to try out. The file can come in several formats, including txt, csv and xls to name a few. You can email the file to us directly, or for a more secure means we can provide a secure FTP site for the file transfer. We can also work with encrypted data when an extra security layer is needed. An additional way to get us your test file is through our web site itself: you can drag and drop a file and be on your way. Once we have the file, we will process it against the service you are testing and return the results along with a summary breakdown of the processing.

If your test run is a success and you're ready to move forward with a larger file, we can also run a one-time paid batch. Clients often use this as an initial data scrub before switching to our real-time API or automated batch system, which can run batches virtually on demand.

Integrating the API
The last way you can test our services is by implementing our API in your code. Most clients use the API when they integrate with us so testing this way gives you the closest representation of how your production process will work with our services.

When it comes to doing a direct software integration test we have you covered. We make it easy to integrate and get testing quickly by means of sample code, code snippets, step-by-step integration walk-through blogs, developer guides and NuGet for Visual Studio.

We have sample code for C#, Java, PHP, Rails, Python, NodeJS, VB.Net, Classic ASP, Microsoft SQL Server, APEX and ColdFusion. And we don't stop there: additional sample code can be requested, and our team will review the request to find the best solution for you. When applicable, our sample code is available in both REST and SOAP. All of our examples implement best practices and demonstrate failover.

If you are a C# developer and use Visual Studio, you will have us at your fingertips. Using the NuGet Manager in Visual Studio, you can have our API injected into your code and ready to go.

All of our walk-through tutorial blogs and documentation are presented in a straightforward, easy-to-understand format, and as always the Service Objects team is here to assist with any questions or help you may need.

When it comes to test driving our services, we give you options to make it easy. A trial key will give you access to all the options I mentioned, and going through these options also gave me a chance to show you how you can integrate with our services. The beauty of the way we set this system up is that you can become fully integrated and tested before you even purchase a live production key. In all of these cases, usually only two things need to be updated when switching from trial testing to live production use. In the Quick Lookup Tool, you will need to switch the "Select Key Type" to "Customer Production Key" and then use your live production key instead of your trial key. In the DataTumbler, you will similarly swap those fields out. For a code integration, you just need to update your endpoints from trial.serviceobjects.com to ws.serviceobjects.com and swap the trial key for a live production key.
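The trial-to-production switch for a code integration amounts to swapping a hostname and a key. Here is a minimal sketch in Java; only the two hostnames come from the steps above, while the class and method names are our own, for illustration:

```java
public class EndpointConfig {
    // Hostnames taken from the trial-to-production steps described above.
    private static final String TRIAL_HOST = "trial.serviceobjects.com";
    private static final String PRODUCTION_HOST = "ws.serviceobjects.com";

    // Build the base URL for a call, depending on whether a live key is in use.
    public static String baseUrl(boolean production) {
        return "https://" + (production ? PRODUCTION_HOST : TRIAL_HOST);
    }
}
```

Keeping the hostname behind a single method like this means the production cutover is a one-line configuration change rather than a search through your codebase.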

Whether you want to test one or two records or a whole file, simply put, our team has you covered.

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

Java Integration Tutorial

Java is easily one of the most popular programming languages in the world. It is a general-purpose language that users have managed to implement in a variety of applications. It is popular for web applications, which is why it is one of our most requested sample code languages for our DOTS validation products. This is no surprise, since Java is able to run on a wide range of architectures, applications and environments. Since it is so popular, we're here to show you the ropes and get a Service Objects web service up and running.

For this example we'll be using a fairly new operation in our DOTS Address Detective service. Address Detective is a powerhouse address validation service that can leverage multiple data sources to validate an address. The service has input fields for a person's name, a business name, and a phone number along with the traditional address fields; these additional data points help the service leverage other data sources to get your address validated. It even has an operation that will take your data in any order it's given and return a standardized address. It's called FindAddressLines, has 10 inputs ranging from "Line1" to "Line10", and can work wonders on standardizing messy data sources. We'll be integrating with this operation in our tutorial today, so let's get started!

What You’ll Need

  • A Java IDE (We’re using Eclipse for this example)
  • Basic Java Knowledge
  • A Service Objects DOTS Validation product License Key (We’re using DOTS Address Detective for this case)

Setting Up the Project

Launch Eclipse and select a workspace if it asks you to do so. Once everything has finished loading, select File->New->Other. In the search field, type "Dynamic Web Project" and click next.

On the next screen, type in an appropriate project name and configure the settings to your specific needs. Congratulations, you've built a project! We'll need to add several files to take in our inputs, send them to the DOTS Address Detective service, deserialize the XML response and display the results back to the user. To start, add two JSP files by right-clicking the project and selecting New->JSP File as shown below:

In this tutorial we’ll name them “inputForm.jsp” and “webServiceCall.jsp.”  These will function as the input form and the display page for the results from the service.

These new JSP files will obviously need something more than they currently have. We’ll place all the necessary HTML elements in the inputForm.jsp file. Make your page look like it does below:


These 10 input lines will send all the necessary information to the FindAddressLines operation. Now that we have the fields to take our input parameters, we'll need to put the code in place to actually call the DOTS Address Detective web service. We'll create a package with the necessary class files for the response from the Address Detective service, and within that package we'll create a method that will actually perform the call to the web service.

To add a package, right click the project and select New->Package.  For this project we’ll name the package “serviceObjects” as shown below.

[Image: Java Tutorial 5]

Be sure to select the "Create package-info.java" checkbox, as we'll need to add some necessary import statements into the package-info.java file. After the package has been added, right-click it and select New->Class and add all the following classes to the project.

[Image: Java Tutorial 6]
We’ll talk briefly about the necessary code to add to each of the different objects and classes so that the XML response from the web service will be successfully deserialized into the objects that we are defining here. One thing that needs to be added to the package-info.java file is as follows.

[Image: Java Tutorial 7]

This will let the code know that it should expect the http://www.serviceobjects.com namespace in the XML response from the service.
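If the screenshot is hard to read, the package-info.java contents look roughly like the following JAXB declaration. This is a sketch of the standard JAXB approach, not a copy of the tutorial's file, so verify it against the downloadable class files:

```java
// package-info.java: tells JAXB to expect the Service Objects namespace
// when unmarshalling the XML response from the service.
@javax.xml.bind.annotation.XmlSchema(
    namespace = "http://www.serviceobjects.com",
    elementFormDefault = javax.xml.bind.annotation.XmlNsForm.QUALIFIED)
package serviceObjects;
```
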

Like most classes, each of the code files will need "getters and setters" for each of the objects that the service returns in order to properly work with the returned object. The highest-level object returned from the service (meaning all the other objects and values will be contained within it) is the FixedAddressResponse. This object contains a possible array of FixedAddress objects called "Addresses", as well as an Error object that the Address Detective service can potentially return. See below for an example of how to format the declarations, XML annotations and the "getters and setters."

[Image: Java Tutorial 8]

As mentioned, this is a general example of how to do this, and we'll include all the class files with this tutorial so that they can be easily added to your own project.
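As a rough illustration of the shape these classes take, here is a stripped-down sketch. The class and field names follow the description above, but the fields are simplified and the JAXB annotations are only noted in comments; the downloadable files are the authoritative versions:

```java
public class FixedAddressResponse {
    // In the real project these fields carry JAXB annotations (e.g. @XmlElement)
    // so the XML response deserializes into them automatically.
    private FixedAddress[] addresses;
    private ServiceError error; // named "Error" in the service's response

    public FixedAddress[] getAddresses() { return addresses; }
    public void setAddresses(FixedAddress[] addresses) { this.addresses = addresses; }
    public ServiceError getError() { return error; }
    public void setError(ServiceError error) { this.error = error; }
}

class FixedAddress {
    private String address; // simplified; the real class has many more fields
    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
}

class ServiceError {
    private String typeCode; // "3" indicates a fatal error
    public String getTypeCode() { return typeCode; }
    public void setTypeCode(String typeCode) { this.typeCode = typeCode; }
}
```
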

Now we’ll point out some things to be aware of in our “ADClient” file.

Inside the class declaration, we’ll define a few things that we’ll need to make the actual call to the service.

[Image: Java Tutorial 9]

In the above code, we've defined both the primary and the backup URLs. Currently, they both point to the trial environment Service Objects servers. Once a license key is purchased, the primary call should go to ws.serviceobjects.com and the backup URL should go to wsbackup.serviceobjects.com. We've also defined our LicenseKey in the class, which allows us to keep it hidden from outside view, and we've defined a method called "FindAddressLines" that will eventually call the operation of the same name. Notice that it returns the FixedAddressResponse object.

Within the actual method we have some cleanup logic that is performed on the input strings, and then the URLs are assembled for the HTTP call. See below:

[Image: Java Tutorial 10]

In the above snippet of code, the URL strings are assembled and sent to the DoHTTPRequest method. After the web service is called, it is necessary to check that the call completed correctly. The code checks for a null response from the service, or for a returned "TypeCode" of 3, which would indicate a fatal error from the Service Objects web service. If either of those conditions is true, the code throws an exception and uses the backup URL. This logic ensures that your application will be uninterrupted in the unlikely event that the Service Objects servers are nonresponsive or returning a TypeCode of 3.
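The null/TypeCode-3 check boils down to a try/catch around the primary call. Here is a simplified sketch of that pattern; the HTTP call is stubbed out as a function argument rather than the tutorial's actual DoHTTPRequest method, so treat the names as illustrative:

```java
import java.util.function.Function;

public class FailoverSketch {
    // Call the primary URL; if the response is null or reports a fatal
    // TypeCode of 3, fall back to the backup URL.
    public static String callWithFailover(Function<String, String> doHttpRequest,
                                          String primaryUrl, String backupUrl) {
        try {
            String response = doHttpRequest.apply(primaryUrl);
            if (response == null || response.contains("<TypeCode>3</TypeCode>")) {
                throw new RuntimeException("Fatal error or no response from primary");
            }
            return response;
        } catch (Exception e) {
            // Unlikely event: primary endpoint is down or returning fatal errors.
            return doHttpRequest.apply(backupUrl);
        }
    }
}
```

Note that only a fatal TypeCode of 3 triggers failover; ordinary validation errors (a bad address, for example) come back as normal responses and should not be retried against the backup.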

Displaying The Results from the Web Service

Now that our failover logic and call to the DOTS Address Detective web service are set up, we can create the objects that will call the web service with the inputs from the input form and display the results to the user.

Navigate to the webServiceCall.jsp file and implement the following code:

[Image: Java Tutorial 11]

In this bit of code we grab the inputs from the inputForm.jsp page and place them in strings of the same name. We've also instantiated an instance of the "FixedAddressResponse" object, which will hold the results from the service, and the "ADClient" object, which will make the actual call to the web service. Since the FindAddressLines method in the ADClient object returns a FixedAddressResponse object, we can assign its return value directly to our response object.

Our call to the web service is all set up, and now we can implement some logic that will display either the error or the validated results from the service. Implement the following logic in your JSP page.

This code will first check if an error is present in the response; if it is present it will display that error to the screen. If the error response is null, it will display the validated response from the service.  Notice that we call the “getters” that we have previously defined in our response class to display the results to the user.

We now have everything we need to use the service, so let's see it in action!

As mentioned previously, the FindAddressLines operation can take the inputs for an address in any order and return the validated output address.  See below for an example.

[Image: Java Tutorial 13]

After we send that input to the service, we will receive this response:

[Image: Java Tutorial 14]

You are now all set to test the FindAddressLines operation in the DOTS Address Detective web service. Sample code downloads are also available for all of our other services. As always if you have any questions about integrations, service behavior or best practices, feel free to contact us and we will gladly assist in any way that we can!

Service Objects Provides Customized Sample Code

One of our primary goals as Application Engineers at Service Objects is to do whatever we can to ensure that clients and prospective clients get up and running with their DOTS validation service and programming language of choice. That's why we have over 250 different pieces of sample code available to those who want to test our services!

But what if you are interested in integrating multiple services in your application?

Lucky for you, this commitment to getting the data hungry masses up and running with testing our services goes even further. We are dedicated to ensuring that you get the most out of the service(s) that you are testing and assisting with any integration related questions. One of the ways we do this is by writing custom sample code to help our clients and prospective clients integrate our services into their business logic.

What are some examples of custom sample code?

Well, I am glad you asked! Need some sample code that will run our NCOA service against 500,000 addresses in a couple of hours? No problem. Do you want to get geocode coordinates from the contact address that comes back from our DOTS Geophone Plus 2? We'll write you some sample code that will get that done. Does a portion of your address data include a PO Box number reflected as the unit or suite? We can help you leverage the results from our DOTS Address Validation 3 service to programmatically identify those records. Need to use any of our DOTS validation products with asynchronous calls? We can certainly help with that as well.

There are a multitude of other ways our services can be combined to get you your desired result! If you're interested in any DOTS validation products and need some assistance in getting the intended result, please reach out to us here! We will gladly provide a consultation on how to best integrate your service (or services) of choice into your application, or we'll go ahead and write a piece of sample code for you to illustrate best practices when calling a DOTS validation web service.


C# Integration Tutorial

C# may very well be our most requested sample code. There is good reason for that, too: C# and the .NET Framework are many developers' first choice for creating a web page or any other type of application, thanks to the versatility of the language and framework as well as the robust features that Visual Studio offers. One of those features is the ability to consume a WSDL (Web Services Description Language) file and create all the necessary classes and methods to successfully call a web service. This makes using SOAP squeaky clean! Ok, that was the first and last SOAP pun, I promise. Here's what you will need for this tutorial.

Requirements

  • Visual Studio (2015 is used in this tutorial but the process should be relatively similar to any other version)
  • DOTS Web Service License Key (We are using DOTS Address Validation International for this example)
  • Some familiarity with C# and the .NET Framework. This tutorial will be pretty basic so it should be accessible even if you are a beginner.

Setting up the Visual Studio project

For starters, launch Visual Studio and create a new ASP.NET Web Application and choose an appropriate project name. Your screen should look similar to the following:

Click OK and then select "Empty" to create an empty web form. We will add the necessary ASPX page momentarily.

But our first step for our new project will be to add the service reference to the DOTS Address Validation International web service. To do this, right-click on "References" in the Solution Explorer and select "Add Service Reference"; a pop-up should appear. Here we will add the URL to the WSDL, which contains the information on how the project should interact with the DOTS Address Validation International web service, and we will name the Service Reference.

For reference, here is the WSDL URL, and here is what the pop-up page should look like:

WSDL: http://trial.serviceobjects.com/avi/soap.svc?wsdl

Now that we have successfully added the service reference, we can add an aspx page that will have our input form and display our results. Right-click the project name and select “Add” and then select “Web Form” to add a blank web form to the project.  For our example, we’ll name the form “AVIForm”.

Creating the input form and code behind

Now that our form is present, we’ll add some simple HTML and ASP elements to take in our inputs and display them to the screen after we get a response from the service. Make your ASPX page look like the following.

The above code will allow us to take the inputs, send them to the code behind, and then display the results in the outputGrid and InformationComponentsGrid. We have two separate grids: one for the standard outputs from the service, and the other for the InformationComponents field, which holds variable information and data returned by the service. This field can change based on the country or data available for a specific international address. Now that our input form is all set up, we'll add the code behind that will display the results to the user.

We won’t look at every part of the code here, to download a .txt version of the code, click here.

One thing we like to stress to clients who are integrating our services is proper Failover Configuration. In the unlikely event that our primary datacenters are offline or producing strange errors, we want to ensure that our clients are pointing their code to our backup datacenters so that their applications and business processes go uninterrupted. Here is a full picture of the proper way to integrate failover into an application.

We’ve found that to ensure uninterrupted service the best practice is to have the calls to the web service nested in a try-catch block of code. In our current setup, the backup call will hit the same data center as the primary call; but if a License Key is purchased the primary call should point to ws.serviceobjects.com and the backup call should point wsbackup.serviceobjects.com. The screenshot below highlights some of the primary failover logic that will allow the code to run uninterrupted.

This code occurs right after the primary call to the web service. If it detects that the response from the service is null, or if an Error TypeCode of "3" is returned, the code throws a new exception and the catch statement makes the backup web service call.

If a successful response is received from the service, the code calls a method named "ProcessValidResponse", which takes in the response from the web service, displays the results in a DataGrid, and then sends that grid to the ASPX page for the user to see. This method is pretty straightforward for the most part, as it simply assigns the outputs and their respective values into separate columns for the user to see.

The only part that may be mildly tricky is the InformationComponents field that is returned from the service. This field is an array of InformationComponent elements, each containing two strings: one for the "Name" of the variable returned and one for its "Value". For example, if you pass a US address into the AVI service, one InformationComponent that can be returned will have a Name of "DPV" and a Value of "1", indicating that the address is considered deliverable by the USPS. Below is an example of the XML output.

 

This array of fields allows us to add new outputs to the service over time without potentially breaking any existing client's code. To account for this array of information, we have a brief for loop below that loops through all the elements of InformationComponents and adds their names and values to the InfoCompTable so that they can be seen by the user.
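The loop in question (shown in C# in the tutorial's screenshots) simply walks the Name/Value pairs and copies them into a table. Sketched here in Java, with the pairs modeled as a hypothetical two-element string array rather than the service's actual InformationComponent type:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InfoComponents {
    // Hypothetical stand-in for the service's InformationComponent pairs,
    // modeled as {Name, Value} string arrays.
    public static Map<String, String> toTable(String[][] components) {
        Map<String, String> table = new LinkedHashMap<>();
        for (String[] component : components) {
            table.put(component[0], component[1]); // Name -> Value
        }
        return table;
    }
}
```

Because the consumer iterates whatever pairs arrive rather than binding to fixed fields, new outputs added to the service later simply show up as extra rows.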

Making a successful call to the service

If we go ahead and run our project, our webpage will look like the following.

Not very exciting, but it will get the job done. As an example address, we’ll use the following in France:

3 Place de la Victoire, 33000 Bordeaux, France

If this address is sent to the service you should see the following response.

As you can see, this address is considered Valid by the service, and the service has premise-level resolution for it. The Address1-Address8 fields also display how the address should look for mailing purposes. For this particular example we only need Address1 and Address2, but for other countries all 8 address lines may be used. The InformationComponents field was also parsed out, and the two Name and Value pairs are shown below the standard outputs from the service.

That wraps up our tutorial for DOTS AVI integration in C#. Congratulations! You are now on your way to being a DOTS Address Validation International expert! Feel free to go and test more International addresses, and as always if you have any questions feel free to reach out to us at support@serviceobjects.com!


How to Validate International Addresses

Dealing with international addresses is no simple task. An address can often be misspelled, incorrectly formatted or simply written in a foreign language that you do not understand. The simple fact that many international addresses are foreign to us means that we are unable to recognize when something is wrong.

Take the simple word “street” for example. It is one of many commonly used words in an address. The French word for street is “rue”, in German it’s “Straße”, in Portuguese it’s “rua”, and it’s the character “路” pronounced “lu” in Chinese and so on. That’s not to mention common abbreviations either. In many cases a person will have a hard time identifying the name of a city or a street in an address and they would be unable to distinguish one from the other.

Let’s take a look at a few international examples:

China (中国)

Address:

Shanghai DPF Textile Co., Ltd.
200331
上海市普陀区武威路259号
98 -A3

Unless you are able to read Chinese, you would be hard-pressed to make sense of the above example. The first line is in English, but it appears to simply be the name of a business. Business names are not required to validate addresses with the AVI service, and it is unlikely that the name would prove helpful in the validation process. On the contrary, extraneous information like this is often regarded by most systems as garbage data; however, let's go ahead and pass the address as-is to the AVI service and see how it handles it.

URL Query: [Image: url1]

Here is what the query looks like when using the web service test page:

[Image: getaddressinfo]

Here is the output in JSON, although the service also supports XML:

[Image: json1]

Examining the output, we see that the AVI service fixed the order in which we entered our input values. This was done not only in the transliterated Romanized spelling of the address but also in the localized Chinese format.

Here are both versions of the address parsed from the JSON response output:

Roman Character Format

Shanghai DPF Textile Co , Ltd
98 – A3
No. 259 Wuwei Lu
Putuo Qu, Shanghai Shi
200331

Local (Chinese) Character Format

200331
上海市普陀区武威路259号
Shanghai DPF Textile Co , Ltd
98 – A3

The AVI service identified the street name, city name, postal code as well as other useful information.

 

Greece (Ελλάδα)

Address:

114 71 Αθηνα
Ασκληπιου 104
Το ΝΟΣΤΙΜΟ

Unless you can read Greek, the above address would be difficult to decipher. Let's see what the AVI service returns.

URL Query:
[Image: url2]

Here is what the query looks like when using the web service test page:

[Image: getaddressinfo2]

JSON Output: [Image: json2]

Parsing out both of the address formats from the JSON response we get the following:

Roman Character Format:

To NOSTIMO
Asklepiou 104
114 71 Athens

Local (Greek) Character Format:

114 71 Αθηνα
Ασκληπιου 104
Το ΝΟΣΤΙΜΟ

As it turns out, “To NOSTIMO” or “Το ΝΟΣΤΙΜΟ” (in Greek), is the name of a café that resides at the address. Even though the name of the café is not technically a part of the address nor is it necessary for validation, we see that its inclusion did not impede the AVI service from performing its job.

 

Germany

Let’s see how well the AVI service handles an address when several lines of extraneous data are included.

Address: 

Accemic GmbH & Co. KG
C/O World Express (GmbH)
Gunther Meyer, Phone: +49 (0) 8033 6039790
Franzhuber Str 39
Kiefersfelden

In this example, the address is in English, but there is a mess of extraneous information included. What will the AVI service make of this example?

URL Query: [Image: url3]
Here is what the query looks like when using the web service test page:
[Image: getaddressinfo3]

JSON Output: [Image: json3]

Parsing out the address from the JSON response, we get the following:

Roman Character Format:

Accemic GmbH & Co KG
C/O World Express (GmbH)
Gunther Meyer, Phone: +49 (0) 8033 6039790
Franz-Huber-Str. 39
83088 Kiefersfelden

In the above example, we see that the AVI service was able to ignore the three lines of extraneous information and identify the pertinent address information. From there the service standardized the street name, corrected the locality name and appended the missing postal code.

The Importance of Encryption

The information age has brought with it both convenience and risk. Consumers, for example, love the convenience of shopping online, yet they certainly don’t want their personal and sensitive information (like credit card numbers) to be revealed to unauthorized parties. Businesses have the responsibility, and in many cases, the legal obligation, to mitigate this risk and protect sensitive information from prying eyes. This is largely done through encryption.

What is encryption?

In simple terms, encryption is the process of taking human-readable information and translating it into an unreadable form. The information is protected by an encryption algorithm that can only be translated back into a human-readable form by authorized parties.

As a consumer, you’ve likely encountered basic HTTPS encryption while doing business online. You know to look for HTTPS (instead of HTTP) and the padlock symbol in the address bar. With HTTPS encryption, the website and web server have been authenticated and a secure, two-way connection has been established. Transactions made using HTTPS encryption are shielded from man-in-the-middle attacks, tampering, and eavesdropping.

Encryption typically uses “keys” to unlock the data. For example, with symmetric key encryption, the sender and receiver use a common key known only to them to decrypt the data. Thus, if a cybercriminal were to intercept the information, the payload would be gibberish. Since the cybercriminal doesn’t have the means to decrypt the data, it’s safe and sound despite the breach.
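As a concrete, if simplified, illustration of symmetric encryption, here is a round trip using Java's standard AES-GCM support from the JDK's javax.crypto package. Both sides must share the same key (and, per message, the same initialization vector); this is a minimal sketch, not production key-management advice:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;

public class SymmetricDemo {
    private static final int GCM_TAG_BITS = 128;

    // Encrypt with a shared secret key; without the key, the bytes are gibberish.
    public static byte[] encrypt(SecretKey key, byte[] iv, String plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
    }

    // Only a holder of the same key can translate the bytes back.
    public static String decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);
    }
}
```

If a cybercriminal intercepted the encrypt() output in transit, they would hold only unreadable bytes; decryption fails without the shared key.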

Why should you care?

Sensitive client data should be handled with the utmost care. This means that the companies that handle sensitive client data should be well informed about the best security practices, including end-to-end encryption.

Service Objects offers specialized services focused on data validation. The data that is sent to our services for validation usually comes from our clients’ customers. For example, let’s imagine a fictional Service Objects client called Medical Insurance Inc., a medical insurance company. As Medical Insurance Inc. collects information on their customers, prospects, and leads, they want to confirm that the data is valid. In order to validate the data, they must send the sensitive information over to one of the Service Objects’ web services. If Medical Insurance Inc. doesn’t use encryption, the data being transferred is at risk of being snooped on by a malicious third party. A simple man-in-the-middle attack could allow direct access to sensitive information that should not be exposed to anyone outside of Medical Insurance Inc. The risk of exposing sensitive data can be easily negated by any of the following recommended best practices.

What do we currently support/recommend using?

We currently support Pretty Good Privacy (PGP) encryption on incoming and outgoing list processing orders. End-to-end encryption is made possible by PGP’s hybrid-type cryptography, which uses a blend of private and public key encryption to help ensure your data is not exposed to anyone but the authorized parties.

For standard API calls, we highly recommend using the HTTPS protocol. Over HTTPS, the connection to the site will be encrypted and authenticated using a strong protocol (SSL/TLS), a strong key exchange (RSA), and a strong cipher (AES-256). By using HTTPS to make your web service calls, you can rest assured that any sensitive client data is well guarded.
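In practice, using HTTPS is often just a matter of the URL scheme. With Java 11's built-in HTTP client, for example, an https URL makes the client negotiate TLS automatically. Note that the endpoint path and query parameter names below are placeholders for illustration, not the actual signature of a Service Objects operation:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class HttpsCallSketch {
    // Build an HTTPS GET request; the path and query names are illustrative only.
    public static HttpRequest buildRequest(String host, String licenseKey, String address) {
        String url = "https://" + host + "/validate?LicenseKey="
                + URLEncoder.encode(licenseKey, StandardCharsets.UTF_8)
                + "&Address=" + URLEncoder.encode(address, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }
}
```

Sending the request with HttpClient.newHttpClient().send(...) then performs the TLS handshake for you; the license key and address never travel in the clear.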


A Node JS Step By Step Tutorial

A few weeks ago we published a blog article titled "Why You Should Never Put Sensitive Data in Your JavaScript," which described some of the dangers that accompany client-side scripts like JavaScript. More specifically, if you are using an API like a Service Objects web service, the license key that is passed to the web service can be plainly visible to anyone who can inspect the page source. Obviously, if you are looking to keep your page secure, this is bad. Luckily, there are solutions available if you are keen on using JavaScript.

One such solution is to use NodeJS, a versatile server-side platform that can be used for networking or server-side applications. This short step-by-step tutorial will give you a bare-bones server to begin testing a Service Objects web service using NodeJS.

What you will need:

-NodeJS installed on your machine

-Your favorite Text Editor

-Familiarity with the Command Line

-A license key to the DOTS Address Validation 3 web service. Get a free trial key here!

Creating the Server

Once you have NodeJS installed and all your environment variables set correctly, navigate on the command line to the directory where your Node.js file will live.

For this example create a file called GetBestMatches.js with your text editor and enter the following code in the file:

[Image 1]

This small bit of code will create our server and run it on port 8081. When the server is launched through the command line using the command “node GetBestMatches.js” we’ll see a message in the command line that the server is running:

[Image 2]

Seems simple enough, right? Now, let's add some more code that will allow us to display some information to the client side. Inside the http.createServer function, add the following code:

[Image 3]

Here we have a switch statement that contains three cases: together they set up the response header on the client side, hold the URL that will eventually call the web service, and display some default information at the localhost URL shown above. When this bit of code is launched, the URL above will show the following:

[Image 4]

If we navigate to the GetBestMatches link, well, nothing will happen. But we're going to add some more code to change that.

Setting the Inputs and URLs

Here we will add code to instantiate the input values and the URL necessary to successfully call DOTS Address Validation 3. This process will be relatively similar for most of the DOTS validation products, but you will need the specified inputs for that service and its respective license key.

Here’s what the code will look like:

[Screenshot 5]
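In place of the screenshot, here is a sketch of what that setup might look like. The endpoint path and parameter names follow the DOTS Address Validation 3 GET conventions but should be checked against the developer guide, and the license key is a placeholder:

```javascript
// Hardcoded inputs purely for testing (replace with real data in production).
var address    = '27 E Cota St STE 500';
var address2   = '';
var city       = 'Santa Barbara';
var state      = 'CA';
var postalCode = '93101';
var licenseKey = 'YOUR-LICENSE-KEY';   // your DOTS AV3 trial or production key

// Build the query string once; the path here is an assumption, so confirm
// it against the developer guide for your service.
var query = '/AV3/api.svc/GetBestMatches' +
  '?Address='    + encodeURIComponent(address) +
  '&Address2='   + encodeURIComponent(address2) +
  '&City='       + encodeURIComponent(city) +
  '&State='      + encodeURIComponent(state) +
  '&PostalCode=' + encodeURIComponent(postalCode) +
  '&LicenseKey=' + encodeURIComponent(licenseKey);

// In production, point the primary at ws.serviceobjects.com and the backup
// at wsbackup.serviceobjects.com so failover goes to a separate data center.
var primaryUrl = 'http://ws.serviceobjects.com'       + query;
var backupUrl  = 'http://wsbackup.serviceobjects.com' + query;
```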

As shown above, the input values are hardcoded simply to test how to interact with the Service Objects API within Node.js. In production you will want to pass in the values to be validated dynamically, but for this tutorial the hardcoded inputs serve for example and testing purposes. The screenshot above also creates a primary and a backup URL. Currently, both URLs point to the same environment; in a production environment, the primary URL should point to ws.serviceobjects.com and the backup URL to wsbackup.serviceobjects.com.

Setting up the Call to the Web Service

Now that our inputs are created, we’ll add the code that will actually perform the GET call to the service. To do this, we’ll use the http.get function of the server that we’ve created and parse the results from the service into a readable format:

[Screenshot 6]

In this code we set the encoding and assign the response from the service to the results object.  We have also included the xml2js package to parse the XML that comes back from the service, which lets us display the results in a more readable way.

Implementing Failover Configuration

For the last bit of code, we will include proper failover configuration so that in the event of a service outage, or if the web service is not responding correctly, your code and calls to the Service Objects API will continue to function normally.

In the code below, if a Type 3 error code is returned from the web service, the code will use the backup URL.  If a successful response is received from the service, the code will display the results on the client side:

[Screenshot 7]
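In place of the screenshot, the failover logic can be sketched like this. The Error/TypeCode shape mirrors the Type 3 description above, and callService(url, cb) stands in for whatever helper performs the GET and parses the XML:

```javascript
// Try the primary URL first; if the request errors out or the service
// reports a Type 3 (service-level) error, retry once against the backup.
function getBestMatches(primaryUrl, backupUrl, callService, done) {
  callService(primaryUrl, function (err, results) {
    var error = results && results.BestMatchesResponse &&
                results.BestMatchesResponse.Error;
    if (err || (error && error.TypeCode === '3')) {
      callService(backupUrl, done);   // fail over to the backup data center
    } else {
      done(err, results);             // success: hand results to the client side
    }
  });
}
```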

This is all the code that we will need to make a successful call to the DOTS Address Validation 3 service. We’ll change the inputs to use the Service Objects address (27 E Cota St STE 500, Santa Barbara, CA 93101) to show an example response from the service.  If the server is restarted, you will see the output that is displayed to the user:

{
  "BestMatchesResponse": {
    "xmlns": "http://www.serviceobjects.com",
    "xmlns:i": "http://www.w3.org/2001/XMLSchema-instance"
  },
  "Addresses": {
    "Address": {
      "Address1": "27 E Cota St Ste 500",
      "Address2": "",
      "City": "Santa Barbara",
      "State": "CA",
      "Zip": "93101-7602",
      "IsResidential": "false",
      "DPV": "1",
      "DPVDesc": "Yes, the input record is a valid mailing address",
      "DPVNotes": "26,28,39",
      "DPVNotesDesc": "Perfect address, The input address matched the ZIP+4 record, The input address matched the DPV record, Highrise apartment/office building address",
      "Corrections": "",
      "CorrectionsDesc": "",
      "BarcodeDigits": "931017602254",
      "CarrierRoute": "C006",
      "CongressCode": "24",
      "CountyCode": "083",
      "CountyName": "Santa Barbara",
      "FragmentHouse": "27",
      "FragmentPreDir": "E",
      "FragmentStreet": "Cota",
      "FragmentSuffix": "St",
      "FragmentPostDir": "",
      "FragmentUnit": "Ste",
      "Fragment": "500",
      "FragmentPMBPrefix": "",
      "FragmentPMBNumber": ""
    }
  },
  "IsCASS": "true"
}

That completes our tutorial for Node.js. If you have any questions, feel free to reach out to our tech team anytime!

8 Tips to Build a Successful Service Level Agreement

A Service Level Agreement (SLA) makes use of the knowledge of enterprise capacity demands, peak periods, and standard usage baselines to compose the enforceable and measurable outsourcing agreement between vendor and client. As such, an effective SLA will reflect goals for greater performance and capacity, productivity, flexibility, availability, and standardization.

The SLA should set the stage for meeting or surpassing business and technology service levels while identifying any gaps currently being experienced in the achievement of service levels.

SLAs capture the business objectives and define how success will be measured, and are ideally structured to evolve with the customer’s foreseeable needs. The right approach produces agreements distinguished by clear, simple language and a tight focus on business objectives, and that account for the dynamic nature of the business to ensure evolving needs will be met.

1. Both the Client and Vendor Must Structure the SLA

Structuring an SLA is an important, multiple-step process involving both the client and the vendor. In order to successfully meet business objectives, SLA best practices dictate that the vendor and client collaborate to conduct a detailed assessment of the client’s existing applications suite, new IT initiatives, internal processes, and currently delivered baseline service levels.


2. Analyze Technical Goals & Constraints

The best way to start analyzing technical goals and constraints is to brainstorm or research technical goals and requirements. Technical goals include availability levels, throughput, jitter, delay, response time, scalability requirements, new feature introductions, new application introductions, security, manageability, and even cost. Then prioritize the goals, or lower expectations to levels that still meet business requirements.

For example, you might have an availability level of 99.999% or 5 minutes of downtime per year. There are numerous constraints to achieving this goal, such as single points of failure in hardware, mean time to repair (MTTR), broken hardware in remote locations, carrier reliability, proactive fault detection capabilities, high change rates, and current network capacity limitations. As a result, you may adjust the goal to a more achievable level.

3. Determine the Availability Budget

An availability budget is the expected theoretical availability of the network between two defined points. Accurate theoretical information is useful in several ways, including:

  • The organization can use this as a goal for internal availability and deviations can be quickly defined and remedied.
  • The information can be used by network planners in determining the availability of the system to help ensure the design will meet business requirements.

Factors that contribute to non-availability or outage time include hardware failure, software failure, power and environmental issues, link or carrier failure, network design, human error, or lack of process. You should closely evaluate each of these parameters when evaluating the overall availability budget for the network.
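As a quick worked example, the theoretical availability of a serial path is the product of its component availabilities; the numbers below are illustrative only, not vendor figures:

```javascript
// Illustrative availabilities for some of the contributing factors above.
var components = {
  hardware: 0.9999,
  software: 0.9995,
  powerEnv: 0.9999,
  carrier:  0.999
};

// Serial availability is the product of the individual availabilities.
var availability = Object.keys(components).reduce(function (total, key) {
  return total * components[key];
}, 1);

// Convert the remainder into expected downtime per year (525,600 minutes).
var downtimeMinutes = (1 - availability) * 525600;

console.log(availability.toFixed(4));
console.log(downtimeMinutes.toFixed(0) + ' minutes of downtime per year');
```

Note how quickly downtime accumulates: four components that are each individually "three or four nines" combine to well under 99.9% end to end.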

4. Application Profiles

Application profiles help the networking organization understand and define network service level requirements for individual applications. This helps to ensure that the network supports individual application requirements and network services overall.

Business applications may include e-mail, file transfer, Web browsing, medical imaging, or manufacturing. System applications may include software distribution, user authentication, network backup, and network management.

The goal of the application profile is to understand business requirements for the application, business criticality, and network requirements such as bandwidth, delay, and jitter. In addition, the networking organization should understand the impact of network downtime.

5. Availability and Performance Standards

Availability and performance standards set the service expectations for the organization. These may be defined for different areas of the network or specific applications. Performance may also be defined in terms of round-trip delay, jitter, maximum throughput, bandwidth commitments, and overall scalability. In addition to setting the service expectations, the organization should also take care to define each of the service standards so that user and IT groups working with networking fully understand the service standard and how it relates to their application or server administration requirements.

6. Metrics and Monitoring

Service level definitions by themselves are worthless unless the organization collects metrics and monitors success. Measuring the service level determines whether the organization is meeting objectives, and also identifies the root cause of availability or performance issues.

7. Customer Business Needs and Goals

Try to understand the cost of downtime for the customer’s service. Estimate in terms of lost productivity, revenue, and customer goodwill. The SLA developer should also understand the business goals and growth of the organization in order to accommodate network upgrades, workload, and budgeting.

8. Performance Indicator Metrics

Metrics are simply tools that allow network managers to manage service level consistency and to make improvements according to business requirements. Unfortunately, many organizations do not collect availability, performance, and other metrics. Organizations attribute this to the inability to provide complete accuracy, cost, network overhead, and available resources. These factors can impact the ability to measure service levels, but the organization should focus on the overall goals to manage and improve service levels.

In summary, service level management allows an organization to move from a reactive support model to a proactive support model where network availability and performance levels are determined by business requirements, not by the latest set of problems. The process helps create an environment of continuous service level improvement and increased business competitiveness.

Five Elements of a Customer Success Program

Why focus on customer success? Retaining customers, maintaining customer loyalty, and getting new customers requires a holistic approach that goes beyond the basics of providing service, ensuring satisfaction, and resolving problems.

According to the Customer Success Association, “…it’s about customer relationship retention and optimization. And the most effective way to keep your customers is to make them as successful as possible in using your technology product.”

Customers who feel engaged and heard and who have experienced a real value in doing business with you are your true success stories. Their interactions at each point have been positive, both in terms of personal interactions with your team and in using your products or services. These are the customers who will remember your brand, who will tell their friends how wonderful your company is, and who will absolutely return because the relationship and the value they receive are both so strong.

Having a Customer Success Program clearly communicates to customers and prospects what they can expect if they buy your company’s product or service. Below are five key elements to consider when developing your Customer Success Program:

1. Commitment to customer service — Demonstrate that your company is deeply committed to customer service. While your marketing materials may tout your commitment, this is one area where actions speak louder than words. Include specific metrics – e.g. we will respond to support requests within 30 minutes, we have 24/7 support – and make sure they are adhered to and backed by a Service Level Agreement (SLA).

2. Company wide buy-in — Customer service is no longer the realm of front office staff or the support center. Get buy-in from all departments in the company so that the customer has a positive experience whether they are talking to operations, engineering, IT, sales, accounting, etc. Having a centralized CRM database is important so each department can clearly see what is being communicated to the customer.

3. Assign a single point of contact — Establish a dedicated account manager to proactively communicate product updates and important features the customer may not be utilizing effectively, and to serve as the main conduit of information for the customer regarding their account.

4. Proactive monitoring — Proactively monitor the customer’s account and alert them to any unusual activity to ward off potential complaints or unexpected account usage surprises. This is a great way to cultivate a “we are looking out for you” feeling.

5. Solicit feedback — Soliciting feedback — and responding to it — shows your customers that you value their insights and are listening. Make soliciting customer feedback a regular task, and respond promptly. It’s crucial to thank them for their feedback and address concerns. Have a regularly scheduled check-in call to address any issues or concerns that may have recently come up.

At Service Objects, our philosophy is customer service above all, and our Customer Success Program reflects this core value. Below are just some of the features our program includes:

  • 24/7 critical emergency support
  • Direct access to Product Engineers to discuss Best Practices
  • Guaranteed response times and server uptimes backed by a money back guarantee
  • Dedicated Account Manager
  • Priority customer support across our customer contact channels (phone, email, chat)

In Customer Service Chat, You Have to do More Than Answer

Customer service chat is popular with companies and customers alike. It’s easy, it’s quick, and it works well on mobile devices. But easy and popular doesn’t always equal good. Read this chat with customer service agent “Jack” at Vizio. It is a set of customer service blunders, large and small.

Here’s the chat transcript

Visitor: Hi I just bought a 50″ M501d-A2R tv. i am trying to set it up. I can’t put in the password to my wifi because my password is longer than the number of characters allowed. I don’t want to reset my password on my Cisco cable router. Can you help?

Jack: Here at VIZIO we pride ourselves in providing best in class U.S. based support. I’m happy to assist you today. How many digits is your wireless password?

Visitor: 26 digits

Jack: The TV will support up to 22 digits. Unfortunately the password would need to be shortened to work with our TVs.

Visitor: Hmm i am not glad to hear that

Jack: I apologize for the inconvenience.

Visitor: Ok. Please email me a transcript of this chat. Thank you.

Jack: You’re welcome! You will receive a copy of this chat transcript as soon as the chat window has closed. Thank you for chatting with VIZIO today. If you have any questions feel free to contact our support team at 1-877-878-4946, online at chat.vizio.com, or email us at techsupp@vizio.com! We would also like you to join VIZIO Fandemonium today to earn points and win prizes only at VIZIOfanzone.com Thanks again, and have a great day.

Here’s how this chat needs to be improved:

Stop the chest-thumping about being US-based. This should NOT be the first thing Jack says to the customer. In fact, Jack shouldn’t say this at all. It doesn’t matter whether Vizio’s support is based in the US. The customer wants a high-quality chat. He wants a quick, correct, complete answer. Jack’s first statement really causes problems because the support he provides isn’t worth the company’s pride and it isn’t best-in-class. The cultural elitism of this statement is really unattractive, especially given the poor quality of the chat.

Use the customer’s name. The impersonal use of “Visitor” rather than the customer’s name clashes with the parts of the chat that are quite good. Some of Jack’s replies are specific and personal. For example, when he asks, “How many digits is your wireless password?”, it is clear he’s read what Visitor has written. The chat system should be configured to use the customer’s name. Why would any customer service organization want to refer to a customer by an anonymous term?

Be sincere. I was really sad when Jack laid down the classic customer service trope: “I apologize for the inconvenience.” In this case, this statement is insincere and unnecessary. There’s no need for an apology because neither Jack nor Vizio has done anything wrong. And it’s a true service failure to simply apologize when the customer needs help solving the problem.

Help the customer. Don’t merely answer the customer’s question. Visitor got an answer to his question about the length of his password. His is four digits too long. But Jack never helped him. Even if Jack can’t actually help Visitor reset the password on the Cisco router, he should have written something like, “Refer to the user guide that came with your Cisco router to find instructions on how to reset and shorten your password…”

Omit the marketing. Vizio clearly thinks, “We’ve got Visitor’s attention, so let’s pitch him Fandemonium.” But this pitch doesn’t belong in this chat, especially given the poor service Vizio has provided. And it’s not good marketing copy, either. Points? For what? Prizes? What kind? What’s In It For Me?

Reprinted from LinkedIn with permission from the author. View original post here.

 

Editor’s Note: Service Objects prides itself on customer service and tech support for effective resolutions to all questions, issues and inquiries. We’re always striving to improve our customer support, and have found chat to be an integral part of our everyday communication with those who visit our site seeking answers to their data validation problems.

Author’s Bio: Leslie O’Flahavan, Principal, E-WRITE

As E-WRITE owner since 1996, Leslie has been writing content and teaching customized writing courses for Fortune 500 companies, government agencies, and non-profit organizations. Leslie is a frequent and sought-after conference presenter, a faculty member at DigitalGov University, and the co-author of Clear, Correct, Concise E-Mail: A Writing Workbook for Customer Service Agents. Leslie can help the most stubborn, inexperienced, or word-phobic employees at your organization improve their writing skills.

 

Tips To Consider When Calling A Web Service For Large Batch Jobs

Web services are great tools for completing one or more tasks that you would otherwise not have the resources to complete on your own. Web services can be relatively quick as well, returning a response in tenths of a second. While a tenth of a second may seem fast, web services in general are regarded as slow processes and are not commonly considered when performing large batch jobs. This is because a batch may consist of millions of records, and if each web request took a tenth of a second to complete then it would take approximately 28 hours to complete one million requests. Every millisecond counts when performing large batches, so it is not uncommon for web services to be regarded as bottlenecks. However, with the right preparation and integration, making a call to a web service is almost no different than making a call to a local database. You may actually be surprised to discover that potential bottlenecks exist in areas that you may not have previously thought of.

Network connection

Web services rely on an internet connection and they normally communicate via HTTP or HTTPS on port 80 and port 443, respectively. Large batch processes typically run on designated servers with elevated security, and it is not uncommon for a network admin to lock down a server so that it cannot access the internet. Be careful not to let your admin block internet communication on the machine that will be calling the web service and ensure that the network is in good condition.

DNS lookup

Depending on your platform and how your environment is configured, your application may be performing a DNS lookup for every web request, regardless of the DNS time-to-live (TTL). With a typical local DNS resolver, the average lookup takes between 10 and 50 milliseconds. There is no need to perform a DNS check for every single request. Instead, take advantage of the DNS cache and TTL so that a lookup is only performed after the TTL has expired.

Connection leaks

Most platforms have connection pools available to quickly perform requests. Keep in mind, though, that the number of connections in a pool can be exhausted, so always remember to properly close and dispose of your connections when you are finished using them. If a connection is not closed, it remains open and unusable until it returns to the connection pool. Depending on your framework, this can take 30 seconds per connection, forcing your application to wait until a connection becomes available or, worse, to crash. The problem with connection leaks is that they do not occur right away. Instead, they commonly pop up unexpectedly after your batch job has been running for a long time, which can mean that all of the work performed up to the point of the error is lost, leaving you empty-handed and possibly behind schedule. Connection leaks are not limited to web service calls either; they can occur in all types of connections, such as database calls. Be sure to check your entire application for potential connection leaks, not just the web service calls.
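One pattern that guarantees the close-and-dispose step happens on every code path, including a thrown exception, looks like this; the pool's acquire/release API here is a stand-in for whatever your driver actually provides:

```javascript
// Check a connection out of the pool, run `work`, and guarantee the
// connection is released exactly once, even if `work` throws.
function withConnection(pool, work, done) {
  pool.acquire(function (err, conn) {
    if (err) return done(err);
    var released = false;
    function release(e, result) {
      if (!released) {          // guard against double release
        released = true;
        pool.release(conn);     // always hand the connection back
      }
      done(e, result);
    }
    try {
      work(conn, release);      // `work` must call release(err, result) once
    } catch (e) {
      release(e);               // synchronous throw: still free the connection
    }
  });
}
```

Funneling every checkout through a helper like this makes a leak a localized bug rather than something scattered across the whole batch job.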

Database design

Many batches are performed against large datasets. Make sure your tables are designed and indexed to support fast read and insert times. In general, inserts are faster than updates with where clauses, so try to avoid update commands if possible. You can’t really blame a web service for being slow if your local database queries are slower than the web service call. When processing a large record set it is sometimes faster to load parts of the data into memory, call the web service as you iterate through the in-memory record set, store the results in memory and then when done iterating, insert the results in bulk to your destination table.

Simultaneous requests

Web services accept simultaneous requests, so instead of processing one record at a time, you can open multiple simultaneous connections to complete a batch in a fraction of the time. Most platforms allow you to perform asynchronous requests, which can be used to process multiple web requests at the same time. Asynchronous requests use a thread other than the main program thread to perform the web request, and therefore consume additional resources. In general, most applications run on a single worker process. Depending on the platform, a single worker process may have between 12 and 24 threads at its disposal, with the number of threads being configurable or dynamically managed. The number of threads available to a worker process is also determined by the number of cores available on the CPU. It is important not to spawn too many asynchronous requests, as doing so can degrade the performance of your application, your local machine, and other areas such as your network connection. It is best to test your application with a small number of simultaneous requests first and then work your way up, taking the time to evaluate the performance of your application and the health of your machine with each test. In general, most batch jobs can be quickly processed with 10, 20 or even 40 simultaneous requests. Larger batches may require 100+ simultaneous requests, but be aware that doing so could expose deficiencies in other areas of your program, local machine and/or database.
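The "start small and work your way up" approach depends on being able to cap the number of requests in flight. A minimal sketch of a concurrency-limited batch runner, where the `request` function stands in for your web service call:

```javascript
// Run `request(record, cb)` over every record with at most `limit`
// simultaneous calls in flight; results are collected in input order.
function processBatch(records, limit, request, done) {
  var results = new Array(records.length);
  var next = 0, active = 0, finished = 0;
  if (records.length === 0) return done(results);

  function launch() {
    while (active < limit && next < records.length) {
      (function (i) {
        active++;
        request(records[i], function (err, res) {
          results[i] = err || res;   // real code would track errors separately
          active--;
          finished++;
          if (finished === records.length) return done(results);
          launch();                  // a slot freed up: start the next record
        });
      })(next++);
    }
  }
  launch();
}
```

Raising or lowering `limit` between test runs is then a one-line change, which makes it easy to find the sweet spot for your machine and network.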

Good application design can require careful consideration in many areas that are sometimes not commonly apparent. The contents of the short list above were aimed towards web services, but the basic list can easily apply to any scenario where a call to a method that resides outside of memory is necessary.

Security Update Performed to Address OpenSSL Vulnerability

We understand that your information is your livelihood and it must be kept secure.

We maintain a strict privacy policy and contact data sent to us via our web service interface is never recorded to persistent memory. All our sensitive data is encrypted and our redundant infrastructure ensures the highest levels of service availability.

Our engineering team has been working to assess the impact for our users in the wake of the April 7th disclosure of CVE-2014-0160, known as Heartbleed.

We join nearly every Internet service provider in responding to this critical vulnerability in SSL. Our obligation as a custodian of your data compels a unique urgency with disclosures such as these – here’s what we know, what you need to do and where you can find additional help from us.

Service Objects Audit Results

We have reviewed all Service Objects Web Services for impact from the issue described in CVE-2014-0160.

We have determined that our Web Services were unaffected and do not require customer action.

The Service Objects website and account portal were discovered to be vulnerable and were patched at 8:30 AM PDT on April 9, 2014. After our review of account activity, there is no evidence that any Service Objects user accounts were compromised.

How to Determine If Your Application Is Affected By Heartbleed

While Service Objects Web Services appear to be unaffected, we recognize a number of you may be using hosting providers or OpenSSL deployments that may be. Here is a quick walkthrough of how to determine if your application is affected.

Still Have Questions?

We hope this answers your questions about the impact of CVE-2014-0160 on your Service Objects applications. Feel free to reach out to Customer Support with any follow-up questions.

We’ll continue to monitor this issue as the community and vendors investigate this vulnerability further.


Pro-tip: Use the Service Path

Updated from November, 2010.

One of the most important tools in my utility belt for troubleshooting, referencing, and integration is the “service path.”

The service path is basically a URL that takes you to a sort of “dashboard” for a service. You can use this dashboard to: test a service; see the XML request/response structure; review the operations that are available to you, and a slew of other useful information. You can find the Service Path for any product on the product detail page, or in the Dev Guide.

Here are few valuable pieces of information that you can get from the service path.

View all available operations for a Web Service

We’ll use Address Validation US as an example. Let’s go to its service path: DOTS Address Validation – US 3. To run a transaction, you’ll need to use your existing Address Validation trial or production key, or get one here. Without a key, you won’t be able to run a test, but you’ll still be able to see operations and their respective inputs and outputs, etc. (Please note: Trial and production keys are product-specific. To run a test on another product, you will need to secure a key for that product. Please click here for a complete list.)

Run a quick transaction against an operation

Once at the service path, click on an operation, place values in the textboxes, and click “Invoke”. The service will return a response in raw XML format. This is what your application sees before it parses the data.

View all required inputs for an operation

Click on an operation and note the input fields on the form that appears. For example, “ValidateAddressWithDPV” expects Address, Address2, City, State, PostalCode, and LicenseKey as input fields. All of these fields must be sent to our service (even if they are sent without any data) for it to accept the request.

View all possible outputs for an operation

Scroll down past the form, look for “ValidateAddressWithDPV” and click on “GET”. The second shaded box will show you the list of XML outputs that our service returns on a successful transaction. For example, <DPV> and <DPVDesc> are both possible outputs for ValidateAddressWithDPV.

See what the output will look like in XML or JSON

Go to the operation page, and either run a transaction (which will output the raw XML in a browser window by default), or scroll down to the operation name and click on the protocol you will be using (“GET” for example). You will see a shaded box showing the XML that will be returned to you. Scrolling further down will show you the JSON output. Note: you can also view the JSON in your browser request by changing “format=xml” to “format=json” directly in the URL.

View the WSDL

Go to the main service path URL for your specific DOTS Web Service and click “Service Description”. For example, go here and click on “Service Description” (located in the first line above the operations). The Web Services Description Language, or WSDL (pronounced Whizz-dull), describes all these operations, inputs, operation descriptors, data types, etc. In fact, all the service path pages you see were generated from the WSDL. If you use a language that requires a WSDL URL, it is likely generating entire objects from that document and handling all the request and response structures for you.

So there you have it! I hope you get to add this to your utility belt too. It’s an invaluable tool that I use on a daily basis.


Top Three Most Common Customer Service Questions

Taking care of our customers is the highest priority of the Service Objects customer care team. We know that customers are busier than ever, and want to find answers to their customer service-related questions quickly and easily. Here are the three most common questions we receive from current and prospective customers.

(Note: If we haven’t addressed your question below, visit our comprehensive FAQ page on our website or call, email or live chat a customer care representative any time.)

Question 1: Can I check my transaction usage online for my trial/production key?

Absolutely! To check your current usage, follow these simple steps:

 

  1. Go to www.serviceobjects.com
  2. In the upper right hand corner, click on “Login”
  3. Enter your user name/email address and password.  Not sure what they are?  Call your Customer Care Representative at (800) 694-6269 or email Support
  4. Go to the Usage Reports page and fill out the form
  5. The page will automatically refresh, providing a breakdown of your daily transactions

Helpful tip: Your billing dates may not coordinate with the first and last days of the month. By entering the exact start and end dates for the billing period, you get the most accurate view of your monthly usage.

Question 2: I would like to update the credit card on my account.  How do I do this?

The best way to make changes to your account is to send an email to sobilling@serviceobjects.com with your request.  Your customer care representative will send you the necessary paperwork to fill out and fax back to (805) 963-9179.

Question 3: I am currently using one of the DOTS Web Services, and would like to add an additional service to my account.  How do I go about doing that?

We are so glad you asked!  If you need to test the integration of a new service before deciding to purchase, visit our website and login to your account (see login instructions above).  Then go to the DOTS product you are interested in and click on the Free API Trial Key link.  You’ll receive your new trial key via email within moments.  If you would like to bypass the trial key process, and order a production key immediately, click on the Order Now link on the specific product page and complete the order process online.

We hope that these answers to your most common questions help you find what you need quickly. And remember, we are always just an email, phone call or live chat away!

New Service Objects Website Designed for Developers

We know, we know… it’s been a while since our last post (seven months, but hey, who’s counting) and we promise we’re trying to do better. A LOT has been happening these past few months, and we are so excited to tell you about our latest project, something known affectionately around the office as the 300lb Gorilla – our new and totally redesigned website!


Immediately you will notice streamlined menus, simple navigation and easy access to the details you need the most. With the addition of the Developers section, our new site has been transformed into a complete resource filled with DOTS sample code, plugins and more – every developer and IT professional’s dream.

Our CEO, Geoff Grow, had this to say: “Our new website is the next step in Service Objects’ brand transformation. We think this new look expresses our culture of a modern company: Innovative, reliable and open for business.”

So… What’s new on the website?

EASY-TO-NAVIGATE PRODUCTS SECTION

One of the best features of this website is the enhanced, easy-to-navigate Products section. We found that the best way to enhance the navigational experience was to simplify – allowing each visitor to immediately home in on the web service they are looking for.

UPDATES FOR DEVELOPERS

During our design and development process, our main focus was to create a more “developer-based” experience. Our Marketing and Engineering teams worked closely to find better ways to encourage usability and make developer-related tools easy to access.

  • Developers section on main navigation: A new section has been added to our site where developers and IT integrators can easily find sample code, developer guides, and a quick-test playground called Lookups.
  • Sample Code section has been reorganized: Sample Code can be found under the Developers section or within unique Product pages under “Developer Tools” and is organized by Product category and programming language.
  • Product-specific pages: Each DOTS Web Service is unique, so we needed a way to display the right information in an organized structure. We found that letting each user quickly select the product-related information most important to them eliminated time spent searching. The “Developer Tools” area for each product also displays quick links to the service path and WSDL.

FREE TRIAL KEY SIGN-UP:

As the leader in real-time data validation, we believe in letting the accuracy of our data speak for itself. Our Free Trial allows anyone interested in integrating one of our DOTS Web Service APIs to instantly access a Trial Key with 500 free transactions. We also offer a Free Batch Upload so non-engineers can test our data as well.

We hope you’ll find this website sophisticated and smart, and we’d love to hear what you think!

Service Objects integrations can help improve your contact data quality, help with data validation, and enhance your business operations.

New Contact Validation Design References for Microsoft CRM, Microsoft SQL & Oracle DB

Our development team has been working hard on some new design references to make our contact validation services easier to integrate. Many services can be strengthened with Service Objects products for address validation, email validation, NCOA service, phone number validation and much more. Our design references are intended to make the integration process easier for web developers and to remove the guesswork about what additional downloads and preparations may be needed for success.

Our NEW additions to our design reference library are:

Visit the above links to learn more and to download the Design References for your project.

You can use a FREE Trial Key for any of our products to test them out on your system.

Our goal is to help you prevent fraud and mistakes before, and even after, they enter your system.


Failover of Contact Validation Services, Ensuring the Continuous Flow of Your Data

Many companies talk about up-time and service level agreements that look great on paper but don’t perform up to customer expectations in a crisis. A good backup process, covering application software, is necessary and should be standard, but it’s not enough.

You need an architecture that provides failover from a primary server to a backup server, one that picks up and provides the same service with minimal interruption in the flow of data to your application. This is more complicated, and more valuable, than a mere backup of the application or even load balancing. Proper failover minimizes interruption of access to your contact validation services. For mission-critical and business-critical applications this involves automatic failover to a fully separate alternate location. This dual-datacenter level of failover protects against failure of servers within a single datacenter, LAN and WAN network access failures, and physical location failure modes such as fire, power loss, or natural disasters.

When considering real-time contact validation services for a business application, where continuous uptime is critical, here are some things you should look out for:

  • Live backup servers that perfectly mirror the live primary servers; they should be identical in both count and content
  • Multiple datacenters, one hosting the primary servers and the other the backup servers, located in different regions of the country; if a disaster takes out one datacenter, or its network access, it should not affect the other
  • 99.95% uptime, with assurances that both datacenters have never gone down at the same time
  • With the application and content exactly the same in both production servers, doing a failover should be as simple as changing the URL from one server to the next. For example, you would code your application to automatically failover from the URL of the primary server to the URL of the back-up server based on a condition such as response timeout
  • In the event of massive datacenter failure, your contact validation provider should redirect traffic to the backup server at the backup datacenter
  • XML code should contain failover suggestions and any support team should be able to help their clients implement failovers
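The URL-switch failover described in the list above can be reduced to a small helper in application code. This is a minimal sketch under stated assumptions: the endpoint URLs and the `request_fn` callable are illustrative placeholders, not actual DOTS API details.

```python
def call_with_failover(endpoints, request_fn):
    """Try each endpoint in order, returning the first successful response.

    `endpoints` is an ordered list of server URLs (primary first, backup next);
    `request_fn` performs the actual HTTP call and raises on timeout or error.
    """
    last_error = None
    for url in endpoints:
        try:
            return request_fn(url)
        except Exception as exc:   # timeout, connection refused, bad response, ...
            last_error = exc       # remember the failure and try the next server
    raise RuntimeError("All validation endpoints failed") from last_error


# Example: the (hypothetical) primary times out, so the call
# transparently falls through to the backup server.
def fake_request(url):
    if "primary" in url:
        raise TimeoutError("primary did not respond")
    return {"server": url, "status": "ok"}

result = call_with_failover(
    ["https://primary.example.com", "https://backup.example.com"],
    fake_request,
)
```

In production the `request_fn` would wrap a real HTTP client with a response-timeout condition, exactly as the bullet above suggests; the failover logic itself stays the same.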

Having a data response failure in a live application that utilizes contact validation can mean unexpected losses, as your customers’ usage of that application is disrupted. If a company is accustomed to having validated contacts imported into its CRM system, or used in some application on its website, a loss of service, even a temporary one, can lead to corrupted or uncorrected data. Failure of your contact validation processes can cause salespeople to waste effort chasing bogus leads, or packages to be delivered to bad addresses, all of which could be avoided if the contact validation provider offers the business-critical level of service noted above.

Happy Birthday Microsoft Office

In a few days Microsoft Office will turn 21. It’s used in 80% of all businesses, with only 8% using alternatives like Oracle StarOffice, Google Apps, and Lotus Symphony. To celebrate this momentous occasion, we thought it would be appropriate to develop some new code samples that integrate DOTS Web services with Microsoft Office 2007 and 2010.

MS Office offers a great development environment as well as a great set of applications (Excel, Word, Access, PowerPoint). The new examples we created use the latest in .NET technologies to validate email addresses, correct postal addresses and look up sales tax rate information, all within Excel. We’ve provided the source code too, so developers can easily integrate and customize DOTS Web services into their Office applications.

Our new Excel examples enable developers to transparently leverage the power of DOTS Web services directly from within their Microsoft Office applications: after programming in the desired functionality, they can easily distribute the enhanced Office documents to their team, using 100% native Microsoft Office functionality.

So, Happy birthday, Microsoft Office, and thank you for making it possible for companies like us to design products that enhance your core functionality. DOTS Web services are now compatible with Microsoft Word, Excel, PowerPoint and Outlook, versions 2000, 2003, 2007 and 2010.

Service Objects Customers Benefit From Seventh Straight Month of 100% Availability

At Service Objects, our commitment to contact data quality doesn’t stop at the data services we provide. Availability and access to that data is critical to our business and our customer’s businesses. Ensuring our data is available “on-demand” is something we take very seriously. For the seventh straight month, we are pleased to report 100% Availability of our DOTS Web Services.

To monitor the performance and accuracy of our services we use several automated quality assurance tools. Our motto is to test the accuracy and speed of every service, every operation, every hour, every day: 24/7/365. On a typical day at Service Objects we perform over 2,000 internal self-tests on our delivery network for a variety of systems. We also utilize 3rd Party systems to monitor true uptime, application availability and performance of our DOTS Web Services as an additional, impartial test. One of these 3rd Party systems is AlertSite.
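The test-everything cadence described above boils down to running a battery of named health checks and aggregating pass/fail results for alerting. Here is a minimal sketch of that pattern; the check names and simulated outcomes are hypothetical stand-ins for real per-operation service probes, not our actual monitoring tooling.

```python
import time

def run_self_tests(checks):
    """Run each named health check and record pass/fail with timing.

    `checks` maps a check name to a zero-argument callable that raises
    on failure. Returns a summary dict suitable for logging or alerting.
    """
    results = {}
    for name, check in checks.items():
        start = time.monotonic()
        try:
            check()
            results[name] = {"ok": True, "seconds": time.monotonic() - start}
        except Exception as exc:
            results[name] = {
                "ok": False,
                "error": str(exc),
                "seconds": time.monotonic() - start,
            }
    return results


def failing_check():
    raise ValueError("bad response")

# Hypothetical probes: one simulated pass, one simulated failure.
summary = run_self_tests({
    "address-validation": lambda: None,   # simulated pass
    "email-validation": failing_check,    # simulated failure
})
```

A scheduler would invoke `run_self_tests` on the hourly cadence described above and raise an alert whenever any entry reports `"ok": False`.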

“External monitoring is essential for measuring and understanding customers’ experience of today’s applications, which often includes a number of participants in the application delivery supply chain,” said Ken Godskind, Chief Strategy Officer for AlertSite. “In providing full disclosure of their AlertSite metrics, Service Objects offers admirable transparency to the users that count on their services.”

AlertSite is a hosted provider of Web performance products. AlertSite maintains 40 monitoring stations in data centers on every continent but Antarctica. They ensure and provide independent analysis of our DOTS Web Service Network to track and verify our systems are always available and running at peak performance. Using AlertSite, we test our data as often as every five minutes from multiple cities around the globe. Real-time alerts are generated and logged if page errors or performance problems occur. Service Objects is the only provider of contact validation web services that uses an independent third party to corroborate the company’s performance and fulfillment of its Service Level Agreement (SLA).

“Third-party monitoring of our network, applications and performance is critical for maximum reliability”, said Geoffrey Grow, CEO and Founder of Service Objects, “not only does AlertSite give us the confidence our network is running worldwide, it also independently ensures we are meeting or exceeding our Service Level commitment to our customers.”

Since 2006, Service Objects has published monthly performance reports to demonstrate that our network consistently performs at these levels. Click here to view our archive of Performance Reports from AlertSite.

Posted by Gretchen N.

The Top 10 Reasons I Enjoy Working at Service Objects

10. Our game room with a ping pong table, foosball table, big screen projector, XBOX360, and Rock Band.

9. Blazing fast network access and dual 20″ LCD monitors.

8. Did someone say BBQ?! We love to cook on the Q on our beautiful patio overlooking the mountains of Santa Barbara.

7. Wellness benefits coverage that lets us spend it how we want it! $500 per month on top of our salaries!

6. Fifteen paid personal days off and seven paid holidays = over a month off PAID!

5. Flex time! As long as we work 8 hours in a day we can always get all of our appointments and personal items taken care of.

4. Service Objects encourages us to support our community and gives us work time to spend on community outreach, service, or mentoring.

3. Working in Santa Barbara, CA and being only one mile away from the beautiful SB beaches.

2. Our profit sharing plan that pays monthly and YES, we have been profitable every quarter since day one!

1. And the #1 reason I enjoy working at Service Objects…our office is a dog friendly environment so I get to bring my dog (Honey) to work everyday!

Check it out for yourselves!

Thanks for reading this week,

Chris M.

So, What Are Those DOTS Things Anyway?

Service Objects was created in 2001 out of the necessity to validate contact information. Since then we have become a well-oiled machine offering over 20 different DOTS Web Services.

Service Objects DOTS Web Services offer capabilities that leave our competition in the dust, scratching their heads. But a question that keeps rising to the top is: “What does DOTS stand for?”

“DOTS” stands for Dynamic On Time Services, an acronym created by one of our founders, who is our current CEO. The reason I wanted to blog about this topic is that I’m in sales. I’m on the front lines every day, and one of the most consistent questions I receive is, “What does DOTS stand for?”

If you look throughout our website, you will find small “dots” at the top left portion of each page, next to our company logo. These “dots”, or symbols, are what tie our logo and company name to our actual product names, which always begin with the word “DOTS” – case in point, our DOTS Lead Validation Web Service.

I hope I was able to clarify one thing for you today… when you think of DOTS, think
“Dynamic On Time Services”, and of course, think Service Objects!

Thanks for reading,

Ryan M.

Want to connect with Ryan M.? Email him today at communications@serviceobjects.com.