
Address Suggestion with Validation: A Match Made in Heaven

In an ideal world, data entry would always be perfect. Sadly, it isn’t – human errors happen to end users and call center employees alike. And while we make a good living cleaning existing bad data here at Service Objects, we would still much rather see your downstream systems be as clean as possible.

To help with that, many organizations are getting an assist from Google, in the form of the Autocomplete feature of their Places API. If you set up your form properly and use their API, address suggestions will appear in a dropdown that your end users can take advantage of to enter correct data into your system. That’s great, isn’t it?

It does sound great on the surface, but when you dig a little deeper there are two problems:

  • First, the Google Places API often does not suggest locations down to the apartment or suite level of detail. This matters because a considerable segment of the population lives in apartments or does business on separate floors, suites or buildings.
  • Second, the locations the Google Places API suggests are often not mail deliverable. For instance, a business may have a physical location at one address and receive mail at a completely different address.  Or sometimes Google will just make approximations as to where an address should be.

For example, check this address out on Google Maps: 08 Kings Rd, Brighton BN1 1NS, UK.  It looks like a legitimate address, but as the street view shows, it does not seem to correspond to anything.

These issues can leave gaping holes in your data validation process.  So, what can you do? Contact us at Service Objects, because we have the perfect solution: our Address Suggestion form tool. When combined with the Google Places API you will have a powerful tool that will both save time and keep the data in your system clean and valid.

This form tool is a composite of the Google Places API and our Address Validation International service. The process consists of the data entry portion, the Google Places API lookup, then the Address Validation International service call, and finally displaying selectable results to the end user.

Let’s start by discussing the Google API Key, and then the form, and finally the methods required to make that Google Places API call.

The Google Places API requires a key to access it. You can learn more about this API here. Depending on your purposes you may be able to get away with using only Google’s free API key, but if you are going to be dealing with large volumes of data then the premium API key will be needed. That doesn’t mean you can’t get started with the free version: we in fact use it to put our demos together, and it works quite well.

When setting up your key with Google, remember to also turn on the Google Maps JavaScript API, or else calls to the Places API will fail. Also, pay particular attention to the part about restricting access with the API key. When you have a premium key this will be very important, because it will allow you to set the level at which you want the key to be locked down, so that others can’t simply look at your JavaScript code and use your key elsewhere.

The form we need to create will look like a standard address data entry form, but with some important details to note. First let’s look at the country select box: we recommend that this be the first selection the user makes. Choosing a country first benefits both you and the user, because it will limit suggested places to this country, and will also reduce the number of transactions against your Google Places API key. Here is a link to how Google calculates its transaction totals.

Another important note is that we need to have the Apt/Suite data entry field. As mentioned earlier, the Google Places API often does not return this level of resolution on an address, so we add this field so that the end user can provide this information.

The rest of the fields are really up to you in how you display them. In our case, we display the parsed-out components of the selected address back into the rest of the address fields. We keep all the address input fields editable so that the end user can make any final adjustments they want.

The methods associated with this process can be summarized by a set of initializations that happen in two places: first, when a country is selected, and second, when the user clicks into the Address field. For our purposes we default the country selection to the United States; when the country is changed, the Autocomplete gets reinitialized to the selected country. And when a user clicks into the Address field, the initialization creates a so-called bias, i.e. Autocomplete returns results based on the location of your browser. For this functionality to work, the end user’s browser will ask for permission to share its location with Google. If the user does not permit Google to know this, the suggestion bias is turned off and does not work.
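To make this concrete, here is a rough sketch of what these initializations might look like in JavaScript, assuming the Maps JavaScript API is loaded and the form contains an address input and a country select box (the element ids used here are illustrative):

    // Sketch only: assumes the form contains <input id="address"> and
    // <select id="country">, and that the Maps JavaScript API is loaded.
    var autocomplete;

    function initAutocomplete() {
      autocomplete = new google.maps.places.Autocomplete(
        document.getElementById('address'),
        {
          types: ['address'],
          componentRestrictions: { country: document.getElementById('country').value }
        }
      );
    }

    // Reinitialize the restriction whenever the user changes countries.
    function onCountryChanged() {
      autocomplete.setComponentRestrictions(
        { country: document.getElementById('country').value }
      );
    }

    // On focus of the Address field, bias suggestions toward the
    // browser's reported location (requires the user's permission).
    function geolocateBias() {
      if (navigator.geolocation) {
        navigator.geolocation.getCurrentPosition(function (position) {
          var circle = new google.maps.Circle({
            center: {
              lat: position.coords.latitude,
              lng: position.coords.longitude
            },
            radius: position.coords.accuracy
          });
          autocomplete.setBounds(circle.getBounds());
        });
      }
    }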

This bias has a couple of interesting features. For instance, you can change the code to not use the user’s browser location and instead supply a custom latitude and longitude. In our example, the address suggestion does not use the user’s current position when the selected country differs from the country the user is in. But when the user is in the selected country, the results returned by the Google Places API are prioritized by proximity to the user’s location. This means that if you are in Santa Barbara, CA and select the United States as the country, when you start typing a United States address you will first see matching addresses in Santa Barbara, and then work outward from there.

You can customize the form bias to any particular location that you have a latitude and longitude for.  The ability to change this bias is very useful in that setting the proper bias will reduce the number of lookups against the Google Places API before finding an address match, and will also save manual typing time.

Now let’s discuss the Address Validation International service API call, which consists of a key, the call to the service and setting up best practices for failover.

Let’s start with the key. You will need either a live or a free trial license key from us, the latter of which can be obtained here. For this example, a trial key will work fine for exploring and tinkering with this integration. One of the great things about using our service is that when you want to turn this into a live production-ready solution, all you have to do is switch out the trial key for the production key and switch the endpoint to the production URL, both of which can be done in minutes.

The call to the Address Validation International service will be made to either the trial or production endpoint, depending on the key that you are using. The details of the service and how to integrate with it can be found in our developer guides. In the Javascript code you will round up all the data in the fields that were populated by the address suggestion selection and send it off to the service for validation. The code that manages the call to the Address Validation International service needs to be executed on a back-end server.

Making the call to the service directly from Javascript is strongly discouraged, because it will expose your license key and allow someone to take it and use your transactions maliciously. You can read more about those dangers here. Also, here is a blog about how to make a call to another one of our services using a proxy. The basic idea is that your Javascript will call a proxy method that contains your license key, essentially hiding it from the public. This proxy method makes the final call to the Address Validation International service, gets the results from it and passes those results back to the original call in the Javascript. In this situation, the best place to implement failover is in the proxy method.
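As a minimal sketch of this proxy pattern, here is what the back end might look like using Node.js with Express (the route, parameter names and operation path are placeholders rather than the service’s exact API; consult the Address Validation International developer guide for the real ones):

    // Minimal proxy sketch (Node.js/Express). The license key stays
    // server-side and is never exposed to the browser.
    const express = require('express');
    const https = require('https');

    const app = express();
    const LICENSE_KEY = process.env.SO_LICENSE_KEY;

    app.get('/api/validate-address', (req, res) => {
      const query = new URLSearchParams({
        Address1: req.query.address1 || '',
        Locality: req.query.locality || '',
        PostalCode: req.query.postalCode || '',
        Country: req.query.country || '',
        LicenseKey: LICENSE_KEY
      }).toString();

      // Placeholder URL: substitute the trial or production endpoint and
      // the operation path from the developer guide.
      https.get('https://trial.serviceobjects.com/AVI/...?' + query, (soRes) => {
        let body = '';
        soRes.on('data', (chunk) => (body += chunk));
        soRes.on('end', () => res.type('json').send(body));
      }).on('error', () => res.status(502).send('Validation service unavailable'));
    });

    app.listen(3000);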

So what is failover? Failover, from the perspective of a developer using our services, is just a secondary data center to call in the unlikely event that one of our primary data centers goes down or does not respond in a timely manner. Our developer guides can again help with this topic. There you will also find code snippets that demonstrate our best practice failover.

Once this call is set up, all that is left is evaluating the results and displaying the appropriate message back to the end user. While you can go through our developer guides to figure this out, the first important field to examine in the response from the Address Validation International service is the Status field – here is a table of what is expected to be returned:

Address Status

  • Invalid: For addresses where postal and/or street level data is available, but the street was not found, the building number was bad, etc.
  • InvalidFormat: For addresses where postal data is unavailable and only the address format is known.
  • InvalidAmbiguous: For possibly valid addresses that may be correctable, but ultimately failed validation.
  • Valid: For addresses with postal level data that passed validation.
  • ValidInferred: For addresses where potentially far-reaching corrections/changes were made to infer the return address.
  • ValidApproximate: For addresses where premise data is unavailable and interpolation was performed, such as Canadian addresses.
  • ValidFormat: For addresses where postal data is unavailable and only the address format is known.

Another important field will be the ResolutionLevel, which can be one of three values: Premise, Street and City. The values returned in these two fields will help you decide in code exactly what you want to display back to the end user. What we do in our demo is display the Status and ResolutionLevel to the end user along with the resulting address. Then we give the user a side-by-side view of both the resulting address and the original address the user entered. This way the end user can make a decision based on everything we found. In the case shown here, for example, we updated Scotland to Dunbartonshire and validated to the premise resolution level.

There are many customizations that can be made to this demo, such as the example we mentioned earlier about setting up the bias.  Additionally, instead of using the Address Validation International service you could also create an implementation of this demo using our Address Validation US or our Address Validation Canada products.

Want to try this out for yourself? Just contact one of our Account Executives to get the code for this demo – we’ll be glad to help.

Types of Integrations

Searching for the proper tool to fit your business needs can be a daunting task. At Service Objects, ease of integration is engineered in as part of each of our products, ranging from seamless API interfaces to list processing services that work directly on your data files. This article discusses each of our integration strategies in detail, to simplify your research process and to help pinpoint the type of integration that will best suit your needs.

Service Objects products are created as web services. This means that any programming language that can make a web service request can make use of our services. From programming languages like PHP, Java, C#, Ruby, Python, Cold Fusion and many more, to CRM systems such as Salesforce, Marketo, Hubspot and beyond, nearly all major languages and platforms can make use of Service Objects’ web services.

Below we discuss the most common types of integrations we see from our clients. And if you have a platform that isn’t listed below and would like more information on how it could tie in with our services, please reach out to us – we are happy to provide tips, sample code and plug-ins, and to recommend best practices and procedures.

API integration

This is our most popular option for real-time validations, and allows our capabilities to be integrated directly into your software. Our services can be called via web requests either by HTTP GET or SOAP/POST, and the service response can be delivered in XML or JSON format. These protocols and output formats generally allow enough flexibility to meet your needs. We also offer a web service description language (WSDL) file that can be consumed to auto-generate the necessary methods and classes to call our various web services. If you have a specific language in mind, please check out our Sample Code page – chances are we have sample code already written for your needs.
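As a quick illustration of how simple the GET-and-JSON pattern is, here is a sketch in JavaScript (the operation path and input parameters are placeholders; the exact ones for each service are in its developer guide):

    // Illustrative only: substitute the operation path, inputs, and
    // license key for the specific DOTS service you are using.
    const PRIMARY = 'https://ws.serviceobjects.com';
    const BACKUP = 'https://wsbackup.serviceobjects.com';
    const PATH = '/...'; // placeholder operation path

    const params = new URLSearchParams({
      Address: '27 E Cota St',
      City: 'Santa Barbara',
      State: 'CA',
      PostalCode: '93101',
      LicenseKey: 'YOUR-LICENSE-KEY'
    });

    async function validate() {
      try {
        const res = await fetch(`${PRIMARY}${PATH}?${params}`);
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return await res.json();
      } catch (err) {
        // Best-practice failover: retry against the backup data center.
        const res = await fetch(`${BACKUP}${PATH}?${params}`);
        return await res.json();
      }
    }

    validate().then(console.log);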

List Processing

List processing involves sending us a list of your data to be validated. We take this list and process it through the appropriate web service and then return the results, appended to each record in your file. From there you can take the data, apply your business logic, and save it to your database.

This type of process is often the best approach for cleaning up existing data in bulk. A large export processed by us is generally easier than integrating via the API and processing the data yourself. However, depending on the resources you have available, both API and list processing are completely viable options, and we have a number of clients that use both in concert.

We offer two convenient solutions for list processing: single batch runs for one-time processing, or automated batches. Let’s look at the differences between them:

Single batch runs. A single batch run is one of the simplest ways to have your data processed. You send us a comma separated value (CSV) file and we’ll run it against our services, append the data, and return it to you. It is perfect for cleaning up existing data. Many clients run a single one-time batch process to clean up their existing data and then implement a real time solution into their product, giving them the best of both worlds: clean existing data together with a process to ensure that incoming data is the highest quality possible.

Automated List Processing. Your data can be processed securely and automatically by uploading the data file to our secure FTPS server. Once uploaded, our system will recognize the new list to process and get to work. The input file will be parsed, run through a web service, and the results will be appended to the original file. It is nearly identical to the one-time processing service that we offer, with the added benefit that you can upload files at your convenience to be processed automatically.

CRM integration

If you currently use one of the major customer relationship management (CRM) or marketing automation software platforms like Salesforce, Marketo, Hubspot, or others, chances are that our services integrate with it, and we likely have sample code or plug-ins for them. Each platform has its own level of customizability, but they almost universally offer some variation on a plug-in, API, or exposed interface to integrate with. Contact us to learn more about integrating our capabilities with your specific platform.

Whether you develop an API interface for your current software, use batch list processing, or integrate our capabilities with your CRM or marketing automation platform, Service Objects is with you every step of the way with support, sample code, tutorials, and the experience that comes with serving nearly 2500 customers. Get in touch with us today and see how easy it can be to integrate best-in-class data quality with your own application environment.


Salesforce Data Quality Tools Integration Series – Part 2 – Validation Plug-ins in Flows

Welcome back to our blog series where we demonstrate the various ways you can achieve high data quality in Salesforce through the use of Service Objects’ validation tools.

In the first part of this series, we showed how to create a trigger and an Apex future class to handle calls to our web service API. That blog described a common way, in code, to call our services in Salesforce. But there are simpler ways to do this.

In this blog, we are going to step back and demonstrate an integration that requires little more than drag and drop to achieve your goals by implementing a plug-in on a flow. Because – why write code if you don’t have to? When we are done you will be able to create a flow and drop our plug-in anywhere in your process and wire it up.

We are going to start by looking at some basic setup. Then we will step through the code. What code? You are probably wondering, “Why do we need to look at the code if we don’t need to write any?” Well, if you want to implement any additional business logic or tailor the plug-in, you will be equipped to do just that.

After we review the code, we will jump into creating a sample flow that allows a user to enter either a US address or Canadian address and process that data through our US and Canada validation APIs and then display those results back to the screen.

In this section, we will need to do some setup before we get started. We will need to register the Service Objects endpoint with Salesforce so that we can make web service calls to the address validation APIs. We went over this in the previous blog, but it is worth repeating here.

The page where we add the URL is called “Remote Site Settings” and can be found in Home->Settings->Security->Remote Site Settings. This is the line I added for this blog.

Be aware that this will only work for trial keys. With a live/production key you will want to add a URL for ws.serviceobjects.com and wsbackup.serviceobjects.com. As a best practice, you’ll want to add both endpoints with your live key, to take advantage of our failover capabilities. We named this one ServiceObjectsAV3 because we are really just reusing it from the previous blog, but you can name it whatever you want.

There is no need for custom fields to get this example to work, but you will likely want to create the same or similar fields as those in the previous blog, seen here.

This section shows the structure of the plug-in class for the US address validation, which we are naming AddressValidationUSA3, and the signature of the invoke and describe methods. The invoke method will contain the business logic and the call to our web service API. The describe method sets up the input and output variables for the plug-in that end users will be able to connect to on a flow.

The describe method also allows you to supply definitions and descriptions of the variables on the plug-in itself that will appear in the flow interface. The definitions used here are important because they can save a lot of time for the end user who is developing a flow, so I would resist skipping them to save time. The following is just a snippet of the code.
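Since the original code screenshot is not reproduced here, the following is a rough sketch of the class skeleton and describe method, based on Salesforce’s Process.Plugin interface (the parameter names and descriptions are illustrative and would be matched to your flow):

    // Sketch of the plug-in skeleton; parameter names are illustrative.
    global class AddressValidationUSA3 implements Process.Plugin {

        global Process.PluginResult invoke(Process.PluginRequest request) {
            // Business logic and the web service call go here (see below).
            Map<String, Object> result = new Map<String, Object>();
            return new Process.PluginResult(result);
        }

        global Process.PluginDescribeResult describe() {
            Process.PluginDescribeResult result = new Process.PluginDescribeResult();
            result.description = 'Validates a US address against DOTS Address Validation 3.';
            result.tag = 'Address Validation';
            result.inputParameters = new List<Process.PluginDescribeResult.InputParameter>{
                new Process.PluginDescribeResult.InputParameter('Address',
                    'The street address to validate.',
                    Process.PluginDescribeResult.ParameterType.STRING, true),
                new Process.PluginDescribeResult.InputParameter('City',
                    'The city of the address.',
                    Process.PluginDescribeResult.ParameterType.STRING, false),
                new Process.PluginDescribeResult.InputParameter('State',
                    'The state of the address.',
                    Process.PluginDescribeResult.ParameterType.STRING, false),
                new Process.PluginDescribeResult.InputParameter('Zip',
                    'The ZIP code of the address.',
                    Process.PluginDescribeResult.ParameterType.STRING, false)
            };
            result.outputParameters = new List<Process.PluginDescribeResult.OutputParameter>{
                new Process.PluginDescribeResult.OutputParameter('DPV',
                    'Delivery Point Validation score.',
                    Process.PluginDescribeResult.ParameterType.STRING)
            };
            return result;
        }
    }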

There really isn’t much else to the describe method; most of the business logic happens in the invoke method. In the invoke method, we gather the inputs to the plug-in and do some initial formatting to make sure we have valid characters in the call to our API. In gathering the inputs, we make sure to use the names of the inputs that we used in the describe method.

Since we will be making a path-parameter call to the API, we want to account for anything that could break the URL, such as missing parameters. With our API, a missing parameter will simply break the call, but on other APIs it could change the meaning of the call and end up returning unexpected results. To make sure there are no issues with missing parameters, we simply replace any that are missing with a space character. Just as in the previous blog, there are minimum field requirements before it even makes sense to call the operation. The operation we are using is GetBestMatches, and these are the requirements:

  • Combination 1
    • Address
    • City
    • State
  • Combination 2
    • Address
    • Zip code

If you do not have some combination of these inputs then it is best to add code to avoid the call completely, since there is no way to validate an address otherwise. By “avoid the call,” we mean avoid even hitting the plug-in at all, since it would not be necessary. A decision component in a flow can help with the process.

In an effort to simplify the code, I pushed the logic for calling our API into a method called CallServiceObjectsAPI which should make things easier to read. Using this method, we pass in the input parameters that we cleaned up.

Below, I show how to set up the HttpRequest/HttpResponse and the request URL. After that, I add in some basic error checking for a couple of results. First, I check whether there was an error with the API call and/or the result from it. If other errors happen outside of that, we catch the general exception, which I would assume is a connectivity issue at that point. You can try to catch more specific exceptions on your own, but what I have here will work in a general fashion. In the instance of an error or exception on the call to our API, we also demonstrate our standard best practice of failover by adding code to call another endpoint in these situations. In this blog we are walking you through a trial key scenario, so failover here will not be true failover, since it is failing over to the same trial.serviceobjects.com endpoint. But in a live key scenario, you would be encouraged to use ws.serviceobjects.com for the first call and wsbackup.serviceobjects.com for the failover call.
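Since that screenshot is not reproduced here either, the following rough sketch shows the shape of that error checking and failover, assuming a CallServiceObjectsAPI helper like the one sketched after the next paragraph:

    // Sketch of the invoke-side error checking. With a live key, pass
    // 'ws.serviceobjects.com' on the first call and
    // 'wsbackup.serviceobjects.com' on the failover calls.
    String ServiceObjectsResult;
    try {
        ServiceObjectsResult = CallServiceObjectsAPI(address, city, state, zip,
            licenseKey, 'trial.serviceobjects.com');
        // An Error object with TypeCode 3 indicates a service-level
        // problem (per the developer guide), so we fail over.
        if (ServiceObjectsResult == null
                || ServiceObjectsResult.contains('"TypeCode":"3"')) {
            ServiceObjectsResult = CallServiceObjectsAPI(address, city, state, zip,
                licenseKey, 'trial.serviceobjects.com');
        }
    } catch (Exception e) {
        // Likely a connectivity issue: try the other data center.
        ServiceObjectsResult = CallServiceObjectsAPI(address, city, state, zip,
            licenseKey, 'trial.serviceobjects.com');
    }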

Another thing you may have noticed in the code above is that the input parameters to the API call are URL encoded, and “+” characters are switched to “%20”. The reason for this is that certain characters are not allowed in a URL or a path-parameter call, so the built-in Apex function urlEncode cleans that kind of thing up. One side effect of the encoding is that it replaces spaces with “+” symbols. Though “+” symbols are becoming the norm in URLs, path-parameter calls still have issues with them. So the proper way to make the call work is to replace the “+” symbol with “%20”, which will be deciphered correctly as a space in the end. The method returns a string response from the web service – more precisely, based on the call made, a JSON string response – which we save in the variable ServiceObjectsResult.
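Here is a rough sketch of what the CallServiceObjectsAPI helper itself might look like, including the missing-parameter guard mentioned earlier and the encoding fix just described (the path-segment order is illustrative; check the developer guide for the exact URL format):

    // Sketch of the helper that makes the path-parameter call.
    public static String CallServiceObjectsAPI(String address, String city,
            String state, String zip, String licenseKey, String host) {
        // Substitute a space for any missing input so the URL stays intact.
        address = String.isBlank(address) ? ' ' : address;
        city    = String.isBlank(city)    ? ' ' : city;
        state   = String.isBlank(state)   ? ' ' : state;
        zip     = String.isBlank(zip)     ? ' ' : zip;

        String url = 'https://' + host + '/AV3/api.svc/GetBestMatchesJson/'
            + enc(address) + '/' + enc(city) + '/' + enc(state) + '/'
            + enc(zip) + '/' + enc(licenseKey);

        HttpRequest req = new HttpRequest();
        req.setEndpoint(url);
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        return res.getBody();
    }

    // URL-encode a segment, then swap '+' for '%20' because path
    // parameters do not treat '+' as a space.
    private static String enc(String value) {
        return EncodingUtil.urlEncode(value, 'UTF-8').replace('+', '%20');
    }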

The first thing we do after we get the response from the method is deserialize it into a Map object so we can start processing the result. Here is the rest of the code.

This section of the code checks the type of response that was returned. The response could be an address, an error, or a network error response. Based on those variations, we populate the corresponding output values in a Map variable called “result”. In the Map, we map the outputs from the service to the expected outputs described in the describe method. Those values are the output values of the plug-in and are directly interfaced with in the flow. Adding code anywhere in the method we just went through would be appropriate, based on your own specialized business logic.
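A rough sketch of that processing might look like the following; the response field names shown are drawn from the GetBestMatches operation, but verify them against the developer guide:

    // Deserialize the JSON response and map it to the plug-in outputs.
    Map<String, Object> response =
        (Map<String, Object>) JSON.deserializeUntyped(ServiceObjectsResult);
    Map<String, Object> result = new Map<String, Object>();

    if (response.containsKey('Addresses')) {
        // Validated address: copy the fields the flow cares about.
        List<Object> addresses = (List<Object>) response.get('Addresses');
        Map<String, Object> best = (Map<String, Object>) addresses[0];
        result.put('Address', best.get('Address1'));
        result.put('City', best.get('City'));
        result.put('State', best.get('State'));
        result.put('Zip', best.get('Zip'));
        result.put('DPV', best.get('DPV'));
    } else if (response.containsKey('Error')) {
        // Service error: surface the description to the flow.
        Map<String, Object> error = (Map<String, Object>) response.get('Error');
        result.put('ErrorDescription', error.get('Desc'));
    }
    return new Process.PluginResult(result);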

Now that we have gone over the code, we are ready to jump in and show an example of our plug-in in a flow. For this example, I also created a Canadian address validation plug-in to make it a little more interesting. However, I do not see any service we offer that would not make for an appropriate and powerful example.

As I mentioned at the outset of this blog, I will show you a demonstration of a flow where the end user is presented with a data entry screen, with options for adding either a US address or a Canadian address. From there, we will wire it up to either the US address validation plug-in or the Canadian address validation plug-in, and then finally display the results to the screen. This flow will be more of an example of how to wire up the plug-ins than of creating input and output screens. Though doing something with screens is not out of the question, it is more realistic to have a flow that manipulates Contact, Account or custom objects without an interface.

Start by creating a new flow in the Process Automation section. I am just going to open the one I already had saved. The first step is to drag on the Screen object; be sure to set it as the starting interface by setting the green down arrow on the object. A flow cannot be saved without one part of it being designated as a starting point.

Inside this interface you will setup the input fields that you want to retrieve from the user. In this example, we forced the Address 1 field to be a required field and at the bottom we added a radio button selection for the desired country, defaulted to USA.

Once we have the inputs from the user, we need some way to route the variables to either the US address validation or the Canadian address validation. For that we can use the Decision Logic Tool. We will configure it to look at the country field and decide which way to continue processing.

The actual logic simply goes down the US address validation path if USA is found; otherwise, it assumes the input is a Canadian address.

Now we are ready to drop our US and Canada address validation plug-ins on the screen. On the left, in the Tools area, you can find the plug-ins under the respective names you gave them when you created them.

You will need to drag them onto the flow canvas one by one and set them up individually. Inside, you will map the inputs from the user data entry to the inputs for the plug-ins. Additionally, you will map the outputs to variables you create or to object variables in the system. This can take some time, depending on how many of the address validation outputs you want or need to use.

When you are done with that part, you will wire them up in the flow to the decision tool you added earlier as shown below.

In this last part, we will set up two output screens: one for the US address validation results and one for the Canadian address validation results. This time, instead of adding text boxes to the interface, we just add a display object for each field we want to show.

After wiring the last screens, the completed flow will look like this.

From here you can choose to save the flow and then add layout options on other screens that will give you access to executing the flow, schedule the flow to run at a specific time (not useful in our example, though), or run it directly from this interface by clicking Run. We will demonstrate it by clicking Run. We’ll start with a US address and then view the results on the output screen. In this example, you can see there are several issues with this address: the street name is incomplete, it uses “unit” instead of “suite”, the city name is misspelled and even the postal code is wrong.

Upon validation, we see that the system was able to correct the address and returned a DPV score of 1, meaning the result is a perfect address. The DPV score is one of the most important fields to pay attention to in the output: it indicates the validity level of the address. You’ll see other information in the response that will give you an idea about what was changed or whether there were any errors. You’ll also have access to the fragments of the address so you can process the response at a more granular level. More details about the fields can be found here.

In the last example, we will use a Canadian address. In this case the only thing wrong with the address is the postal code, so we’ll see how the system handles that.

Sure enough, that address validated and the postal code was corrected. In the US address validation service, the DPV score and error result indicated the validity of an address. In the Canadian validation service, we really only need to look at the error fields: empty or blank error fields mean that the address is good and can receive mail.

In conclusion, we learned how to create a plug-in and then use it in a flow. Though you do not need to know how a plug-in was made in order to use it, knowing the details is very helpful in case your business logic requires a more tailored solution, and you can see from this demonstration that adding additional code does not take much effort. These plug-ins allow you to get up and running very quickly without needing to know how to code. As I mentioned earlier, the flow created here is definitely a valid use case, but more often than not I would imagine Salesforce administrators creating flows to work on their existing objects such as Contact, Account or some other custom object.

A quick note on the license key: you will want to add your own license key to the code. You can get one free here for US address validation and here for Canadian address validation (each service requires a different license key).

The last thing I want to discuss about the license key is that it is good practice not to hard-code the key into the code. I would suggest creating a custom object in Salesforce with a key field, and then restricting the permissions on the field so that it is not viewable by just anyone. This will help protect your key from unwanted usage or theft. At this point, we have the code for these two address validation plug-ins, but Service Objects will continue to flesh out more for our other services and operations. With that said, if there is one you would like to request, please let us know by filling out the form here and describing the plug-in you are looking for.


Salesforce Trigger Integration – Video Tutorial

Here at Service Objects, we are dedicated to helping our clients integrate our data quality services as quickly as possible. One of the ways we help is educating our clients on the best ways to integrate our services with whatever application they may be using. One such application where our tools are simple to implement is Salesforce.

Salesforce is, among other things, a powerful, extensible and customizable CRM. One of the advantages of Salesforce’s extensibility is that users can set up triggers to make external API calls. This is great for Service Objects’ customers, as it allows API calls to any of our DOTS web services and helps ensure their contact data in Salesforce is corrected and verified.

In the video below, we will demonstrate how to set up a trigger that will call our DOTS Address Validation 3 service whenever a contact is added to our list of contacts.

See full transcript below.

Hello, and welcome to the Service Objects video tutorial series. For today’s tutorial we’ll be setting up a trigger and a class in Salesforce that will call out to our DOTS Address Validation 3 web service. If you don’t already know, Salesforce is an extremely powerful, extensible and customizable CRM. One of the great things that we like about Salesforce here at Service Objects is the ability to call out to APIs so that the data going into your CRM can be validated and verified before it gets entered. This means that you can call out to any of our APIs from Salesforce. You can use this video as an overview of how to integrate any of our services, but for this specific example we’ll be using DOTS Address Validation 3.

To participate in this tutorial, you need the following items. A Service Objects web service key, whether that is a trial key or a production key. You can sign up for a free trial key at www.serviceobjects.com. You will need a developer account in Salesforce. You will also need a working knowledge of Salesforce and Apex, which is the native programming language inside Salesforce. We will go ahead and get started.

To start off, one of the first things we’ll need to do is add the Service Objects endpoint into the list of endpoints that Salesforce is allowed to contact within your developer platform. To do this, you can navigate here and type in remote site settings, or just remote, and the remote site settings field will pop up. Here, you’ll see a list of all the websites that your Salesforce platform is allowed to contact. In my account here you can see I have ws.serviceobjects.com and wsbackup.serviceobjects.com. To add a new site, you’ll go and select new remote site. Give it an appropriate name, and type in the URL here. You can see for this example I’m going to type in trial.serviceobjects.com, which will only work if you have a trial license key. If you have a production key, you want to add ws.serviceobjects.com and wsbackup.serviceobjects.com, as those will be the two primary URLs that you will be hitting with your production Service Objects account.

This trial.serviceobjects.com URL will only work with trial license keys. Click save and new, or just save. If we go back to our remote site settings, you can see that trial.serviceobjects.com was successfully added. Now that we have successfully added the Service Objects endpoint, we’ll want to add some custom fields on our contact object that will hold some of the values that are returned by our DOTS Address Validation 3 web service. To do that, we’ll scroll down and go to customize. In our example we’re using the contacts object, but you can add custom fields to whatever object is most appropriate for your application, and we’ll select add custom field to contacts. Once we are here, we will scroll down to the contact custom fields and relationships. You can see here I have several custom fields already defined – mostly DPV information and error information – which our code will parse out from the Address Validation 3 response.

We’ll add another field here for the sake of example. For this field we’re going to add the Is Residential flag that comes back from the Address Validation 3 service. For this we’ll select text, select next, and here we’re going to go ahead and enter an appropriate field name, which I have in my clipboard. We’re going to call it DotsAddrVal_IsResidential. If you hover over this little “i,” it will say that this is the label to be used on displays, page layouts, reports, and list views. This is more of a pretty display name. You’ll want to name it something more appropriate that will work better in your workflow, but for our example we’re just going to name it this.

For length, we’re going to do a length of 15, and for the field name we’re just going to call it AddrValIsResidential. This is the internal field name here. When you’re calling an internal field name, you’ll have to add a double underscore and C in the Apex class. We’ll see an example of that in the next piece of code that we’re going to add. We’ll select next. You’ll select the appropriate field-level security here. Next again, and go ahead and click save. To add the actual code that will call out to our Address Validation 3 web service, we’ll scroll down here and go to develop Apex classes. I have already added the class to my developer console, but just for the sake of example, I’ll go ahead and delete it and re-add it. I already have the code in a text editor, so I’m just going to copy and paste that, and go over the code and explain some key points of it.

Now that I have my code copied and pasted in, I’ll walk through some key elements of it. In the sample code that we have, there is some commented-out information here that gives you some resources like the product page and the developer guide. You can download this sample code along with this tutorial so you don’t have to pause the video and type it all out. The first thing we do is instantiate some of the HTTP request objects in this call WS by ID method. We’ll pull back the contact that’s just been added, and so we’ll pull back all these fields: mailing street, mailing city, postal code, and state, as well as the custom DPV and error information fields that we’ve entered into Salesforce. To call an internal custom field that you’ve created in Salesforce, you’ll need to add this double underscore and C at the end of it. We can see that we’ve done that here and in other places where we reference these objects in the code.

Here, you can see we set the endpoint of the request to the trial URL endpoint, and this will point to the GetBestMatches JSON operation, so this will return a JSON formatted output. We’ve URL encoded all of the address information here, as you can see with this EncodingUtil.urlEncode. We’ll encode it to the UTF-8 standard. Another thing to note here is that you’ll have to put in your license key in this field here. Right now we just have it as a generic WS72 XXX, etc., but you’ll want to put in your specific license key. Here, we’ll send a request to the service, and if the response back is null, then that means there was something wrong with the primary endpoint, so we’ll come back here and check out our backup endpoint. For this example, it’s pointing to the same URL, the same trial endpoint. If you have a production key, you will want to point this primary URL to ws.serviceobjects.com, and this backup URL to wsbackup.serviceobjects.com. You’ll want to be sure to change both license keys to whatever your license key is.

After that failover configuration, we’ll see here we check the status code. If it’s equal to 200, we’ll go into processing the response from the service. We create some internal address fields here, and we’ll initialize the error response to none, which indicates that no error was returned from the service. What this does is traverse through the JSON response of the service and find the appropriate fields. In this case, if it finds address1, it will set our internal address field to the address1 that was returned from the service. That will be the standardized and validated address information that is returned. We do that with all the fields that are pertinent to us: the DPV and DPV description, DPV notes description, as well as the IsResidential and error fields down here.

Here, you can see if we get a DPV score equal to 1, that indicates that the address is mailable, it is deliverable, and it is considered good by the USPS. This is the else statement for the 200 code check here: if the 200 code wasn’t right, we’ll put the error description as this generic error message. At the end of this, we’ll update the list of contacts, so we’ll go ahead and click save. Now that we have our TestUtil class made here, we’ll go ahead and scroll down and select Apex triggers. To add a new trigger, we’ll select developer console, select file, new, trigger. For a name, we’ll simply call it TestTrigger.

We’ll go down here and select the contact object. We have the little bit of code right here. I have the actual code in a text editor that will call the service, so I’ll just copy that in. Now that I have this copied, you can see here that whenever a contact is added, or before it’s inserted rather, it will call the class that we made which was called WS by ID, and it will send the contact to it. To save this, just simply go to file and save. Hit refresh. We can see we now have a test trigger here. Now, to add a contact and to test out our new trigger, we’ll simply go up here, select contacts. In recent contacts, you can see here we don’t have any, so let’s go ahead and add one. We’ll add in a fake person by the name of Jane Doe. Go down here to the mailing street information, and we’ll enter in an address. For this example, we’re just going to use our Service Objects office address. We’ll put some typos in there so you can see the standardization and validation that the Service Objects web service does.

We’ll do 27 East Coat. That’s suite number 500. We’ll do Sant Barb for Santa Barbara, and CA and 93101. We’ll go ahead and save the contact. You can see here that we still have the old values, and that’s because Salesforce doesn’t immediately call the outside APIs; it queues it up a little bit. But if we go and select Jane Doe again, we can see that now we have a standardized address here. In our DPV description, we have a message that indicates, “Yes, this record is a valid mailing address.” For the DPV score, we get a score of one. We can find the “Is Residential” flag, which says false, meaning this is a business address. Again here, we see the validated address – the USPS standardized version of the address, which is 27 East Cota Street, Suite 500 – as well as the validated city and ZIP plus-four information.

This concludes our tutorial for how to add a trigger and a class that will call out to our Service Objects web service. If you have any questions or any requests to other tutorials, please feel free to let us know at support@serviceobjects.com. We’ll be happy to accommodate.

 

The Struggles with Deprecated Services

The word “deprecated” is thrown around frequently in the software development world. It is used to indicate a product or service that is either no longer going to be maintained or is going to be sunsetted. Oftentimes, when companies roll out a new product or API, they give their users a heads-up that the older operations are going to be deprecated. This prompts users to update to the latest version to take advantage of the latest and greatest features that the company is offering.

Marking a service as deprecated is a warning to the users of the product or service that it will no longer be supported, and that it is highly recommended to upgrade to a newer, supported service. Here at Service Objects, we don’t particularly like the practice of deprecating services. Although we don’t rule it out completely, our mission is to maintain support for our legacy services, because we understand that it takes time, resources, and money to integrate with APIs. The time it takes for developers to integrate, test, and deploy new code inevitably costs money. To keep legacy services from falling behind, we keep our core code separate from the individual service outputs. A fixed set of output fields gives our clients peace of mind that the service they have invested their time and resources into won’t change beneath their feet.

A clear picture of this concept can be seen in our DOTS Address Validation services. We have DOTS Address Validation 1, 2 and 3. The third iteration is our primary and most robust address validation service yet, with the latest and greatest in available output fields. But even though Address Validation 3 is the latest version of our address services, both DOTS Address Validation 1 and 2 are still actively supported.

The reason we are able to maintain these is that they share a core address validation code set, which is continuously refined to return the most accurate and up-to-date data available.

By choosing our services, you can rest assured that the service you integrate will not be put out to pasture, and that we will continue to push to provide you with the best data, regardless of which version of the service you are using.

We invite you to get started testing any of our 23 data quality services today.


Salesforce Data Quality Tools Integration Series – Part 1 – Apex Insert Trigger

With Salesforce being the dominant platform in the Customer Relationship Management (CRM) universe, we are excited to demonstrate how our tools can benefit the data quality of your contact data in this robust system. Salesforce is highly customizable, and one of the great things about it is that you don’t need to be a developer to create a rich user experience on the system, although being a developer will help with understanding some of the coding concepts involved in some of the components. With all the data in your organization, don’t you want it to be clean, accurate, and as monetizable as possible? The Service Objects suite of validation API tools is here to help you achieve those goals. Our tools make your data better so you can gain deeper insights, make better decisions and achieve better results.

This blog series will demonstrate various ways that a Salesforce administrator or developer can boost the quality of their data. We will look into various flows, triggers, classes, Visualforce components, Lightning components and more as we demonstrate the ease of integration and the power of our tools.

To get started, we will focus on demonstrating an Apex trigger with a future class.  I will show you the code, part by part, and discuss how it all fits together and why.  You will also learn how to create an Apex trigger, if you didn’t already know.

Apex triggers allow you to react to changes on a Salesforce object. For our purposes, those objects will typically be objects that have some kind of contact information, like a name, address, email, phone or IP address, or various combinations of these. In this example, we will use the Contact Salesforce object, and specifically the mailing address fields, though the Account object would have made just as much sense to demo. Service Objects has services for validating United States, Canadian and international addresses. In this example, I am going to use our US address validation API.

We are going to design this trigger to work with both a single Contact insert and bulk Contact inserts.  This trigger will have two parts: the Apex trigger and an Apex future class.  We will use the trigger to gather all of the Contact Ids being inserted and the future class will process them with a web service call to our address validation API.

It is important to note a couple things about the future class method and why it is used. First, using a future call tells the platform to wait and process the method at the next convenient time for the platform. Remember that Salesforce is a multi-tenanted platform, meaning organizations on the platform share, among other things, processing resources. With that in mind, the platform tries to govern processing so that everyone on the platform gets the same experience and no one organization can monopolize the system resources. Typically, future calls get initiated very quickly, but there are no guarantees on timing; you can be sure, however, that the future method will process soon after calling it. Second, callouts to a web service cannot be executed within a trigger, so the future method acts more like a proxy for the functionality. There are plenty more details and ways that a web service can be called, and you can dive deep into the topic by going through the Salesforce documentation. If for no other reason, this pattern forces you to separate the call to the web service from your trigger, in turn exposing your future call to other code you may want to write.

Once we have finished the trigger and the future class, we will test out the functionality, and then you will have some working code ready to deploy from your sandbox or development edition to your live org. But wait, don’t forget to write those unit tests and get over 75% coverage…shoot for 100%. If you don’t know what I am talking about with unit tests, I suggest you review the testing documentation on Salesforce.

The results of the future method will update mailing address fields and custom fields on the Contact object.  For this example, here are the fields you will need to create and their respective data types.  We will not need these until we get to the future method but it is a good idea to get them created and out of the way first.

  • dotsAddrVal_DPVScore
    • Internal Salesforce name: AddrValDPVScore__c
    • Type: Number(1, 0)
    • Service Objects field name: DPV
  • dotsAddrVal_DPVDescription
    • Internal Salesforce name: AddrValDPVDescription__c
    • Type: Text(110)
    • Service Objects field name: DPVDesc
  • dotsAddrVal_DPVNotes
    • Internal Salesforce name: AddrValDPVNotes__c
    • Type: Long Text Area(3000)
    • Service Objects field name: DPVNotesDesc
  • dotsAddrVal_IsResidential
    • Internal Salesforce name: AddrValIsResidential__c
    • Type: Checkbox
    • Service Objects field name: IsResidential
  • dotsAddrVal_ErrorDescription
    • Internal Salesforce name: AddrValErrorDescription__c
    • Type: Long Text Area(3000)
    • Service Objects field name: Desc
  • dotsAddrVal_ErrorType
    • Internal Salesforce name: AddrValErrorType__c
    • Type: Text(35)
    • Service Objects field name: Type
The last thing we will need to set up before we get started is registering the Service Objects endpoint with Salesforce so that we can make web service calls to the address validation API. The page to add the URL to is called “Remote Site Settings” and can be found in Home->Settings->Security->Remote Site Settings. This is the line I added for this blog.

 

Be aware that this will only work for trial keys.  With a live/production key you will want to add one for ws.serviceobjects.com and wsbackup.serviceobjects.com.  You’ll want both endpoints with a live key and we’ll explain more about that later.  We named this one ServiceObjectsAV3 but you can name it whatever you want.

Let’s get started with the trigger code. The first thing needed is to set up the standard signature of the call.

The method will be acting on the Contact object and will execute after an insert. Next, we will loop through the Contact records that were inserted, pulling out all the associated Contact Ids. Here you can add logic to filter out contacts or implement other business logic before adding the contact Id to the list of contacts to update.

Once we have gathered all the Ids, we will send them to the future method, which is expecting a list of Ids.

As you can see, this will work on one-off inserts or bulk inserts.  Since there is not much code to this trigger, I’ll show you the entire sample code for it here.
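Since the screenshot is not reproduced here, the following is a close approximation of that trigger (the class and method names follow the TestTrigger/TestUtil sample files linked at the end of this post; the method name itself is illustrative):

    // Collect the Ids of the inserted Contacts and hand them to the
    // @future method for validation.
    trigger TestTrigger on Contact (after insert) {
        List<Id> contactIds = new List<Id>();
        for (Contact c : Trigger.new) {
            // Filtering or other business logic can go here.
            contactIds.add(c.Id);
        }
        TestUtil.wsById(contactIds);
    }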

So that was painless. Let’s move on to the future class, where we will see how easy it is to make the call to the web service as well.

Future methods need to be static and return void, since they do not return any values. They should also be decorated with the @future annotation and callout=true.
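In outline (method name again illustrative), the skeleton looks like this:

    public class TestUtil {
        // callout=true is required to make web service calls from
        // a future method.
        @future(callout=true)
        public static void wsById(List<Id> contactIds) {
            // Query the contacts, call the API, and update the records
            // (fleshed out in the snippets that follow).
        }
    }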

It will be more efficient to update the newly inserted records all at once instead of one at a time and with that in mind, we will store the results from our address validation web service in a new list of Contacts.

Based on the results from the service, we will either update the mailing address on the Contact record and/or the DPV note descriptions or errors, as well as the Is Residential flag. Some of these fields are standard on the Contacts object and some are the custom ones that we created at the beginning of this project. Here is a sample of initiating the loop through the Contact Ids that we passed into this method from the trigger, and then the SOQL call to retrieve the details.
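A sketch of that loop, using the custom fields created above:

    // Re-query the inserted Contacts by Id so we have the fields needed
    // for validation and for writing back the results.
    List<Contact> updatedContacts = new List<Contact>();
    for (Contact c : [SELECT Id, MailingStreet, MailingCity, MailingState,
                             MailingPostalCode, AddrValDPVScore__c,
                             AddrValDPVDescription__c, AddrValDPVNotes__c,
                             AddrValIsResidential__c, AddrValErrorDescription__c,
                             AddrValErrorType__c
                      FROM Contact WHERE Id IN :contactIds]) {
        // ...validate c's mailing address and populate its fields...
        updatedContacts.add(c);
    }
    // The actual update happens after the loop, as discussed below.
    update updatedContacts;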

In case you are wondering why we didn’t just create a list of Contacts and send those in from the trigger instead of building up the list of Contact Ids, the reason is that there is a limitation on @future calls: you can only pass in primitive types such as Ids, strings, integers and so on. So we went with a list of Ids, where in Salesforce Id is its own primitive type.

Demonstrated in the code, which is shown in the next screen shot, are our best practices for service failover to help ensure 100% uptime when making calls to the API. Note that with a live production key for our API, the URL for the first trial.serviceobjects.com would need to be ws.serviceobjects.com, and the second one, the one inside the “if” statement, would need to be wsbackup.serviceobjects.com.

I left both of these as trial.serviceobjects.com because most people starting out will be testing their integration with a free trial key. In the screen shot you will see that I have added the API license key to the call: “ws-72-XXXX-XXXX”. This is fictitious; you will need to replace it with your proper key, based on the live/production or trial endpoint your key is associated with. A best practice suggestion for the key is to “hide” it in a custom variable or custom configuration instead of exposing it here in this method.
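A rough sketch of the callout and failover for a single Contact c follows; the operation path matches the GetBestMatches JSON operation described in this series, but verify it (and the input names) against the developer guide:

    // Sketch of the callout with failover. With a live production key,
    // point the first call at ws.serviceobjects.com and the failover
    // call at wsbackup.serviceobjects.com. Assumes the mailing fields
    // have already been null-checked.
    String query = '?Address=' + EncodingUtil.urlEncode(c.MailingStreet, 'UTF-8')
        + '&City=' + EncodingUtil.urlEncode(c.MailingCity, 'UTF-8')
        + '&State=' + EncodingUtil.urlEncode(c.MailingState, 'UTF-8')
        + '&PostalCode=' + EncodingUtil.urlEncode(c.MailingPostalCode, 'UTF-8')
        + '&LicenseKey=ws-72-XXXX-XXXX'; // fictitious: substitute your own

    HttpRequest req = new HttpRequest();
    req.setMethod('GET');
    req.setEndpoint('https://trial.serviceobjects.com/AV3/api.svc/GetBestMatchesJson' + query);

    HttpResponse res;
    try {
        res = new Http().send(req);
    } catch (System.CalloutException e) {
        res = null;
    }

    if (res == null || res.getStatusCode() != 200) {
        // Failover: retry against the backup endpoint.
        req.setEndpoint('https://trial.serviceobjects.com/AV3/api.svc/GetBestMatchesJson' + query);
        res = new Http().send(req);
    }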

Once we get a response back from the call to the API and everything is okay, we set up some variables and start parsing the response. There are several ways to parse JSON, and definitely better ways than the one described here, but this is not an exercise in parsing JSON; it is an example of how to integrate. In this example, we loop through the JSON looking for the field names that we are interested in. We are looking for:

  • Address1
  • City
  • State
  • Zip
  • DPV
  • DPVDesc
  • DPVNotesDesc
  • IsResidential
  • Type
  • Desc

But the service returns many other valuable fields, which you can learn about from our comprehensive developer guide found here, along with other helpful information. Remember, if you do end up using more fields from the service and you want to display them or have them saved in your records, you will need to create corresponding custom fields for them. The next screen shot is just a part of the code that pulls the data from the response.
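Here is a rough sketch of that part, using Apex’s streaming JSONParser (only a few of the fields listed above are shown):

    // Walk the JSON response, picking out the fields we care about.
    JSONParser parser = JSON.createParser(res.getBody());
    while (parser.nextToken() != null) {
        if (parser.getCurrentToken() == JSONToken.FIELD_NAME) {
            String field = parser.getText();
            parser.nextToken(); // advance to the value
            if (field == 'Address1') {
                c.MailingStreet = parser.getText();
            } else if (field == 'DPV') {
                c.AddrValDPVScore__c = Decimal.valueOf(parser.getText());
            } else if (field == 'IsResidential') {
                c.AddrValIsResidential__c = (parser.getText() == 'true');
            }
            // ...and so on for City, State, Zip, DPVDesc, DPVNotesDesc,
            // Type and Desc.
        }
    }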

In practice, you may want to make decisions on when to update the original address using more criteria, but in this example we are basing that decision on the DPV score result alone.  You can find out more about the different DPV codes back in the documentation.  When the DPV value is 1 then we are returning a valid mailing address.  Corrections to the address may have occurred so it would be best to update the address fields on the Contact record and that is what we are doing here just before adding the updated Contact to our new list of Contacts.

Once we have looped through the entire list of Ids that we sent into this method, we are ready to do the update in Salesforce.  Before this point, nothing yet would have been saved.

And there you have it; go ahead and add some new contacts with addresses and test it out. Over at the Contacts tab, I added a new contact and then refreshed the page to see the results. I purposely made an error in the address so we can see more of the validation results.

The address we added is for our office and there are several errors in the street name, city and zip code.  Let’s see if our system gets it right.

The address validation API fixed the address and returned that the fixed address is the correct address to use based on the inputs.  Next, we will demonstrate a bad, non-salvageable address. You will see more than a few things wrong here.

There were so many problems that the address was not salvageable at all.

Let’s try one more, but this time instead of a completely bad address, we will add a bad (not completely bad) address but missing key parts.

The input address is still not good enough to be good but this time we were able to get details back that can help with fixing the problems.

From the results, we can see that the address is there but perhaps a unit number or something key to the address is missing to get full delivery by the USPS.

In conclusion, there are a few things to take into consideration when integrating data quality tools into Salesforce. First, you need to do more error checking. These were simple examples to show how to integrate our service, and the error checking was the bare minimum – not something I would expect to see in a production environment. So, please…please, more error checking. Second, don’t forget to write the unit tests and try to get 100% coverage. Salesforce only requires 75% to be able to deploy your code, but we highly recommend striving for 100%; it is definitely attainable. Salesforce has this requirement for several reasons, one being that when Salesforce makes updates to their platform, they can run all the unit tests in all the organizations on the platform and ensure they are not going to break anyone’s system. It is just good practice to do so. There is tons of documentation on Salesforce that will help you down the right path when it comes to testing. Lastly, we didn’t make any considerations in the code for the situation where a contact being inserted doesn’t have an address to validate, or enough address components. Clearly, you would want to add a check to the code to see if you have a valid combination of data that will allow for an address validation against our API. You will want to see if any of these combinations exist; these represent the minimum required fields:

  • Combination 1
    • Address
    • City
    • State
  • Combination 2
    • Address
    • Zip code

You can find the sample code for this blog on our web site with the file names TestTrigger.trigger and TestUtil.cls at this link.


Service Objects ColdFusion Integration Tutorial

As part of our commitment to making our data quality solutions easy to integrate, our Application Engineering team has developed a series of tutorials on how to integrate our services.  The series highlights various programming languages, with this tutorial exploring the “how-to’s” of applying our services using ColdFusion.

ColdFusion is a scripting language that has been around since 1995. It was created to make development of CGI scripts easier and faster.  ColdFusion has unique aspects, including its native ColdFusion Markup Language (CFML for short), which allows HTML-style tags for programming. Like most things in the tech world, it can draw polarized opinions, with some ardent supporters and others less than enthusiastic fans. If you fall in the supporter camp and want to learn how to call a web service with ColdFusion, that is where our experts can step in and help.

To get started you will need a ColdFusion IDE (we’re using ColdFusion Builder 3) and a Service Objects license key. We’re using one for DOTS Lead Validation, but you can follow along with your service of choice.

Project Setup

The first step is to launch your IDE and select an appropriate workspace for your project. Next, we will create a new project.

Select next for a blank template and then click next again.  On the following screen give your project an appropriate name and click finish.

Congratulations! You created a brand new ColdFusion project. Now it’s time to add some code. For starters, we’ll add a form, and elements to initialize our form inputs, giving us a sample page for entering data to send to our web service. This likely won’t be how you do it in a live environment, but it works for demonstration purposes.

The DOTS Lead Validation service that we’re using has quite a few inputs, so this may take a while. Once you are finished, it should look like the following:
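
As a starting point, here is a trimmed-down sketch of such a form; the field names are illustrative, not the service’s official input parameters:

    <!--- Trimmed-down sketch of the data entry form (demonstration only).
          Field names are illustrative, not official input parameters. --->
    <cfparam name="form.FullName" default="">
    <cfparam name="form.Email" default="">
    <cfparam name="form.Phone1" default="">
    <cfoutput>
    <form method="post">
        Full Name: <input type="text" name="FullName" value="#form.FullName#"><br>
        Email: <input type="text" name="Email" value="#form.Email#"><br>
        Phone: <input type="text" name="Phone1" value="#form.Phone1#"><br>
        <input type="submit" value="Validate">
    </form>
    </cfoutput>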

Making the Web Service Call

The next bit of code we will add makes the actual HTTP GET call to the Service Objects web service, using CFML tags.

After the code makes the call to the trial.serviceobjects.com endpoint, we perform a failover check. This check, and the try/catch blocks it is nested in, will help ensure that your integration of our web service continues to work uninterrupted in the event that the primary web service is unavailable or not responding correctly.

In a production environment, the primary endpoint should point to ws.serviceobjects.com and the backup endpoint should point to wsbackup.serviceobjects.com.
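
Here is a minimal sketch of that pattern; the operation path and parameter names are placeholders, so check the Lead Validation developer guide for the real ones:

    <!--- Sketch of the GET call with failover. The operation path and
          parameter names below are placeholders, not official values. --->
    <cftry>
        <cfhttp url="https://ws.serviceobjects.com/path/to/operation?Email=#urlEncodedFormat(form.Email)#&LicenseKey=YOUR-LICENSE-KEY"
                method="get" result="svcResponse">
        <!--- Anything other than an HTTP 200 triggers the failover --->
        <cfif NOT find("200", svcResponse.statusCode)>
            <cfthrow message="Primary endpoint failed">
        </cfif>
        <cfcatch type="any">
            <cfhttp url="https://wsbackup.serviceobjects.com/path/to/operation?Email=#urlEncodedFormat(form.Email)#&LicenseKey=YOUR-LICENSE-KEY"
                    method="get" result="svcResponse">
        </cfcatch>
    </cftry>
    <cfset parsedResponse = xmlParse(svcResponse.fileContent)>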

Displaying the Results

Now that you have successfully called the web service, you will want to do something with the results. For demonstration purposes, we will simply display them to the user, using the code snippet below.

If you are having trouble figuring out how a particular output is mapped in the ColdFusion response, you can use the <cfdump var=""> tag to dump the outputs onto the screen for easy troubleshooting.
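
For example, assuming the svcResponse and parsedResponse variables from the sketch above:

    <!--- Dump the raw and parsed responses to inspect output mappings --->
    <cfdump var="#svcResponse#">
    <cfdump var="#parsedResponse#">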

Now that our CFML is all set up, let’s see an example input and output from the service. Below is sample lead information that you might encounter:

And here is some of the response that DOTS Lead Validation will return:

The DOTS Lead Validation service can return a multitude of information about your lead.  To download a trial key for any of our 23 contact validation solutions, please visit https://www.serviceobjects.com/products

P.S.  Here is the full ColdFusion script page in case you need it to get up and running.

[Screenshot: full ColdFusion script page]

Service Objects’ Application Engineers: Helping You Get Up and Running From Day 1

At Service Objects, one of our Core Values is Customer Service Above All. As part of this commitment, our Application Engineers are always available to answer any technical questions from prospects and customers. Whether users are beginning their initial investigation or need help with integration and deployment, our Engineers are standing by. While we continually make our services as easy to integrate as possible, we’d like to touch on a few common topics that are particularly helpful for users just getting started.

Network Issues

Are you experiencing networking issues while making requests to our web services? It is a very common problem: outbound requests are often limited by your firewall, and a simple rule update can solve the issue. When matters extend beyond simple rule changes, we are more than happy to schedule a meeting between our networking team and yours to get to the root cause and solve the issue.

Understanding the Service Outputs

Another common question revolves around the service outputs, such as how they should look and how they can be interpreted. From a high level, it is easy to understand what the service can provide, but when it comes down to parsing the outputs, it can sometimes be a bit trickier. Luckily there are sets of documentation for every service and each of their operations. Our developer guides are the first place to check if you are having trouble understanding how individual fields can be interpreted and applied to your business logic. Every output has a description that provides insight into what that field means. Beyond the documentation, our Application Engineering team is available via multiple channels to answer your questions, including email, live chat, and phone.

Making the Move from Development to Production

Eventually, everyone who moves from being a trial user to a production user undergoes the same steps. Luckily for our customers, moving code from development to production is as easy as changing two items.

  • The first step is swapping out the trial license key for a production key.
  • The second step is to point your web service calls from our trial environment to our production environment. Our trial environment mirrors the exact outputs that you will find in production, so no other code changes are necessary.

We understand that, even though we say it is easy, making the move to production can be daunting. That is why we are committed to providing your business with 24/7/365 technical support. We want the process to go as smoothly as possible and members of our team are standing by to help at a moment’s notice.

We have highlighted only a few broad cases that we have handled throughout our 16 years of providing genuine, accurate, and up-to-date data validation. Many technical questions are unique and our goal is to tackle them head on. If a question arises during your initial investigation, integration, move to production, or beyond, please don’t hesitate to contact us.


API Integration: Where We Stand

Application programming interfaces, or APIs, continue to be one of the hottest trends in application development, growing in usage by nearly 800% between 2010 and 2016 according to a 2017 survey from API integration vendor Cloud Elements. Understandably, this growth is fueling an increased demand for API integration, in areas ranging from standardized protocols to authentication and security.

API integration is a subject near and dear to our hearts at Service Objects, given how many of our clients integrate our data quality capabilities into their application environments. Using these survey results as a base, let’s look at where we stand on key API integration issues.

Web service communications protocols

This year’s survey results bring to mind the old song, “A Little Bit of Soap” – because even though the web services arena has become dominated by representational state transfer (REST) interfaces, used by 83% of respondents, a substantial 15% still use the legacy Simple Object Access Protocol (SOAP) – a figure corroborated by the experiences of our own integrators.

This is why Service Objects supports both REST and SOAP across most if not all of our services. We want our APIs to be flexible enough for all needs, we want them to work for a broad spectrum of clients, and we want the client to be able to choose what they want, whether it is SOAP or REST, XML or JSON.  And there are valid arguments for both in our environment.

SOAP is widely viewed as being more cumbersome to implement than REST; however, tools like C# in Visual Studio can do most of the hard work of SOAP for you. Conversely, REST – being URL HTTP GET focused – does carry a higher risk of creating broken requests if care is not taken.  Addresses, a key component in many of our services, often contain URL-breaking special characters.  SOAP inherently protects these values, while a REST GET call leaves the encoding up to you, and unencoded values can create broken URLs. For many clients, it is less about preference and more about the tools available.

Webhooks: The new kid on the block

Webhooks are the new approach that everyone wants, but few have implemented yet. Based on posting messages to a URL in response to an event, they represent a straightforward and modular alternative to polling for data. Citing figures from Wufoo, the survey notes that over 80% of developers would prefer this approach to polling. We agree that webhooks are an important trend for the future, and we have already created custom ones for several leading marketing automation platforms, with more in the works.

Ease of integration

In a world where both applications and interfaces continue to proliferate, there is growing pressure toward easier integration between tools: using figures cited from SmartBear’s State of the APIs Report 2016, Cloud Elements notes that this is a key issue for a substantial 39% of respondents.

This is a primary motivation for us as well, because Service Objects’ entire business model revolves around having easy-to-integrate APIs that a client can get up and running rapidly. We address this issue on two fronts. The first is through tools and education: we create sample code for all major languages, how-to documents, videos and blogs, design reference guides and webhooks for various CRM and marketing automation platforms. The second is a focus on rapid onboarding, using multiple methods for clients to connect with us (including API, batch, DataTumbler, and lookups) to allow easy access while APIs are being integrated.

Security and Authentication

We mentioned above that ease of integration was a key issue among survey respondents – however, it was only their second-biggest concern. Their first? Security and authentication. Although there is an industry move toward multi-factor and delegated authentication strategies, we use API keys as our primary security.

Why? The nature of Service Objects’ applications lends itself well to using API keys for security, because no client data is stored. Rather, each transaction is “one and done” in our system: once our APIs perform validation on the provided data, it is immediately purged. And of course, Service Objects supports and promotes HTTPS (SSL/TLS) for even greater protection.  In the worst-case scenario, a fraudster who gained someone’s key could run transactions on that person’s behalf, but they would never have access to the client’s data, and certainly would not be able to connect the dots between the client and their data.

Overall, there are two clear trends in the API world – explosive growth, and increasing moves toward unified interfaces and ease of implementation. And for the business community, this latter trend can’t come soon enough. In the meantime, you can count on Service Objects to stay on top of the rapidly evolving API environment.

Testing Through Batches or Integration: At Service Objects, It’s Your Choice

More often than not, you have to buy something to really try it.  At Service Objects, we think it makes more sense to try before you buy.  We are confident that our service will exceed expectations, and we are happy to have prospects try our services before they spend any money on them.  We have been doing this from the day we opened our doors.  With Service Objects, you can sign up for a free trial key for any of our services and do all your testing before spending a single cent.  You can learn about the multiple ways to test drive our services from our blog, “Taking Service Objects for a Test Drive.” Today, however, I am focusing on batch testing and trial integration.

Having someone give their best explanation of purpose or functionality can be worthwhile but, as the saying goes, a picture is worth a thousand words.  If you want to know how our services work, the best way is simply to try them out for yourself.  With minimal effort, we can run a test batch for you and have it turned around within a couple of hours…even less time in most cases.  Another way we encourage prospects to test is by directly integrating our API services into their systems.  That way you see exactly how the services behave and get a better feel for our sub-second response times.  The difference between the two is that a test batch shows you the results, while testing through integration demonstrates how the system behaves to deliver those results.

TESTING THROUGH BATCHES

Test batches are great.  They give you an opportunity to see the results from the service first hand, including all the different fields we return.  Our Account Executives are happy to review the results in detail, and you always have the support of the Applications Engineering team to help you along.  With test batches, you can quickly see that a lot of information is returned regardless of the service you are interested in.  Most find it is far more information than expected, and clients often find that the additional information helps them solve other problems beyond their initial purpose.  Another aspect that becomes clearer is the meaning of the fields: you get to see the fields in their natural environment and obtain a better understanding than the strict academic definitions provide.  Lastly, it is important to see how your own data fares through the service; it is far more powerful to show how your data can be improved than to just discuss it conceptually.  That is where our clients get really excited about our services.

TESTING THROUGH INTEGRATION

Testing through integration is a solid way to gain an understanding of how the service behaves and its results.  It is a great way to get a feel for the responses that come back and how long it takes.  More importantly, you can identify and fix issues in your process long before you start paying for the service.  Plus, our support team is here to assist you through any part of the integration process.  Our services are built to be straightforward and simple to integrate, with most developers completing them in a short period of time.  Regardless, we are always here to help.  Although we highly recommend prospects run their own records through the service, we also provide sample data to help you get started.  The important part is you have a chance to try the service in its environment before making a commitment.

Going forward with either of these approaches will quickly demonstrate how valuable our services are. Even more powerful is when you combine the two testing procedures with your own data for the best understanding of how they will behave together.

With all that said, if you’re still unsure how to best begin, just give us a call at 805-963-1700 or sign up for a free trial key and we’ll help you get started.


Introducing Service Objects’ New Open API

Service Objects is committed to constantly improving the experience our clients and prospective clients have with our data quality solutions. This desire to ensure a great experience has led us to revamp and redesign our lookup pages. These pages are easy to use and give all the information necessary for integrating and using our API in your application. This blog presents some of the key features.

Sample Inputs

One request we often receive is a quick sample lookup that will show our customers and prospects what to expect when calling our API. We are implementing just that in our new lookup pages.

In the example below, we are using our Lead Validation International lookup page. If the “Good Lead” or “Bad Lead” link is selected, sample inputs will be filled into the appropriate fields. For this example, we’ve selected “Good Lead.”

We implemented this option so that users can get a quick idea of what types of inputs our services accept and what type of outputs the service will return. The form simply needs a license or trial key and it will return the validated data.

All Operations and Methods

Another benefit of these new pages is that they concisely and clearly display all the operations available for an API, along with all the potential HTTP methods that can be used to interact with the service.

If you want a JSON or XML response, select the appropriate GET operation and you will have everything you need to make a successful request to the service. If you want to make a POST request to the service, simply select the post operation and it will detail all that you need to have your data validated in your method of choice.

Detailed Requests and Responses

Arguably the most important pieces of information for a developer looking to integrate with an API are how to make a request to the service and what type of response to expect. These new lookup pages provide that information in a very accessible way, as shown below.

[Screenshot: sample request, response and headers on the lookup page]

After making a sample request, you will see the URL used to fetch the validated data, the actual response from the web service, and the response headers that the service provides. These are all vital pieces of information that will have you up and running in no time. The new pages also list what type of response object will be returned from the service. This can be seen below the response body and headers.

Additional Resources

The page also offers up extra pieces of information that will assist with integration. The link to our developer guides, WSDL (for SOAP integrations) and host paths can be found on the page as well. These resources will help you have your application up and running as quickly as possible.

Feel free to sign up for a Service Objects trial key to test with our new lookup pages!


Follow This Checklist to Ensure a Smooth API Integration

There can be a lot of “i’s” to dot and “t’s” to cross when integrating with any API.  Here at Service Objects, we certainly recognize there can be a lot on the to-do list when starting an integration project. Integrating with our APIs is pretty straightforward, but we have developed a quick checklist to make it as easy as possible to follow our best practices.

Failover, Failover, Failover
Service Objects prides itself on having 99.999% server uptime. However, in the unlikely event that we do experience an issue with one of our servers, implementing a failover configuration is arguably the most important aspect of integrating with any of our APIs. Proper failover configuration will ensure that your application continues to operate unhindered in the event that the primary Service Objects web server is unavailable or not responding as expected. Below is an example (using C# syntax) of proper failover configuration.
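
A sketch of that pattern follows; the client and response type names here are assumptions based on a generated service reference, so match them to your own project:

    // Failover sketch: try the primary endpoint first, then the backup.
    public AddressResponse ValidateWithFailover(string address, string address2,
        string city, string state, string zip, string licenseKey)
    {
        AddressResponse response = null;
        try
        {
            // Primary endpoint: ws.serviceobjects.com
            var primary = new AV3Client("PrimaryEndpoint");
            response = primary.GetBestMatches(address, address2, city, state, zip, licenseKey);

            // A TypeCode of 3 signals a fatal service error; treat it like an outage.
            if (response == null || (response.Error != null && response.Error.TypeCode == "3"))
                throw new Exception("Fatal error from primary endpoint");
        }
        catch (Exception)
        {
            // Backup endpoint: wsbackup.serviceobjects.com
            var backup = new AV3Client("BackupEndpoint");
            response = backup.GetBestMatches(address, address2, city, state, zip, licenseKey);
        }
        return response;
    }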

The example above is for our DOTS Address Validation 3 – US service, but the pattern will be very similar for our other services. The main thing to note is that the primary call points to ws.serviceobjects.com and the backup call within the catch statement points to wsbackup.serviceobjects.com.  In the event that the primary web server is unresponsive, producing strange errors or behaving abnormally, the backup URL will be called and your application will continue to function as expected.  Another important item to note is that proper failover checks the web service response for an error with a TypeCode of 3. This indicates that a fatal error has occurred in the web service and that the backup URL should be called. If you are using one of our older services, the error object returned may be different (only “Number” and “Desc” fields will be present in the Error object), and you will need to check for a Number value of 4 to indicate a fatal error.

URL Encoding

Properly encoding the URL you are using is also an item you will want to place on your to-do list for integration. If you are using a path parameter to access our services, then you’ll need to use RFC 3986 encoding for your URLs. If you are using query string parameters, then you can use RFC 3986 encoding or the older RFC 2396 encoding. What do both of those RFC standards mean? In short, if you are using a query string URL, spaces can acceptably be replaced with “+” in the URL. If you’re using a path parameter URL, spaces have to be encoded with the hex equivalent %20. Using the RFC 3986 standard is generally the safer bet, since it is the newer and more widely accepted of the two.
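
As a quick illustration in C#, using the framework’s built-in encoders:

    using System;
    using System.Net;

    class EncodingDemo
    {
        static void Main()
        {
            string address = "123 Main St #5";

            // RFC 3986 percent-encoding (spaces -> %20): safe for path parameters.
            Console.WriteLine(Uri.EscapeDataString(address)); // 123%20Main%20St%20%235

            // Older style (spaces -> +): acceptable for query strings only.
            Console.WriteLine(WebUtility.UrlEncode(address)); // 123+Main+St+%235
        }
    }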

Logging (we’re not talking cutting down trees)

We also highly recommend implementing some code that will log the requests and responses that our services provide. For the sake of customer privacy, we do not log information on our end. Logging can be a big help when troubleshooting your code, and it can also be a big help to us if you ever need technical support. Since we do not log customer requests, it is very helpful for us to have the exact inputs or URL used when contacting our services. This allows us to provide the stellar customer support that comes with integrating with a Service Objects API. If you do run into any issues, please send us your request and response, as this will help us get to the bottom of whatever issue you are encountering as quickly as possible.
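
A minimal sketch of such logging (the file path and format are just illustrations):

    using System;
    using System.IO;

    static class ServiceLog
    {
        // Append each request URL and raw response to a local log file.
        public static void Log(string requestUrl, string rawResponse)
        {
            string entry = DateTime.UtcNow.ToString("o") + "\t"
                + requestUrl + "\t" + rawResponse + Environment.NewLine;
            File.AppendAllText("serviceobjects.log", entry);
        }
    }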

Null or Nothing Strings
Our output structure for most of our services is consistent and won’t change without notice.  For the most part, the structure will stay the same, and we’ll return the elements in our output structure as blank strings as opposed to null objects. That being said, we still highly recommend that your application perform null checks before using the response or any of its nested elements. Even though the output structure for our services is very consistent, appropriately null checking the response can save you and your application a lot of headaches if something unexpected occurs.
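
For example, guarding a hypothetical address response before reading nested fields:

    // Null-check the response and its nested elements before use.
    // Property names here are assumptions for illustration.
    if (response != null && response.Addresses != null && response.Addresses.Length > 0)
    {
        var best = response.Addresses[0];
        Console.WriteLine(best.Address1 ?? string.Empty);
    }
    else if (response != null && response.Error != null)
    {
        Console.WriteLine(response.Error.Desc);
    }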

URLs, IP Addresses and Whitelisting
Some clients or prospects ask us if they can access our web services by IP address, or whitelist the IP address in their firewall. You certainly can, but we highly recommend whitelisting or calling our web services by domain name instead. Most modern firewalls support whitelisting domains by name, and we highly recommend utilizing this. The reason is that IP addresses can and will change. We have a lot of backup and redundancy built into our web services that will work unnoticed as long as you are accessing the service via a domain. If you absolutely need to access our services by IP address or whitelist them that way, please reach out to our support team and we will be happy to make recommendations on best practices and provide the information you will need.

Use Cases
If you want to know whether you are interpreting the results correctly, or whether you are using the right operation to get the functionality you want, our developer guides can provide more clarity about the inputs and outputs of your service of choice.

Summary
As discussed, integrating with any API can bring up a lot of questions. If this list didn’t cover your questions or particular use case, please feel free to send your requests or questions to support@serviceobjects.com and we will be happy to help you get up and running with any of our 24 data validation APIs.

What Does API Really Mean?

API stands for Application Program Interface. It is a way for software components to communicate with each other based on specific inputs and outputs. APIs are everywhere; nearly any application in existence can use them, and one API can even utilize one or more other APIs. What an API is has been defined and explained in any number of Google search results. What I want to talk about is what an API really is.

It is a way to reuse existing code. Writing reusable code is a high priority for developers. If you are going to need a specific piece of functionality over and over again, why reinvent the wheel? That is just what APIs help developers avoid, as well as making code more understandable and enabling developers to be more productive, resulting in saved time and money.

The interchangeable nature of APIs makes switching one API out for another relatively simple. Improving a complex segment of code becomes almost as easy as unplugging a broken toaster and plugging in a new one.

The beauty of APIs is that they allow your organization’s developers to do what they do best. They don’t need to be experts in everything. Organizations can often be sent on wild goose chases trying to figure out solutions they are not experts in. At Service Objects, we want your business to continue being an expert at what you do, and let us be your expert in the field of contact validation. Why attempt the heavy lifting of solving a problem that is not in your wheelhouse? Without spending an enormous amount of time and money, there is little chance that you will be able to reproduce the code that another API can give you out of the box.

At Service Objects, we have been developing our services since 2001, so when someone is purchasing any of our 23 APIs, they are using over 16 years of cumulative knowledge and expertise. And on top of that, we are going to keep learning and improving on our services so that our customers don’t have to. What this means is that your organization can be using the best code available in your applications and leverage the best practices we have developed without being an expert in the field of data validation.

For more information, or to obtain a free trial key for any of Service Objects’ Data Validation APIs, click here.


C# Integration Tutorial Using DOTS Email Validation

Watch this video and hear Service Objects’ Application Engineer, Dylan, present a 22-minute step-by-step tutorial on how to integrate an API using C#. In order to follow this tutorial, you will need the following:

  1. A basic knowledge of C# and object-oriented programming.
  2. Visual Studio or some other IDE.
  3. Any DOTS validation product key. You can get free trial keys at www.serviceobjects.com.

In this tutorial, we have selected the DOTS Email Validation web service.  This service performs real-time checks on email addresses to determine if they are genuine, accurate and up-to-date. The service performs over 50 tests on an email address to determine whether or not it can receive email.  If you are interested in a different service, you can still follow along in this tutorial with your service of choice. The process will be the same, but the inputs, outputs, and objects we’ll be dealing with in the video will be slightly different.

Enjoy.

Demonstrating JavaScript and NuGet with DOTS Address Validation 3

In this demonstration, I am going to show you how to implement our DOTS Address Validation 3 web service with JavaScript, with no ASP.NET markup tags, using Visual Studio. We are going to create a form for address inputs, similar to what you would see on a checkout page, which will call our API to validate the address. Although we could call the web service directly from JavaScript, we would then be forced to expose our license key. So we will use an aspx.cs page to handle our call to the service API, keeping our license key safe from prying eyes. In this demonstration, I will also show you how to add the NuGet version of DOTS Address Validation 3 to the project, so that you are always using our best practices, and to speed up your integration.

First, create a new empty web site, which I am going to call cSharpJavascriptAjax. Then I am going to add a new HTML page called Addressvalidation.html.

Next, we are going to add the markup for the form. Add the following code in between the body tag.
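
A trimmed-down sketch of that markup might look like the following; the element IDs are illustrative, not taken from the original demo:

    <!-- Trimmed-down sketch of the address form; element IDs are
         illustrative and will be referenced from the JavaScript later. -->
    <table>
        <tr><td>Address:</td><td><input type="text" id="txtAddress" /></td></tr>
        <tr><td>Address2:</td><td><input type="text" id="txtAddress2" /></td></tr>
        <tr><td>City:</td><td><input type="text" id="txtCity" /></td></tr>
        <tr><td>State:</td><td><input type="text" id="txtState" /></td></tr>
        <tr><td>Zip:</td><td><input type="text" id="txtZip" /></td></tr>
        <tr><td></td><td><button type="button" id="btnComplete">Complete</button></td></tr>
    </table>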

This form was designed using a table structure for the layout. You will likely want to create the layout using divs instead, which is more appropriate. Here is what the page looks like in the browser.

To speed things up a bit, I am going to include jQuery in the header so we can utilize some of its shortcut functions. Of course, all of this can be done in pure JavaScript.

For the action we take when the Complete button is clicked, we will create the submit function. This will take in all the values from the form, call the web service and then display the response on the screen. I added an alert and left the URL blank; this allows for a quick test to make sure what we have is working. Later we will update the failure and success sections as well. For now, they will simply display an alert for testing.

First, let’s make sure the submit function is receiving all the inputs. I am going to display an alert with the DataValue variable so we can see what we have.


Good, now that we know that is working, we need to make the call to the web service. As I mentioned before, calling the service directly from JavaScript would make the license key visible. So we are going to leverage ASP.NET for security by creating an empty aspx page and adding a method to the aspx.cs page to handle the call. Let’s go ahead and add the aspx/aspx.cs page to the project and call it ServiceObjectsAPIHandler.

Here is our empty aspx.cs page, where we will be adding our function. It will go below the Page_Load method.

Let’s call the method CallDOTSAddressValidation3_GBM. There are two things to notice about the signature of this method. We decorate it with the WebMethod attribute so that it can be exposed as a web service method, and we declare it as static, since page methods are stateless and no instance of the class is created.
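
As a sketch (the parameter name is a stand-in for now), the first version of the method looks like this:

    // Page method sketch: exposed to JavaScript via the WebMethod attribute.
    [System.Web.Services.WebMethod]
    public static string CallDOTSAddressValidation3_GBM(string inputData)
    {
        // For the first test, simply echo the input back to the Ajax caller.
        return inputData;
    }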

To test things out, for now we will just take one string input and return it to the Ajax call where we will display it. In order to do this, we will need to adjust our DataValue call in the script on the html page.

We will also need to add the URL and method to the Ajax call.

I also adjusted the alerts so we can see the success or fail.
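
Putting those pieces together, the Ajax call might look like this sketch; the data property name must match the page method’s parameter:

    // Sketch of the Ajax call to the page method. ASP.NET wraps the
    // return value in a "d" property on the JSON response.
    $.ajax({
        type: "POST",
        url: "ServiceObjectsAPIHandler.aspx/CallDOTSAddressValidation3_GBM",
        data: JSON.stringify({ inputData: DataValue }),
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (response) { alert("Success: " + response.d); },
        error: function () { alert("The call to the web method failed."); }
    });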

Great! That worked! Now let’s write the code in the aspx.cs page to make the call to the service. We could grab some sample code from the web site and walk through it, but we already have documentation and tutorials on how to do that. Instead, I am going to take a shortcut and grab what I need from NuGet. We offer NuGet packages for most of our services, which incorporate best practices such as failover configuration and help speed up integration time.

To do this, we will need to open up the NuGet Package Manager in Visual Studio under the Tools menu option. When browsing for our services, you can type “Serviceobjects”. You should see a list of available Service Objects NuGet packages to choose from.

Select DOTSAddressValidation3US; you will be prompted to select the project you want to install it under. Select the project and then click Install.

Once the installation is complete, a ReadMe file will automatically open that will contain a lot of information about the service. You will also notice a few things were added to your project. First, a DLL for DOTSAddressValidation3US was added to the bin folder and was referenced in your project.

Next, an AV3LicenseKey app setting was added to your web.config file with the value “WSXX-XXXX-XXXX”. You will need to replace this value with the key you received from Service Objects.
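
That setting looks like this in web.config:

    <appSettings>
        <add key="AV3LicenseKey" value="WSXX-XXXX-XXXX" />
    </appSettings>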

Also in the web.config you will see several endpoints were added. These will be used in the pre-packaged code to determine how to call the service based on your key.

Now that we have the NuGet package added to the project, we can use it. Next is the code that will use the DOTSAddressValidation3US DLL by loading it up with the input parameters, making the call to the service, then working with the response and finally sending the data back out to our JavaScript.

First we get the license key from the web.config file.

Then we make the call to the service. You will notice that we pass a Boolean parameter at the end of the call to GetBestMatches to indicate live or trial. This flag tells the underlying process which endpoints to use. A mismatch between your license key and this Boolean flag will cause the call to the service to fail.

After we make the call to the service, we will process the response and send the data back to the JavaScript. If there is an error, we will return the error details. Otherwise, we will return a serialized version of the response object. Note that you can also just deal with the response completely here or completely in JavaScript. I mixed it up so you can see a bit of both.

Now we will turn our attention back to the JavaScript and update our submit function. All we really have to deal with now is the response from the aspx method call. Here is the portion of the Ajax call implementing that.

Success or failure here does not mean that the address is good or not. It is really just a success or failure of the call to the aspx web method. On failure, we simply make an alert stating something went wrong. On success, we examine the response value, determine if the address was good or not and display the appropriate response. Here is the whole submit function.

[Screenshot: the complete submit function]

Here is a good address.

[Screenshot: validation results for a good address]

And here is a bad address.

[Screenshot: validation results for a bad address]

In the submit function, we did the most basic check to see if an address was good or not.

But many more data points from the response can be used to draw any number of conclusions about an address. One thing you will notice is that AddressResponse.Addresses is an array. Checking whether multiple addresses are returned can be valuable, because in certain situations more than one address is a valid response. One example is when an East and a West street address are equally valid matches for an input address. In this case you may want to display both addresses and let the user determine which to use. You may also want to evaluate the individual address fragments, or the corrections that were made to the address; see the sketch below.
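
As a sketch (property names are assumptions), a simple guard for the multiple-address case could look like this:

    // If more than one candidate address comes back, let the user choose.
    var addresses = AddressResponse.Addresses;
    if (addresses && addresses.length > 1) {
        addresses.forEach(function (addr) {
            console.log(addr.Address1); // property name assumed
        });
    }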

The data points associated with the DOTS Address Validation 3 – US service can be found on our developer guide page for the service.

The following is commented out code that I added to the submit method for easy access to the various output data points.

And here is the same thing but for the Error object in the aspx page.

I hope this walk-through was helpful in demonstrating how to implement our DOTS Address Validation 3 web service with JavaScript, without ASP.NET markup tags, using Visual Studio. If you have any questions, please feel free to contact Service Objects for support.


NCOA Integration Tutorial

The reality about any set of residential customer data is that, given enough time, addresses and the people living at them are bound to change. Occasionally businesses and organizations can rely on the customer to notify them of a changing address, but when people move, this often falls by the wayside on the list of priorities.

For cases like these, accessing the USPS National Change of Address database can provide a helpful solution to ensure that mail gets delivered to the correct person. The USPS maintains a large data set of address forwarding notifications, and with the DOTS NCOA Live service, this information is right at your fingertips.

Our DOTS NCOA Live service is a bit different from the rest of our products. Most of our other products process validation requests one at a time. With NCOA, in order to start a request, a minimum list of 100 addresses must be sent to the service, and from there anywhere from 1 to 500 records can be processed at a time. To show you how it works, we’ve put together a quick step-by-step tutorial.

What You Will Need

  • C# experience
  • Visual Studio ( VS 2015 is used for this example)
  • A DOTS NCOA Live license key. A free trial key can be obtained here.

Project Creation and Setup

To get started, go ahead and launch Visual Studio and select File->New->Project.  From here you can choose whatever project will meet your needs. For this example, we’ll be creating a very basic console application.  So if you want to follow along step by step, you can choose the same project details as shown below.

Select OK and wait for Visual Studio to finish creating the project. Once that is done, right click the project, select Add and then select Service Reference. Here, we’ll enter the URL to the WSDL so that Visual Studio will create all the necessary classes, objects and methods that we’ll use when interacting with the DOTS NCOA Live service. To successfully do this, add the necessary information into the pop up screen as shown in the screenshot below.

[Screenshot: Add Service Reference dialog]

Select OK. Now that the service reference is successfully set, open up the App.Config. Below is a screenshot of the App.Config that has been modified.

We’ve added the appSettings section and within that we’ve added two key value pairs. The first is the license key field where you will enter your key. Storing the license key in the app or web config files can be helpful and easy when transitioning from a trial to a live environment in your application. When you are ready to use a production key, changing it in the app config is an easier option than having to change a hard coded license key.

We’ve also put in the path to a CSV file that will contain the address and name information we will send to the NCOA service. You may not want to read in a CSV for your application, but the process of building the input elements for the service will be relatively similar. For this example, we’re just going to put the file in the bin folder of our project, but you can use any path you want.

We’ve also increased the maximum buffer size in the HTTP binding. Since we’ll be sending a list of 100 addresses to the DOTS NCOA Live service, we will indeed need the larger buffer.

Lastly, we’ve changed the name of the original endpoint to “PrimaryClient,” made a copy of the endpoint, and changed that copy’s name to “BackupClient.” Currently, both of these endpoints point to the trial Service Objects environment, but when a production key is purchased, the PrimaryClient URL should point to http://ws.serviceobjects.com/nl/ncoalive.asmx and the BackupClient should point to http://wsbackup.serviceobjects.com/nl/ncoalive.asmx.

Calling the DOTS NCOA Live web service

The first thing we’ll do is instantiate two static strings outside of the Main method: one for the input file path and one for the license key that we’ve placed in the App.config. Inside the Main method, we’ll instantiate an NCOAAddressResponse object called response and set it to null. We’ll also create a string called jobID and set that to null as well.  This jobID will be passed as a parameter to our NCOA service call; a JobID can be seen as a unique identifier for all the records that are run against the service.
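
Sketched out, that setup looks roughly like the following; the appSettings key names are whatever you chose in your App.config, and NCOAAddressResponse comes from the service reference:

    // Requires a reference to System.Configuration.
    static string inputFile = ConfigurationManager.AppSettings["InputFilePath"];
    static string licenseKey = ConfigurationManager.AppSettings["LicenseKey"];

    static void Main(string[] args)
    {
        // Holds the service response and the unique identifier for this run.
        NCOAAddressResponse response = null;
        string jobID = null;
    }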

Now we’ll create the following method that will read our input file.

This method will return a List of the NCOAAddress object with all the inputs we need to send to the service. In my particular file, the fields are as follows: Name, Address, Address2, City, State, Zip.  Your code will need to be modified to read the specific structure of your input file. This code reads in the file, loops through each of the lines, and adds the appropriate values to the fields of the NCOAAddress object. After each line is successfully read, we add the individual NCOAAddress object to the list called inputAddresses, and we return that list once the code has finished looping through the file.
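
A minimal sketch of that reader follows; the property names on NCOAAddress are assumptions, so match them to the generated service reference:

    // Read the CSV and map each line onto an NCOAAddress input object.
    static List<NCOAAddress> readInputFile(string path)
    {
        var inputAddresses = new List<NCOAAddress>();
        foreach (string line in File.ReadLines(path))
        {
            string[] f = line.Split(',');
            var addr = new NCOAAddress();
            addr.Name = f[0];
            addr.Address = f[1];
            addr.Address2 = f[2];
            addr.City = f[3];
            addr.State = f[4];
            addr.Zip = f[5];
            inputAddresses.Add(addr);
        }
        return inputAddresses;
    }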

Now we’ll insert a try catch block into the main method. Within this try catch block we’ll create a List of the NCOAAddress object and call the readInputFile method to fill it. We’ll also make a JobID with today’s date appended to the end of it. You will likely want to customize your job id to fit into your business application. Jobs close on Sunday at 11:55 PM so that is also something to take into consideration when designing your code.

Failover Configuration

Now that we have all our inputs successfully set up, we are able to call the NCOA web service. The first step is to create another try catch block to make the web service calls in. We will also create an instance of the DOTSNCOALibraryClient and pass in the string “PrimaryClient” as a parameter. This ensures that our first call to the NCOA service will point to the primary URL that we defined in the App.config. Next we’ll make the call to the web service using the library client and set the result equal to our response object.

After we get a response back from the service, we’ll perform our failover check. We check whether the response is null or an error TypeCode of 3 was returned from the service; an error TypeCode of 3 indicates that a fatal error occurred in the service. If either of these criteria is met, we’ll throw an exception that will be caught by the catch block we have created. Within this catch block we’ll set the library client object to a new instance with the “BackupClient” string passed to it, to ensure that we call the backup endpoint. The code should look like the following.
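
The shape of that logic is sketched below; the operation name on the library client is an assumption, so use the one generated from the WSDL:

    try
    {
        // Primary call, using the "PrimaryClient" endpoint from App.config.
        var client = new DOTSNCOALibraryClient("PrimaryClient");
        response = client.RunNCOALive(inputAddresses.ToArray(), jobID, licenseKey); // operation name assumed

        // A null response or a fatal TypeCode of 3 triggers the failover.
        if (response == null || (response.Error != null && response.Error.TypeCode == "3"))
            throw new Exception("Fatal error from primary endpoint");
    }
    catch (Exception)
    {
        var client = new DOTSNCOALibraryClient("BackupClient");
        response = client.RunNCOALive(inputAddresses.ToArray(), jobID, licenseKey);
    }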

This failover logic will ensure that your application stays up and running in the event that a Service Objects web service is unresponsive or in the event that it is not behaving as expected.

Processing the Response

Now that we have successfully called the service, we’ll need to do something with the response. In this tutorial, we’ll take the results from the service and download them as a CSV into our bin folder.

To do this, we will create and call a method named processResponse that takes an NCOAAddressResponse as an input. This method takes the outputs from the service and builds a DataTable that we will eventually turn into a CSV, as shown in the following screenshot.

[Screenshot: processResponse building the output DataTable]

Now that our output data table has been created, we’ll create and call two methods that will loop through our DataTable, convert it to a CSV string, and then write that CSV to the bin folder.  The code to perform this is shown below.
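
Roughly, those two methods reduce to something like this sketch:

    // Flatten the DataTable into CSV text, then write it to the bin folder.
    static string DataTableToCsv(DataTable table)
    {
        var sb = new StringBuilder();
        foreach (DataRow row in table.Rows)
            sb.AppendLine(string.Join(",", row.ItemArray));
        return sb.ToString();
    }

    static void WriteCsv(string csv)
    {
        File.WriteAllText("NCOAResults.csv", csv);
    }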

More information on all the elements in the response object can be found on our developer guide for the NCOA service.



Now our code is ready to run and the service is ready to test.

We’re always available for assistance, so be sure to contact us with any questions about integration for NCOA or any of our other services.

The Path to Data Quality Excellence

“In the era of big data and software as a service, we are witnessing a major industry transformation. In order to stay competitive, businesses have reduced the time it takes to deploy a new application from months to minutes.” – Geoff Grow, Founder and CEO, Service Objects

The big data revolution has ushered in a major change in the way we develop software, with applications webified and big data tools woven in. Until recently, data quality tools that ensure data is genuine have not kept pace. As a result, developers have had little choice but to leave data validation out of their applications.

In this video, Geoff shows why data validation is critical to reducing waste, identifying fraud, and maximizing operational efficiency – and how on-demand tools are the best way to ensure that this data is genuine, accurate, and up-to-date. If you develop applications with IP connectivity, watch this video and discover what 2,400 other organizations have learned about building data quality right into their software.


Taking Service Objects for a Test Drive

You’ve found Service Objects and read about our services, so now you want to take a test drive. There are several ways to do just that, and as I go through the options you will also get a pretty good picture of how you can integrate our services into your applications or processes. Testing can be a tedious task, delving into the unknown of a third-party process, but we make it easy to jump right in by giving you several ways to test: our Quick Lookup tool, DataTumbler, batch processing and our real-time API services.

Quick Lookup Tool
The Quick Lookup tool is used to test one-off examples on our website.  It is as simple as navigating to the Quick Lookup page, selecting the particular service you are interested in, filling out the fields and clicking submit. You’ll receive a real-time response from the service containing the results of your test.

Since our services often offer multiple operations, the Quick Lookup pages will tell you which operation the form uses for the particular service. If there are other operations you are interested in testing, we have you covered there as well, with links to the other operations.

DataTumbler
The DataTumbler is a PC-based desktop application you can download from our site to run tests on multiple records. If you have ever used Excel then this application will be easy for you to drive. It works like a spreadsheet where you can paste multiple records for processing, in real-time.

[Screenshot: DataTumbler input and output columns]
Here are the basic steps: choose your service, choose your operation, drop your data in and click validate. Choosing the service and desired operation is important because it often changes the input columns needed for the records to process properly. In the screenshot above you can see that there are 5 input columns, designated by the yellowish cell background. Here we have inputs for Address, Address2, City, State and Zip. If your particular purposes do not require Address2, for instance, that column can be removed by simply clicking the “Customize Input Columns” button and removing it from the input.  You can do the same thing for the output columns as well, but in that case you would use the “Customize Output Columns” popup.  The output columns are designated by the cells with the greenish background.

You can also add additional columns that are not predefined by the application by right clicking a column and selecting “Insert Column”.  This is handy for situations where you want additional data to stay together with your test like a unique identifier or other data unrelated to testing the service.

Once one of the validation buttons at the bottom is pressed, the DataTumbler will make requests to our services in real-time and populate the output columns as the data is being processed.

To get the application please call Customer Support at 1.800.694.6269 or access Live Chat and we will get you started.

Batch Processing
Batch processing is another way you can test drive our services.  When you have a file ready to go for testing you can simply hand it over to us.  We will process it and send back the results to you along with a summary.

This is one of the more preferred ways to test drive our services for several reasons:

  • We can see the data you send us first hand and give you feedback about what the file looks like with respect to items like formatting.
  • By seeing all the fields of data we can quickly recommend the most beneficial service.
  • It gives us an opportunity to see the results with you and go over any interesting data points from a data validation/cleansing expert point of view.

All that is needed is a test file containing the records you want to try out. The file can come in several formats, including txt, csv and xls, to name a few. You can send the file directly via email, or for a more secure option we can provide a secure FTP site for the file transfer. We can also work with encrypted data when an extra security layer is needed. An additional way to get us your test file is through our web site itself: you can drag and drop a file and be on your way. Once we have the file, we will process it against the service you are testing and return the results along with a summary breakdown of the processing.

If your test run is a success and you’re ready to move forward with a larger file, we can also run a one-time paid batch. Clients often use this as an initial data scrub before switching to our real-time API or automated batch system, which can run batches virtually on demand.

Integrating the API
The last way you can test our services is by implementing our API in your code. Most clients use the API when they integrate with us so testing this way gives you the closest representation of how your production process will work with our services.

When it comes to doing a direct software integration test we have you covered. We make it easy to integrate and get testing quickly by means of sample code, code snippets, step-by-step integration walk-through blogs, developer guides and NuGet for Visual Studio.

We have sample code for C#, Java, PHP, Rails, Python, NodeJS, VB.Net, Classic ASP, Microsoft SQL Server, APEX and ColdFusion.  This list does not mean that we stop there: additional sample code can be requested, and our team will review the request to find the best solution for you.  When applicable, our sample code is available in both REST and SOAP.  All of our examples implement best practices and demonstrate failover.

If you are a C# developer and use Visual Studio, you will have us at your fingertips.  Using the NuGet Manager in Visual Studio, you can have our API injected into your code and ready to go.

All of our walk-through tutorial blogs and documentation are presented in a straightforward and easy-to-understand format, and as always the Service Objects team is here to assist with any questions or help you may need.

When it comes to test driving our services, we give you options to make it easy. A trial key will give you access to all the options I mentioned. Going through these options also gave me a chance to show you how you can integrate with our services.  The beauty of the way we set this system up is that you can be fully integrated and tested before you even purchase a live production key.  In all of these cases, usually only two things need to be updated when switching from trial testing to live production use.  In the Quick Lookup tool, you will need to switch the “Select Key Type” to “Customer Production Key” and then use your live production key instead of your trial key.  In the DataTumbler, you will similarly swap those fields out.  When it comes to a code integration, you just need to update your endpoints from trial.serviceobjects.com to ws.serviceobjects.com and swap the trial key for a live production key.

Whether you want to test one or two records or a whole file, simply put, our team has you covered.


Java Integration Tutorial

Java is easily one of the most popular programming languages in the world. It is a general-purpose language that users have managed to implement in a wide variety of applications. It is especially popular for web applications, which is why it is one of our most requested sample code languages for our DOTS validation products. This is no surprise, since Java is able to run on a wide range of architectures, applications and environments.  Since it is so popular, we’re here to show you the ropes and get a Service Objects web service up and running.

For this example we’ll be using a fairly new operation in our DOTS Address Detective service. Address Detective is a powerhouse address validation service that can leverage multiple data sources to validate an address.  The service has input fields for a person’s name, a business name, and a phone number along with the traditional address fields; these additional data points help the service leverage other data sources to get your address validated.  It even has an operation that will take your data in any order it’s given and return a standardized address. It’s called FindAddressLines, has 10 inputs ranging from “Line1” to “Line10”, and can work wonders on standardizing messy data sources.  We’ll be integrating with this operation in our tutorial today, so let’s get started!

What You’ll Need

  • A Java IDE (We’re using Eclipse for this example)
  • Basic Java Knowledge
  • A Service Objects DOTS Validation product License Key (We’re using DOTS Address Detective for this case)

Setting Up the Project

Launch Eclipse and select a workspace if it asks you to do so. Once everything has finished loading, select File->New->Other. In the search field, type “Dynamic Web Project” and click next.

On the next screen, type in an appropriate project name and configure the settings to your specific needs. Congratulations, you’ve built a project! We’ll need to add several files to take in our inputs, send them to the DOTS Address Detective service, deserialize the XML response and display the results back to the user. To start, add two JSP files by right clicking the project and selecting New->JSP File as shown below:

In this tutorial we’ll name them “inputForm.jsp” and “webServiceCall.jsp.”  These will function as the input form and the display page for the results from the service.

These new JSP files will obviously need something more than they currently have. We’ll place all the necessary HTML elements in the inputForm.jsp file. Make your page look like it does below:

[Screenshot: inputForm.jsp markup with 10 input lines]
These 10 input lines will send all the necessary information to the FindAddressLines operation.  Now that we have the fields to take our input parameters, we need to put the code in place to actually call the DOTS Address Detective web service. We’ll create a package with the necessary class files for the response from the Address Detective service, and within that package we’ll create a method that performs the call to the web service.

To add a package, right click the project and select New->Package.  For this project we’ll name the package “serviceObjects” as shown below.

[Screenshot: Java Tutorial 5 – New Package dialog]

Be sure to select the “Create package-info.java” checkbox, as we’ll need to add some necessary import statements into the package-info file.  After the package has been added, right click it and select New->Class and add all the following classes to the project.

[Screenshot: Java Tutorial 6 – classes added to the serviceObjects package]
We’ll talk briefly about the code to add to each of the different objects and classes so that the XML response from the web service is successfully deserialized into the objects we are defining here. One thing that needs to be added to the package-info.java file is as follows.

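The file boils down to a JAXB @XmlSchema annotation on the package; here is a sketch (the elementFormDefault setting is an assumption):

    // package-info.java – binds the serviceObjects package to the
    // service's XML namespace for JAXB deserialization.
    @XmlSchema(
        namespace = "http://www.serviceobjects.com",
        elementFormDefault = XmlNsForm.QUALIFIED)
    package serviceObjects;

    import javax.xml.bind.annotation.XmlNsForm;
    import javax.xml.bind.annotation.XmlSchema;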

This will let the code know that it should expect the http://www.serviceobjects.com namespace in the XML response from the service.

Like most classes, all the values in each of the code files will need getters and setters for each of the objects the service returns, in order to properly work with the returned object. The highest-level object returned from the service (meaning all the other objects and values are contained within it) is the FixedAddressResponse. This object contains a possible array of FixedAddress objects called “Addresses”, and it contains an Error object that the Address Detective service can potentially return. See below for an example of how to format the declarations, the XML annotations, and the getters and setters.

Java Tutorial 8

As mentioned, this is a general example of how to do this; we'll include all the class files with this tutorial so that they can be easily added to your own project.

Now we’ll point out some things to be aware of in our “ADClient” file.

Inside the class declaration, we’ll define a few things that we’ll need to make the actual call to the service.

Java Tutorial 9

In the above code, we've defined both the primary and the backup URLs. Currently, they both point to the trial-environment Service Objects servers. In the event that a license key is purchased, the primary call should go to ws.serviceobjects.com and the backup URL should go to wsbackup.serviceobjects.com. We've also defined our LicenseKey in the class, which allows us to keep it hidden from outside view, and we've defined a method called "FindAddressLines" that will eventually call the operation of the same name. Notice that it returns the FixedAddressResponse object.

Within the actual method we have some cleanup logic that is performed on the input strings, and then the URLs are assembled for the HTTP call. See below:

Java Tutorial 10
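Conceptually, the assembled trial URL ends up looking something like the following (the endpoint path here is illustrative; use the exact URL from your welcome email or the service documentation):

https://trial.serviceobjects.com/ad/web.svc/FindAddressLines?Line1=Santa+Barbara&Line2=93101&Line3=27+E+Cota+St&Line4=Ste+500&LicenseKey=YOUR-KEY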

In the above snippet of code, URL strings are assembled and sent to the DoHTTPRequest method. After the web service is called, it is necessary to check that the call completed correctly. The code checks for a null response from the service, or for a returned "TypeCode" of 3, which would indicate a fatal error from the Service Objects web service.  If either of those conditions is true, the code throws an exception and uses the backup URL.  This logic ensures that your application will be uninterrupted in the unlikely event that the Service Objects servers are unresponsive or returning a TypeCode of 3.

Displaying The Results from the Web Service

Now that our failover logic and call to the DOTS Address Detective web service are set up, we can create the objects that will call the web service with the inputs from the input form and display the results to the user.

Navigate to the webServiceCall.jsp file and implement the following code:

Java Tutorial 11

In this bit of code we grab the inputs from the inputForm.jsp page and place them in strings of the same name.  We've also instantiated an instance of the "FixedAddressResponse" object, which will hold the results from the service, and the "ADClient" object, which will make the actual call to the web service.  Since the FindAddressLines method in the ADClient object returns a FixedAddressResponse object, we can assign its return value directly to our response object.

Our call to the web service is all set up, and now we can implement some logic that will display either the error or the validated results from the service.  Implement the following logic in your JSP page.

This code will first check if an error is present in the response; if it is present it will display that error to the screen. If the error response is null, it will display the validated response from the service.  Notice that we call the “getters” that we have previously defined in our response class to display the results to the user.

We now have everything we need to use the service, let’s see it in action!

As mentioned previously, the FindAddressLines operation can take the inputs for an address in any order and return the validated output address.  See below for an example.

Java Tutorial 13

After we send that input to the service, we will receive this response:

Java Tutorial 14

You are now all set to test the FindAddressLines operation in the DOTS Address Detective web service. Sample code downloads are also available for all of our other services. As always if you have any questions about integrations, service behavior or best practices, feel free to contact us and we will gladly assist in any way that we can!

Service Objects Provides Customized Sample Code

One of our primary goals as Application Engineers at Service Objects is to do whatever we can to ensure that clients and prospective clients get up and running with their DOTS validation service and programming language of choice. That's why we have over 250 different pieces of sample code available to those who want to test our services!

But what if you are interested in integrating multiple services in your application?

Lucky for you, this commitment to getting the data hungry masses up and running with testing our services goes even further. We are dedicated to ensuring that you get the most out of the service(s) that you are testing and assisting with any integration related questions. One of the ways we do this is by writing custom sample code to help our clients and prospective clients integrate our services into their business logic.

What are some examples of custom sample code?

Well, I am glad you asked! Need some sample code that will run our NCOA service against 500,000 addresses in a couple of hours? No problem.  Do you want to get geocode coordinates from the contact address that comes back from our DOTS Geophone Plus 2 service? We'll write you some sample code that will get that done. Does a portion of your address data include a PO Box number reflected as the unit or suite? We can help you leverage the results from our DOTS Address Validation 3 service to programmatically identify those records.  Need to use any of our DOTS validation products with asynchronous calls? We can certainly help with that as well.

There are a multitude of other ways our services can be combined to get you your desired result! If you're interested in any DOTS validation products and need some assistance in getting the intended result, please reach out to us here! We will gladly provide a consultation on how to best integrate your service (or services) of choice into your application, or we'll go ahead and write a piece of sample code for you to illustrate best practices when calling a DOTS validation web service.


Python Tutorial

Python is a versatile, robust scripting language that can be used for a variety of projects and implementations. One thing that separates Python from other programming languages is its use of white space and indentation to separate different blocks of code, rather than the curly braces that languages like Java or C# use. This can be a polarizing issue for developers, but it is simple enough to get used to once you have some experience with it. For this tutorial we're going to make a RESTful web service call to our DOTS Phone Exchange 2 service to validate an international phone number. You will need the following to participate in this tutorial.

What you’ll need

  • Python installed on your test machine (2.7.11 is used in this tutorial)
  • An IDE or text editor of choice. We're using the community edition of PyCharm.
  • A DOTS product license key of choice. We're using DOTS Phone Exchange 2 for this example.

Once you have all the necessary prerequisites installed, launch PyCharm and create a new project in your directory of choice. Once you have a project created, right click on the project location in the solution and select "New" and then "Python File."

For this project we will use the following modules, imported at the top of the .py file. The "Tkinter" module will allow us to have a simple GUI where the user can enter the phone number information to be validated. The "Tix" module allows us to use some handy labels, scrollbars and buttons to properly display the outputs from the service, and the "requests" and "xmltodict" modules will allow us to make an HTTP GET call and parse the results into a Python dictionary, respectively.
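A minimal version of those imports (using the Python 2.7 module names that match the version used in this tutorial) might look like this:

# Python 2.7 module names; on Python 3 these live in the tkinter package.
import Tkinter   # basic GUI widgets
import Tix       # extra widgets: handy labels, scrollbars and buttons
import requests  # makes the HTTP GET call to the web service
import xmltodict # parses the XML response into a Python dictionary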

To add these modules in PyCharm you will need to go into Settings from the File menu. Select Project: (your project name), and then select Project Interpreter.  Click the small green plus symbol in the upper right-hand corner; from there you can search for and install the necessary modules to run this project.

Now that we have all the necessary modules in place, let's create a method that will eventually make the HTTP GET call. For now we will leave it empty, as shown below.
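A stub along these lines will do for now (the method name matches the one our button will call later):

def PE2Int():
    # The web service call will be filled in shortly.
    pass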

There is nothing very exciting happening in that method at the moment, but we're certainly going to change that.  We'll now create the elements necessary to take in the values for a successful call to the GetInternationalExchangeInfo operation of the DOTS Phone Exchange 2 web service.  The operation takes three inputs: PhoneNumber, Country and LicenseKey.  To create the necessary input elements, add the following bits of code to your Python file below the method definition.
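Here is a minimal sketch of what that GUI code could look like (widget variable names and layout are illustrative, not the exact code from the original screenshots):

root = Tix.Tk()
root.title("DOTS Phone Exchange 2 Demo")

Tkinter.Label(root, text="Phone Number").pack()
phoneEntry = Tkinter.Entry(root)
phoneEntry.pack()

Tkinter.Label(root, text="Country").pack()
countryEntry = Tkinter.Entry(root)
countryEntry.pack()

Tkinter.Label(root, text="License Key").pack()
keyEntry = Tkinter.Entry(root)
keyEntry.pack()

# Pressing the button calls the PE2Int method defined above.
Tkinter.Button(root, text="Validate", command=PE2Int).pack()

# Keep this as the last line of the script.
root.mainloop()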

This will give our GUI a title, and 3 text boxes so that we can enter the necessary information to validate an international phone number. Notice at the bottom that once the button gets pressed, it will call the PE2Int method which we have defined right above this code. Since we have the user interface all set up, we can go ahead and enter the code that will make the actual call to the web service and then display the results to the user.

Web Service Call and Fail Over Configuration

For starters, we'll take the inputs that a user enters in the GUI's text boxes and define the beginning part of the URL that we'll use to make the web service call. Additionally, we'll utilize a feature of the "requests" module that lets us supply the query string items for the URL in a more readable way.  Our new code will also have a "primaryURL" and "backupURL" string, which we'll discuss more when we implement proper failover in the project.  Add the following code inside the method we defined earlier.
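Sketched out, the start of the method might look like this (the endpoint path is an assumption for illustration; use the trial URL from your welcome email):

def PE2Int():
    # Gather the user's inputs from the GUI text boxes.
    inputs = {"PhoneNumber": phoneEntry.get(),
              "Country": countryEntry.get(),
              "LicenseKey": keyEntry.get()}

    # Both URLs point to the trial environment for now; with a production
    # key, use ws.serviceobjects.com and wsbackup.serviceobjects.com.
    primaryURL = "https://trial.serviceobjects.com/pe2/web.svc/GetInternationalExchangeInfo"
    backupURL = "https://trial.serviceobjects.com/pe2/web.svc/GetInternationalExchangeInfo"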

The project will now implement a few try/except blocks that will call the DOTS Phone Exchange 2 web service and handle any potential errors that come up during the call. This basic bit of code shows how to call the service using the "requests" module and how to properly fail over to another data center in the event that the primary Service Objects data center is not responding as intended.

As shown above, the code calls the primaryURL and then processes the response depending on the results returned from the service. If an error is returned with a TypeCode of "3", the code throws an exception and then calls the backupURL to receive a valid response from the service. A TypeCode of "3" indicates that something has gone wrong with the service and that it is throwing an "Unhandled Error."
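A simplified version of that failover logic, continuing inside the PE2Int method, might read as follows (the XML root element name is a hypothetical placeholder; inspect your actual response for the correct keys):

    try:
        # The "params" argument builds the query string from the dictionary.
        response = requests.get(primaryURL, params=inputs, timeout=5)
        outputs = xmltodict.parse(response.content)
        error = outputs.get("ExchangeInfoResponse", {}).get("Error")  # hypothetical root key
        if error is not None and error.get("TypeCode") == "3":
            # A TypeCode of "3" means an "Unhandled Error"; fail over.
            raise Exception("Fatal error from the primary data center")
    except Exception:
        response = requests.get(backupURL, params=inputs, timeout=5)
        outputs = xmltodict.parse(response.content)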

Proper error checking and failover implementation should be used to ensure that your business logic goes uninterrupted and can continue to be used in the event that the primary Service Objects data center is offline or not behaving as expected.

Displaying the Results to the User

Now that our failover configuration is properly set up, we can work on displaying the results to the user. To do this we'll create Labels using the Tix module that simply allow us to show the values returned from the service. Add statements like the following in both the primary-call and backup-call sections of the code.
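For instance, the returned fields can be dumped into labels like this (the root element name is again a hypothetical placeholder):

    for name, value in outputs.get("ExchangeInfoResponse", {}).items():
        Tix.Label(root, text=name + ": " + str(value)).pack()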

Now it's time for testing!  For this example we're using the phone number of a hotel in Germany, but feel free to test any phone number you would like; this operation will also validate phone numbers from the US and Canada. Here is an example of the sample output from the service.

This concludes our Python tutorial. Please contact support with any questions or tutorial requests!


C# Integration Tutorial

C# may very well be our most requested sample code.  There is good reason for that, too; C# and the .NET framework are many developers' first choice for creating a web page or any other type of application, because of the versatility of the language and framework as well as the robust features that Visual Studio offers.  One of those features is the ability to consume a WSDL (Web Services Description Language) file and create all the necessary classes and methods to successfully call a web service.  This makes using SOAP squeaky clean! Ok, that was the first and last SOAP pun, I promise.  Here's what you will need for this tutorial.

Requirements

  • Visual Studio (2015 is used in this tutorial but the process should be relatively similar with any other version)
  • DOTS Web Service License Key (We are using DOTS Address Validation International for this example)
  • Some familiarity with C# and the .NET Framework. This tutorial will be pretty basic so it should be accessible even if you are a beginner.

Setting Up the Visual Studio Project

For starters, launch Visual Studio and create a new ASP.NET Web Application and choose an appropriate project name. Your screen should look similar to the following:

Click OK and then select "Empty" to create an empty web application. We will add the necessary ASPX page momentarily.

But our first step for the new project will be to add the service reference to the DOTS Address Validation International web service. To do this, right-click on "References" in the Solution Explorer and select "Add Service Reference"; a pop-up should appear.  Here we will add the URL to the WSDL, which contains the information on how the project should interact with the DOTS Address Validation International web service, and we will name the Service Reference.

For reference, here is the WSDL URL, and below it is what the pop-up page should look like:

WSDL: http://trial.serviceobjects.com/avi/soap.svc?wsdl

Now that we have successfully added the service reference, we can add an aspx page that will have our input form and display our results. Right click the project name and select “Add” and then select “Web Form” to add a blank web form to the project.  For our example we’ll name the form “AVIForm”.

Creating the Input Form and Code Behind

Now that our form is present, we’ll add some simple HTML and ASP elements to take in our inputs and display them to the screen after we get a response from the service. Make your ASPX page look like the following.

The above code will allow us to take the inputs, send them to the code-behind, and then display the results in the outputGrid and InformationComponentsGrid.  We have two separate grids: one for the standard outputs from the service, and the other for the InformationComponents field, which holds variable information and data returned by the service.  This field can change based on the country or the data available for a specific international address. Now that our input form is all set up, we'll add the code-behind that will display the results to the user.

We won't look at every part of the code here; to download a .txt version of the code, click here.

One thing we like to stress to clients who are integrating our services is proper Failover Configuration.  In the unlikely event that our primary datacenters are offline or producing strange errors, we want to ensure that our clients are pointing their code to our backup datacenters so that their applications and business processes go uninterrupted.  Here is a full picture of the proper way to integrate failover into an application.

We've found that to ensure uninterrupted service, the best practice is to nest the calls to the web service in a try-catch block.  In our current setup, the backup call hits the same data center as the primary call; but if a license key is purchased, the primary call should point to ws.serviceobjects.com and the backup call should point to wsbackup.serviceobjects.com.   The screenshot below highlights some of the primary failover logic that will allow the code to run uninterrupted.

This code occurs right after the primary call to the web service: if it detects that the response from the service is null, or if an Error TypeCode of "3" is returned, the code throws a new exception and the catch statement makes the backup web service call.

If a successful response is received from the service, the code calls a method named "ProcessValidResponse", which takes in the response from the web service, displays the results in a DataGrid, and then sends that data grid to the ASPX page for the user to see. This method is pretty straightforward for the most part, as it simply assigns the outputs and their respective values into separate columns for the user to see.

The only part that may be mildly tricky is the InformationComponents field that is returned from the service.  This field is an array of InformationComponent objects, each of which contains two strings: one for the "Name" of the variable returned and one for the "Value" of the variable returned.  For example, if you pass a US address into the AVI service, one InformationComponent that can be returned will have a Name of "DPV" and a Value of "1", indicating that the address is considered deliverable by the USPS.  Below is an example of the XML output.
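The original screenshot isn't reproduced here, but based on the description above, the structure looks roughly like this (an illustrative reconstruction, not captured service output):

<InformationComponents>
  <InformationComponent>
    <Name>DPV</Name>
    <Value>1</Value>
  </InformationComponent>
</InformationComponents>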

 

This array of fields allows us to add new outputs to the service over time without potentially breaking any existing client's code.  To account for this array of information, we have a brief for loop below that loops through all the elements of InformationComponents and adds their names and values to the InfoCompTable so that they can be seen by the user.

Making a Successful Call to the Service

If we go ahead and run our project, our webpage will look like the following.

Not very exciting, but it will get the job done. As an example, we'll use the following address in France:

3 Place de la Victoire, 33000 Bordeaux, France

If this address is sent to the service you should see the following response.

As you can see, this address is considered valid by the service, and the service has premise-level resolution for it.  The Address1-Address8 fields will also display how the address should look for mailing purposes. For this particular example we only need Address1 and Address2, but for other countries all 8 address lines may be used.  The InformationComponents field was also parsed out, and the two Name and Value pairs are shown below the standard outputs from the service.

That wraps up our tutorial for DOTS AVI integration in C#.  Congratulations! You are now on your way to being a DOTS Address Validation International expert! Feel free to go and test more International addresses, and as always if you have any questions feel free to reach out to us at support@serviceobjects.com!


Ruby on Rails Integration Tutorial

Ruby on Rails (or more colloquially, “Rails”) is a server-side web application framework that provides its users a powerful arsenal to create web pages, web services and database structures. Rails utilizes a model-view-controller (or MVC for short) framework that provides an easy way to separate the user interface, database model and controller.

One of Service Objects' highest priorities is getting our clients up and running with our DOTS validation web services as quickly as possible.  One way we do this is by providing sample code and, occasionally, step-by-step instructions.  So if you are partial to using Rails, this tutorial will provide everything needed to get up and running.

What You will need

  • A DOTS license key, click here for a free trial key. We’re using DOTS Email Validation 3 for this tutorial.
  • Your Favorite Text Editor (Sublime, Notepad++ etc)
  • Ruby On Rails installed on your test machine.
  • Familiarity with the Command Line

Creating a Default Project

First, navigate on the command line to the directory in which you would like the project created.  Once the command line is in the desired directory, run the command "rails new EV3Tutorial". You may call the project whatever you prefer, but this will be the name for our tutorial.

After the project creation is finished you should have a brand new Ruby on Rails project.  You can launch the server by being in the project directory and entering "rails server" into the command line.  The default port number the project runs on is 3000; we'll be using port 3001 for this project. The port number can be specified by launching the server with the following command: "rails server -p 3001".

If you run the server and navigate to the applicable localhost URL where it has been launched, you’ll see the following page:

Ruby on Rails Tutorial Image One

This page is the default page for any new Rails project. From here, you can find more documentation about working with Rails.

Generating a Model and Controller

We'll need a controller to make the actual call to the DOTS web service and a model to pass information to our controller.  Simply type "rails generate controller request new show" and the necessary controller and views will be created. This command creates two new views: one titled "new.html.erb" and the other called "show.html.erb".  The command line should look similar to the following.

Ruby on Rails Tutorial Image 1

After this command is run, the controller folder and the views folder will have some new files created. It should look like the following:

The "new.html.erb" file will be our default page, so to make the application load it automatically, open the routes.rb file in the config folder of the project and make the routes page look like the following.

For this example, we're using the "ValidateEmailAddress" operation, which has 4 separate inputs: EmailAddress, AllowCorrections, Timeout, and LicenseKey.   These, in turn, will be part of the model that passes values from the view to the controller.  So before we fill in the HTML page, we'll create the model and include these values in its definition.

Enter the command "rails generate model request" in the command line. The values in the model can be specified via the command line, but it is usually easier to enter them in the model file. Locate the model schema in the following folder:

To add values to the model, make the model schema on your file look like the following.

In order to use the model it will need to be migrated. To do this enter “rake db:migrate” into the command line to commit the model.

Creating the Views           

Now that our model has been created and committed to the database, we can pass values from the view to the controller. To do this, we're going to create a simple web form with the inputs to send to the DOTS Email Validation service. Add the following code to the "new.html.erb" file.

Now that our input page is set, we'll make sure our "show.html.erb" file is ready to display some output values from the service. We will include a brief if statement in the show view that determines whether or not an error object is present in the response from the service. If one is present, we'll display the error; if not, we'll display the valid results. Make your show.html.erb file look like the following.

These output fields haven’t been instantiated or set to any values yet but that will happen in our controller.

Integrating Logic into the Controller

Now that our views are set up, we'll need to put some code in the controller to make the actual call to the Service Objects web service. To make this call, we're going to use the gem "httparty", which allows us to make a RESTful web service call. Be sure to add this gem to your project's Gemfile and run "bundle install" on the command line.

We won't go over all the elements of the controller file in this tutorial, but rather just touch on some important parts of the logic to be integrated.  The screenshot below highlights some important aspects of calling the web service.

The code above has a primary and a backup URL.  Currently they both point to the Service Objects trial environment. In the event that a production key is purchased, the primary URL should be set to ws.serviceobjects.com and the backup URL should be set to wsbackup.serviceobjects.com. These two URLs, along with the accompanying logic that checks for a fatal error (TypeCode 3 for DOTS EV3), will ensure that your application continues to run in the event that the primary Service Objects data center is offline or experiencing issues.  If you have any questions about the failover logic, please don't hesitate to contact Service Objects and we will be glad to assist further.

The code above calls an internal function, “processresults,” to display the values to the user. Here is the accompanying screen shot illustrating some of the logic of this function.

Depending on what response comes back from the service, this code will display the values to the user.  For your application, you will likely want to create a class that automatically deserializes the response from the service into something a bit easier to use, but we are showing the above logic as an example of how to parse the values from the service. Note: the @ev3response, @ev3info and @everror variables are present to help parse the response hash that httparty automatically creates.
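For reference, the GET request that httparty ends up making resembles the following (the endpoint path and parameter values are illustrative; the parameter names are the four inputs listed earlier):

https://trial.serviceobjects.com/ev3/api.svc/ValidateEmailAddress?EmailAddress=support@serviceobjects.com&AllowCorrections=true&Timeout=25&LicenseKey=YOUR-KEY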

Final Steps and Testing

Our project is ready to test out. For this example, we’ll use the email support@serviceobjects.com to get a valid response from the service.

Entering these values into the text boxes, along with a valid license key, should result in the following output.

That completes our Ruby on Rails tutorial.  If you have any questions about this or any other tutorials or sample code, please don't hesitate to contact us! We would love to help you get up and running with a Service Objects web service.


A Node JS Step By Step Tutorial

A few weeks ago we published a blog article titled "Why You Should Never Put Sensitive Data in Your JavaScript", which described some of the dangers that accompany client-side scripts like JavaScript.  More specifically, if you are using an API like a Service Objects web service, the license key that is passed to the web service can be plainly visible to anyone who can inspect the page source.  Obviously, if you are looking to keep your page secure, this is bad.  Luckily, there are solutions available if you are keen on using JavaScript.

One such solution is to use Node.js, a versatile server-side platform that can be used for networking or server-side applications.  This short step-by-step tutorial will give you the bare-bones server to begin testing a Service Objects web service using Node.js.

What you will need:

-NodeJS installed on your machine

-Your favorite Text Editor

-Familiarity with the Command Line

-A license key to the DOTS Address Validation 3 web service. Get a free trial key here!

Creating the Server

Once you have NodeJS and all your environment variables set correctly, you will have to navigate to the directory on the command line where your node.js file will be.

For this example create a file called GetBestMatches.js with your text editor and enter the following code in the file:


This small bit of code will create our server and run it on port 8081. When the server is launched through the command line using the command "node GetBestMatches.js", we'll see a message in the command line that the server is running:


Seems simple enough, right? Now let's add some more code that will allow us to display some information on the client side.  Inside the http.createServer function, add the following code:


Here we have a switch statement containing 3 different cases: they set up the response header on the client side, hold the URL that will eventually call the web service, and display some default information at the localhost URL shown above. When this bit of code is launched, the URL above will show the following:

If we do navigate to the GetBestMatches link, well, nothing will happen. But we're going to add some more code to change that.

Setting the Inputs and URLs

Here we will add code to instantiate the input values and the URL necessary to successfully call DOTS Address Validation 3. This process will be relatively similar for most of the DOTS validation products, but you will need the specified inputs for that service and its respective license key.

Here’s what the code will look like:


As shown above, the values for our inputs are hardcoded here simply to test how to interact with the Service Objects API within Node.js.  When using this in production you will want to pass in the values to be validated dynamically, but for this tutorial we'll use the hardcoded inputs for example and testing purposes. In the screenshot above there are also a primary and a backup URL. Currently both URLs point to the same environment, but if the service is being used in a production environment, the primary URL should point to ws.serviceobjects.com and the backup URL should point to wsbackup.serviceobjects.com.
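As a concrete illustration, the kind of URL the code builds resembles the following (the endpoint path is illustrative; use the URL from the service documentation):

https://trial.serviceobjects.com/av3/api.svc/GetBestMatches?Address=27+E+Cota+St+STE+500&Address2=&City=Santa+Barbara&State=CA&PostalCode=93101&LicenseKey=YOUR-KEY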

Setting up the Call to the Web Service

Now that our inputs are created, we'll add the code that will actually perform the GET call to the service. To do this, we'll use the http.get function of the server that we've created and parse the results from the service into a readable format:


In this set of code we set the encoding and assign the response from the service to the results object.  We have also included the xml2js package to parse the XML that comes back from the service; this simply allows us to display the results in a more readable way.

Implementing Failover Configuration

For the last bit of code, we will include proper failover configuration so that in the event of a service outage, or if the web service is not responding correctly, your code and calls to the Service Objects API will continue to function normally.

In the code below, if a TypeCode of 3 is found in the response from the web service, the code will use the backup URL.  If a successful response is received from the service, the code will display the results on the client side:


This is all the code we need to make a successful call to the DOTS Address Validation 3 service. We'll change the inputs to use the Service Objects address (27 E Cota St STE 500, Santa Barbara, CA 93101) to show an example response from the service.  If the server is restarted, you will see the output that is displayed to the user:

{
  "BestMatchesResponse": {
    "xmlns": "http://www.serviceobjects.com",
    "xmlns:i": "http://www.w3.org/2001/XMLSchema-instance",
    "Addresses": {
      "Address": {
        "Address1": "27 E Cota St Ste 500",
        "Address2": "",
        "City": "Santa Barbara",
        "State": "CA",
        "Zip": "93101-7602",
        "IsResidential": "false",
        "DPV": "1",
        "DPVDesc": "Yes, the input record is a valid mailing address",
        "DPVNotes": "26,28,39",
        "DPVNotesDesc": "Perfect address, The input address matched the ZIP+4 record,The input address matched the DPV record,Highrise apartment/office building address",
        "Corrections": "",
        "CorrectionsDesc": "",
        "BarcodeDigits": "931017602254",
        "CarrierRoute": "C006",
        "CongressCode": "24",
        "CountyCode": "083",
        "CountyName": "Santa Barbara",
        "FragmentHouse": "27",
        "FragmentPreDir": "E",
        "FragmentStreet": "Cota",
        "FragmentSuffix": "St",
        "FragmentPostDir": "",
        "FragmentUnit": "Ste",
        "Fragment": "500",
        "FragmentPMBPrefix": "",
        "FragmentPMBNumber": ""
      }
    },
    "IsCASS": "true"
  }
}

That completes our tutorial for Node.js. If you have any questions, feel free to reach out to our tech team anytime!

The Difference Between Webhooks and APIs

The word “webhook” sounds really cool, and it is, but what exactly is it? What does a webhook do and how is it different from an API? What do webhooks have to do with Service Objects’ APIs?

What is an API?

In order to better understand webhooks, let’s first define API, or “application program interface.” APIs are pieces of code developed to execute some sort of logic. They are building blocks used to create other software, which can include creating other APIs. They serve as a common interface between two separate applications, allowing them to interact by sending and receiving data.

APIs can be used internally to support other processes within a company or they can be provided to the public as a paid or unpaid service. Google Maps API, for example, makes it possible to embed a Google map on a webpage. By using the Google Maps API, a web developer has a simple way to include an official Google map pinpointing their business’s exact location. There’s no need for any special coding or reinventing of the proverbial wheel.

Another example would be Service Objects’ address validation API, which makes it possible for our customers to input an address and receive standardized and corrected address information from our USPS CASS Certified database engine.

What is a Webhook?

Webhooks, for the most part, use APIs. Applications that have a mechanism for webhooks will often use a webhook when an event requiring custom logic has been triggered. Events are typically triggered by a workflow or some type of data entry, but other event types exist.

Webhooks give external developers opportunities to implement custom logic that executes and returns results, executes logic for processes or purposes outside of the given application, or both.

Using Marketo marketing automation software and Service Objects' data validation APIs as an example: when an address is saved or added to a contact in Marketo, a webhook can be used to automatically validate the address using one of our address validation APIs, such as our DOTS Address Validation – US 3 API. When the data is available, the webhook sends it to an API call; there's no need for an external application to poll the host periodically for data to validate.
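To make the flow concrete, here is a minimal sketch of what a webhook receiver could look like, written in Python with Flask (the route, field names, and validation URL are all illustrative assumptions, not Marketo's or Service Objects' actual interfaces):

from flask import Flask, request
import requests

app = Flask(__name__)

# The host application (e.g. Marketo) is configured to POST contact data
# here whenever the triggering event fires; no polling is required.
@app.route("/validate-address", methods=["POST"])
def validate_address():
    contact = request.get_json()
    # The webhook hands the data straight to a validation API call.
    result = requests.get(
        "https://trial.serviceobjects.com/av3/api.svc/GetBestMatches",  # illustrative endpoint
        params={"Address": contact.get("address"),
                "City": contact.get("city"),
                "State": contact.get("state"),
                "PostalCode": contact.get("zip"),
                "LicenseKey": "YOUR-KEY"})
    # Return the validated result so the caller can update its record.
    return result.text

if __name__ == "__main__":
    app.run()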

So, from the standpoint of providing a webhook to a third party such as Service Objects, the intent is to enable that third party to push data back into your Marketo instance or trigger other operations outside of it. In our example, Marketo can send address data to Service Objects and trigger the address validation API thanks to the webhooks that provide this ability. Learn more about Service Objects for Marketo here.

In contrast, from the standpoint of providing an API without webhooks, the intent is to enable others to trigger responses from your API to use strictly in their own applications.

There is no purpose to a webhook without an API, but the reverse is not true.

What Does All of This Mean to You?

It means that data validation is easy! Sign up for a free trial and find out just how easy and effective our webhook-powered APIs are.

Service Objects Announces DOTS NuGet Packages

What do you get when you cross Service Objects' powerful data validation APIs with NuGet? Time savings, and peace of mind that our best practices, including failover configuration, have been implemented.

What is NuGet?

NuGet, an open source package manager, was first introduced in 2010 as NuPack. It has since evolved and is now preinstalled with Microsoft Visual Studio 2012 and up. For Visual Studio 2010, it is available via Visual Studio Extension Gallery. Designed for the Microsoft development platform, NuGet supports multiple programming languages such as .NET Framework packages.

According to NuGet.org, as part of the package installation, NuGet will copy your files and then automatically apply any changes specified. Upon removal, the files will be removed and your changes reversed.

NuGet allows developers to more easily tap into the vast array of open source components readily available on the web. With NuGet, you can spend less time searching for, downloading, configuring, referencing, tweaking, and managing source files.

Third party developers, such as Service Objects, can produce and submit packages to the NuGet Gallery. Use either the Manage NuGet Packages dialog box or the Package Manager Console's PowerShell commands to find, install, update, and remove packages.

Service Objects’ NuGet Packages

Our suite of NuGet packages offers .NET developers easy access to a library of code that implements our best practices. Integration with Service Objects' data validation will be fast and clear.

Why NuGet, and why now? While many developers successfully and efficiently implement our data validation APIs on a routine basis, we want to make your job easier. We decided to provide Service Objects DOTS data validation packages via NuGet because we wanted integration with our service APIs to be as simple, straightforward and fast as possible without compromise. Our NuGet packages enforce our best practices including failover configuration. The inclusion of failover was one of our top priorities when deciding to release NuGet packages because ensuring that you get the utmost quality and uptime when using our data validation APIs is also one of our top priorities.

NuGet has matured to the point where all of our internal requirements for a package manager have been met. Our engineers have built a suite of packages that allow you to more easily integrate our DOTS data validation tools into your projects.

This is our first release of Service Objects NuGet packages. Currently, these data validation NuGet packages operate using the recommended operation from each of our DOTS services. Over time, we will continue to add more operations to each of the packages in the suite.

Benefits of Using Our Data Validation NuGet Packages

If you use Visual Studio 2012 or above, you already have the basic tools you need to work with our NuGet packages. By using our NuGet packages, you can expect the following:

  • Time saved during your development phase.
  • Our best practices implemented including failover configuration.
  • Web / app configuration settings injected directly into your configuration files.
  • Dependencies automatically added to your project.

Need more information? Feel free to reach out to discuss data validation API integration using NuGet.


Improve Mail Delivery With Service Objects’ Free Chrome Extension for Salesforce

Having validated and standardized addresses within your CRM will improve business efficiency, cut down on bogus leads, and help reduce waste. Let our new DOTS Address Validation Chrome Extension for Salesforce do all of the validating and standardizing for you.


Configuration is as simple as copying and pasting your license key into the key field on the options page after downloading the extension:

From there, any time you are on a Lead, Account, or Contact page within Salesforce, right clicking will present you with an option to validate the address. The extension will automatically pull out the address, city, state, and zip code, eliminating the need to manually enter the information you wish to validate. User error is taken out of the equation, so you can be sure the address in your CRM is the address that is being validated. After validation, updating your Salesforce Lead/Account/Contact address is as simple as clicking an update button:

The Technologies at Play

DOTS Address Validation – US 3

Our USPS CASS Certified™ DOTS Address Validation service improves internal mail processes and delivery rates by standardizing contact records against USPS data and flagging records for vacancy, returned mail, and general delivery addresses. Our industry-leading GetBestMatches operation now combines Delivery Point Validation (DPV), SuiteLink and Residential Delivery Indicator (RDI) into one robust API call to our USPS CASS Certified™ database engine.

Addresses are pulled from the Salesforce Leads, Accounts, or Contacts tabs and then validated through the DOTS Address Validation – US 3 service. Once the address is validated, a window will pop up showing the service’s results. If the address was standardized and corrected to a perfectly deliverable address (DPV1) an option is given to the client to automatically update the address in their Salesforce instance.

Google Chrome Extensions JavaScript API

This is the primary technology that allows this project to be possible. The JavaScript API that is provided by Google allows developers access to the inner workings of the Chrome browser giving them access to features such as browser actions, events, web requests, and more. Developers are able to add functionality to the Chrome browser while staying lightweight and nonintrusive.

Installation, Configuration, and Usage

Please refer to our website for a detailed breakdown on installation, configuration, and usage for the Chrome extension for Salesforce.

Opinionated Software – Choosing Your Vision

In software development, there are two common approaches to architectural design. One school of thought, un-opinionated software, makes as few assumptions as possible and stays flexible, leaving the developer to decide how to design and solve problems correctly. The other school holds that the best software, opinionated software, should realize one true vision: the right way. This design paradigm suggests software should pick a side and stick to it. Following the opinionated approach, many design decisions have already been made, limiting the options available to the developer.

Opinionated Software

Some examples of highly opinionated software include Ruby on Rails, AngularJS and Ember. These software packages share the common characteristics of making certain tasks simple to the developer by following the already predesigned path, sometimes referred to as the “Golden Path”. Following this approach can be advantageous to a developer when a problem or task maps directly to one of these predesigned paths. This can, however, present challenges to the developer when functionality outside of a package’s design is desired, creating additional effort to solve.

Un-opinionated Software

In contrast, un-opinionated software such as the .NET technology stack offers the developer freedom in design choices. These include which language to work in, be it C#, F#, VB.NET or virtually any other .NET-compatible language. Flexibility allows the developer to choose the right tool to accomplish a task. The potential downside is that, with so much flexibility, it may be difficult to develop a solution the framework does not assist with, leaving less experienced developers with sub-par solutions.

Our Vision

Our vision is to offer developers flexibility in the decisions they make when integrating our services. Naturally, since our services are exposed over HTTP, they can be integrated in any language or platform that supports HTTP connections. The applications consumed by our clients follow best practices in architecture and input/output structures. This ensures flexibility in request/response formats as well as simplicity in obtaining results. We are also bound by the WSDL contracts we deliver, which ensures consistency in response format.

Tips for Referencing a Web Service from Behind a Firewall

It’s not unusual for network administrators to lock down their server environments for security reasons and restrict inbound and outbound network activity. Basically, nothing can come in or go out without permission. As such, if your application requires an HTTP connection to call an external web service, then your network admin will most likely need to create a firewall rule to allow access to the service provider so that communication between your application and the web service may occur.

Most firewall rules are created to whitelist ports on a specific IP address. While opening up a port for a particular IP address will allow communication between the two endpoints to occur, most RESTful web services will make use of several IP addresses that point to geographically different data centers to help ensure maximum uptime and availability. So if your service provider has multiple IP addresses available then be sure to whitelist all of them in your firewall. Not only should you include all available IP addresses in your firewall rules, but you also need to make sure that your application utilizes proper failover code to use another IP address in the event that one responds slowly or becomes unavailable.

It is also recommended that you never hardcode a reference endpoint such as a domain or IP address. In the event of an unexpected network-related failure, a hardcoded endpoint leaves you vulnerable, with no choice but to update your code. Depending on the complexity of your code and your deployment procedure, this could lead to more downtime than necessary. Instead, it is better practice to use an editable configuration location, such as a database or config file, to save your service endpoints. Using an easily editable location means that you can quickly switch to another service endpoint in the event that the primary endpoint is unavailable.

Depending on how your failover code is written, using an external configuration location can also save your application from attempting requests to an unresponsive location. If your application always attempts a call to the primary location before failing over, then it must wait for the primary location to fail before attempting a call to the secondary location. Most default timeouts are around 30 seconds, so your application may be forced to wait 30 seconds before switching to a secondary location; with an editable configuration source, you can easily swap out the bad location for a good one and save your application from any future failures.
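As a simple sketch of this idea (Python; the file name and keys are arbitrary), the endpoints can live in an editable config file instead of the code:

import json
import requests

# endpoints.json can be edited without redeploying code, e.g.:
# {"primary": "https://ws.serviceobjects.com", "backup": "https://wsbackup.serviceobjects.com"}
with open("endpoints.json") as f:
    endpoints = json.load(f)

def call_service(path, params):
    # A short timeout keeps a dead primary from stalling the application.
    try:
        return requests.get(endpoints["primary"] + path, params=params, timeout=3)
    except requests.exceptions.RequestException:
        return requests.get(endpoints["backup"] + path, params=params, timeout=3)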

Overall, here are some basic tips for referencing a web service from a production application:

  • Do not hardcode your reference endpoints.
  • Do not reference by an IP address unless you are restricted behind a firewall. Otherwise always use the fully qualified domain name.
  • If you are behind a restricted firewall then be sure to include all IP Address endpoints if more than one is available.
  • Be sure to include failover code to make use of the available endpoints in the event that one or more may become unavailable.

Follow the above tips to help take full advantage of what your RESTful service provider has to offer and to also help ensure that you are doing everything you can to keep your application running smoothly.


Path and Query String Parameter Calls to a RESTful Web Service

It's important to remember that REST is an architectural style and that it does not have an official standard. As such, the question of how to call a RESTful web service often arises. If the service you are calling supports more than one way of being called, which way should you choose? The short answer is the one that works best for you, and by that I don't simply mean the one you feel most comfortable using (although that is an important factor). For the purpose of this article, we will focus on path parameter calls and how they compare to query string parameter calls, two deceptively similar styles. There are inherent differences between path parameters and query string parameters that you should consider before choosing one over the other.

A RESTful web service should behave the same regardless of how it is called, and as long as the same input values are provided the service will return the same result values every time. So why choose one format over another?

If you are unfamiliar with path parameters, then simply put, a path parameter template is one where the parameter values are embedded in the URL path.

Path Parameter Example:

http://domain/resource/verb/value1/value2/value3

Query String Parameter Example:

http://domain/resource/verb?key1=value1&key2=value2&key3=value3

The above examples are rough, but they get the point across. Notice that in the path parameter example the variable values are embedded in the URL path, whereas in the query string parameter example the variable values are provided in the query string.

Path parameters have gained popularity as being more human readable and easier to understand as the path template lends itself to a natural progression of drilling into data resources. Some services offer path parameters specifically for this reason. The user experience can be likened to having a direct connection to the data resource and it is often useful for simple lookup services where no data transformation/refinement occurs, or is needed. Path parameters can enable a service to be called in a distinctive style and, depending on the functionality of the service being called, the path template can take on many forms.

Here are some common examples:

http://domain/resource/format/noun/verb/value

http://domain/resource/noun/value1/value2

http://domain/resource/noun/verb/value1?format=formatvalue

The above examples add the noun and format values, where the noun can help make the service request more descriptive, intuitive and easier to understand. The format value, when available, is commonly used to tell the service what type of response format to return, typically either json or xml.

This is not to say that a service that is called on using query string parameters cannot be designed to be as descriptive and intuitive as a path parameter service.

Caveats to look out for when using path parameters:

  • Order matters: Unlike query string parameters, the order in which the path parameter values are given must correspond to the format dictated by the URL path. For example, if the service path dictates “/resource/format/noun/verb/value” then you cannot interchange it with “/resource/verb/format/value/noun/”.
  • Path parameter values cannot be omitted: Unlike query string parameters, you cannot omit a value in path parameters, because doing so changes the URL path. The only exception is the final parameter value in the URL path. If the path format is "/resource/verb/value1/value2/value3" and you want to omit value1, then "/resource/verb//value2/value3" will not work, because it is read the same as "/resource/verb/value2/value3": value2 is taken as value1 and value3 as value2, so you have inadvertently omitted value3. If you omit enough values, you will end up with a malformed URL.
  • Reserved HTTP characters: such as ":", "/", "?", "#", "[", "]" and "@". These characters and others are "reserved" in the HTTP protocol to have "special" meaning in the URL syntax, so that they are distinguishable from other data in the URL. If a variable value within the path contains one or more of these reserved characters, it will break the path and generate a malformed request. You can work around reserved characters in query string parameters by URL encoding them, or sometimes by double escaping them, but you cannot in path parameters; see the sketch just after this list.
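The reserved-character caveat is easy to demonstrate with a short sketch (Python's requests library is used here; any HTTP client behaves similarly):

import requests

value = "some/value#1"  # contains the reserved characters "/" and "#"

# As a query string parameter, the client percent-encodes the value for us:
r = requests.get("https://example.com/resource/verb", params={"key1": value})
# Final URL: https://example.com/resource/verb?key1=some%2Fvalue%231

# Embedded directly in the path, the same raw value changes the URL itself:
#   https://example.com/resource/verb/some/value#1
# "/" introduces an extra path segment and "#" starts a fragment,
# producing a malformed request.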

These are just some simple examples of how a path parameter URL request can differ from a query string parameter request. Overall, it is important to note that regardless of where the parameter variable values are placed, either in the path or in the query string, both styles must conform to the HTTP protocol. For some services, path parameters offer a more descriptive and distinct style that enforce a specific way of querying resources, whereas old-fashioned query string parameters offer greater flexibility and ease of use at the cost of being potentially less descriptive and intuitive.


Service Objects Integration Patterns

Request and Reply

Usage: The request and reply pattern utilizes the HTTP protocol to transfer data to Service Objects Web Services, where the data is processed, validated and/or appended to, and then returned to the client.

Example: A custom web form which captures a user’s contact information to be inserted into a CRM or database. In this example, a developer would integrate a POST call to Service Objects on form submission to validate the user’s contact fields prior to inserting data into a CRM or database.

Best Practices:

Timeouts: Whether you are integrating Service Objects Web Services into an application or a web page, implementing the correct client timeout setting is vital to the functioning of the service. If the timeout is too long, application performance or user experience can be degraded. Conversely, setting the timeout too short will not give the service time to perform its function and return a result over the network. A general rule of thumb for client-side timeout settings is around 3 seconds for most Service Objects Web Services. However, some services may require a longer timeout due to the nature of the work being performed. We always recommend consulting with Service Objects Support to get the recommended settings for the service being integrated.

Failover: Having redundancy available to our clients is vital for mission critical systems which cannot afford downtime or lack the ability to process incoming information. Service Objects offers multiple redundant datacenters to meet the demands of these systems. We encourage our clients to make use of these locations in their integration logic in the event of a slowdown or issue at one of our datacenters. Coupling failover integration with proper timeout settings provides a reliable system which ensures zero downtime in critical systems.

HTTPS:
Security is an equally important consideration when transmitting user information over the net. Service Objects strongly recommends that all calls be made using HTTPS. This ensures all client information, including the license key, is encrypted in transport.
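Pulling the three recommendations together, a client-side call might look like the following sketch (Python; the endpoint path is illustrative, and the right timeout for your service should be confirmed with Service Objects Support):

import requests

def validate_contact(params):
    # HTTPS keeps the contact fields and license key encrypted in transit.
    primary = "https://ws.serviceobjects.com/av3/api.svc/GetBestMatches"
    backup = "https://wsbackup.serviceobjects.com/av3/api.svc/GetBestMatches"
    try:
        # A ~3 second client timeout suits most Service Objects services.
        return requests.get(primary, params=params, timeout=3)
    except requests.exceptions.RequestException:
        # The redundant data center keeps the form submission flowing.
        return requests.get(backup, params=params, timeout=3)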

Remote Call In

Usage: The remote call in pattern utilizes the HTTP protocol for Service Objects to make requests to remote applications for available records to be processed. If available, these records are exported for processing and upserted back into the remote application.

Example: A Salesforce application which flags incoming leads to be processed by Service Objects. Service Objects would act on the client’s behalf and periodically check for contacts that are available to be processed. If contacts are marked as available to process, Service Objects would bulk export contacts to process and reinsert into the system as a post processing step.

Best Practices:

Credentials: When acting on the client's behalf to process records, we always recommend creating a user account with API-level credentials that is easily identifiable. This prevents access from being inadvertently revoked by system administrators, which can cause issues in the application workflow.

FTP Batch Upload

Usage: Service Objects provides an SFTP user account to the client for batch uploads to be processed and returned. With each batch file that is processed, a report is included in the package returned to the client with metrics on the data that was processed.

Example: Client X enters into an agreement with Service Objects to batch process records. Service Objects whitelists the client's IP for access to the SFTP account. On demand, a batch file is securely transmitted via the SFTP protocol to Service Objects for processing. Each data file is preprocessed to ensure field mappings are adhered to and records are ready to be processed. Once completed, the processed files are packaged along with a metrics report for Client X to retrieve.

Best Practices:

File Format: Service Objects regularly accepts multiple file formats; however, it is recommended that all files delivered for processing be encoded in standard UTF-8 character encoding with Windows line endings.

Header Format: In the setup process, Service Objects will receive a sample file for processing from an FTP client. It is recommended to provide a header line along with the data to be processed, for simplified identification of the fields to be processed. It is not uncommon to receive input files with more than 50 fields, which can make identification of input fields difficult without a header line. We also recommend keeping the input fields easy to locate within the file: if the input file has fields other than the input fields, the input fields should be grouped together toward the left side of the document if possible.

Column Format: Service Objects recommends that columns be quote-enclosed to ensure that all files are valid CSV. In some cases, address information containing commas can break CSV files, which halts processing until the affected columns are fixed.
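For example, quote-enclosing keeps a comma inside an address from shifting every subsequent column (an illustrative record):

"27 E Cota St, Ste 500","Santa Barbara","CA","93101"

Without the quotes, a CSV parser would read "27 E Cota St" and " Ste 500" as two separate fields, pushing every later column out of place.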