Web services are great tools for completing tasks that you would otherwise not have the resources to complete on your own. They can also be relatively quick, returning a response in tenths of a second. While a tenth of a second may seem fast, web services are generally regarded as slow and are not commonly considered when performing large batch jobs. A batch may consist of millions of records, and if each web request took a tenth of a second, one million requests would take approximately 28 hours to complete. Every millisecond counts in a large batch, so it is not uncommon for web services to be regarded as bottlenecks. However, with the right preparation and integration, calling a web service is little different from calling a local database. You may be surprised to discover that the real bottlenecks lie in areas you had not previously considered.
Web services rely on an internet connection, normally communicating via HTTP or HTTPS on ports 80 and 443, respectively. Large batch processes typically run on designated servers with elevated security, and it is not uncommon for a network admin to lock down a server so that it cannot access the internet. Before a large run, confirm that the machine calling the web service is allowed to reach the internet and that the network is in good condition.
Depending on your platform and how your environment is configured, your application may be performing a DNS lookup for every web request, regardless of the DNS time-to-live (TTL). Depending on how your local DNS resolver is configured, a DNS lookup typically takes between 10 and 50 milliseconds. There is no need to perform one for every single request. Instead, take advantage of the DNS cache and the TTL so that a lookup is only performed after the TTL has expired.
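As a rough illustration, the caching behavior described above can be sketched in Python. The `DnsCache` class and its injectable resolver are hypothetical constructs for this sketch; many platforms already provide equivalent caching at the OS or framework level:

```python
import socket
import time

class DnsCache:
    """Illustrative TTL-based DNS cache.

    The resolver is injectable so the cache logic can be exercised
    without touching the network; it defaults to socket.gethostbyname.
    """

    def __init__(self, ttl_seconds=300, resolver=socket.gethostbyname):
        self.ttl = ttl_seconds
        self.resolver = resolver
        self._cache = {}  # hostname -> (ip, expiration timestamp)

    def resolve(self, hostname):
        entry = self._cache.get(hostname)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]            # cache hit: no lookup performed
        ip = self.resolver(hostname)   # cache miss: one real lookup
        self._cache[hostname] = (ip, now + self.ttl)
        return ip
```

With this in place, a batch that makes a million requests to one host performs a handful of lookups per TTL window instead of a million.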
Most platforms provide connection pools to perform requests quickly. Keep in mind, though, that the number of connections in a pool can be exhausted, so always close and dispose of your connections when you are finished with them. A connection that is not closed remains open and unusable until it is returned to the pool, which can take 30 seconds per connection on some frameworks; your application will be forced to wait until a connection becomes available, or worse, it may crash. The problem with connection leaks is that they do not surface right away. They commonly appear unexpectedly after your batch job has been running for a long time, which can mean that all of the work performed up to the point of the error is lost, leaving you empty-handed and possibly behind schedule. Connection leaks are not limited to web service calls, either; they can occur with any type of connection, such as database calls. Be sure to check your entire application for potential connection leaks, not just the web service calls.
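A minimal Python sketch of the close-promptly discipline, using sqlite3 purely as a stand-in for any pooled resource (the `jobs` table and its schema are hypothetical):

```python
import sqlite3
from contextlib import closing

def fetch_pending(db_path):
    # The "with" blocks guarantee the cursor and connection are closed
    # even if an exception occurs mid-batch, which is exactly the
    # discipline that prevents connection leaks.
    with closing(sqlite3.connect(db_path)) as conn:
        with closing(conn.cursor()) as cur:
            cur.execute("SELECT id FROM jobs WHERE done = 0")
            return [row[0] for row in cur.fetchall()]
```

The same shape applies to HTTP clients: acquire inside a scope that guarantees release, rather than relying on the pool to reclaim the connection after a timeout.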
Many batches are performed against large datasets. Make sure your tables are designed and indexed to support fast read and insert times. In general, inserts are faster than updates with WHERE clauses, so avoid UPDATE commands where possible. You cannot really blame a web service for being slow if your local database queries are slower than the web service call. When processing a large record set, it is sometimes faster to load part of the data into memory, call the web service as you iterate through the in-memory record set, store the results in memory, and then, once the iteration is done, insert the results in bulk into your destination table.
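The iterate-in-memory, insert-in-bulk pattern might look like the following Python sketch; the `results` table and the `call_service` function are hypothetical stand-ins for your destination table and web service call:

```python
import sqlite3

def process_batch(records, call_service, conn):
    """Call the service per record, buffer the responses in memory,
    then write them in one bulk insert instead of one INSERT per row."""
    results = []
    for rec_id, payload in records:          # in-memory record set
        results.append((rec_id, call_service(payload)))
    conn.executemany(
        "INSERT INTO results (id, response) VALUES (?, ?)", results)
    conn.commit()
    return len(results)
```

For very large batches you would flush the buffer in chunks rather than holding every result in memory, but the principle is the same: amortize the write cost instead of paying it per record.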
Web services accept simultaneous requests, so instead of processing one record at a time, you can open multiple simultaneous connections and complete a batch in a fraction of the time. Most platforms allow asynchronous requests, which can process multiple web requests at the same time. An asynchronous request uses a thread other than the main program thread, and therefore consumes additional resources. In general, most applications run in a single worker process. Depending on the platform, a single worker process may have between 12 and 24 threads at its disposal, with the number of threads configurable or dynamically managed; it is also influenced by the number of CPU cores available. It is important not to spawn too many asynchronous requests, as doing so can degrade the performance of your application, your machine, and other areas such as your network connection. Test your application with a small number of simultaneous requests first and work your way up, evaluating the performance of your application and the health of your machine at each step. In general, most batch jobs can be processed quickly with 10, 20, or even 40 simultaneous requests. Larger batches may require 100 or more, but be aware that doing so could expose deficiencies in other areas of your program, local machine, and/or database.
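One way to cap the number of simultaneous requests is a fixed-size thread pool. In this Python sketch, `call_service` is a hypothetical stand-in for your per-record web call, and `max_workers` is the knob you start small and tune upward:

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(items, call_service, max_workers=10):
    # The pool never runs more than max_workers requests at once.
    # Start with a small value (e.g. 10) and raise it only after
    # measuring your application, machine, and network at each step.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order, so results line up with items.
        return list(pool.map(call_service, items))
```

Because the pool size is explicit, scaling a test run from 10 to 20 to 40 simultaneous requests is a one-argument change.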
Good application design requires careful consideration in areas that are not always immediately apparent. The points above were aimed at web services, but they apply just as easily to any scenario that requires calling something outside of local memory.