Fun with Outbound REST Events

“A good programmer is someone who always looks both ways before crossing a one-way street.”
Doug Linder

A while back I mentioned that ServiceNow Event Management can be used within ServiceNow itself. I explained how all of that could work, but I never really came up with a real-world Use Case that would demonstrate the value of starting to wander down that road. I have code to generate Events in a lot of places that never gets executed, but it is still there, just in case. One place where unwanted things do tend to happen, though, is when interacting with outside resources such as JDBC imports or external REST calls. Here, you are dependent on some outside database or system being up and available, and that’s not always the case, so you need to build in processes that can handle an outage gracefully. This is an excellent place for Event Management to step in, capture the details of the issue, and log it for some kind of investigation or resolution.

So, I thought I should come up with something that anyone can play around with on their own, something that isn’t tied to some proprietary database or internal web service. I started searching for some kind of public REST API, and I stumbled across the Public APIs web site. There is actually a lot of cool stuff here, but I was looking for something relatively simple, and also something that would seem to have some relation to things that go on inside of ServiceNow. After browsing around a bit, I found the US Street Address API, which looked like something that I could use to validate street addresses in the User Profile. That seemed simple enough and applicable enough to serve my purpose, so that settled that.

There are quite a few parts and pieces involved in doing everything that I want to do, so we will just take them on one at a time. Here is my initial list of the things that I think I will need in order to accomplish all of that:

  • Create an Outbound REST Message using the US Street Address API as the end point,
  • Create a Script Include that will encapsulate all of the functions necessary to make the REST call, evaluate the response, log an Event (if needed), and return the results (a rough sketch of this piece appears just after this list),
  • Create a Client Script on the sys_user table to call the Script Include if any component of the User’s address changes and display an error message if the address is not valid,
  • Create an Alert Management Rule to produce an Incident whenever the new Event spawns an Alert,
  • Test everything to make sure that it all works under normal circumstances, and then
  • Intentionally mangle the REST end point to produce a failure, thereby testing the creation of the Event, Alert, and Incident.
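
Just to give a rough sense of where that Script Include piece is headed, here is a minimal sketch of the kind of function that it might contain: make the call to the Outbound REST Message, hand back the response body if the call worked, and log an Event if it did not. Keep in mind that everything here is a placeholder for illustration only; the message name, method name, parameter names, and Event properties are all assumptions that will get fleshed out properly as we build the real components.

var AddressValidationUtil = Class.create();
AddressValidationUtil.prototype = {
    initialize: function() {
    },

    // Call the Outbound REST Message and return the response body, or null on failure.
    // 'US Street Address' and 'get' are assumed names for the message and HTTP method.
    getAddressInfo: function(street, city, state, zip) {
        try {
            var request = new sn_ws.RESTMessageV2('US Street Address', 'get');
            request.setStringParameterNoEscape('street', encodeURIComponent(street));
            request.setStringParameterNoEscape('city', encodeURIComponent(city));
            request.setStringParameterNoEscape('state', encodeURIComponent(state));
            request.setStringParameterNoEscape('zipcode', encodeURIComponent(zip));
            var response = request.execute();
            if (response.getStatusCode() == 200) {
                return response.getBody();
            }
            this._logEvent('REST call returned status ' + response.getStatusCode() + ': ' + response.getErrorMessage());
        } catch (e) {
            this._logEvent('REST call failed: ' + e.message);
        }
        return null;
    },

    // Simplistic Event logger: drop a record directly into em_event so that Event
    // Management can take it from there; the property values used here are illustrative.
    _logEvent: function(description) {
        var event = new GlideRecord('em_event');
        event.initialize();
        event.source = 'AddressValidationUtil';
        event.type = 'Outbound REST failure';
        event.severity = '3';
        event.description = description;
        event.insert();
    },

    type: 'AddressValidationUtil'
};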

The first thing to do, then, will be to create the Outbound REST Message, but before we do that, let’s explore the web service just a little bit to understand what we are working with. To do that, there is a handy little API tester here. This will allow us to try a few things out and see what happens. First, let’s just run one of their provided test cases:

https://us-street.api.smartystreets.com/street-address?auth-id=21102174564513388&candidates=10&match=invalid&street=3901%20SW%20154th%20Ave&street2=&city=Davie&state=FL&zipcode=33331

The API is just a simple HTTP GET, and the response comes back as JSON:

[
  {
    "input_index": 0,
    "candidate_index": 0,
    "delivery_line_1": "3901 SW 154th Ave",
    "last_line": "Davie FL 33331-2613",
    "delivery_point_barcode": "333312613014",
    "components": {
      "primary_number": "3901",
      "street_predirection": "SW",
      "street_name": "154th",
      "street_suffix": "Ave",
      "city_name": "Davie",
      "default_city_name": "Fort Lauderdale",
      "state_abbreviation": "FL",
      "zipcode": "33331",
      "plus4_code": "2613",
      "delivery_point": "01",
      "delivery_point_check_digit": "4"
    },
    "metadata": {
      "record_type": "S",
      "zip_type": "Standard",
      "county_fips": "12011",
      "county_name": "Broward",
      "carrier_route": "C006",
      "congressional_district": "23",
      "rdi": "Commercial",
      "elot_sequence": "0003",
      "elot_sort": "A",
      "latitude": 26.07009,
      "longitude": -80.35535,
      "precision": "Zip9",
      "time_zone": "Eastern",
      "utc_offset": -5,
      "dst": true
    },
    "analysis": {
      "dpv_match_code": "Y",
      "dpv_footnotes": "AABB",
      "dpv_cmra": "N",
      "dpv_vacant": "N",
      "active": "Y"
    }
  }
]

It looks like the response comes in the form of a JSON Array of JSON Objects, and the JSON Objects contain a number of properties, some of which are JSON Objects themselves. This will be useful information when we attempt to parse out the response in our Script Include. Now we should see what happens if we send over an invalid address, but before we do that, we should take a quick peek at the documentation to better understand what may affect the response. One input parameter in particular, match, controls what happens when you send over a bad address. There are two options:

  • strict – The API will return detailed output only if a valid match is found. Otherwise, the API response will be an empty array.
  • invalid – The API will return detailed output for both valid and invalid addresses. To find out if the address is valid, check the dpv_match_code. Values of Y, S, or D indicate a valid address.

The default value in the provided tester is invalid, and that seems to be the appropriate setting for our purposes. Assuming that we will always use that mode, we will need to look for one of the following values in the dpv_match_code property to determine if our address is valid:

Y – Confirmed; entire address is present in the USPS data. To be certain the address is actually deliverable, verify that the dpv_vacant field has a value of N. You may also want to verify that the active field has a value of Y. However, the USPS is often months behind in updating this data point, so use with caution. Some users may prefer not to base any decisions on the active status of an address.
S – Confirmed by ignoring secondary info; the main address is present in the USPS data, but the submitted secondary information (apartment, suite, etc.) was not recognized.
D – Confirmed but missing secondary info; the main address is present in the USPS data, but it is missing secondary information (apartment, suite, etc.).
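
Translating that test into script, the check itself is pretty simple. Here is a minimal sketch, assuming that the raw JSON string returned by the API (in invalid match mode) is already in hand; the function name is just my own invention:

// Returns true if the service response represents a valid address.
// responseBody is the raw JSON string returned by the API in 'invalid' match mode.
function isValidAddress(responseBody) {
    var candidates = JSON.parse(responseBody);
    if (!candidates || candidates.length == 0) {
        return false;                      // empty array: no match at all
    }
    var analysis = candidates[0].analysis || {};
    var code = analysis.dpv_match_code || '';
    return code == 'Y' || code == 'S' || code == 'D';
}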

So, let’s give that a shot and see what happens. Let’s drop the state and zipcode from our original query and give that tester another try.

https://us-street.api.smartystreets.com/street-address?auth-id=21102174564513388&candidates=10&match=invalid&street=3901%20SW%20154th%20Ave&street2=&city=Davie

… which gives us this JSON Array in response:

[
  {
    "input_index": 0,
    "candidate_index": 0,
    "delivery_line_1": "3901 SW 154th Ave",
    "last_line": "Davie",
    "components": {
      "primary_number": "3901",
      "street_predirection": "SW",
      "street_name": "154",
      "street_suffix": "Ave",
      "city_name": "Davie"
    },
    "metadata": {
      "precision": "Unknown"
    },
    "analysis": {
      "dpv_footnotes": "A1",
      "active": "Y",
      "footnotes": "C#"
    }
  }
]

This result doesn’t even have a dpv_match_code property, which is actually kind of interesting, but that would still fail a positive test for the values Y, S, or D, so that would make it invalid, which is what we wanted to see.

OK, I think we know enough now about the way things work that we can start building out our list of components. This is probably a good place to wind things up for this episode, as we can start out next time with the construction process, beginning with our first component, the Outbound REST Message.

Yet Even More Fun with JDBC Data Sources

“We have a strategic plan; it’s called doing things.”
Herb Kelleher

Recently I had the need to write to an external database on demand in much the same way as I had been reading from an external database on demand earlier. I was able to hack the Scheduled Import process to fetch a specific record, so my thought was that there must be some kind of equivalent Scheduled Export process that I might be able to bend to my will to satisfy my current requirement. Unfortunately, the Scheduled Export process is based on dropping a file somewhere, not on writing to an external database. You have a lot of options as to what type of file it is, but it is still just a file, and I wanted to write to a database table.

So, then I looked into lighting up the JDBCProbe, which I did without too much difficulty, but after digging into all of its various capabilities, I realized that it really didn’t have that much more to offer than the techniques that I had already been using. You still have to create a Data Source, you still have to configure the job to run, you still have to launch the job, and you still have to monitor the job for completion and successful execution. Since I had already pretty much worked all of that out with the Scheduled Import process, I decided that I would just stick with that.

The difference, of course, was that this time I wanted to write, not read. Fortunately, that problem can be solved with SQL, and nothing else about my original hack really needed to change. Scheduled Imports expect the end result of the SQL execution to be a Result Set, which is the outcome of an SQL query. However, a query is not the only way to produce a Result Set. SQL INSERT and UPDATE statements can also produce a Result Set when you use an OUTPUT clause (the example below uses SQL Server syntax). The OUTPUT clause returns specified values, such as a generated Identity key, back in the form of a Result Set. By leveraging the OUTPUT clause, you can build an SQL statement that will both update the external database and return a Result Set, all in one operation.

Consider the following example SQL:

INSERT INTO TIME_CARD (
   EMPLOYEE,
   INCIDENT_ID,
   DATE,
   HOURS_WORKED)
OUTPUT
   INSERTED.ID,
   INSERTED.EMPLOYEE,
   INSERTED.INCIDENT_ID,
   INSERTED.DATE,
   INSERTED.HOURS_WORKED
VALUES (
   'snhackery',
   'INC0000456',
   getdate(),
   6.5)

This SQL statement will insert the specified values into the external database table, and then return the inserted values, along with the newly generated ID on the external table, back to ServiceNow in the form of a Result Set. To the Scheduled Import job, this response is indistinguishable from the response received from a standard SELECT statement. Now, you still have to have a somewhat useless table on the ServiceNow end of things to receive the returned data, even though all you really wanted to do was to write to a table in a remote database, but that table can serve as a log of all records written out to the external destination.
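
Strung together with the SQL-rewriting hack from the earlier installments, a write-on-demand request might look roughly like the sketch below. The Data Source name, Scheduled Import name, table, and column values here are all invented purely for the sake of the example:

// Rough sketch: rewrite the Data Source SQL with this record's values, then
// launch the Scheduled Import job that runs it against the external database.
// 'Time Card Export' and 'Time Card Export Job' are made-up names for illustration.
function writeTimeCard(employee, incidentId, hoursWorked) {
    var ds = new GlideRecord('sys_data_source');
    if (ds.get('name', 'Time Card Export')) {
        ds.setValue('sql_statement',
            "INSERT INTO TIME_CARD (EMPLOYEE, INCIDENT_ID, DATE, HOURS_WORKED) " +
            "OUTPUT INSERTED.ID, INSERTED.EMPLOYEE, INSERTED.INCIDENT_ID, INSERTED.DATE, INSERTED.HOURS_WORKED " +
            "VALUES ('" + employee + "', '" + incidentId + "', getdate(), " + hoursWorked + ")");
        ds.update();
        var job = new GlideRecord('scheduled_import_set');
        if (job.get('name', 'Time Card Export Job')) {
            gs.executeNow(job);
        }
    }
}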

It is possible that we could have accomplished this without the need for the local ServiceNow table by utilizing the JDBCProbe, but this happens to work, so that’s good enough to satisfy Rule #1.

Even More Fun with JDBC Data Sources

“When in doubt, use brute force.”
Butler Lampson, Hints for Computer System Design

After hacking up a JDBC type Data Source to pass in a dynamic SQL parameter and then modifying that hack to provide simultaneous update protection, we still have a couple of unresolved issues related to this approach. For one, since we are watching the table and not the scheduled import job, a Record Not Found result and a Something Bad Happened to the Job and it Never Ran result end up looking exactly the same (no new record ever shows up in the table). It would be nice to be able to distinguish one from the other, particularly if you wanted to log an Event in the case of the latter (but not the former!). Another somewhat related issue involves the age of the imported data. Once you retrieve a record from an outside database, there needs to be some criteria for how long you hang on to it and continue to use it before you go back to the outside database and fetch a fresh copy. There are a number of ways to deal with that, which we can review, but first let’s take a look at the Not Found issue.

We already implemented a post-import script in our last modification, and that’s a great place to insert some code to verify that the job ran. After all, the entire purpose of the script is to run when the job is over, so that’s the perfect spot to check to see whether or not you found a record. But if we did not find a record, what do we do about it? One approach would be to add a column to the table called Record Found, with a type of True/False and a default value of True. Then, in your post-import script, you can grab the key value from the SQL (before you reset it for others to use!), and use it to read the table to see if you found a record. If you didn’t find a record, then you can create one using the key value, setting the Record Found column to False. This way, your process that is spinning endlessly waiting for a record to appear will see the record and drop out of the waiting loop, and you will just have to check the Record Found column to see if an actual record was returned or not. Converting that narrative to code might produce something like this:

// Grab the key value out of the SQL before resetting it for the next user
var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
var parts = sql.split("'");
var keyValue = parts[1];

// Put the 'AVAILABLE' sign back up so that others can use the Data Source
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();

// If the import did not bring back a record for this key, create a marker record
// with Record Found set to false so that the waiting process can tell the
// difference between "not found" and "the job never ran"
var gr = new GlideRecord('your_table_name');
gr.addQuery('id', keyValue);
gr.query();
if (!gr.next()) {
    gr.initialize();
    gr.setValue('id', keyValue);
    gr.setValue('record_found', false);
    gr.insert();
}

This code actually combines the simultaneous update protection code from our last outing with the new code needed to handle the Record Not Found condition. To complete the solution, of course, the process that reads in this record from the database table will need to check the Record Found column to ensure that there was an actual record returned, and if not, take the appropriate action.
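
On that reading side, the check might look something like the sketch below: wait for the record to show up, see whether it represents real data or just the not-found marker, and if nothing ever shows up at all, assume the job itself failed and log an Event. The table name, Event properties, and retry settings are all placeholders for illustration:

var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;

function readResult(keyValue) {
    var gr = new GlideRecord('your_table_name');
    gr.addQuery('id', keyValue);
    gr.query();
    if (gr.next()) {
        if (gr.getValue('record_found') == '1') {
            return gr;                     // genuine data came back from the import
        }
        return null;                       // the job ran, but no matching record exists
    }
    if (attempts++ < maxAttempts) {
        gs.sleep(sleepMs);
        return readResult(keyValue);
    }
    // The record never appeared, so the job itself probably never ran;
    // this is the spot to log an Event for follow-up (values are illustrative)
    var event = new GlideRecord('em_event');
    event.initialize();
    event.source = 'Scheduled Import';
    event.type = 'Import job failure';
    event.severity = '3';
    event.description = 'No response received for key ' + keyValue;
    event.insert();
    return null;
}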

As to the question of how long an externally fetched record is valid before reaching back out to the external database and fetching a fresh one, that really depends on the specific use case. If your specific implementation requires up-to-the-minute information, then the simplest solution is to just delete the imported record as soon as you grab the data. That will ensure that the initial query will always come up empty, and you will always launch a new Scheduled Import job to get a fresh copy. If, on the other hand, the data is not that volatile, and will last throughout the day, you could always set up a nightly process to clean out the table overnight. Under this strategy, the first person interested in a specific record would pull it from the external database, and everyone else who comes along afterwards would just use the imported copy and not have to pay the penalty of the wait time for the completion of the Scheduled Import.

Yet another approach might be to timestamp all of the records at the time of import, and then compare the timestamp value at read to a specified duration from the current date/time. If the record is too old, then trigger a fetch from the external database. Under this scenario, you wouldn’t necessarily have to ever delete the data in the ServiceNow table, as things would be driven by the timestamp, not the presence of a record. You may still want to schedule some kind of periodic record deletion anyway, though, just to keep from hanging on to records that you may never look at again once they are expired.
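
As a sketch of that last approach, the age test could be as simple as comparing the record’s sys_created_on value against whatever shelf life makes sense for your data before deciding whether to trust the local copy; the 24-hour figure below is just an example:

// Returns true if the imported record is still fresh enough to use.
function isFresh(gr, maxAgeHours) {
    var created = new GlideDateTime(gr.getValue('sys_created_on'));
    var ageMs = new GlideDateTime().getNumericValue() - created.getNumericValue();
    return ageMs < (maxAgeHours * 60 * 60 * 1000);
}

// ... then, in the waiting loop, something like:
// if (gr.next() && isFresh(gr, 24)) { use the local copy } else { launch the import }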

Up to this point, we have been talking about leveraging the Scheduled Import capability of ServiceNow to fetch single records, or limited sets of records, on demand based on a single key. The data always flows in one direction: from your internal database into ServiceNow. But what if you have a need to write to an internal database on demand from ServiceNow? Well, that’s an interesting question, but one we will have to address at a later date when we have a little more time …

More Fun with JDBC Data Sources

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”
John F. Woods

Rewriting the SQL of a JDBC type Data Source was a neat trick for a specific, limited use case, but to use that tactic on a more traditional requirement would require that we fill in a few of the holes left unattended by that simplistic example. How we deal with the local copy of the data is one major consideration. How we deal with multiple, concurrent users is another one. The latter concern is probably the easiest to address, so let’s take that one on first.

The simplest way to support multiple active users is to treat the Data Source like an airplane restroom and put up some kind of Occupied sign that limits access to only one user at a time. In this case, it would actually be more like an Unoccupied sign that signals that the Data Source is available for use. Using our previous example’s SQL, it might look something like this:

SELECT
   STATUS,
   EXPIRATION_DATE
FROM
   LICENSE_MASTER
WHERE
   ID='AVAILABLE'

The presence of the key word AVAILABLE is our cue that the Data Source is open for business. If that word is not present, then we wait our turn. In practice, that might look something like this in JavaScript:

var sleepMs = 1000;      // same retry settings as in the original example
var maxAttempts = 30;
var attempts = 0;

function fetchData() {
    var ds = new GlideRecord('sys_data_source');
    ds.get('name', data.config.dataSource);
    var sql = ds.getValue('sql_statement');
    if (sql.indexOf('AVAILABLE') != -1) {
        // The Data Source is open for business: claim it and launch the import job
        ds.setValue('sql_statement', sql.replace('AVAILABLE', data.license.number));
        ds.update();
        var job = new GlideRecord('scheduled_import_set');
        job.get('name', data.config.jobName);
        gs.executeNow(job);
    } else {
        // Someone else has the Data Source right now: wait a bit and try again
        if (attempts++ < maxAttempts) {
            gs.sleep(sleepMs);
            fetchData();
        }
    }
}

Once you have replaced the key word AVAILABLE with your own key, everyone else who comes along will no longer find the welcome sign out, and will have to wait their turn to utilize the service. The one thing that you will have to do, however, is to reset the SQL back to its open-for-business state once your import job has completed. That can be handled in the specifications for the import job itself. Near the bottom of the Scheduled Import form, there is a check-box labeled Execute post-import script. Checking this box opens up a Script input field where you can enter a script that will run once the Scheduled Import job has completed:

Post-import Script option

To reset the SQL in the Data Source to make it available for the next potential user waiting in line, you can do something like this:

var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();

With the post-import script in place, your key value will be safe through the execution of the import job, but as soon as the job completes, it will once again be replaced with the key word AVAILABLE to signal to the next person in line that it is their turn in the barrel. Things may take a tad bit longer under this strategy, but at least users will not be stepping on each other’s requests when they attempt to use features implementing this concept. And of course, the most important thing is that it satisfies Rule #1.

Fun with JDBC Data Sources

“It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something.”
Franklin D. Roosevelt

ServiceNow JDBC type data sources are used primarily for importing data from external databases, usually in bulk via periodically scheduled imports. This is a useful feature that allows you to run standard SQL statements either directly from the ServiceNow cloud, or locally on a specified MID server. Unfortunately, the SQL has to be static, and I have yet to find a way to pass dynamic query parameters to the SQL statement so that I can use a JDBC data source for running parameterized queries.

For a standard bulk import, there is really no need to pass query parameters to the SQL statement, but if you want to fetch a specific record, or obtain a list of transactions related to a specific account, it would be nice to be able to run a prepared statement to which you could pass one or more parameters. There may actually be a way to set that up, but since I really don’t know what I am doing, I have no knowledge of anything remotely like that. Still, that’s pretty much what I want to be able to do.

The basic use case that I have in mind is a simple UI Action that brings up a modal dialog box containing data from a foreign database table. Let’s say that I have a JDBC connection to a license database on my internal corporate intranet, and the purpose of the button is to check the current status of the license of the person displayed on the screen. I have the person’s license number as a property of the current record, and I want to pass it to a process that will use that number as a key to read the license record from the JDBC data source. If I could pass that number in as a parameter of the SQL, that would be perfect, but since I don’t see any way to do that, we’ll have to resort to a little ServiceNow hackery …

Like most components of ServiceNow, a JDBC type data source is just a record in a table in the system database. In this case, the relevant table is sys_data_source, and the SQL statement can be found in the sql_statement property. Now, I would not recommend this approach for any function that is used with any regularity or by a large, active audience, but because this particular function is a seldom used, low volume activity, we should be able to get away with it. The little bit of hackery will be to rewrite the SQL statement every time that we use it, injecting a variable value into the stock SQL, saving the data source record, and then launching the job using the updated SQL. The first thing that we will have to do is to create the JDBC data source and configure it with its own SQL statement by setting the Query property to Specific SQL:

JDBC Data Source with Specific SQL

This is the SQL that we will want to modify to insert our dynamic key value in place of the hard-coded key value that is currently in the data source definition. Essentially, we will want to keep everything up to and including that first single quote, and then we can add to that our dynamic value and then a closing single quote to finish things off. We can do that in Javascript with something like this:

sql = sql.substring(0, sql.indexOf("'") + 1) + keyValue + "'";

Of course, before we do that, we have to read the data source definition record in from the database, then replace the SQL statement value, and then update the record in the database, but that’s all simple GlideRecord manipulation that is the foundation of ServiceNow development. But before we dig into all of that, let’s step back just a little bit and talk about what we need to have in place to make all of this work.

To fetch data from a database on your own local system, you will need more than just a JDBC type data source. You will also need a destination Table, an Import Table, a Transform Map, and a Scheduled Import job that you can launch to make the magic happen. ServiceNow tables and transform maps are pretty vanilla items that you can learn about from a variety of sources, so we won’t spend too much time on that here. You will need to create the table, though, before you set up the scheduled import, as you will need to reference that table (as well as the data source) when you set up the scheduled import job. Oh, and let’s not get too overly attached to that word “scheduled” … we really don’t want this thing running on a schedule; we want it running on demand. To create a scheduled import, you do have to define a schedule, but once a year seems like a good enough choice for a schedule if you are going to have to pick one to declare.

For our purpose, though, we will be launching our scheduled import from a script. Once we import the selected record, we don’t really need to import it again, so our script will start out looking in the table first. If we don’t find a record, or if we do find a record and it is out of date, then we launch the job to go get it from the local source database. As this is an asynchronous process, we might want to monitor the job to see when it has been completed, but for our purposes, the presence of the new record in the database table will tell us the same thing, so we can just launch the job and forget it, and then keep looking in the table for the data that we are waiting for. The entire sequence of events would go something like this:

var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;
var updateRequested = false;

function getLicense() {
    // First, check to see if we already have the data in our local table
    var gr = new GlideRecord('imported_license_info');
    gr.addQuery('id', data.license.number);
    gr.query();
    if (gr.next()) {
        data.license.status = gr.getDisplayValue('status');
        data.license.expiration_date = gr.getDisplayValue('expiration_date');
    } else {
        if (!updateRequested) {
            // Rewrite the Data Source SQL with our key value and launch the import job
            var ds = new GlideRecord('sys_data_source');
            ds.get('name', data.config.dataSource);
            var sql = ds.getValue('sql_statement');
            ds.setValue('sql_statement', sql.substring(0, sql.indexOf("'") + 1) + data.license.number + "'");
            ds.update();
            var job = new GlideRecord('scheduled_import_set');
            job.get('name', data.config.jobName);
            gs.executeNow(job);
            updateRequested = true;
        }
        // Wait a bit, then check the table again (up to maxAttempts times)
        if (attempts++ < maxAttempts) {
            gs.sleep(sleepMs);
            getLicense();
        }
    }
}

In this example, we first look to the ServiceNow database table for our info, and if we don’t find it, then we check to see if we have launched the import job or not. If we haven’t, then we grab the data source, update the SQL with the key of the record that we want to read, and then launch the job. Then we wait for a bit and check again. The next time through the process, if we still do not have data in the table, we see that we have already launched the job, so now all we have to do is wait for a bit and check again. Rinse. Repeat. Eventually, we will either have the data that we requested or we will run out of attempts and return empty-handed. Since it is possible that something might go wrong with the import, setting a maximum number of attempts keeps the script from looping forever.

The danger in this approach, of course, is that more than one person will want to run this process at the same time, and one user will rewrite the SQL statement before the other user’s job runs, and then the first user’s SQL will never get a chance to run. That’s a real possibility and one that should not be discounted, but in this particular instance, it was just for an occasionally used convenience feature. The remote possibility of that happening in this case was not significant enough to warrant coding a defensive mitigating strategy.

Later on, I did run into a situation where that would have been a real problem, but that’s a discussion for another time …