“We have a strategic plan; it’s called doing things.” — Herb Kelleher
Recently I had the need to write to an external database on demand in much the same way as I had been reading from an external database on demand earlier. I was able to hack the Scheduled Import process to fetch a specific record, so my thought was that there must be some kind of equivalent Scheduled Export process that I might be able to bend to my will to satisfy my current requirement. Unfortunately, the Scheduled Export process is based on dropping a file somewhere, not on writing to an external database. You have a lot of options as to what type of file it is, but it is still just a file, and I wanted to write to a database table.
So, then I looked into lighting up the JDBCProbe, which I did without too much difficulty, but after digging into all of its various capabilities, I realized that it really didn’t have that much more to offer than the techniques that I had already been using. You still have to create a Data Source, you still have to configure the job to run, you still have to launch the job, and you still have to monitor the job for completion and successful execution. Since I had already pretty much worked all of that out with the Scheduled Import process, I decided that I would just stick with that.
The difference, of course, was that this time I wanted to write, not read. Fortunately, that problem can be solved with SQL, and nothing else about my original hack really needed to change. Scheduled Imports expect the end result of the SQL execution to be a Result Set, which is the outcome of an SQL query. However, a query is not the only way to produce a Result Set. SQL Insert and Update statements can also produce a Result Set when you use the Output clause. The Output clause returns specified values such as a generated Identity key back in the form of a Result Set. By leveraging the Output clause, you can build an SQL statement that will both update the external database and return a Result Set all in one operation.
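As an illustration, an Insert built around the Output clause might look something like the following. This is only a sketch, assuming a SQL Server style OUTPUT clause and a hypothetical LICENSE_LOG table with a generated Identity key; your actual table and column names will obviously be different:

INSERT INTO LICENSE_LOG
    (LICENSE_NUMBER, STATUS, EXPIRATION_DATE)
OUTPUT
    INSERTED.ID,
    INSERTED.LICENSE_NUMBER,
    INSERTED.STATUS,
    INSERTED.EXPIRATION_DATE
VALUES
    ('ABC123', 'ACTIVE', '2025-12-31')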
This SQL statement will insert the specified values into the external database table, and then return the inserted values, along with the newly generated ID on the external table, back to ServiceNow in the form of a Result Set. To the Scheduled Import job, this response is indistinguishable from the response received from a standard SELECT statement. Now, you still have to have a somewhat useless table on the ServiceNow end of things to receive the returned data, even though all you really wanted to do was to write to a table in a remote database, but that table can serve as a log of all records written out to the external destination.
It is possible that we could have accomplished this without the need for the local ServiceNow table by utilizing the JDBCProbe, but this happens to work, so that’s good enough to satisfy Rule #1.
After hacking up a JDBC type Data Source to pass in a dynamic SQL parameter and then modifying that hack to provide simultaneous update protection, we still have a couple of unresolved issues related to this approach. For one, since we are watching the table and not the scheduled import job, a Record Not Found result and a Something Bad Happened to the Job and it Never Ran result end up looking exactly the same (no new record ever shows up in the table). It would be nice to be able to distinguish one from the other, particularly if you wanted to log an Event in the case of the latter (but not the former!). Another somewhat related issue involves the age of the imported data. Once you retrieve a record from an outside database, there needs to be some criteria for how long you hang on to it and continue to use it before you go back to the outside database and fetch a fresh copy. There are a number of ways to deal with that, which we can review, but first let’s take a look at the Not Found issue.
We already implemented a post-import script in our last modification, and that’s a great place to insert some code to verify that the job ran. After all, the entire purpose of the script is to run when the job is over, so that’s the perfect spot to check to see if you found a record or not. But if we did not find a record, what do we do about it? One approach would be to add a column to the table called Record Found with a type of True/False and a default value of True. Then, in your post-import script, you can grab the key value from the SQL (before you reset it for others to use!), and use it to read the table to see if you found a record. If you didn’t find a record, then you can create a record using the key value and set the Record Found column to False. This way, your process that is spinning endlessly waiting for a record to appear will see the record and get out of the waiting loop, and you will just have to check the Record Found column to see if a record was returned or not. Converting that narrative to code might produce something like this:
// grab the key value from the SQL before resetting it for the next user
var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
var parts = sql.split("'");
var keyValue = parts[1];

// reset the SQL back to its open-for-business state
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();

// if the import did not bring back a matching record, create a placeholder
// record flagged as not found so the waiting process can stop polling
var gr = new GlideRecord('your_table_name');
gr.addQuery('id', keyValue);
gr.query();
if (!gr.next()) {
    gr.initialize();
    gr.setValue('id', keyValue);
    gr.setValue('record_found', false);
    gr.insert();
}
This code actually combines the simultaneous update protection code from our last outing with the new code needed to handle the Record Not Found condition. To complete the solution, of course, the process that reads in this record from the database table will need to check the Record Found column to ensure that there was an actual record returned, and if not, take the appropriate action.
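That check on the reading side might look something like the sketch below, reusing the same table and column names assumed above, with keyValue standing in for whatever key the requesting process is waiting on:

var gr = new GlideRecord('your_table_name');
gr.addQuery('id', keyValue);
gr.query();
if (gr.next()) {
    if (gr.getValue('record_found') == '1') {
        // a real record came back; use the imported values
    } else {
        // the import ran, but no matching record exists in the external database
    }
} else {
    // no record at all yet; either the job is still running or it never ran
}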
As to the question of how long an externally fetched record is valid before reaching back out to the external database and fetching a fresh one, that really depends on the specific use case. If your specific implementation requires up-to-the-minute information, then the simplest solution is to just delete the imported record as soon as you grab the data. That will ensure that the initial query will always come up empty, and you will always launch a new Scheduled Import job to get a fresh copy. If, on the other hand, the data is not that volatile, and will last throughout the day, you could always set up a nightly process to clean out the table overnight. Under this strategy, the first person interested in a specific record would pull it from the external database, and everyone else who comes along afterwards would just use the imported copy and not have to pay the penalty of the wait time for the completion of the Scheduled Import.
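For the up-to-the-minute case, that could be as simple as deleting the local row right after pulling out the values. A minimal sketch, assuming an imported_license_info table such as the one used for the license lookup example, might look like this:

var gr = new GlideRecord('imported_license_info');
gr.addQuery('id', data.license.number);
gr.query();
if (gr.next()) {
    data.license.status = gr.getDisplayValue('status');
    data.license.expiration_date = gr.getDisplayValue('expiration_date');
    gr.deleteRecord(); // remove the local copy so the next request always triggers a fresh import
}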
Yet another approach might be to timestamp all of the records at the time of import, and then compare the timestamp value at read to a specified duration from the current date/time. If the record is too old, then trigger a fetch from the external database. Under this scenario, you wouldn’t necessarily have to ever delete the data in the ServiceNow table, as things would be driven by the timestamp, not the presence of a record. You may still want to schedule some kind of periodic record deletion anyway, though, just to keep from hanging on to records that you may never look at again once they are expired.
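A rough sketch of that timestamp check, assuming the transform map coalesces on the key (so the standard sys_updated_on field reflects the last refresh) and that anything older than an hour is considered stale, might look like this:

var cutoff = new GlideDateTime();
cutoff.addSeconds(-3600); // one hour ago; adjust to whatever shelf life suits your data
var gr = new GlideRecord('imported_license_info');
gr.addQuery('id', data.license.number);
gr.addQuery('sys_updated_on', '>=', cutoff);
gr.query();
if (gr.next()) {
    // the local copy is still fresh enough; use it as-is
    data.license.status = gr.getDisplayValue('status');
    data.license.expiration_date = gr.getDisplayValue('expiration_date');
} else {
    // no record, or the record is too old; launch the Scheduled Import for a fresh copy
}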
Up to this point, we have been talking about leveraging the Scheduled Import capability of ServiceNow to fetch single records, or limited sets of records, on demand based on a single key. The data always flows in one direction: from your internal database into ServiceNow. But what if you have a need to write to an internal database on demand from ServiceNow? Well, that’s an interesting question, but one we will have to address at a later date when we have a little more time …
“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” — John F. Woods
Rewriting the SQL of a JDBC type Data Source was a neat trick for a specific, limited use case, but to use that tactic on a more traditional requirement would require that we fill in a few of the holes left unattended by that simplistic example. How we deal with the local copy of the data is one major consideration. How we deal with multiple, concurrent users is another one. The latter concern is probably the easiest to address, so let’s take that one on first.
The simplest way to support multiple active users is to treat the Data Source like an airplane restroom and put up some kind of Occupied sign that limits access to only one user at a time. In this case, it would actually be more like an Unoccupied sign that signals that the Data Source is available for use. Using our previous example’s SQL, it might look something like this:
SELECT
    STATUS,
    EXPIRATION_DATE
FROM
    LICENSE_MASTER
WHERE
    ID='AVAILABLE'
The presence of the key word AVAILABLE is our cue that the Data Source is open for business. If that word is not present, then we wait our turn. In practice, that might look something like this in Javascript:
var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;

function fetchData() {
    var ds = new GlideRecord('sys_data_source');
    ds.get('name', data.config.dataSource);
    var sql = ds.sql_statement;
    if (sql.indexOf('AVAILABLE') != -1) {
        // the Data Source is open for business; claim it by swapping in our key value
        ds.sql_statement = sql.replace('AVAILABLE', data.license.number);
        ds.update();
        var job = new GlideRecord('scheduled_import_set');
        job.get('name', data.config.jobName);
        gs.executeNow(job);
    } else {
        // someone else has the Data Source; wait a bit and then try again
        if (attempts++ < maxAttempts) {
            gs.sleep(sleepMs);
            fetchData();
        }
    }
}
Once you have replaced the key word AVAILABLE with your own key, everyone else who comes along will no longer find the welcome sign out, and will have to wait their turn to utilize the service. The one thing that you will have to do, however, is to reset the SQL back to its open-for-business state once your import job has completed. That can be handled in the specifications for the import job itself. Near the bottom of the Scheduled Import form, there is a checkbox labeled Execute post-import script. Checking this box opens up a Script input field where you can enter a script that will run once the Scheduled Import job has completed.
To reset the SQL in the Data Source to make it available for the next potential user waiting in line, you can do something like this:
var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
// keep everything up to the first single quote, then put the AVAILABLE key word back
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();
With the post-import script in place, your key value will be safe through the execution of the import job, but as soon as it completes, it will once again be replaced with the key word AVAILABLE to signal to the next person in line that it is their turn in the barrel. Things may take a tad bit longer under this strategy, but at least users will not be stepping on each other’s requests when they attempt to use features implementing this concept. And of course, the most important thing is that it satisfies Rule #1.
“It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something.” — Franklin D. Roosevelt
ServiceNow JDBC type data sources are used primarily for importing data from external databases, usually in bulk via periodically scheduled imports. This is a useful feature that allows you to run standard SQL statements either directly from the ServiceNow cloud, or locally on a specified MID server. Unfortunately, the SQL has to be static, and I have yet to find a way to pass dynamic query parameters to the SQL statement so that I can use a JDBC data source for running parameterized queries.
For a standard bulk import, there is really no need to pass query parameters to the SQL statement, but if you want to fetch a specific record, or obtain a list of transactions related to a specific account, it would be nice to be able to run a prepared statement to which you could pass one or more parameters. There may actually be a way to set that up, but since I really don’t know what I am doing, I haven’t come across anything remotely like that. Still, that’s pretty much what I want to be able to do.
The basic use case that I have in mind is a simple UI Action that brings up a modal dialog box containing data from a foreign database table. Let’s say that I have a JDBC connection to a license database on my internal corporate intranet, and the purpose of the button is to check the current status of the license of the person displayed on the screen. I have the person’s license number as a property of the current record, and I want to pass it to a process that will use that number as a key to read the license record from the JDBC data source. If I could pass that number in as a parameter of the SQL, that would be perfect, but since I don’t see any way to do that, we’ll have to resort to a little ServiceNow hackery …
Like most components of ServiceNow, a JDBC type data source is just a record in a table in the system database. In this case, the relevant table is sys_data_source, and the SQL statement can be found in the property sql_statement. Now, I would not recommend this approach for any function that is used with any regularity or by a large, active audience, but because this particular function is a seldom used, low volume activity, we should be able to get away with it. The little bit of hackery will be to rewrite the SQL statement every time that we use it, injecting a variable value into the stock SQL, saving the data source record, and then launching the job using the updated SQL. The first thing that we will have to do is to create the JDBC data source and configure it with its own SQL statement by setting the Query property to Specific SQL:
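For the license lookup example, that stock SQL might look something like the query below, with a hypothetical hard-coded license number as the key (the actual value doesn’t really matter, since we will be overwriting it every time anyway):

SELECT
    STATUS,
    EXPIRATION_DATE
FROM
    LICENSE_MASTER
WHERE
    ID='ABC123'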
This is the SQL that we will want to modify to insert our dynamic key value in place of the hard-coded key value that is currently in the data source definition. Essentially, we will want to keep everything up to and including that first single quote, and then we can add to that our dynamic value and then a closing single quote to finish things off. We can do that in Javascript with something like this:
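// keep everything up to and including the first single quote,
// then append the dynamic key value and a closing single quote
sql = sql.substring(0, sql.indexOf("'") + 1) + data.license.number + "'";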
Of course, before we do that, we have to read the data source definition record in from the database, then replace the SQL statement value, and then update the record in the database, but that’s all simple GlideRecord manipulation that is the foundation of ServiceNow development. But before we dig into all of that, let’s step back just a little bit and talk about what we need to have in place to make all of this work.
To fetch data from a database on your own local system, you will need more than just a JDBC type data source. You will also need a destination Table, an Import Table, a Transform Map, and a Scheduled Import job that you can launch to make the magic happen. ServiceNow tables and transform maps are pretty vanilla items that you can learn about from a variety of sources, so we won’t spend too much time on that here. You will need to create the table, though, before you set up the scheduled import, as you will need to reference that table (as well as the data source) when you set up the scheduled import job. Oh, and let’s not get overly attached to that word “scheduled” … we really don’t want this thing running on a schedule; we want it running on demand. To create a scheduled import, you do have to define a schedule, but once a year seems like a good enough choice if you are going to have to pick one to declare.
For our purpose, though, we will be launching our scheduled import from a script. Once we import the selected record, we don’t really need to import it again, so our script will start out looking in the table first. If we don’t find a record, or if we do find a record and it is out of date, then we launch the job to go get it from the local source database. As this is an asynchronous process, we might want to monitor the job to see when it has been completed, but for our purposes, the presence of the new record in the database table will tell us the same thing, so we can just launch the job and forget it, and then keep looking in the table for the data that we are waiting for. The entire sequence of events would go something like this:
var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;
var updateRequested = false;

function getLicense() {
    // first, look for the record in the local ServiceNow table
    var gr = new GlideRecord('imported_license_info');
    gr.addQuery('id', data.license.number);
    gr.query();
    if (gr.next()) {
        data.license.status = gr.getDisplayValue('status');
        data.license.expiration_date = gr.getDisplayValue('expiration_date');
    } else {
        if (!updateRequested) {
            // not there yet; rewrite the Data Source SQL with our key and launch the import job
            var ds = new GlideRecord('sys_data_source');
            ds.get('name', data.config.dataSource);
            var sql = ds.sql_statement;
            ds.sql_statement = sql.substring(0, sql.indexOf("'") + 1) + data.license.number + "'";
            ds.update();
            var job = new GlideRecord('scheduled_import_set');
            job.get('name', data.config.jobName);
            gs.executeNow(job);
            updateRequested = true;
        }
        // wait a bit and then check the table again
        if (attempts++ < maxAttempts) {
            gs.sleep(sleepMs);
            getLicense();
        }
    }
}
In this example, we first look to the ServiceNow database table for our info, and if we don’t find it, then we check to see if we have launched the import job or not. If we haven’t, then we grab the data source, update the SQL with the key of the record that we want to read, and then launch the job. Then we wait for a bit and check again. The next time through the process, if we still do not have data in the table, we see that we have already launched the job, so now all we have to do is wait for a bit and check again. Rinse. Repeat. Eventually, we will either have the data that we requested or we will run out of attempts and return empty-handed. Since it is possible that something might go wrong with the import, setting a maximum number of attempts keeps the script from looping through this process forever.
The danger in this approach, of course, is that more than one person will want to run this process at the same time, and one user will rewrite the SQL statement before the other user’s job runs, and then the first user’s SQL will never get a chance to run. That’s a real possibility and one that should not be discounted, but in this particular instance, it was just for an occasionally used convenience feature. The remote possibility of that happening in this case was not significant enough to warrant coding a defensive mitigating strategy.
Later on, I did run into a situation where that would have been a real problem, but that’s a discussion for another time …