More Fun with JDBC Data Sources

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”
John F. Woods

Rewriting the SQL of a JDBC type Data Source was a neat trick for a specific, limited use case, but to use that tactic on a more traditional requirement would require that we fill in a few of the holes left unattended by that simplistic example. How we deal with the local copy of the data is one major consideration. How we deal with multiple, concurrent users is another one. The latter concern is probably the easiest to address, so let’s take that one on first.

The simplest way to support multiple active users is to treat the Data Source like an airplane restroom and put up some kind of Occupied sign that limits access to only one user at a time. In this case, it would actually be more like an Unoccupied sign that signals that the Data Source is available for use. Using our previous example’s SQL, it might look something like this:

SELECT
   STATUS,
   EXPIRATION_DATE
FROM
   LICENSE_MASTER
WHERE
   ID='AVAILABLE'

The presence of the key word AVAILABLE is our cue that the Data Source is open for business. If that word is not present, then we wait our turn. In practice, that might look something like this in JavaScript:

var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;

function fetchData() {
    // Grab the Data Source record and check the current state of its SQL
    var ds = new GlideRecord('sys_data_source');
    ds.get('name', data.config.dataSource);
    var sql = ds.getValue('sql_statement');
    if (sql.indexOf('AVAILABLE') != -1) {
        // The welcome sign is out: claim the Data Source with our own key ...
        ds.sql_statement = sql.replace('AVAILABLE', data.license.number);
        ds.update();
        // ... and launch the Scheduled Import job to go get the data
        var job = new GlideRecord('scheduled_import_set');
        job.get('name', data.config.jobName);
        gs.executeNow(job);
    } else {
        // Someone else has the Data Source; wait a bit and then try again
        if (attempts++ < maxAttempts) {
            gs.sleep(sleepMs);
            fetchData();
        }
    }
}

Once you have replaced the key word AVAILABLE with your own key, everyone else who comes along will no longer find the welcome sign out, and will have to wait their turn to utilize the service. The one thing that you will have to do, however, is to reset the SQL back to its open-for-business state once your import job has completed. That can be handled in the specifications for the import job itself. Near the bottom of the Scheduled Import form, there is a check-box labeled Execute post-import script. Checking this box opens up a Script input field where you can enter a script that will run once the Scheduled Import job has completed:

Post-import Script option

To reset the SQL in the Data Source to make it available for the next potential user waiting in line, you can do something like this:

// Runs once the Scheduled Import job completes; import_set references the import set just processed
var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
// Put the welcome sign back out for the next user in line
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();

With the post-import script in place, your key value will be safe through the execution of the import job, but as soon as the job completes, it will once again be replaced with the key word AVAILABLE to signal to the next person in line that it is their turn in the barrel. Things may take a tad longer under this strategy, but at least users will not be stepping on each other’s requests when they attempt to use features implementing this concept. And of course, the most important thing is that it satisfies Rule #1.

Fun with JDBC Data Sources

“It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something.”
Franklin D. Roosevelt

ServiceNow JDBC type data sources are used primarily for importing data from external databases, usually in bulk via periodically scheduled imports. This is a useful feature that allows you to run standard SQL statements either directly from the ServiceNow cloud, or locally on a specified MID server. Unfortunately, the SQL has to be static, and I have yet to find a way to pass dynamic query parameters to the SQL statement so that I can use a JDBC data source for running parameterized queries.

For a standard bulk import, there is really no need to pass query parameters to the SQL statement, but if you want to fetch a specific record, or obtain a list of transactions related to a specific account, it would be nice to be able to run a prepared statement to which you could pass one or more parameters. There may actually be a way to set that up, but since I really don’t know what I am doing, I have no knowledge of anything remotely like that. Still, that’s pretty much what I want to be able to do.

The basic use case that I have in mind is a simple UI Action that brings up a modal dialog box containing data from a foreign database table. Let’s say that I have a JDBC connection to a license database on my internal corporate intranet, and the purpose of the button is to check the current status of the license of the person displayed on the screen. I have the person’s license number as a property of the current record, and I want to pass it to a process that will use that number as a key to read the license record from the JDBC data source. If I could pass that number in as a parameter of the SQL, that would be perfect, but since I don’t see any way to do that, we’ll have to resort to a little ServiceNow hackery …

Like most components of ServiceNow, a JDBC type data source is just a record in a table in the system database. In this case, the relevant table is sys_data_source, and the SQL statement can be found in the property sql_statement. Now, I would not recommend this approach for any function that is used with any regularity or by a large, active audience, but because this particular function is a seldom-used, low-volume activity, we should be able to get away with it. The little bit of hackery will be to rewrite the SQL statement every time that we use it, injecting a variable value into the stock SQL, saving the data source record, and then launching the job using the updated SQL. The first thing that we will have to do is to create the JDBC data source and configure it with its own SQL statement by setting the Query property to Specific SQL:

JDBC Data Source with Specific SQL

This is the SQL that we will want to modify to insert our dynamic key value in place of the hard-coded key value that is currently in the data source definition. Essentially, we will want to keep everything up to and including that first single quote, then append our dynamic value and a closing single quote to finish things off. We can do that in JavaScript with something like this:

sql = sql.substring(0, sql.indexOf("'") + 1) + keyValue + "'";
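
To make the effect of that one line a little more concrete, here is what it would do to a sample SQL statement (both license numbers below are made up purely for illustration):

var keyValue = 'ABC-789';
var sql = "SELECT STATUS, EXPIRATION_DATE FROM LICENSE_MASTER WHERE ID='12345'";
// Keep everything through the first single quote, then append the new key and a closing quote
sql = sql.substring(0, sql.indexOf("'") + 1) + keyValue + "'";
// sql is now: SELECT STATUS, EXPIRATION_DATE FROM LICENSE_MASTER WHERE ID='ABC-789'

Note that this only works because the quoted key value is the very last thing in the SQL statement; everything after that first single quote simply gets thrown away and rebuilt.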

Of course, before we do that, we have to read the data source definition record in from the database, then replace the SQL statement value, and then update the record in the database, but that’s all simple GlideRecord manipulation that is the foundation of ServiceNow development. But before we dig into all of that, let’s step back just a little bit and talk about what we need to have in place to make all of this work.
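
In case it helps to see those pieces in one place before we get to the full example below, here is a bare-bones sketch of that read, replace, and update sequence (the Data Source name and key value here are just placeholders):

var keyValue = 'ABC-789';
var ds = new GlideRecord('sys_data_source');
if (ds.get('name', 'License Data Source')) {
    var sql = ds.getValue('sql_statement');
    ds.setValue('sql_statement', sql.substring(0, sql.indexOf("'") + 1) + keyValue + "'");
    ds.update();
}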

To fetch data from a database on your own local system, you will need more than just a JDBC type data source. You will also need a destination Table, an Import Table, a Transform Map, and a Scheduled Import job that you can launch to make the magic happen. ServiceNow tables and transform maps are pretty vanilla items that you can learn about from a variety of sources, so we won’t spend too much time on that here. You will need to create the table, though, before you set up the scheduled import, as you will need to reference that table (as well as the data source) when you set up the scheduled import job. Oh, and let’s not get too attached to that word “scheduled” … we really don’t want this thing running on a schedule; we want it running on demand. To create a scheduled import, you do have to define a schedule, but once a year seems like a good enough choice if you are going to have to pick one to declare.

For our purpose, though, we will be launching our scheduled import from a script. Once we import the selected record, we don’t really need to import it again, so our script will start out looking in the table first. If we don’t find a record, or if we do find a record and it is out of date, then we launch the job to go get it from the local source database. As this is an asynchronous process, we might want to monitor the job to see when it has been completed, but for our purposes, the presence of the new record in the database table will tell us the same thing, so we can just launch the job and forget it, and then keep looking in the table for the data that we are waiting for. The entire sequence of events would go something like this:

var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;
var updateRequested = false;

function getLicense() {
	// Check the ServiceNow table first; the record may already have been imported
	var gr = new GlideRecord('imported_license_info');
	gr.addQuery('id', data.license.number);
	gr.query();
	if (gr.next()) {
		data.license.status = gr.getDisplayValue('status');
		data.license.expiration_date = gr.getDisplayValue('expiration_date');
	} else {
		if (!updateRequested) {
			// Rewrite the Data Source SQL to select the record that we want ...
			var ds = new GlideRecord('sys_data_source');
			ds.get('name', data.config.dataSource);
			var sql = ds.getValue('sql_statement');
			ds.sql_statement = sql.substring(0, sql.indexOf("'") + 1) + data.license.number + "'";
			ds.update();
			// ... and then launch the Scheduled Import job to go fetch it
			var job = new GlideRecord('scheduled_import_set');
			job.get('name', data.config.jobName);
			gs.executeNow(job);
			updateRequested = true;
		}
		// Wait a bit and then look in the table again
		if (attempts++ < maxAttempts) {
			gs.sleep(sleepMs);
			getLicense();
		}
	}
}

In this example, we first look to the ServiceNow database table for our info, and if we don’t find it, then we check to see if we have launched the import job or not. If we haven’t, then we grab the data source, update the SQL with the key of the record that we want to read, and then launch the job. Then we wait for a bit and check again. The next time through the process, if we still do not have data in the table, we see that we have already launched the job, so now all we have to do is wait for a bit and check again. Rinse. Repeat. Eventually, we will either have the data that we requested or we will run out of attempts and return empty-handed. Since it is always possible that something might go wrong with the import, setting a maximum number of attempts keeps the script from looping through this process forever.
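
For any of this to work, of course, the script needs to know which Data Source and which Scheduled Import job to use, as well as the license number that we are after. Assuming this code lives in something like a Service Portal widget’s server script (which is where the data object convention comes from), the set-up might look something like this (the names here are just placeholders for your own artifacts):

// Hypothetical configuration -- substitute the names of your own records
data.config = {
    dataSource: 'License Data Source',   // Name of the JDBC type Data Source
    jobName: 'License Data Import'       // Name of the Scheduled Import job
};
data.license = {
    number: '12345'   // in practice, this would come from the record on the screen
};
getLicense();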

The danger in this approach, of course, is that more than one person will want to run this process at the same time, one user will rewrite the SQL statement before the other user’s job runs, and then the first user’s SQL will never get a chance to run. That’s a real possibility and one that should not be discounted, but in this particular instance, this was just an occasionally used convenience feature. The remote possibility of that happening was not significant enough to warrant coding a defensive mitigation strategy.

Later on, I did run into a situation where that would have been a real problem, but that’s a discussion for another time.