Hacking the Scripted REST API to process a form

“Try something new each day. After all, we’re given life to find it out. It doesn’t last forever.”
Ruth Gordon

One of our older systems includes a form through which you could report issues, and when you filled out the form and submitted it, it would send an e-mail to the support team. Pretty cool stuff back in the day, but the procedure bypasses the Service Desk and hides the activity from our support statistics because no one ever opens up an Incident. They just quietly resolve the issue and move on without a trace. The people who like to have visibility into those kinds of activities are not really too keen on these little side deals that allow certain groups to fly below the radar. So the question arose as to whether or not we could keep the form, with which everyone was comfortable and familiar, but have it create an Incident rather than send an e-mail.

Well, the first thing that came to mind was to just send the existing e-mail to the production instance and then set up an inbound mail processor to turn the e-mail into an Incident. The problem with that approach, though, was that the Incident was created off-line, and by that time, you had no way to inform the user that the Incident was successfully created or to give them the ID or some kind of handle to pull it up and check on the progress. What would really be nice would be to be able to simply POST the form to ServiceNow and have it respond back with an HTML thank you page. ServiceNow is not really set up to be a third-party site forms processor, though, so that really didn’t seem to be a feasible concept.

But, then again …

ServiceNow does have the Scripted REST API, but that is built for Web Services, not user interaction. Still, with a little creative tweaking maybe we could actually fool it into taking a form POST and responding with an HTML page. That would actually be interesting. And as it turns out, it wasn’t all that hard to do.

To make our example relatively simple and easy to follow, let’s just build a nice clean HTML page that contains nothing but our example legacy form:

Simple stand-alone input form for demonstration purposes

This clears out all of the window dressing, headers, footers, and other clutter and just gets down to the form itself. None of that other stuff has any relevance to what we are trying to do here, so we just want a simple clean slate without all of the distractions. Here is the entire HTML code for the page:

<html>
 <head>
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-9aIt2nRpC12Uk9gS9baDl411NQApFmC26EwAOH8WgZl5MYYxFfc+NcPb1dKGj7Sk" crossorigin="anonymous">
  <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6QL9UvYjZE3Ipu6Tp75j7Bh/kR0JKI" crossorigin="anonymous"></script>
 </head>
 <body>
  <form action="/some/legacy/form/handler" method="post">
   <div style="padding: 25px;">
    <h2>Legacy Support Form</h2>
    <div style="margin-top: 10px;">
     <label for="name">Please enter your name:</label>
    </div>
    <div>
      <input name="name" size="64"/>
     </div>
    <div style="margin-top: 10px;">
     <label for="email">Please enter your email address:</label>
    </div>
    <div>
     <input name="email" size="64"/>
    </div>
    <div style="margin-top: 10px;">
     <label for="title">Please enter a brief statement describing the issue:</label>
    </div>
    <div>
     <input name="title" size="64"/>
    </div>
    <div style="margin-top: 10px;">
     <label for="description">Please describe the problem in detail:</label>
    </div>
    <div>
     <textarea name="description" cols="62" rows="5"></textarea>
    </div>
    <div style="margin-top: 20px;">
     <input type="submit" value="Submit Problem Report"/>
    </div>
   </div>
   </form>
 </body>
</html>

The idea here is to now take the existing form handler, as specified in the action attribute of the form tag, and replace it with a URL for a ServiceNow “web service” that we will create using the Scripted REST API tools. That is the only change that we want to make on this existing page. Everything else should look and behave exactly as it did before; we are just pointing the form to a new target for processing the user input. So let’s build that target.

To begin, select the Scripted REST APIs option on the left-hand sidebar menu, which will bring up the list of all of the existing Scripted REST APIs. From there, click on the New button, which will take you to a blank form on which you can start entering the details about your new Scripted REST API.

Initial Scripted REST API data entry form

Enter the name of your new Scripted REST API along with the API ID, which will become a component in the eventual URL that will take you to your new web service. Once you have saved these initial values, you will be returned to the list, and your newly created API will appear on the list. Select it, and you will be taken to an expanded form where you can now enter the rest of the information.

Full Scripted REST API data entry form

Here is one of the secrets to our little hack: you need to check both of the override boxes so that you can change the MIME type for the data that will be flowing in both directions. For the data that is coming in, you will want to enter application/x-www-form-urlencoded. These are the form fields coming in from the input form. For the data going out, you will want to enter text/html. This is the response page that will go back to the browser and be rendered for display to the user. Form fields come into our new service and HTML comes back out.

Once you have saved your overrides, you can create your Resource. A Scripted REST API can have one or more Resources, and they appear on the form down at the bottom as a Related List. Open up the Resources tab and click on the New button to create your Resource. This brings up the Resource form.

Scripted REST Resource form

On this form, you want to enter the name of your Resource, the HTTP method (POST), and the relative path, which is another component of the URL for this service. Once you save that, all that is left is my favorite part, the coding. In fact, this is probably a good place to stop, as we are done with all of the forms and form fields. Next time out, we can focus exclusively on the code.
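As a preview of where this is headed, the skeleton of that resource script might look something like this (just a sketch to show the shape of the thing, with form fields in and HTML out; the actual code will come next time):

(function process(request, response) {

	// the URL-encoded form fields arrive as the raw request body
	var params = {};
	request.body.dataString.split('&').forEach(function(pair) {
		var parts = pair.split('=');
		params[decodeURIComponent(parts[0])] = parts.length > 1 ? decodeURIComponent(parts[1].replace(/\+/g, ' ')) : '';
	});

	// create the Incident from the form fields
	var incident = new GlideRecord('incident');
	incident.initialize();
	incident.setValue('short_description', params.title);
	incident.setValue('description', params.description);
	incident.insert();

	// respond with a simple HTML thank-you page that includes the Incident number
	response.setStatus(200);
	response.setHeader('Content-Type', 'text/html');
	var writer = response.getStreamWriter();
	writer.writeString('<html><body><h2>Thank you!</h2><p>Your issue has been recorded as ' + incident.getDisplayValue('number') + '.</p></body></html>');

})(request, response);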

Formatted Script Search Results, Corrected

“Never give in, never give in, never, never, never, never.”
Winston Churchill

For the most part, I really liked the way my formatted search result widget came out; however, the fact that I lost my results after following the links on the page was something that I just couldn’t accept. In my rush to get that out, I released a version with that obvious flaw, but I couldn’t just let that be the end of it; there had to be a way to make it remember its way back home without losing track of what was there before. There just had to be.

I tried a lot of things. They say that when you lose something, you always find it in the last place that you look. I always wondered why that was considered such profound wisdom, since it seemed quite obvious to me that once you found it, you would stop looking. Of course you always find it in the last place that you looked. When I finally came across a way to get the results that I wanted, I stopped looking. There may be a better way to achieve the same ends, but once I was able to link out to a page and come back to my original results, I was done. I’m not all that proud of the code, but it works, so I’m good.

The problem turned out to be this statement, which takes you to the next URL, which includes the search string parameter:

$location.search('search', c.data.searchFor);

This uses the AngularJS $location service, which is quite powerful, but apparently not quite powerful enough to leave a trail that you could follow back home again. I tried a number of variations to get things to work, but in the end I abandoned this whole approach and just went full old school. I replaced my ng-click with an onclick, and in my onclick function, I replaced that nice, simple one line of code with several others:

function findIt() {
	// glue the path and query string together to form a relative URL
	var url = window.location.pathname + window.location.search;
	// if a search parameter is already present, clip it off of the end
	if (url.indexOf('&search=') != -1) {
		url = url.substring(0, url.indexOf('&search='));
	}
	// tack on the new search parameter and go
	url += '&search=' + document.getElementById('searchFor').value;
	window.location.href = url;
}

Now I will be the first one to admit that this is quite a bit more code than that one simple line that I had before. First, I had to glue together the path and the search string to construct a relative URL, then I had to check to see if a search parameter was already present, and if so, clip it off of the end, then add my new search parameter to the search string, and then finally, set the current location to the newly constructed URL. Aesthetically, I like the original much, much better, but this older, brute force method has the advantage of actually working the way that I want, so it gets to be the winner.

I still kept my ng-click, but that was just to toggle on a new loading DIV to let the user know that their input was accepted and now we are working on getting them their results. That simple HTML addition turned out like this:

<div class="row" ng-show="c.data.loading">
  <div class="col-sm-12">
    <h4>
      <i class="fa fa-spinner fa-spin"></i>
      ${Wait for it ...}
    </h4>
  </div>
</div>

One other thing that I tinkered with in this iteration was the encoded query string in the Script Include. There is a particular table (sn_templated_snip_note_template) that kept throwing an error message related to security, so I decided to just filter that one out by name to keep that from happening. The new encoded query string now looks like this:

internal_typeCONTAINSscript^active=true^name!=sn_templated_snip_note_template
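Applying that string in the Script Include is just a standard addEncodedQuery call, something along these lines (a sketch only; sys_dictionary here is a stand-in for whatever table the actual Script Include queries):

var gr = new GlideRecord('sys_dictionary');
gr.addEncodedQuery('internal_typeCONTAINSscript^active=true^name!=sn_templated_snip_note_template');
gr.query();
while (gr.next()) {
	// ... gather up the script fields to be searched ...
}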

There might be a few other clean-up odds and ends included that I can’t quite recall right at the moment, but the major change was to finally get it to come back home again after drilling down into one of the listed scripts. If you installed the previous version of this Update Set, I would definitely recommend that you replace it with this one.

Generic Feedback Widget, Part V

“If at first you don’t succeed, you are running about average.”
M.H. Alderson

I looked at several different ways to solve my problem with the Generic Feedback Widget, but I couldn’t come up with anything that didn’t involve inactivating or altering the ACL that was at the heart of the issue. Finally, I settled on a plan that would at least involve minimally invasive alterations to the ACL. The plan was pretty simple: create an obscure User Preference and set it to true just before accessing the live_group_profile record, and then delete the preference as soon as the record was obtained. The alteration to the ACL, then, would be to check for that preference before applying the ACL. The updated version of the ACL script now looked like this:

if (gs.getPreference('snh.live.group.read.authorization') == 'true') {
	answer = true;
} else {
	var gr = new GlideRecord('live_group_member');
	gr.addQuery('member', GlideappLiveProfile().getID());
	gr.addQuery('group', current.sys_id);
	gr.addQuery('state', 'admin').addOrCondition('state', 'active');
	gr.query();
	answer = gr.next();
}

The first thing that we do is check for the preference, and if it’s there, then we bypass the original code; otherwise, things proceed as they always have. I don’t really like tinkering with stock components if I can avoid it, mainly because of the subsequent issues with patches and upgrades potentially skipping an upgrade of anything that you have touched. Still, this one seemed to be unavoidable if I wanted to salvage the original intent and still do what I wanted to do.

The next thing that I needed to do was to set the preference just before attempting the read operation, and then remove it as soon as I was done. That code turned out to look like this:

gs.getUser().setPreference('snh.live.group.read.authorization', 'true');
grp = new GlideRecord('live_group_profile');
grp.addQuery('table', table);
grp.addQuery('document', sys_id);
grp.query();
if (grp.next()) {
	groupId = grp.getValue('sys_id');
}
gs.getUser().setPreference('snh.live.group.read.authorization', null);

I ended up pulling that out of the widget and putting it into its own Script Include, mainly to tuck the specialized code away and out of sight. Anyway, it all sounded like a great plan and all I needed to do now was to test it out, so I did. And it failed. So much for my great plan.
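For the record, tucked away in its own Script Include, that read operation looked something along these lines (a sketch; the class and method names here are my own invention):

var LiveGroupProfileReader = Class.create();
LiveGroupProfileReader.prototype = {
	initialize: function() {
	},

	// look up the live_group_profile record for a table/document pair,
	// temporarily setting the preference that bypasses the ACL
	getGroupId: function(table, sys_id) {
		var groupId = '';
		gs.getUser().setPreference('snh.live.group.read.authorization', 'true');
		var grp = new GlideRecord('live_group_profile');
		grp.addQuery('table', table);
		grp.addQuery('document', sys_id);
		grp.query();
		if (grp.next()) {
			groupId = grp.getValue('sys_id');
		}
		gs.getUser().setPreference('snh.live.group.read.authorization', null);
		return groupId;
	},

	type: 'LiveGroupProfileReader'
};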

It took a little digging, but I finally figured out that the ACL was not the only thing keeping people from outside the group from reading the group profile record. There are also a number of Business Rules that do pretty much the same thing. I spent a little time combing through all of those looking for ways to hack around them, and then finally decided that, for my purposes anyway, I really didn’t need to be running any Business Rules at all. So I added one more line to my read script to turn off all of the Business Rules.

gs.getUser().setPreference('snh.live.group.read.authorization', 'true');
grp = new GlideRecord('live_group_profile');
grp.setWorkflow(false);
grp.addQuery('table', table);
grp.addQuery('document', sys_id);
grp.query();
if (grp.next()) {
	groupId = grp.getValue('sys_id');
}
gs.getUser().setPreference('snh.live.group.read.authorization', null);

That did it. Now, people who are not in the group can still read the group profile record, which is good, because you need the sys_id of that record to read all of the messages in the group, which is what we are using as feedback. The only thing that I have not accommodated at this point is the situation where a group profile record does not exist at all, and I have to create one.

But that’s an entirely different adventure …

Fun with the Service Portal Data List Widget

“‘I can’t do it’ never yet accomplished anything. ‘I will try’ has performed miracles.”
George Burnham

Straight out of the box, the ServiceNow Service Portal comes bundled with a trio of widgets for displaying rows of a table: a primary data table widget and two additional wrapper widgets that provide different means to pass configuration parameters to the primary widget.

Service Portal data table widgets

One of the wrapper widgets is the Data Table from URL Definition widget, which was almost exactly what I was looking for. The problem, of course, was the almost. I needed something that was exactly what I was looking for. So close, but no cigar. The problem was that it took most, but not all, of the parameters that I wanted to pass via the URL. You can pass in the name of the table, the name of the view, a query filter, and a number of other, related options, but you cannot pass in a list of columns that you want to have displayed in the list. There is a property called fields, which is set up for that very purpose, but its value is hard-coded rather than being pulled from the URL.

Well, that won’t work!

Here is the line in question:

data.fields = $sp.getListColumns(data.table, data.view);

Here is what I would like see on that line:

data.fields = $sp.getParameter('fields') || $sp.getListColumns(data.table, data.view);

That really shouldn’t hurt anything at all. All that would do would be to take a quick peek at the URL, and if someone provided a list of fields, then it would use that list; otherwise, it would revert to what it is currently doing right now. It would simply add what I wanted without taking away anything that it is already set up to do. Unfortunately, this particular widget is one of those provided in read-only mode and you are not allowed to modify it, even if you are an admin. Well, isn’t that restrictive!

The recommended course of action in these cases is to make a clone or copy of the protected widget under a new name and then modify that one, leaving the original intact. I thought about doing just that, but I’m not really one to blindly follow the recommended course of action at every turn. I just wanted to make this one small change to this one and leave it at that. Fortunately, there is a way to do just that. First, you have to export the widget to XML.

Exporting the widget to XML

Next, make whatever modifications that you want to make to the exported XML, being careful not to disturb anything else, and then save the updated XML. Now, go back to the list of widgets and use the hamburger menu on one of the list columns to select Import XML.

Importing the widget XML back into ServiceNow

Browse for your XML file, upload it, and now the modified widget is back where it belongs with your modification in place. Voila! Easy, peasy. Now, I can get back to doing what I wanted to do with this widget in the first place.
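For example, with the modified widget in place, a URL along these lines (the page id and the field list here are purely illustrative) would bring up the table with exactly the columns requested:

/sp?id=my_data_page&table=incident&fields=number,short_description,state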

Incident Email Hack Revisited

“The greatest performance improvement of all is when a system goes from not working to working.”
John Ousterhout

The other day I was showing off my Incident email hack, and to my surprise, the thing did not work. I was reminded of something my old boss used to tell me whenever we had blown a demonstration to a potential customer. “There are only two kinds of demos,” he would say, “Those that don’t count and those that don’t work.” But my email hack had been working flawlessly for quite some time, so I couldn’t imagine why it wasn’t working that day. Then I realized that I couldn’t remember trying it since I upgraded my instance to Madrid. Something was different now, and I needed to figure out what that was.

It didn’t take much of an investigation to locate the failing line of code. As it turns out, it wasn’t in anything that I had written, but in an existing function that I had leveraged to populate the selected email addresses. That’s not to suggest that the source of the problem was not my fault; it just meant that I had to do a little more digging to get down to the heart of the issue. The function, addEmailAddressToList, required an INPUT element as one of the arguments, but for my usage, there was no such INPUT element. But when I looked at the code inside the function, the only reference to the INPUT element was to access the value property. So, I just created a simple object and set the value to the email address that I wanted to add, and then passed that in to the function. That worked just fine at the time, but that was the old version of this function.

In the updated version that comes with Madrid, there is new code to access the ac property of the INPUT element and run a function of the ac object called getAddressFilterIds. My little fake INPUT element had no ac property, and thus, no getAddressFilterIds function, so that’s where things broke down. No problem, though. If I can make a fake INPUT element, I can add a fake ac object to it, and give that fake ac object a fake getAddressFilterIds function. I would need to know what the function does, or more importantly, what it is supposed to return, but that was easy enough to figure out as well. In the end, all I really needed to do to get past that error was add these lines to the incident_email_client_hack UI Script:

input.ac = {};
input.ac.getAddressFilterIds = function() {return '';};

Unfortunately, things still didn’t work after that. Once I got past that first error, I ran right into another similar error, as it was trying to run yet another missing function of the ac object called resetInputField. So, I added yet another line of code:

input.ac.resetInputField = function() {return '';};

Voila! Now we were back in action. I did a little more testing, just to be sure, but as far as I can tell, that solved the issue for this version, and since all I did was add bloat to the fake INPUT element that would never be referenced in the old version, it would be backwards compatible as well, and work just fine now in either version. Still, now that all the parts were laid out, I decided that I could clean the whole mess up a little bit by defining the fake INPUT element all in a single line of code:

var input = {
	value: recipient[i].email,
	ac: {
		getAddressFilterIds: function() {
			return '';
		},
		resetInputField: function() {
			return '';
		}
	}
};

There, that’s better! Now, instead of adding three new lines of code, I actually ended up removing a line. For those of you playing along at home, I gathered up all of the original parts and pieces along with the updated version of this script and uploaded a new version of the Update Set for this hack.

Yet Even More Fun with JDBC Data Sources

“We have a strategic plan; it’s called doing things.”
Herb Kelleher

Recently I had the need to write to an external database on demand in much the same way as I had been reading from an external database on demand earlier. I was able to hack the Scheduled Import process to fetch a specific record, so my thought was that there must be some kind of equivalent Scheduled Export process that I might be able to bend to my will to satisfy my current requirement. Unfortunately, the Scheduled Export process is based on dropping a file somewhere, not on writing to an external database. You have a lot of options as to what type of file it is, but it is still just a file, and I wanted to write to a database table.

So, then I looked into lighting up the JDBCProbe, which I did without too much difficulty, but after digging into all of its various capabilities, I realized that it really didn’t have that much more to offer than the techniques that I had already been using. You still have to create a Data Source, you still have to configure the job to run, you still have to launch the job, and you still have to monitor the job for completion and successful execution. Since I had already pretty much worked all of that out with the Scheduled Import process, I decided that I would just stick with that.

The difference, of course, was that this time I wanted to write, not read. Fortunately, that problem can be solved with SQL, and nothing else about my original hack really needed to change. Scheduled Imports expect the end result of the SQL execution to be a Result Set, which is the outcome of an SQL query. However, a query is not the only way to produce a Result Set. SQL Insert and Update statements can also produce a Result Set when you use the Output clause (the examples here use SQL Server syntax). The Output clause returns specified values, such as a generated Identity key, back in the form of a Result Set. By leveraging the Output clause, you can build an SQL statement that will both update the external database and return a Result Set all in one operation.

Consider the following example SQL:

INSERT INTO TIME_CARD (
   EMPLOYEE,
   INCIDENT_ID,
   DATE,
   HOURS_WORKED)
OUTPUT
   INSERTED.ID,
   INSERTED.EMPLOYEE,
   INSERTED.INCIDENT_ID,
   INSERTED.DATE,
   INSERTED.HOURS_WORKED
VALUES(
   'snhackery',
   'INC0000456',
   getdate(),
   6.5)

This SQL statement will insert the specified values into the external database table, and then return the inserted values, along with the newly generated ID on the external table, back to ServiceNow in the form of a Result Set. To the Scheduled Import job, this response is indistinguishable from the response received from a standard SELECT statement. Now, you still have to have a somewhat useless table on the ServiceNow end of things to receive the returned data, even though all you really wanted to do was to write to a table in a remote database, but that table can serve as a log of all records written out to the external destination.
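Tying this back to the earlier SQL-rewriting hack, pushing a record out on demand then becomes a matter of splicing the dynamic values into the INSERT statement and kicking off the job, something like this (a sketch; the Data Source name, job name, and input variables are all illustrative):

// employee, incidentId, and hoursWorked are assumed to be in scope here
var ds = new GlideRecord('sys_data_source');
if (ds.get('name', 'Time Card Export')) {
	// rebuild the SQL with this transaction's values spliced in
	var sql = "INSERT INTO TIME_CARD (EMPLOYEE, INCIDENT_ID, DATE, HOURS_WORKED)";
	sql += " OUTPUT INSERTED.ID, INSERTED.EMPLOYEE, INSERTED.INCIDENT_ID, INSERTED.DATE, INSERTED.HOURS_WORKED";
	sql += " VALUES ('" + employee + "', '" + incidentId + "', getdate(), " + hoursWorked + ")";
	ds.setValue('sql_statement', sql);
	ds.update();
	// launch the "scheduled" import to run the INSERT right now
	var job = new GlideRecord('scheduled_import_set');
	if (job.get('name', 'Time Card Export Job')) {
		gs.executeNow(job);
	}
}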

It is possible that we could have accomplished this without the need for the local ServiceNow table by utilizing the JDBCProbe, but this happens to work, so that’s good enough to satisfy Rule #1.

Even More Fun with JDBC Data Sources

“When in doubt, use brute force.”
Butler Lampson, Hints for Computer System Design

After hacking up a JDBC type Data Source to pass in a dynamic SQL parameter and then modifying that hack to provide simultaneous update protection, we still have a couple of unresolved issues related to this approach. For one, since we are watching the table and not the scheduled import job, a Record Not Found result and a Something Bad Happened to the Job and it Never Ran result end up looking exactly the same (no new record ever shows up in the table). It would be nice to be able to distinguish one from the other, particularly if you wanted to log an Event in the case of the latter (but not the former!). Another somewhat related issue involves the age of the imported data. Once you retrieve a record from an outside database, there needs to be some criteria for how long you hang on to it and continue to use it before you go back to the outside database and fetch a fresh copy. There are a number of ways to deal with that, which we can review, but first let’s take a look at the Not Found issue.

We already implemented a post-import script in our last modification, and that’s a great place to insert some code to verify that the job ran. After all, the entire purpose of the script is to run when the job is over, so that’s the perfect spot to check to see if you found a record or not. But if we did not find a record, what do we do about it? One approach would be to add a column to the table called Record Found with a type of True/False and a default value of True. Then, in your post-import script, you can grab the key value from the SQL (before you reset it for others to use!), and use it to read the table to see if you found a record. If you didn’t find a record, then you can create one using the key value and set the Record Found column to False. This way, your process that is spinning endlessly waiting for a record to appear will see the record and get out of the waiting loop, and you will just have to check the Record Found column to see if a record was returned or not. Converting that narrative to code might produce something like this:

// grab the key value from the SQL before resetting it for the next user
var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
var parts = sql.split("'");
var keyValue = parts[1];
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();
// if the import came up empty, create a record flagged as not found
var gr = new GlideRecord('your_table_name');
gr.addQuery('id', keyValue);
gr.query();
if (!gr.next()) {
    gr.initialize();
    gr.setValue('id', keyValue);
    gr.setValue('record_found', false);
    gr.insert();
}

This code actually combines the simultaneous update protection code from our last outing with the new code needed to handle the Record Not Found condition. To complete the solution, of course, the process that reads in this record from the database table will need to check the Record Found column to ensure that there was an actual record returned, and if not, take the appropriate action.
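On that reader side, the check might come out something like this (a sketch; the table name carries over from the example above, and the Event name is my own invention):

var gr = new GlideRecord('your_table_name');
gr.addQuery('id', keyValue);
gr.query();
if (gr.next()) {
	if (gr.getValue('record_found') == '0') {
		// the job ran, but the external database had no such record
	} else {
		// a real record came back; use it as usual
	}
} else {
	// still nothing after waiting: the job never ran, which is the Event-worthy case
	gs.eventQueue('snh.import.job.failed', gr, keyValue);
}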

As to the question of how long an externally fetched record is valid before reaching back out to the external database and fetching a fresh one, that really depends on the specific use case. If your specific implementation requires up-to-the-minute information, then the simplest solution is to just delete the imported record as soon as you grab the data. That will ensure that the initial query will always come up empty, and you will always launch a new Scheduled Import job to get a fresh copy. If, on the other hand, the data is not that volatile, and will last throughout the day, you could always set up a nightly process to clean out the table overnight. Under this strategy, the first person interested in a specific record would pull it from the external database, and everyone else who comes along afterwards would just use the imported copy and not have to pay the penalty of the wait time for the completion of the Scheduled Import.

Yet another approach might be to timestamp all of the records at the time of import, and then compare the timestamp value at read to a specified duration from the current date/time. If the record is too old, then trigger a fetch from the external database. Under this scenario, you wouldn’t necessarily have to ever delete the data in the ServiceNow table, as things would be driven by the timestamp, not the presence of a record. You may still want to schedule some kind of periodic record deletion anyway, though, just to keep from hanging on to records that you may never look at again once they are expired.
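In code, that timestamp check might come out something like this (a sketch; the one-hour shelf life and the table and field names are just illustrative choices carried over from the earlier examples):

var maxAgeMinutes = 60;
var gr = new GlideRecord('imported_license_info');
gr.addQuery('id', data.license.number);
gr.addQuery('sys_created_on', '>', gs.minutesAgoStart(maxAgeMinutes));
gr.query();
if (!gr.next()) {
	// nothing recent enough on hand, so go fetch a fresh copy
	fetchData();
}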

Up to this point, we have been talking about leveraging the Scheduled Import capability of ServiceNow to fetch single records, or limited sets of records, on demand based on a single key. The data always flows in one direction: from your internal database into ServiceNow. But what if you have a need to write to an internal database on demand from ServiceNow? Well, that’s an interesting question, but one we will have to address at a later date when we have a little more time …

More Fun with JDBC Data Sources

“Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”
John F. Woods

Rewriting the SQL of a JDBC type Data Source was a neat trick for a specific, limited use case, but to use that tactic on a more traditional requirement would require that we fill in a few of the holes left unattended by that simplistic example. How we deal with the local copy of the data is one major consideration. How we deal with multiple, concurrent users is another one. The latter concern is probably the easiest to address, so let’s take that one on first.

The simplest way to support active multiple users is to treat the Data Source like an airplane restroom and put up some kind of Occupied sign that limits access to only one user at a time. In this case, it would actually be more like an Unoccupied sign that signals that the Data Source is available for use. Using our previous example’s SQL, it might look something like this:

SELECT
   STATUS,
   EXPIRATION_DATE
FROM
   LICENSE_MASTER
WHERE
   ID='AVAILABLE'

The presence of the key word AVAILABLE is our cue that the Data Source is open for business. If that word is not present, then we wait our turn. In practice, that might look something like this in Javascript:

var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;

function fetchData() {
    var ds = new GlideRecord('sys_data_source');
    ds.get('name', data.config.dataSource);
    var sql = ds.getValue('sql_statement');
    if (sql.indexOf('AVAILABLE') != -1) {
        // the welcome sign is out: claim the Data Source and launch the job
        ds.setValue('sql_statement', sql.replace('AVAILABLE', data.license.number));
        ds.update();
        var job = new GlideRecord('scheduled_import_set');
        job.get('name', data.config.jobName);
        gs.executeNow(job);
    } else {
        // occupied: wait a bit and try again, up to the maximum
        if (attempts++ < maxAttempts) {
            gs.sleep(sleepMs);
            fetchData();
        }
    }
}

Once you have replaced the key word AVAILABLE with your own key, everyone else who comes along will no longer find the welcome sign out, and will have to wait their turn to utilize the service. The one thing that you will have to do, however, is to reset the SQL back to its open-for-business state once your import job has completed. That can be handled in the specifications for the import job itself. Near the bottom of the Scheduled Import form, there is a check-box labeled Execute post-import script. Checking this box opens up a Script input field where you can enter a script that will run once the Scheduled Import job has completed:

Post-import Script option

To reset the SQL in the Data Source to make it available for the next potential user waiting in line, you can do something like this:

var ds = import_set.data_source.getRefRecord();
var sql = ds.getValue('sql_statement');
sql = sql.substring(0, sql.indexOf("'")) + "'AVAILABLE'";
ds.setValue('sql_statement', sql);
ds.update();

With the post-import script in place, your key value will be safe through the execution of the import job, but then as soon as it completes, it will once again be replaced with the key word AVAILABLE to signal to the next person in line that it is their turn in the barrel. Things may take a tad bit longer under this strategy, but at least users will not be stepping on each other’s requests when they attempt to use features implementing this concept. And of course, the most important thing is that it satisfies Rule #1.

Fun with JDBC Data Sources

“It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something.”
Franklin D. Roosevelt

ServiceNow JDBC type data sources are used primarily for importing data from external databases, usually in bulk via periodically scheduled imports. This is a useful feature that allows you to run standard SQL statements either directly from the ServiceNow cloud, or locally on a specified MID server. Unfortunately, the SQL has to be static, and I have yet to find a way to pass dynamic query parameters to the SQL statement so that I can use a JDBC data source for running parameterized queries.

For a standard bulk import, there is really no need to pass query parameters to the SQL statement, but if you want to fetch a specific record, or obtain a list of transactions related to a specific account, it would be nice to be able run a prepared statement to which you could pass one or more parameters. There may actually be a way to set that up, but since I really don’t know what I am doing, I have no knowledge of anything remotely like that. Still, that’s pretty much what I want to be able to do.

The basic use case that I have in mind is a simple UI Action that brings up a modal dialog box containing data from a foreign database table. Let’s say that I have a JDBC connection to a license database on my internal corporate intranet, and the purpose of the button is to check the current status of the license of the person displayed on the screen. I have the person’s license number as a property of the current record, and I want to pass it to a process that will use that number as a key to read the license record from the JDBC data source. If I could pass that number in as a parameter of the SQL, that would be perfect, but since I don’t see any way to do that, we’ll have to resort to a little ServiceNow hackery …

Like most components of ServiceNow, a JDBC type data source is just a record in a table in the system database. In this case, the relevant table is sys_data_source, and the SQL statement can be found in the property sql_statement. Now, I would not recommend this approach for any function that is used with any regularity or by a large, active audience, but because this particular function is a seldom-used, low-volume activity, we should be able to get away with it. The little bit of hackery will be to rewrite the SQL statement every time that we use it, injecting a variable value into the stock SQL, saving the data source record, and then launching the job using the updated SQL. The first thing that we will have to do is to create the JDBC data source and configure it with its own SQL statement by setting the Query property to Specific SQL:

JDBC Data Source with Specific SQL

This is the SQL that we will want to modify to insert our dynamic key value in place of the hard-coded key value that is currently in the data source definition. Essentially, we will want to keep everything up to and including that first single quote, and then we can add to that our dynamic value and then a closing single quote to finish things off. We can do that in Javascript with something like this:

sql = sql.substring(0, sql.indexOf("'") + 1) + keyValue + "'";

Of course, before we do that, we have to read the data source definition record in from the database, then replace the SQL statement value, and then update the record in the database, but that’s all simple GlideRecord manipulation that is the foundation of ServiceNow development. But before we dig into all of that, let’s step back just a little bit and talk about what we need to have in place to make all of this work.

To fetch data from a database on your own local system, you will need more than just a JDBC type data source. You will also need a destination Table, an Import Table, a Transform Map, and a Scheduled Import job that you can launch to make the magic happen. ServiceNow tables and transform maps are pretty vanilla items that you can learn about from a variety of sources, so we won’t spend too much time on that here. You will need to create the table, though, before you set up the scheduled import, as you will need to reference that table (as well as the data source) when you set up the scheduled import job. Oh, and let’s not get too overly attached to that word “scheduled” … we really don’t want this thing running on a schedule; we want it running on demand. To create a scheduled import, you do have to define a schedule, but once a year seems like a good enough choice for a schedule if you are going to have to pick one to declare.

For our purpose, though, we will be launching our scheduled import from a script. Once we import the selected record, we don’t really need to import it again, so our script will start out looking in the table first. If we don’t find a record, or if we do find a record and it is out of date, then we launch the job to go get it from the local source database. As this is an asynchronous process, we might want to monitor the job to see when it has been completed, but for our purposes, the presence of the new record in the database table will tell us the same thing, so we can just launch the job and forget it, and then keep looking in the table for the data that we are waiting for. The entire sequence of events would go something like this:

var sleepMs = 1000;
var maxAttempts = 30;
var attempts = 0;
var updateRequested = false;

function getLicense() {
	var gr = new GlideRecord('imported_license_info');
	gr.addQuery('id', data.license.number);
	gr.query();
	if (gr.next()) {
		data.license.status = gr.getDisplayValue('status');
		data.license.expiration_date = gr.getDisplayValue('expiration_date');
	} else {
		if (!updateRequested) {
			var ds = new GlideRecord('sys_data_source');
			ds.get('name', data.config.dataSource);
			var sql = ds.getValue('sql_statement');
			ds.setValue('sql_statement', sql.substring(0, sql.indexOf("'") + 1) + data.license.number + "'");
			ds.update();
			var job = new GlideRecord('scheduled_import_set');
			job.get('name', data.config.jobName);
			gs.executeNow(job);
			updateRequested = true;
		}
		if (attempts++ < maxAttempts) {
			gs.sleep(sleepMs);
			getLicense();
		}
	}
}

In this example, we first look to the ServiceNow database table for our info, and if we don’t find it, then we check to see if we have launched the import job or not. If we haven’t, then we grab the data source, update the SQL with the key of the record that we want to read, and then launch the job. Then we wait for a bit and check again. The next time through the process, if we still do not have data in the table, we see that we have already launched the job, so now all we have to do is wait for a bit and check again. Rinse. Repeat. Eventually, we will either have the data that we requested or we will run out of attempts and return empty-handed. Since it is possible that something might go wrong with the import, setting a maximum number of attempts keeps this from looping through the process forever.

The danger in this approach, of course, is that more than one person will want to run this process at the same time, and one user will rewrite the SQL statement before the other user’s job runs, and then the first user’s SQL will never get a chance to run. That’s a real possibility and one that should not be discounted, but in this particular instance, it was just for an occasionally used convenience feature. The remote possibility of that happening in this case was not significant enough to warrant coding a defensive mitigating strategy.

Later on, I did run into a situation where that would have been a real problem, but that’s a discussion for another time …

Reference Type System Properties, Part III

“Everything should be made as simple as possible, but not simpler.”
Albert Einstein

Last time, we got far enough along in the System Property value page modifications to demonstrate that we could replace the rendered input element with something else of our own design. Not having a design of our own for an adequate replacement, we implemented the popular creative avoidance strategy by working on all of the other parts and pieces first until we finally ran out of other parts and pieces. Now it is time to come up with a plan and finish this little project up.

I have to admit that I’m not all that proud of what I eventually came up with, but it does satisfy Rule #1, so at this point, I’ll take it and call it good. I tried a number of other things first, but none of those really got me what I wanted, so here we are. The basic plan is pretty simple, and consists of two parts: 1) a hidden input element to replace the text input element so that it can be submitted with the form, and 2) an iframe into which we will put our new input element via a stand-alone page designed for that purpose. I don’t really like the iframe approach, but it does have the benefit of being independently rendered, which gives us the opportunity to leverage the snRecordPicker for our input, which we really cannot do by simply modifying the main page directly after it has been delivered.

So let’s start out with the script that will build the HTML that we will use to replace the original text input element:

function buildHTML(prop) {
	var html = "";
	html += "<input type=\"hidden\" id=\"" + prop.property + "\" name=\"" + prop.property + "\"/>\n";
	html += "<div style=\"width: 100%; height: auto;\">\n";
	html += " <iframe id=\"frame." + prop.property + "\" src=\"/$sp.do?id=reference_properties&sysparm_property=" + prop.property + "&sysparm_table=" + prop.tableName + "&sysparm_column=" + prop.column + "&sysparm_value=" + prop.value + "\" style=\"border: none; height: 65px; width: 100%;\"></iframe>\n";
	html += "</div>\n";
	return html;
}

The hidden input element virtually replaces the original text input element, having the same name and same id. The iframe element is pretty vanilla stuff as well; the only thing of interest really is the src parameter, which points to the Portal Page that we are about to create, and passes along all of the various values needed to make the page do what we want. The Portal Page itself is just a single page with a single container filled with a single widget. The page is not really worth looking at, so let’s just jump right into the widget, as that’s where all of the magic happens. Here is the HTML:

<div id="pickerdiv">
  <sn-record-picker field="field" table="c.data.table" display-field="c.data.column" value-field="'sys_id'" search-fields="c.data.column" page-size="c.data.pageSize"></sn-record-picker>
</div>

Not much to see there, either. It’s just your standard snRecordPicker with pretty much every attribute defined by a variable. We’ll snag the values for those variables off of the URL for the page, which we populated when we constructed the src attribute for the iframe tag. The widget’s client-side script does most of the heavy lifting here:

function($scope, $location) {
	var c = this;
	var qp = $location.search();
	c.data.property = qp.sysparm_property;
	c.data.table = qp.sysparm_table;
	c.data.column = qp.sysparm_column;
	c.data.pageSize = 20;
	c.data.fieldValue = '';
	c.data.fieldDisplayValue = '';
	if (qp.sysparm_page_size) {
		c.data.pageSize = qp.sysparm_page_size;
	}
	if (qp.sysparm_value) {
		c.data.fieldValue = qp.sysparm_value;
		c.server.update().then(function(response) {
			c.data.fieldDisplayValue = response.fieldDisplayValue;
			$scope.field = {
				displayValue: c.data.fieldDisplayValue,
				value: c.data.fieldValue,
				name: 'field'
			};
		});		
	} else {
		$scope.field = {
			displayValue: '',
			value: '',
			name: 'field'
		};
	}
	$scope.$on('field.change', function(evt, parms) {
		if (parms.field.name == 'field') {
			parent.updateReferenceProperty(c.data.property, parms.field.value);
		}
	});
}

The only reason for the server-side script is to fetch the display value of the property if the property already has a value at the time that the page is delivered to the browser.

(function() {
	if (input) {
		if (input.fieldValue) {
			var gr = new GlideRecord(input.table);
			gr.get(input.fieldValue);
			data.fieldDisplayValue = gr.getDisplayValue(input.column);
		} else {
			data.fieldDisplayValue = '';
		}
	}
})();

That’s about all there is to it. For every property on the page where Type=reference, the standard text input element is replaced with a hidden input element and an iframe, and inside the iframe is a ServiceNow Service Portal page that contains a single widget containing a single snRecordPicker. The parameters for the picker are passed from the iframe to the portal page via URL parameters, which are picked up by the widget and used to configure the snRecordPicker. All changes to the snRecordPicker are copied over to the hidden input field, so when the form is submitted, the selected value is sent to the server and posted to the database.
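That copying happens through a function on the parent page, defined in the UI Script created earlier in this series. I won’t reproduce the original here, but it boils down to something like this (a sketch, shown together with the frame-resizing helper that comes into play a little further on):

function updateReferenceProperty(property, value) {
	// copy the picker's selection into the hidden input that gets submitted
	document.getElementById(property).value = value;
}

function updateFrameHeight(property, height) {
	// resize the iframe to make room for the expanded pick list
	document.getElementById('frame.' + property).style.height = height;
}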

There was a minor problem with this initial version when trying to figure out the optimum height for the iframe. The height of the snRecordPicker depends on whether or not the drop-down list of choices is present, and I couldn’t find a CSS-centric way of having the iframe automatically adjust for the change in height, nor could I find a way to have the drop-down list of selectable choices overlay the content below, which is outside of the iframe. Finally, I resorted to plain old Javascript, and set up a variable called c.data.expanded to indicate whether or not the pick list was present on the screen. With a little view selection source magic, I was able to figure out that the component to watch had an id of select2-drop-mask, and so I modified the widget’s client-side code to check the required iframe height every second and adjust if needed:

function($scope, $location) {
	var c = this;
	var qp = $location.search();
	c.data.property = qp.sysparm_property;
	c.data.table = qp.sysparm_table;
	c.data.column = qp.sysparm_column;
	c.data.pageSize = 20;
	c.data.fieldValue = '';
	c.data.fieldDisplayValue = '';
	c.data.expanded = false;
	if (qp.sysparm_page_size) {
		c.data.pageSize = qp.sysparm_page_size;
	}
	if (qp.sysparm_value) {
		c.data.fieldValue = qp.sysparm_value;
		c.server.update().then(function(response) {
			c.data.fieldDisplayValue = response.fieldDisplayValue;
			$scope.field = {
				displayValue: c.data.fieldDisplayValue,
				value: c.data.fieldValue,
				name: 'field'
			};
		});		
	} else {
		$scope.field = {
			displayValue: '',
			value: '',
			name: 'field'
		};
	}
	$scope.$on('field.change', function(evt, parms) {
		if (parms.field.name == 'field') {
			parent.updateReferenceProperty(c.data.property, parms.field.value);
		}
	});
	checkHeight();
	function checkHeight() {
		var elem = document.getElementById('select2-drop-mask');
		if (elem) {
			if (elem.style.display == 'none') {
				if (c.data.expanded) {
					c.data.expanded = false;
					setHeight('65px');
				}
			} else {
				if (!c.data.expanded) {
					c.data.expanded = true;
					setHeight('300px');
				}
			}
		}
		setTimeout(checkHeight, 1000);
	}
	function setHeight(newHeight) {
		parent.updateFrameHeight(c.data.property, newHeight);
	}
}

Once that code was in place, the unexpanded state looked like this:

Modified input element with choices collapsed

… and the expanded state looked like this:

Modified input element with choices expanded

It still disturbs my sense of The Way Things Ought To Be for the left-hand edge of the revised input element not to line up with the left-hand edge of all of the other original input elements, but a fix to that was not readily apparent to me, so I have managed to let that go for now and move on to more important things. One day, though, I am going to figure out a way to fix that!

Just to recap, we modified a choice list and added an additional field to a table to provide the capability to define properties of type reference. We then created a UI Script and a Script Include so that we could replace the original input element on the property UI page, and then we created a Service Portal page and associated widget to provide the replacement for the original input element. As soon as I get a chance, I will see if I can wrap all of that up into a single Update Set and get it posted out here in case anyone wants to just grab the whole package. All in all, it was a fun little project, but one that I hope to throw away one day when ServiceNow actually supports reference type properties right out of the box and having this little tweak is no longer needed.

Update: Well, it took a little longer than I had hoped to get around to this, but here is that Update Set finally.