Fun with Webhooks, Part VI

“The most exciting phrase to hear in science, the one that heralds discoveries, is not ‘Eureka!’ but ‘Now that’s funny …'”
Isaac Asimov

Even though we are not quite finished building out our new Subflow just yet, we have enough elements in place that we can try out what we have so far, and see if it actually works. The simplest way to do that would be to create a Webhook Registry that watches a single Incident, and then go make a qualifying change to that Incident and see what happens. Let’s go find an open Incident and then we can set up our registry.

In my instance, INC0000002 seemed like a likely candidate, so I pulled up the list of Webhook Registrations, clicked on the New button, and created the following new Webhook Registry entry to track changes to that specific Incident.

New Webhook Registry entry

For the URL, I used that same webhook.site address that I had used before so that I could see the result on the other end of the transaction. Once I had saved my new Webhook Registry entry, all I needed to do in order to test it out was to make a change to the Incident. For my first test, I simply changed the Assignment Group from Network to Hardware, which also then removed the Assigned to field value, since the original person specified in that field was not a member of the new Assignment Group. After making the change, I went over to webhook.site to see if anything showed up as a result of that modification. Sure enough, a new POST had been received by that site shortly after I saved the record.

New Webhook posting after updating an Incident

Although that proves that the process does indeed work as intended, I did notice one thing that is going to require a little bit of rework. Take a look at the second line of the message property:

Assigned To changed from Howard Johnson to  on INC0000002.

The Assigned to field changed from having a value to not having a value. The code that we have in our buildPayload function doesn’t really handle that circumstance in a way that produces the appropriate text to explain that in plain English. Here is the logic that creates that portion of the message:

if (current.assigned_to != previous.assigned_to) {
	payload.assigned_to = current.assigned_to;
	if (previous.assigned_to) {
		payload.message += separator + 'Assigned To changed from ' + previous.assigned_to + ' to ' + current.assigned_to + ' on ' + payload.id + '.';
	} else {
		payload.message += separator + 'Assigned To set to ' + current.assigned_to + ' on ' + payload.id + '.';
	}
	separator = '\n\n';
}

There is a check to see if the previous value was blank, but there is no check to see if the new value is blank. It wouldn’t take much to throw that in there, though, so let’s go ahead and do that now while we are thinking about it.

if (current.assigned_to != previous.assigned_to) {
	payload.assigned_to = current.assigned_to;
	if (previous.assigned_to) {
		if (current.assigned_to) {
			payload.message += separator + 'Assigned To changed from ' + previous.assigned_to + ' to ' + current.assigned_to + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'Assigned To ' + previous.assigned_to + ' removed.';
		}
	} else {
		payload.message += separator + 'Assigned To set to ' + current.assigned_to + ' on ' + payload.id + '.';
	}
	separator = '\n\n';
}

That’s better. While we are at it, we might as well make the same change to the code for the Assignment group, as that is pretty much a line-for-line copy of the code for the Assigned to field (see the sketch below). Everything else looks to be OK. The State is mandatory, so it will never have a new blank value, and the Comments and Work Notes fields are based on new entries rather than changes to existing entries, so if either of those is ever blank, we won’t be doing anything with them at all.
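
For completeness, here is how that same rework looks when applied to the Assignment Group block. This is just a sketch that mirrors the Assigned to logic above; the wording of the removal message can be adjusted to taste.

if (current.assignment_group != previous.assignment_group) {
	payload.assignment_group = current.assignment_group;
	if (previous.assignment_group) {
		if (current.assignment_group) {
			payload.message += separator + 'Assignment Group changed from ' + previous.assignment_group + ' to ' + current.assignment_group + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'Assignment Group ' + previous.assignment_group + ' removed.';
		}
	} else {
		payload.message += separator + 'Assignment Group set to ' + current.assignment_group + ' on ' + payload.id + '.';
	}
	separator = '\n\n';
}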

Still, other than that one little annoying language problem, the test was actually quite successful. Our Business Rule fired, the change was detected, our new Subflow was launched, our payload was generated, and the payload was successfully POSTed to the appropriate URL. All in all, I would have to say that that was a pretty good first test of the process. Obviously, we need to do a lot more testing, but this was a great start.

At this point, all I really wanted to do was to make sure that everything was working so far, which I think we accomplished. Before we do too much more testing, though, I think we need to go back and finish up the rest of the Subflow. We still have to add in the step that logs the activity, and we also have to add any error handling for when things don’t go as planned. Once we build all of that out, then we can do a lot more testing to validate the entire process from end to end. It was good to take a quick break and make sure that everything was working up to this point, but now that we know that it is, we need to get back to work on finishing up our development efforts.

Fun with Webhooks, Part V

“Change is easy. Improvement is far more difficult.”
Ferdinand Porsche

Now that we have a Business Rule to launch our Subflow, it’s time to get busy and actually create the Subflow. To create a new Subflow, open up the Flow Designer, click on the New button and select Subflow from the drop-down menu. I initially called mine Webhook Poster, but the Business Rule couldn’t find it to launch it for some reason. I finally got it to work when I replaced the space with an underscore, so now the name is Webhook_Poster. I’m not really sure why I had to do that, but it works, so we will leave it at that.

With the Subflow named, the next thing to do is to define all of the inputs. In my mind, there were only two, the current object and the previous object. Unfortunately, the Flow Designer will not recognize any of the properties of those objects unless you explicitly define them, so I had to specify every element of both objects. That seemed like a rather tedious waste of time, but eventually I got through it and now it’s done.

Subflow inputs defined

That takes care of the set-up, so now it’s on to the flow itself. The first thing that we are going to want to do is to gather up all of the registration records that would be triggered by this Incident. Since we gave our users several options for selecting records, we will have to test them all. Essentially, we will need a query filter for each possible Type, which we can then OR together to create one huge query filter. Here is how that looks in practice:

Webhook Registry selection criteria
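
For readers who think in script rather than condition builders, a roughly equivalent encoded query is sketched below. This is only a sketch: the actual conditions live in the Look Up Records action pictured above, the table name and the internal values of the Type choices are assumptions on my part, and the current values are the ones we will be handing to the Subflow from our Business Rule.

// Hedged sketch only; the Subflow builds these conditions in the Look Up Records action.
// The table name and the type choice values are hypothetical placeholders.
var whrGR = new GlideRecord('x_simple_webhook_registry');
whrGR.addEncodedQuery('active=true^type=single_item^document_id=' + inputs.current.sys_id +
	'^NQactive=true^type=requester^person=' + inputs.current.caller_id +
	'^NQactive=true^type=assignment_group^group=' + inputs.current.assignment_group_id +
	'^NQactive=true^type=assignee^person=' + inputs.current.assigned_to_id);
whrGR.query();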

As a second step, I threw in an IF statement to see if the query returned any records. We are basically done at this point if there are no records, but if we do have records to process, then we will need to build the payload that we will be posting to all of the target URLs. For that job, I needed to create a new Flow Designer Action, so I saved my Subflow and closed the Subflow tab for now.

The payload is basically a generic JSON object containing the relevant information about the things that have changed on the record since the last time it was saved. Pulling that all together will basically be a lot of tedious scripting, so that sounds like a job for yet another function in our Script Include. Here is what I came up with:

buildPayload: function(current, previous) {
	var payload = {};

	payload.id = current.number;
	payload.short_description = current.short_description;
	payload.message = '';
	var separator = '';
	if (current.state != previous.state) {
		payload.state = current.state;
		if (previous.state) {
			payload.message += separator + 'State changed from ' + previous.state + ' to ' + current.state + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'State set to ' + current.state + ' on ' + payload.id + '.';
		}
		separator = '\n\n';
	}
	if (current.assignment_group != previous.assignment_group) {
		payload.assignment_group = current.assignment_group;
		if (previous.assignment_group) {
			payload.message += separator + 'Assignment Group changed from ' + previous.assignment_group + ' to ' + current.assignment_group + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'Assignment Group set to ' + current.assignment_group + ' on ' + payload.id + '.';
		}
		separator = '\n\n';
	}
	if (current.assigned_to != previous.assigned_to) {
		payload.assigned_to = current.assigned_to;
		if (previous.assigned_to) {
			payload.message += separator + 'Assigned To changed from ' + previous.assigned_to + ' to ' + current.assigned_to + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'Assigned To set to ' + current.assigned_to + ' on ' + payload.id + '.';
		}
		separator = '\n\n';
	}
	if (current.comments) {
		payload.comments = current.comments;
		payload.message += separator + current.operator + ' has added a new comment to ' + payload.id + ':\n' + current.comments;
		separator = '\n\n';
	}
	if (current.work_notes) {
		payload.work_notes = current.work_notes;
		payload.message += separator + current.operator + ' has added a new work note to ' + payload.id + ':\n' + current.work_notes;
		separator = '\n\n';
	}

	return JSON.stringify(payload, null, '\t');
},

We still need a Flow Designer Action to call this function, but the function does almost all of the heavy lifting and our Action should now be quite simple to put together. Let’s open up the Flow Designer again, click on the New button, and select Action from the drop-down selection list. We’ll call our new Action Build Webhook Payload since that’s what it does, and we will define two inputs, the current and previous objects. Since we won’t be referencing any of the properties of those objects directly, this time we won’t have to invest the time in specifying all of the elements of the objects.

All we will need for our Action is to add a Script step with the same two inputs and the payload as an output. The script itself is just a call to the function that we just added to our Script Include.

(function execute(inputs, outputs) {
	var wru = new WebhookRegistryUtils();
	outputs.payload = wru.buildPayload(inputs.current, inputs.previous);
})(inputs, outputs);

We also need to define the payload as an output of the Action itself, and map the Action inputs to the Script step inputs and the Script step output to the Action output. That should complete our new Action.

Build Webhook Payload Action

Now we can go back into our Subflow and select this Action as a third step under our second step conditional that checks to see if there are any records from the query in the first step. For our fourth step, we will add a flow logic step that loops through all of the records from step 1, and inside of that loop, our fifth step will make the POST of the payload. I looked for an out-of-the-box simple HTTP POST Action already included on the platform, but I couldn’t find anything, so that’s another Action that we are going to have to produce ourselves. We already built much of the script for that when we built our testURL function; if we rework that code just a little bit, we can probably find a way to make it work just fine for both purposes.

testURL: function(whrGR) {
	var jsonObj = {message: 'This is a test posting.'};
	return this.postWebhook(whrGR, JSON.stringify(jsonObj, null, 4));
},

postWebhook: function(whrGR, payload) {
	var result = {};

	var request = new sn_ws.RESTMessageV2();
	request.setEndpoint(whrGR.getValue('url'));
	request.setHttpMethod('POST');
	request.setRequestHeader('Content-Type', 'application/json');
	request.setRequestHeader('Accept', 'application/json');
	request.setRequestBody(payload);
	var response = request.execute();
	result.status = response.getStatusCode();
	result.body = response.getBody();
	if (result.body) {
		try {
			result.obj = JSON.parse(result.body);
		} catch (e) {
			result.parse_error = e.toString();
		}
	}
	result.error = response.haveError();
	result.error_code = response.getErrorCode();
	result.error_message = response.getErrorMessage();

	return result;
},

Now the testURL function is just a call to our new postWebhook function, which contains most of the code previously found in testURL. The testURL function will pass in our simple test payload, and when we create our new Action, we can call the very same postWebhook function, passing in a real payload. The script for our new Action will then simply be this:

(function execute(inputs, outputs) {
	var wru = new WebhookRegistryUtils();
	var result = wru.postWebhook(inputs.webhook_registry, inputs.payload);
	for (var key in result) {
		outputs[key] = result[key];
	}
})(inputs, outputs);

Stopping right here should give us enough with which to test out the process so far. This isn’t all that we want to do here, because we still want to record both the attempt and the response in our log table. However, since we have enough built out to test the POSTing of payloads, I think we should stop and do just that. Afterwards, we can circle back and build out the activity logging and any Event logging that we might want to do for any kind of bad responses. But right now, we have enough built out that we can create a few Webhook Registrations and then go update some qualifying Incidents to see what actually happens.

Setting all of that up and verifying the results seems worthy of its own episode, so let’s wrap things up for today and pick that up next time out.

Event Management for ServiceNow, Revisited

“True prevention is not waiting for bad things to happen, it’s preventing things from happening in the first place.”
Don McPherson

Some time ago we built some utility functions to support reporting Events within the ServiceNow Platform. That was before the Flow Designer, though, so that effort did not include any support for that environment. We already have the script to do all of the heavy lifting from our earlier work, so it wouldn’t take much to create a Flow Designer Action that called that script to report an Event that occurred during Flow processing. We can call our new Action Log Event, and set up Action Inputs for all of the usual suspects.

Log Event Action Inputs

For our script step, we will basically set up the same inputs and then source them directly from the primary Action Inputs.

Script step inputs mapped to Action Inputs

Those of you who are paying attention will notice that we defined the additional_info field as a String even though it needs to be an Object when we make the call to our existing script. The assumption here is that the caller will provide a valid JSON String, which we can then turn into an Object in our script before we make the call. Here is the script to convert the String and then make the call.

(function execute(inputs, outputs) {
	if (inputs.additional_info) {
		try {
			inputs.additional_info = JSON.parse(inputs.additional_info);
		} catch(e) {
			// not valid JSON; leave the original string value in place
		}
	}
	var seu = new ServerEventUtil();
	seu.logEvent(inputs.source, inputs.resource, inputs.metric_name, inputs.severity, inputs.description, inputs.additional_info);
})(inputs, outputs);

There are no outputs from this process, so this is the entire Action. Once we Save and Publish it, it will be available from the Action selection list, and then we can add Log Event steps anywhere in our Flows and Subflows where we want to report an Event. That was fairly quick, easy, and relatively painless. For those of you who would like to try it out on your own, here is an Update Set.
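
One last aside before we move on: the seu.logEvent call above leans on the ServerEventUtil Script Include that we built in that earlier series. If you do not have that artifact handy, a bare-bones stand-in might look something like the sketch below. This is just my assumption of a minimal implementation that writes directly to the Event [em_event] table; it is not the original code.

logEvent: function(source, resource, metricName, severity, description, additionalInfo) {
	// Minimal stand-in: insert a record into the Event [em_event] table so that
	// Event Management can pick it up and run it through its rules.
	var eventGR = new GlideRecord('em_event');
	eventGR.initialize();
	eventGR.source = source;
	eventGR.resource = resource;
	eventGR.metric_name = metricName;
	eventGR.severity = severity;
	eventGR.description = description;
	if (additionalInfo) {
		eventGR.additional_info = typeof additionalInfo == 'object' ? JSON.stringify(additionalInfo) : String(additionalInfo);
	}
	eventGR.insert();
},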

Fun with Webhooks, Part IV

“Try not to become a man of success, but rather try to become a man of value.”
Albert Einstein

Last time, we came to a fork in the road, not knowing whether it would be better to jump into the My Webhooks Service Portal widget, or to start working out the details of the process that actually POSTs the Webhooks. At this point, I am thinking that the My Webhooks widget will end up being a fairly simple clone of my old My Delegates widget, so that may not be all that interesting. POSTing the Webhooks, on the other hand, does sound like it might be rather challenging, so let’s start there.

When I first considered attempting this, my main question was whether it would be better to handle this operation in a Business Rule or by building something out in the Flow Designer. After doing a little experimenting with each, I later came to realize that the best alternative involved a little bit of both. In a Business Rule, you have access to both the current and previous values of every field in the record; that information is not available in the Flow Designer. In fact, it’s not available in a Business Rule, either, if you attempt to run it async. But if you don’t run it async, then you are holding everything up on the browser while you wait for everything to get posted. That’s not very good, either. What I ultimately decided to do was to start out with a Business Rule running before the update, gather up all of the needed current and previous values, and then pass them to a Subflow, which runs async in the background.

My first attempt was to just pass the current and previous objects straight into my Subflow, but that failed miserably. Apparently, when you pass a GlideRecord into a Subflow, you are just passing a reference, not the entire record, and then when the Subflow starts up, it uses that reference to fetch the data from the database. That’s not the data that I wanted, though. I want the data as it was before the database was updated. I had to take a different route with that part of it, but the basic rule remained the same.

Business Rule to POST webhooks

The rule is attached to the Incident table, runs on both Insert and Update, and is triggered whenever there is a change to any of the following fields: State, Assignment Group, Assigned to, Work notes, or Additional comments.
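
For anyone recreating the rule by hand, the filter in the screenshot amounts to a trigger condition along these lines. This is just a script rendering of the same idea; the actual rule uses the condition builder.

// Run on insert or update whenever one of the watched fields changes
// or a new journal entry is added.
current.state.changes() ||
	current.assignment_group.changes() ||
	current.assigned_to.changes() ||
	current.comments.changes() ||
	current.work_notes.changes()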

To get around my GlideRecord issue, I wound up creating two objects, one for current and one for previous, which I populated with data that I pulled out of the original GlideRecords. When I passed those two objects to my Subflow, everything was there so that I could do what I wanted to do. Populating the objects turned out to be quite a bit of code, so rather than put that all in my Business Rule, I created a function in my Script Include called buildSubflowInputs and I passed it current and previous as arguments. That simplified the Business Rule script quite a bit.

(function executeRule(current, previous) {
	var wru = new WebhookRegistryUtils();
	sn_fd.FlowAPI.startSubflow('Webhook_Poster', wru.buildSubflowInputs(current, previous));
})(current, previous);

In the Script Include, things got a lot more complicated. Since I was going to be turning both the current and previous GlideRecords into objects, I thought it would be helpful to create a function that did that, which I could call once for each. That function would pull out the values for the Sys ID, Number, State, Caller, Assignment group, and Assigned to fields and return an object populated with those values.

getIncidentValues: function(incidentGR) {
	var values = {};

	values.sys_id = incidentGR.getValue('sys_id');
	values.number = incidentGR.getDisplayValue('number');
	if (!incidentGR.short_description.nil()) {
		values.short_description = incidentGR.getDisplayValue('short_description');
	}
	if (!incidentGR.state.nil()) {
		values.state = incidentGR.getDisplayValue('state');
	}
	if (!incidentGR.caller_id.nil()) {
		values.caller = incidentGR.getDisplayValue('caller_id');
		values.caller_id = incidentGR.getValue('caller_id');
	}
	if (!incidentGR.assignment_group.nil()) {
		values.assignment_group = incidentGR.getDisplayValue('assignment_group');
		values.assignment_group_id = incidentGR.getValue('assignment_group');
	}
	if (!incidentGR.assigned_to.nil()) {
		values.assigned_to = incidentGR.getDisplayValue('assigned_to');
		values.assigned_to_id = incidentGR.getValue('assigned_to');
	}

	return values;
},

Since my Business Rule could be fired by either an Update or an Insert, I had to allow for the possibility that there was no previous GlideRecord. I could call the above function for current unconditionally, but I needed to check to make sure that there actually was a previous before making that call. I also wanted to add some additional data to the current object, including the name of the person making the changes. The field sys_updated_by contains the username of that person, so to get the actual name I had to use that value in a query of the sys_user table.

buildSubflowInputs: function(currentGR, previousGR) {
	var inputs = {};

	inputs.current = this.getIncidentValues(currentGR);
	if (currentGR.isNewRecord()) {
		inputs.previous = {};
	} else {
		inputs.previous = this.getIncidentValues(previousGR);
	}
	inputs.current.operator = currentGR.getDisplayValue('sys_updated_by');
	var userGR = new GlideRecord('sys_user');
	if (userGR.get('user_name', inputs.current.operator)) {
		inputs.current.operator = userGR.getDisplayValue('name');
	}

	return inputs;
},

One other thing that I wanted to track was any new comments or work_notes, and that turned out to be the most complicated code of all. If you try the normal getValue or getDisplayValue methods on any of these Journal Fields, you end up with all of the comments ever made. I just wanted the last one. I had to do a little searching around to find it, but there is a function called getJournalEntry that you can use on these fields, and if you pass it a 1 as an argument, it will return the most recent entry. However, it is not just the entry; it also includes a collection of metadata about the entry, which I did not want either. To get rid of that, I had to find the first line feed and then pick off all of the text that followed it. Putting all of that together, you end up with the following code:

if (!currentGR.comments.nil()) {
	var c = currentGR.comments.getJournalEntry(1);
	inputs.current.comments = c.substring(c.indexOf('\n') + 1);
}
if (!currentGR.work_notes.nil()) {
	var w = currentGR.work_notes.getJournalEntry(1);
	inputs.current.work_notes = w.substring(w.indexOf('\n') + 1);
}

That’s pretty much all that there is to the new Business Rule and the needed additions to the Script Include. Now we need to start tackling the Subflow that will actually take these current and previous objects and do something with them. That’s a bit much to jump into at this point, so we’ll wrap things up here for now and save all of that for our next time out.

Fun with Webhooks, Part III

“All things are difficult before they are easy.”
Thomas Fuller

Now that we have built the Webhook Registry table and configured the associated form layout, the last thing we need to do before we move on is to set up the Test URL UI Action and the underlying code to make it work. Clicking on the Test URL button should pass the current registry record to a function in a new Script Include that will pull the URL out of the registry, POST an example JSON object to the URL, and verify the response. Since we have to reference the Script Include in the UI Action, let’s work on that first.

To create a new Script Include, navigate to System Definition -> Script Includes and then click on the New button. Since this will be a general purpose Script Include for our application, I named it WebhookRegistryUtils.

New WebhookRegistryUtils Script Include

To test a URL, we will want to add a new function to our new Script Include that accepts a Webhook Registry GlideRecord as an argument and returns an Object that contains the results of the test POST. Let’s call this function testURL().

testURL: function(whrGR) {
	var result = {};

	var jsonObj = {message: 'This is a test posting.'};
	var request = new sn_ws.RESTMessageV2();
	request.setEndpoint(whrGR.getValue('url'));
	request.setHttpMethod('POST');
	request.setRequestHeader('Content-Type', 'application/json');
	request.setRequestHeader('Accept', 'application/json');
	request.setRequestBody(JSON.stringify(jsonObj, null, 4));
	var response = request.execute();
	result.status = response.getStatusCode();
	result.body = new global.JSON().decode(response.getBody());
	result.error = response.haveError();
	result.errorCode = response.getErrorCode();
	result.errorMessage = response.getErrorMessage();

	return result;
},

This code is fairly self-explanatory. The jsonObj is our test payload and the request is an empty sn_ws.RESTMessageV2 that we populate with the URL found in the registry record and the hard-coded Method of POST. The response is the result of executing the request, and then we pull all of the relevant information out of the response and use it to populate the result object, which we then return to the caller. We may end up reworking some of this when we start adding additional functionality, but this should be enough to test a URL with no authentication.

At this point, we could test out the code by writing a quick little Fix Script to grab a registry record and pass it to this function, but it might be just as easy to go ahead and make the UI Action and test it out on the form. To create the UI Action, pull up the Webhook Registry form, select Configure -> UI Actions from the hamburger menu, and then click on the New button. I named mine Test URL, checked the Form button checkbox, and put the following script in the Condition:

current.url > ''

That condition keeps the Test URL button from appearing on the form when there is no URL to test.

Test URL UI Action

For the Script, I just call the testURL function in our new Script Include and then notify the user of the results. There is not much to the code, but here it is:

var wru = new WebhookRegistryUtils();
var result = wru.testURL(current);
action.setRedirectURL(current);
if (result.status == '200') {
	gs.addInfoMessage('URL "' + current.url + '" was tested successfully.');
} else {
	gs.addErrorMessage('URL test failed: ' + JSON.stringify(result, null, '<br/> '));
}

There is undoubtedly a much nicer way to format that failure message, but this will get the job done for now. We can always clean that up later. Right now, though, let’s push the button and see what happens! On second thought, we can’t really do that until we insert a record into the registry, so let’s take care of that first.

New Webhook Registry entry

Once all of the parts are in place, this entry should POST a payload to the specified URL every time there is a change to an Incident assigned to the owner. For the URL, I went out to check out Webhook.site, was assigned that endpoint URL, and threw that in there. That should be a valid webhook URL, so our first test should be successful. Once we prove that, we can mangle up the URL and hit it again to see what a failure looks like. But first things first: let’s see if we can make it work with a real URL.

Successful URL test

So far, so good! Real quick, before we try out the bad one, let’s pop over to Webhook.site and see if there is any evidence that we ran this test. Oh, look — there is!

Test POST results found on Webhook.site

That’s actually a pretty cool little tool. That’s going to be quite handy for this particular adventure. I like it. Now, let’s try out one that we know will fail.

Unsuccessful URL test

Well, that seems to work as well. So now we have our first table and related form, the beginnings of our Script Include and a handy UI Action. From here, we can work on the My Webhooks Portal Widget, or we can start building out the process that sends out the POSTs to the qualifying destinations. That’s nothing that we need to decide today, though; we can figure that all out next time.

Fun with Webhooks, Part II

“Well done is better than well said.”
Benjamin Franklin

Now that we have the idea fairly well formulated, it’s time to get to work. The first thing that we need to do is create our Scoped Application, which you can do by navigating to My Company Applications and then clicking on the Create new button in the upper right-hand corner. I still use the classic UI rather than the Studio, so you may approach this task a little differently, but the end result should be the same.

New Simple Webhook Application

At this point, I have just created the most basic of empty shells. There are no modules or tables or roles or any other artifacts … it’s basically just the bare application record itself. Creating the app does create a scope, and also puts you into that scope, so as long as you don’t change that, everything that you do from this point on will be built in that scope. The first thing that we will want to build is the database table for our registry, which we can do by navigating to System Definition -> Tables and clicking on the New button.

New Webhook Registry table

Once you give it a Label it will generate the appropriate name, and the next thing that you are going to want to do is to uncheck the Create module option, as we only want the table at this point and we don’t need all of those other artifacts generated. We want to give our registrations an ID using the platform’s built-in tools, so we will want to create a field labeled Number of type String and put the following in the Default value:

javascript:getNextObjNumberPadded();

We have a number of other columns to define, but let’s save this for now and go set up our auto-numbering before we forget. To do that, navigate to System Definition -> Number Maintenance and click on the New button. Select our new table from the list and set the Prefix to WHR, for Webhook Registry.

Setting up auto-numbering for our new Webhook Registry table

With that out of the way, we can go back to our table and add the rest of the fields. Here are the ones that I added to get things started:

  • Active – True/False, with default of True
  • Table – Table Name, with default of incident
  • Owner – Reference to the sys_user table
  • Type – String, with four initial choices (Single Item, Caller / Requester, Assignment Group, and Assignee)
  • URL – URL
  • Authentication – String, with two initial choices (None and Basic)
  • User Name – String
  • Password – Password
  • Document ID – Document ID
  • Person – Reference to the sys_user table
  • Group – Reference to the sys_user_group table

We may add a few more later on as we add new features, but this will get us by for now. Even though my intent is to focus solely on the Incident table for now, I went ahead and added the Table column and just defaulted it to Incident. That was mainly so that I could use it as the Dependent field on the Document ID field. If you have never worked with Document ID fields before, you can just think of them as a special form of Reference field where you don’t have to specify the table that is being referenced. Instead, you point to another field on the record that contains the name of the table. This way, one of your records can reference one table and another record in that same table can reference a different table, all based on the table specified in the Dependent field. To set up the Dependent field, use the Dependent Field tab on the Dictionary Entry for the Document ID field.

Document ID Dependent field specification
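
As a quick illustration of how a Document ID field gets used at runtime, the snippet below resolves the referenced record by reading the table name out of the dependent Table field on the same registry record. This is my own example, not code from the application, and the column names assume the fields defined above.

// Hypothetical illustration; whrGR is assumed to be a Webhook Registry GlideRecord.
// The Document ID value is just a sys_id, so the dependent Table field tells us
// which table that sys_id actually points to.
var targetGR = new GlideRecord(whrGR.getValue('table'));
if (targetGR.get(whrGR.getValue('document_id'))) {
	gs.info('Registry ' + whrGR.getDisplayValue('number') + ' is watching ' + targetGR.getDisplayValue());
}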

After entering all of the fields, I used the Show form link at the bottom of the page to bring up the form for the new table. Using the Configure -> Form Layout option, I rearranged the fields on the screen to my liking, and then removed the Table field completely, as that will default to Incident for now and we don’t want anyone changing it to anything else at this point.

After getting everything laid out just right, the next thing that I did was to add a few UI Policies to control certain fields based on the values in other fields. For example, I hide the User Name and Password fields unless you set the Authentication to Basic. Similarly, the Document ID, Group, and Person fields are all shown or hidden depending on the value of the Type field. Basically, this just hides fields that are not needed and reveals them when they are.

The other thing that I wanted to do on this form was to give the user the ability to test their URL. That seemed like a good use for a new UI Action, but rather than putting all of the code for that process in the Action itself, I decided to start a Script Include to house these types of utilitarian functions. Putting all of that together seems like a good exercise for our next installment in this series, so this looks like a good stopping point for now.

Fun with Webhooks

“Good ideas are common – what’s uncommon are people who’ll work hard enough to bring them about.”
Ashleigh Brilliant

There is quite a bit of Webhook stuff in various IntegrationHub spokes, but it all seems to be oriented towards consuming incoming events from different external event publishers. I want to actually be the publisher, and send out information based on some preferences selected by the consumer. That may be hidden somewhere in the Now Platform already, but I can’t seem to find it, so I have decided that I would try to develop a Scoped Application to do just that. This may very well be recreating something that already exists in the platform today, but it sounds like a fun exercise, so I am going to give it the old college try.

As always, I will attempt to start out with the most basic of offerings, and then incrementally expand to add more and better features. My approach is to treat this feature as somewhat analogous to a Watch List, in that you sign up to follow certain events, but instead of sending a notification to a User when the event occurs, the result will be that the information is posted to a specified URL. This can apply to any number of things, but to start off, I am going to focus on some very specific changes to one particular table (Incident), and then later expand from there.

To make this work, there will need to be some kind of Webhook Registry where a consumer would sign up to receive these posts. When registering your webhook, you would enter the URL to which you want the data posted along with the specifics of what type or types of events you would like to have included. I’m thinking about linking them directly to an owner, and having some kind of My Webhooks Portal Page where you could manage your existing registrations and add new ones. When adding a new one, you should be able to enter and test your URL, and for our first iteration, that may be the only choice that you get. Later on, we will want to add the ability to choose what you want to follow, which specific updates should trigger a new post, and even what you would like to have included in the payload. But we will also want to start out as simple as possible, so the initial registry may turn out to be quite barren as far as input fields go.

Once registered, there will need to be some process to actually send out the posts as requested in the registration. This could be a Business Rule on the source table, or maybe something created in the Flow Designer. Either way, the process should scan the registry for any condition matches and then send out a post for each match. Each post and response should be logged in some kind of Webhook Activity Log, and any bad HTTP Response Codes should be reported to Event Management. A robust service would attempt to repost any failures up to a certain limit before giving up completely, but all of that can be delegated to some Alert Management Rule at some later time. Again, we will want to start out simple, so our initial focus will just be on making that initial post attempt. Everything else can be pushed off until later on in the process.

Those would seem to be the two major functions: registering the webhook and sending out the posts. We may want some other things at some point, such as the ability to review the logs or to manually repost or to clone an existing registration, but for now, just those two things should get the ball rolling. We may also want to set up a sample receiver for testing purposes, but in practice, the receivers would be other products and outside the scope of this development exercise. There is actually an existing service out on the Internet called Webhook.site that might turn out to be just what I need in order to do a little testing. We should check that out when we get to that point.

For our parts list, then, I can see the need for the following artifacts:

  • A table to hold the webhook registrations,
  • A my_webhooks portal widget to list all webhooks owned by the user,
  • A webhook portal widget for editing a single webhook registration,
  • A Business Rule or Flow to send out the posts,
  • A log table to record the posts and response, and possibly
  • A Script Include to contain some common functions.

Of course, before we create any of that, we will have to create the Scoped Application itself, so that should be where we start next time when we initiate the actual construction phase of this effort.

Flow Variables vs. Scratchpad Variables

“An essential part of being a humble programmer is realizing that whenever there’s a problem with the code you’ve written, it’s always your fault.”
Jeff Atwood

Now that you can get yourself a copy of the new Quebec version of ServiceNow, you can start playing around with all of the new features that will be coming out with this release. One of those new features is Flow Variables, which look like they might serve to replace the Scratchpad that I had built earlier for pretty much the exact same purpose. Since I have used that custom feature in a variety of different places after I first put it together, the trick will be to see if I can just do a one-for-one swap without having to re-engineer things using an entirely different philosophy.

I like building parts, but I also like to get rid of my home-made parts whenever I find that there is a ServiceNow component that can serve the same purpose. If I can figure out how to make a Flow Variable do all of the things that I have been able to do with a Scratchpad Variable, then I will be the first in line to throw my home-made scratchpad into the dust bin. But before we get too far ahead of ourselves, let’s try a few things and see what we can actually do.

According to the ServiceNow Developer Blog, you can set the value of a Flow Variable inside of a loop and use it to control the number of times that you go through the loop.

Subtracting 1 from an Integer Flow Variable

Let’s create a sample flow, define our first Integer Flow Variable and then set up a loop that we can escape based on the value of the variable. Open up the Flow Designer, click on the New button, select Flow from the drop-down, and then name the new Flow Sample Flow. From the vertical ellipses menu in the upper right-hand corner, we can select Flow Variables and define a new variable called index of type Integer. For our first step, we can set the value of our new variable to 0, and then we can add a loop that will run until the variable value is greater than 5. After that, all we will have to add is another step inside the loop to increment our index variable.

Simple variable test flow

All of that went relatively smoothly for me until I tried to add the +1 to the Set Flow Variables step inside of the loop. Once I dragged the index pill into the value box, it would not accept any more input. I tried putting the +1 in first, and then dragging the pill in front of it second, but that just replaced the +1. I could never get the pill+1 value shown in the sample image from the Developer Blog entry. Obviously, I was doing something wrong, but I did not know exactly what. Nothing that I tried seemed to work, though, so eventually I decided to give up on that and just script it.

Scripted variable increment

It was a fairly simple script that just added one to the existing value.

return fd_data.flow_var.index + 1;

Unfortunately, that resulted in a text concatenation rather than an increment of the value. 0 + 1 became 01 and 01 + 1 became 011 and so forth. Even though I had defined my variable as an Integer, it was getting treated as if it were text. No problem, though … I will just convert it to an Integer before I attempt to increment the value.

return parseInt(fd_data.flow_var.index) + 1;

That’s better. Now 0 + 1 becomes 1 and 1 + 1 becomes 2 just the way that I thought that it should. But … things are still not working the way that they should. In my test runs, the new value was not always present when I reentered the loop. 0 + 1 became 1, but then the next time through, the variable value was still 0. That’s not right! I kept erroring out by looping through the code more than the maximum 10,000 times until I finally changed my bail-out condition to be any value greater than zero. Even then, it ended up looping 456 times before it dropped out.

Completed test run results

Clearly, something is not quite right here. It is possible that I just got a hold of an early release that still has a few kinks left to be worked out, but if you’re a true believer in The First Rule of Programming, then you know that I’ve just done something wrong somewhere along the line and need to figure out what that is. Obviously, you shouldn’t have to run through the loop over 400 times just to bump the count above zero.

Well, back to the drawing board, as they say. Maybe I will have to resort to reading the instructions. Nah … I’ll just keep trying stuff until I get something to work. In the meantime, it looks like I won’t be flushing the old home-grown scratchpad down the drain just yet. At least not until I figure out how to make this thing work. Too bad, though. I had such high hopes when I saw that this was coming out.

Note: Eventually, I was able to figure out what I was doing wrong.

Fun with Outbound REST Events, Part X

“No one has a problem with the first mile of a journey. Even an infant could do fine for a while. But it isn’t the start that matters. It’s the finish line.”
Julien Smith

After our last installment in this series, our Events now spawn Incidents that are pretty much just what we would like to see. The only remaining challenge at this point is to create a meaningful Description field value. Although we have set things up to produce this Description in a Script Include function, I should point out right here at the outset that everything that we are about to do in our script could also be accomplished in the Flow Designer itself. In fact, it probably should be done using the Flow Designer if we are to fully embrace the whole no-code future towards which we all seem to be herded. I’m still an old coder at heart, though, so it seems easier to me to scratch out another quick function than it does to build out all of those action steps using input forms. Still, it would probably be a worthwhile exercise to replace this script with a subflow one day; today is just not that day. Today we code!

Although we are passing the Alert to our function as an argument, much of the data we need is actually in the Event that spawned the Alert, so the first thing that we are going to want to do is go out and get that guy. That’s pretty basic GlideRecord stuff.

// get initial Event
var eventGR = new GlideRecord('em_event');
eventGR.addQuery('alert', alertGR);
eventGR.orderBy('sys_created_on');
eventGR.query();
eventGR.next();

Since it is possible that there could be more than one Event associated with our Alert, we include an orderBy directive to ensure that we get the very first Event out of the bunch. Once we have our Event in hand, we will have access to the additional_info JSON string, which we will want to convert to a Javascript object so that we can reference all of the various component parts.

// get additional info from Event
var additionalInfo = {};
try {
	additionalInfo = JSON.parse(eventGR.getValue('additional_info'));
} catch (e) {
	gs.info('Unable to parse additional_info from Event ' + eventGR.number);
}

We also have access to the Event resource, which in our case is a User’s user_name. We can use that to get the sys_user record for that user, much in the same way that we retrieved the Event record.

// get affected User record
var userGR = new GlideRecord('sys_user');
userGR.get('user_name', alertGR.getValue('resource'));

This assumes, of course, that the only place where we are using our address validation capability is on the User Profile page. If we ever expand its use to other places (say, on the Building or Location form), then we would need some way to know whether the resource was a User, a Location, a Building, or some other entity with an address to validate. Based on that information, we might be retrieving a Building record or a Location record instead of a User record. For now, though, we can safely assume that the resource is a User.

Now that we have gathered up all of the data that we need, we can start building out our Description. To begin, let’s start out with something that will be universal to all of our Incidents, regardless of the problem being reported.

// format description
var alertDesc = alertGR.getDisplayValue('description');
var section = '\n========================================\n';
var subsection = '\n----------------------------------------\n';
var desc = '';	// start with an empty description and build it up below
desc += additionalInfo.user.name;
desc += ' attempted to update the address on the user profile for user ';
desc += userGR.getDisplayValue('name');
desc += ', but was unable to verify the address using the US Address Validation service due to the following error:\n\n';
desc += alertDesc;
desc += '\n\nIncident Details:';
desc += section;

Beyond this point, we are going to want to be a little more specific based on what actually happened to trigger the Event. We can do that by introducing some conditional code based on the known values found in the Alert’s description field.

if (alertDesc.startsWith('The response code')) {
	// bad response code language will go here
} else if (alertDesc.startsWith('The response object')) {
	// bad response object language will go here
} else if (alertDesc.startsWith('The response content')) {
	// bad response content language will go here
} else {
	// we should never get here, but just in case ...
}

Since most of the Events that we have triggered up to this point have been of the bad response code variety, let’s do those first.

if (!additionalInfo.response.code) {
	desc += 'No response was received from the service, which could be an indication that the service is unavailable or unreachable. Check the status of the external service as well as the status of your connection to the Internet.';
} else if (additionalInfo.response.code == 401) {
	desc += 'A Response Code of 401 indicates an authentication error of some kind. Verify that your account credentials are correct and that your account is in good standing with the service provider.';
} else {
	desc += 'The service returned a Response Code of ';
	desc +=  additionalInfo.response.code;
	desc += '. Additional information on the meaning of this response may be found in the Response Body. Also, you can check with the service provider for further clarification on the appropriate handling of this response.';
	desc += '\n\nDetailed information on the ';
	desc += additionalInfo.response.code;
	desc += ' Response Code can be found on the web at https://httpstatuses.com/';
	desc += additionalInfo.response.code;
}

This gives us specialized language for no response code at all, and a response code of 401. Everything else is handled in a more generalized section that covers any other bad response code. As more knowledge of the potential response codes becomes available through experience with the service, more specialized language can be added that can be more specific to other known response codes.
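
For example, if experience later showed that the service returns a 429 whenever a rate limit is exceeded, another branch could be slotted in ahead of the generic fallback, something along these lines. The 429 case is purely illustrative; it is not a response that we have actually seen from this service.

else if (additionalInfo.response.code == 429) {
	// Illustrative only: specialized language for a rate-limiting response.
	desc += 'A Response Code of 429 indicates that the request was rejected because the rate limit for the service has been exceeded. Wait a few minutes and resubmit the address, and check your usage plan with the service provider if the problem persists.';
}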

Now let’s take a look at what we can do for bad response objects.

desc += 'The service returned a valid Response Code and a parsable response, but the response did not contain certain expected elements necessary to determine the validity of the address. Review the response received and check with the service provider to see if anything has changed with the API specifications.';

That one is about as simple as you can get; everyone gets the same language. For the bad response content issues, things are a little bit more sophisticated. Everyone still gets the same language, but there is a possibility for an exception with this group, so we include code to handle that as well.

desc += 'The service returned a valid Response Code, but the response was either empty or ill-formatted. Review the response received and check with the service provider to see if the service is experiencing problems, or if anything has changed with the API specifications.';
if (additionalInfo.exception) {
	desc += '\n\nException Details:';
	desc += subsection;
	desc += '   Exception: ';
	desc += additionalInfo.exception;
	desc += '\n   Stack Trace:\n';
	desc += additionalInfo.stackTrace;
}

Once we complete all of the conditional logic, we wrap things up with some more universal code that applies to everyone. This just serves to include the user’s input and the service’s response at the end of the body of the Description field for reference.

desc += '\n\nAddress Details:';
desc += subsection;
desc += '   Street: ';
desc += additionalInfo.input.street;
desc += '\n   City: ';
desc += additionalInfo.input.city;
desc += '\n   State: ';
desc += additionalInfo.input.state;
desc += '\n   Zip Code: ';
desc += additionalInfo.input.zip;
desc += '\n\nResponse Details:';
desc += subsection;
desc += '   Response Code: ';
desc += additionalInfo.response.code;
desc += '\n   Response Body: ';
desc += additionalInfo.response.content;
if (additionalInfo.response.object) {
	desc += '\n   Response Object: ';
	desc += JSON.stringify(additionalInfo.response.object, null, '\t');
}

There is still more helpful information that we could add, such as links to the service provider’s documentation, or in the case of the 401 error, the names of the system properties that contain the credentials, but this is good enough for a sample. Let’s just save what we have and then trigger another Event and see what comes out the other side.

Incident with updated description from the new Script Include function

Well, that’s much, much better than the description that the original Create Incident flow was producing. It’s not perfect, but I think it does provide the person receiving the Incident enough details about both what happened and what might be done about it that they can get to work on the ticket right away without a whole lot of research. Obviously, it can be fine-tuned over time, but this is a good foundation upon which to build for this particular use case.

That pretty much wraps up all that I had hoped to accomplish with this series. It took us 10 installments to get here, but much of that was due to the fact that we had to build out our own address validation infrastructure before we could use it to demonstrate applying Event Management tools and techniques to internal ServiceNow features and functions. For those of you who like to play along at home, I have bundled what I hope are all of the relevant parts and pieces into an Update Set that you are welcome to pull down and import into your own environment.

Fun with Outbound REST Events, Part IX

“What we hope ever to do with ease, we must first learn to do with diligence.”
Samuel Johnson

Last time, we were able to have our Alert produce an Incident, but it wasn’t exactly the Incident that we wanted. Today, we are going to fix that. Since we don’t want to alter the out-of-the-box Create Incident subflow that we are currently using to create our Incidents, we will want to make a copy of the subflow so that we can customize it for our own purposes. To do that, pull up the Create Incident subflow in the Flow Designer, click on the vertical ellipses in the upper right-hand corner, and select Copy subflow to create a new copy of the subflow.

Copy the Create Incident subflow

A pop-up dialog box will appear where you can enter the name of your new subflow, which we will call Create Address Issue Incident.

Enter the name of the new subflow

After entering the name, click on the Copy button to create your new subflow from the original. This should open up your new subflow for editing.

Your new Create Address Issue Incident subflow

Now that we have our own copy, we can make whatever modifications that we would like to make without disturbing the original. All of the changes that we will want to make are in the Create Task step, so let’s open that up and see what we can do to produce Incidents that include the detail that we would like to provide to the technician working the ticket. Let’s get rid of the Description value entirely, as that’s not the description that we want. That very same text is repeated in the Additional Comments field, anyway. The rest of the values that are there seem to be OK for now, but let’s add a few more using the +Add Field Value button at the bottom of the field list.

Let’s set the State to Assigned, the Assignment Group to ITSM Engineering, the Category to Software, and the Subcategory to Internal Application. Some of that may not be exactly right, but this is just an example of the kinds of things that you can do. For the new Description value, which is going to be conditional depending on the nature of the issue that triggered this Incident, let’s use an inline script. That can be done by clicking on the little f(x) button to the right of the field value.

Create Task step expanded

At this point, we don’t necessarily need to build the entire script, but we will want to stub it out enough to keep things functional for testing. Since the script might get a little complex, I like to push all of the logic out to a Script Include and then limit the code in the subflow to just a call to a function in the Script Include. That keeps the clutter out of the subflow itself, and also allows us to refine the output of the process by just editing the Script Include without having to publish a new version of the subflow. We already have a Script Include devoted to the address verification process, so let’s just add a simple function to that existing artifact so that we have something that we can call in the subflow.

formatIncidentDescription: function(alertGR) {
	return 'Test description for ' + alertGR.number;
},

There isn’t much to this at this point, but there is enough here to verify that we are receiving the Alert, which we will want as a reference when we start building out the actual description that we want. Getting back to our subflow, the script to invoke this new function will look like this:

var avu = new AddressValidationUtils();
return avu.formatIncidentDescription(fd_data.subflow_inputs.ah_alertgr);

Figuring out that fd_data.subflow_inputs.ah_alertgr was the correct syntax for referencing the Alert was not all that intuitive. I have worked with the Flow Designer long enough now to know about the fd_data object, but I couldn’t find much in the way of documentation on identifying the names of the various properties of that object. Fortunately, I did come across some documentation on the type-ahead feature, which finally led me to the information for which I had been searching. Typing a single dot after the fd_data brings up a nice pick list of choices, and another one after selecting the choice does the same for that object as well.

fd_data properties pick list

With our script in place for the Description value, all that is left is to Save and Publish the new subflow and we are done with the Flow Designer. To use our new subflow, we will need to go back into our Alert Management Rule, open up the Action section, and replace the Create Incident subflow with our modified copy.

Modifying our rule to use the newly created subflow

At this point, we should be good to test again and see what kind of Incident gets generated now. Just remember to select a different person so that our Event is not assigned to an existing Alert that has already been processed. We want to be sure to create a brand new Alert to activate our modified rule. Once we force a new Event, we can pull it up and take a look at it to see what values have been set in the Event record.

Newly generated Event record

From the Event, we can navigate to the Alert, and from the Alert we can then navigate to the Incident. Let’s check out the Incident.

Incident generated from our new subflow

This is an improvement over our initial effort, as the ticket has now been properly categorized and routed to an Assignment Group for resolution. We still need a much better Description, but the presence of the Alert ID in the current Description value verifies that we are indeed passing the Alert record to our stubbed-out function, which we can now use to produce a more detailed and informative description value. Scripting that out for all of the various possibilities will be a bit of an effort, though, so let’s just make that the focus of our next installment.