Collaboration Store, Part LXVIII

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”
Brian Kernighan

Last time, we finished up with my rudimentary testing of the latest version of this project. I can still do a lot more testing on my own, but what I really need is for some person or group who is not me to give it all a go. In order for that to be an option, I need to create new Update Sets for the current version and post them out here so that other kind souls can download them, see if they can make it work, and find out where all of the problems lie. I did not get much feedback the last time that I tried this, but today is a new day, so maybe there is somebody out there now who wouldn’t mind helping a guy out.

This is by no means a final version of this effort. There are still a number of things that I would like to do that have not been attempted as yet, and there are probably more that I have not yet even considered. But all of the major functions are there now, and I just did quite a bit of major refactoring, so now is a good time to roll out a new version and let folks take it out for a spin. Outside feedback is always helpful, and is always appreciated.

Before you install the Scoped Application Update Set, you need to install the latest version of the snh-form-field tag, which you can find here. Or better yet, you can do what I did and go out and grab the latest SNH Data Table Widgets, which includes everything that you need to support snh-form-fields. Either way, you will need to take care of that before you install these app artifacts in the following order:

When I installed the app on my new San Diego PDI, I got a handful of Preview errors about some missing Flow Designer components, but I just accepted all updates and went ahead and did the Commit, and everything seemed to be fine. It may just be that the app was built on Rome and the installation was done on San Diego, and there are some differences there, but I would be interested in hearing if anyone else had any similar issues with the install.

Once you have everything installed, the next step is to go through the set-up process. The first thing that you will want to do is to create a Host instance. Once the Host has been established, the software can be installed on other instances and those instances can be set up as Client instances by identifying the new Host instance during the set-up process. Instructions for the Set-up process, the Application Publishing process, and the Application Installation process can be found here.

The best test will involve three or more instances, and the more the merrier. You can test the set-up process with a single instance, but until you have at least two instances involved, you can’t really test much of the purpose of the app, which is to share applications between instances. Three or more is obviously better, as that is the only way to test an application being shared by one Client and making its way to another Client via the Host. But any level of testing is useful, so please feel free to pull it all down, install it, and try what you can under any circumstances. All feedback from any experience is always welcome in the Comments. Thanks in advance for your assistance. Hopefully, we will get a little feedback this time and we can take a look at it next time out.

Collaboration Store, Part XXXIX

“It’s not that I’m so smart, it’s just that I stay with problems longer.”
Albert Einstein

Last time, we built the Flow that will call our Script Include function to update all of the instances in the community that might be missing any artifacts. Now we need to get started on the actual function that will check in with each instance, gather its inventory of artifacts, compare it to that of the Host, and send over any missing items. As with most things, there are a number of ways to go about this, but my plan is to go through the instances first, and then work through the applications for each instance as each instance is being processed. Similarly, my intent is to work through the application versions as each application is being processed. To begin, we will need to fetch all of the instances found on the Host and stuff them into an array.

var instance = '';
var token = '';
var instanceList = [];
var instanceGR = new GlideRecord('x_11556_col_store_member_organization');
instanceGR.addQuery('active', true);
instanceGR.query();
while (instanceGR.next()) {
	instanceList.push(instanceGR.getDisplayValue('name'));
	if (instanceGR.getUniqueValue() == instanceId) {
		instance = instanceGR.getDisplayValue('name');
		token = instanceGR.getDisplayValue('token');
	}
}

The other thing that this code accomplishes is to gather up the name and credentials for the target instance. The sys_id of the instance is passed to the function from the Flow, but to make the REST API calls, we need the actual name of the instance and the token. We could read that record directly before we started everything, but since we were spinning through the instance records anyway, I decided to just check each one and grab the data when we happened to be processing that particular instance. Of course, once we do that we need to check to make sure that we actually came across an instance with that sys_id before we go any further.

if (instance > '' && token > '') {
	this.fetchInstanceList(instance, token, instanceList);
} else {
	gs.error('InstanceSyncUtils.syncInstance - Requested instance not on file: ' + instanceId);
}

Assuming that we did come across the instance being synced, we now need to contact that instance and gather up all of the instances present on that system. For that, we can call on our old friend sn_ws.RESTMessageV2. But before we do that, we can use the REST API Explorer to come up with an end point URL that will get us the results that we need. We want to gather up all of the active instances from the Client instance, but we will only need a couple of the fields for our purpose here. We can specify those in the sysparm_fields parameter. I also like to alter the sysparm_limit parameter from the default of 1 to the alternative of 10, just to get more than one result in the output to help verify the query.

Using the REST API Explorer to generate an end point URL

Once we have entered all of the appropriate parameters, we can hit the Send button, which will produce the URL and also display some sample results in the Response Body section. After stripping off the server portion and removing the limitation parameter, we are left with the following value to use as our end point:

/api/now/table/x_11556_col_store_member_organization?sysparm_query=active%3Dtrue&sysparm_fields=name%2Csys_id

With our end point URL in hand, we can now build the request object.

fetchInstanceList: function(instance, token, instanceList) {
	var request = new sn_ws.RESTMessageV2();
	request.setHttpMethod('get');
	request.setBasicAuth(this.WORKER_ROOT + instance, token);
	request.setRequestHeader("Accept", "application/json");
	request.setEndpoint('https://' + instance + '.service-now.com/api/now/table/x_11556_col_store_member_organization?sysparm_query=active%3Dtrue&sysparm_fields=name%2Csys_id');
	...
}

Once the request object has been built, we can execute the request and check the results.

var response = request.execute();
if (response.haveError()) {
	gs.error('InstanceSyncUtils.syncInstance - Error returned from attempt to fetch instance list: ' + response.getErrorCode() + ' - ' + response.getErrorMessage());
} else if (response.getStatusCode() == '200') {
	var jsonString = response.getBody();
	var jsonObject = {};
	try {
		jsonObject = JSON.parse(jsonString);
	} catch (e) {
		gs.error('InstanceSyncUtils.syncInstance - Unparsable JSON string returned from attempt to fetch instance list: ' + jsonString);
	}
	if (Array.isArray(jsonObject.result)) {
		...
	} else {
		gs.error('InstanceSyncUtils.syncInstance - Invalid response body returned from attempt to fetch instance list: ' + response.getStatusCode());
	}
} else {
	gs.error('InstanceSyncUtils.syncInstance - Invalid HTTP response code returned from attempt to fetch instance list: ' + response.getStatusCode());
}

Finally, if all has gone well, we can loop through the Host’s list of instances and then for every instance on the Host list, see if we can find it on the Client instance list. If we do not find it, then we need to push it over. Either way, once that has been checked and corrected if necessary, the next thing to do will be to check all of the applications present for that instance from the Host list.

for (var i=0; i<instanceList.length; i++) {
	var thisInstance = instanceList[i];
	var remoteSysId = '';
	for (var j=0; j<jsonObject.result.length && remoteSysId == ''; j++) {
		if (jsonObject.result[j].name == thisInstance) {
			remoteSysId = jsonObject.result[j].sys_id;
		}
	}
	if (remoteSysId == '') {
		remoteSysId = this.sendInstance(instance, token, thisInstance);
	}
	this.syncApplications(instance, token, thisInstance, remoteSysId);
}

We already have code to send over an instance record. In fact, we have already cloned that code and have a second version for another purpose. Rather than clone that code a third time, it’s long past time for all of that to be consolidated into a single function that can serve any and all purposes. That seems like a bit of work, though, so let’s stop here and save it for our next time out.
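Just to make the idea concrete, the consolidated version would presumably accept the destination and credentials as arguments instead of assuming the Host, something along the lines of this purely hypothetical sketch (the function name and signature are my own invention, not anything that exists in the app yet):

sendInstanceRecord: function(targetInstance, token, payload) {
	// Generic sender that both the Host-bound and Client-bound paths could share
	var request = new sn_ws.RESTMessageV2();
	request.setHttpMethod('post');
	request.setBasicAuth(this.WORKER_ROOT + targetInstance, token);
	request.setRequestHeader('Content-Type', 'application/json');
	request.setRequestHeader('Accept', 'application/json');
	request.setEndpoint('https://' + targetInstance + '.service-now.com/api/now/table/x_11556_col_store_member_organization');
	request.setRequestBody(JSON.stringify(payload, null, '\t'));
	return request.execute();
},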

Collaboration Store, Part XXXVIII

“You don’t have to be good to start … you just have to start to be good!”
Joe Sabah

Last time out we really did not accomplish anything of any consequence, but today let’s see if we can actually make some progress on something of real value. We need to create a process that will run every so often and check to make sure that all of the instances in the community have all of the latest content. We can use the Flow Designer to create and schedule a flow that will run daily on the Host instance and compare the artifacts present on each Client instance with the artifacts present on the Host, and then send over any missing items that never made it over originally for whatever reason. Doing it this way will eliminate the need to track and record errors when they happen, since we will just compare the Client inventory with that of the Host and correct any discrepancies. We don’t really need to know what failed or why.

We could make all of the REST API calls to the Client instances using the Integration Hub, but since that is a collection of optional products, not every instance will have all of those components installed and we would not want to make that a prerequisite for the product. To make this work on any ServiceNow configuration, we will want to roll our own calls via Javascript, and we will just use the Flow Designer to schedule a simple flow that loops through all of the Client instance records and then calls a Script Action that will do all of the heavy lifting. Before we build the Action though, we will want to build a Script Include function that can be called in the Action. Since this will be a significant script, we should probably create a new Script Include specific to this purpose rather than adding a number of new functions to any of our existing Script Includes. We can call our new Script Include InstanceSyncUtils and create a simple function called syncInstance with a single argument of the sys_id of the record for the instance to be synced. For now, we can just stub out the function with a simple logging statement and then circle back later to fill in the details. Right now, we just want to have something to call in our Flow Designer Action.

New Script Include function to be called in our new Action
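The screenshot above isn’t reproduced here, but the initial stub really is just a new Script Include with a single logging statement. A minimal sketch of what that might look like (the log message text is my own placeholder, not necessarily what appears in the image):

var InstanceSyncUtils = Class.create();
InstanceSyncUtils.prototype = {
	initialize: function() {
	},

	// Stub for now: just log the requested instance; the real sync logic comes later
	syncInstance: function(instanceId) {
		gs.info('InstanceSyncUtils.syncInstance: instance sync requested for ' + instanceId);
	},

	type: 'InstanceSyncUtils'
};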

With that out of the way, we can now fire up the Flow Designer and navigate to New -> Action. We will call our new Action Sync Instance and configure a single Input called Instance ID.

New Sync Instance Action with a single Input

For the Script step, we will just pass the Input to our new Script Include function:

(function execute(inputs, outputs) {
	var isu = new InstanceSyncUtils();
	isu.syncInstance(inputs.instance_id);
})(inputs, outputs);

There is currently nothing returned by the function, so there is no need to define any Outputs. At this point, all we need to do is to Save and Publish the Action before turning our attentions to the actual Flow itself.

To build a Flow, go back to the Home Page of the Flow Designer and select New -> Flow. On the resulting pop-up Flow Properties panel, enter all of the details for the Flow and set the Run As value to System User.

New Flow Properties

Since we want this Flow to run on its own every day, we can set the Trigger to Daily, and the Time to 12:30, which should run it in the middle of the lunch hour in the Host time zone, when most instances should be available.

Flow Trigger configuration

Since we only want this Flow to run on the Host instance, the first thing that we need to do is to create a Flow Variable that indicates whether or not this instance is the Host. To set the value of the variable, we add a Set Flow Variables step that uses the following script to determine if this instance is the Host based on System Properties.

return gs.getProperty('instance_name') == gs.getProperty('x_11556_col_store.host_instance');

The next step then will be a conditional that checks the value of the new Flow Variable.

Making sure that this is the Host instance

Failing this test will terminate the Flow, but if this is the Host instance, then the next thing that we need to do is gather up all of the Client instance records and then loop through them and call our new Action for each one in turn.

Find all records where Active is true and Host is false

Inside the following For Each Item loop, we can then invoke our new Action, passing in the sys_id of the current record.

Calling our new Action inside the For Each Item loop

That completes the work on the Flow, and all we need to do now is to Save and Activate the Flow. At this point, the Flow will kick off every day at 12:30 local time, but it won’t really do much except put a few entries in the System Log. The real work will be done in the Script Include, and since that’s a major effort in and of itself, we’ll leave that for our next exciting episode.

Collaboration Store, Part XXXII

“There are two ways to write error-free programs; only the third works.”
Alan J. Perlis

Now that we have all of the Javascript functions needed to push a new application version out to all of the Client instances, we need to create an Action that will call the root function and then create a Subflow that will utilize the new Action. Since we already created a similar Action for the process that shares new Client instances with existing Client instances, the easiest thing to do will be to bring that guy up in the Flow Designer and make a copy. To make a copy, pull up the Action that you want to copy, and use the ellipses menu to select Copy from the drop-down menu.

Making a copy of the Publish New Instance Action

Once we have our copy, we need to make the necessary changes to adapt it to our purpose. The publishNewVersion function that we just created takes three arguments (newVersion, targetInstance, and attachmentId), so we will need to modify the Action Inputs to bring in those values. The new Action Inputs now look like this:

Modified Action Inputs

Similarly, we will need to modify the Input Variables of the Script Step to pass along these values:

Modified Script Input Variables

The last thing that we need to do is modify the script itself. The existing script in the cloned Action was pretty simple, as it was just a call to one of our Script Include functions:

(function execute(inputs, outputs) {
	var csu = new CollaborationStoreUtils();
	csu.publishNewInstance(inputs.new_instance, inputs.target_instance);
})(inputs, outputs);

We also want to call a function, and it is in the very same Script Include, so our modification is just a matter of changing the name of the function along with the arguments passed.

(function execute(inputs, outputs) {
	var csu = new CollaborationStoreUtils();
	csu.publishNewVersion(inputs.new_version, inputs.target_instance, inputs.attachment_id);
})(inputs, outputs);

That’s it for the changes, so now all we have to do is Save and Publish the new Action and we are ready to utilize it in our new Subflow. Creating the Subflow will be another Copy operation, this time from our existing New_Collaboration_Store_Instance Subflow. We’ll call our copy Distribute_New_Application_Version, and once again, we will need to make some modifications to adapt it to our new purpose. The original Subflow had a single input, the New Instance. Instead of a new instance, we will be distributing a new application version, so we can replace the New Instance input with a New Version input that will contain the sys_id of the new version record. We will also need the sys_id of the Attachment record, so we will add an Attachment ID input as well.

Modified Inputs for the new Distribute_New_Application_Version Subflow

The target instances for this process will be the same as those for the original Subflow: all instances in the community except for the Host instance and the instance from which the data was provided (which could also be the Host in this case, since Host instances can publish applications just like any other Client instance). To filter out the data providing instance, we will need to add a new step to use the passed version record sys_id to fetch the GlideRecord that will lead us to the instance that produced the application.

New step to fetch the version record

This gives us the information that we need to complete the filter on the query of the instance table so that we can gather up all of the instances except for the Host and the instance that produced the application.

Modified query filter using the data available in the fetched version record
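Since the screenshots only show the condition builder, it may help to see the same filter written out in script form. This is just an illustrative equivalent of what the Look Up Records step is doing, using field names that appear elsewhere in this series, where versionGR stands in for the version record fetched in the previous step; it is not code that exists in the Subflow itself:

var targetGR = new GlideRecord('x_11556_col_store_member_organization');
targetGR.addQuery('host', false);
targetGR.addQuery('instance', '!=', versionGR.getDisplayValue('member_application.provider.instance'));
targetGR.query();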

Now all we need to do is to replace the original custom Action inside of the loop with the new Action that we just created.

New Action added to push application artifacts to the target instance

This completes the changes needed to the cloned Subflow, so again, all we need to do is to Save and Publish and we are all ready to go. Now all we need to do is come up with a way to launch this Subflow whenever a new application version is sent over to the Host instance. We’ll take a look at that next time out, and hopefully release a new Update Set as well so that those of you who have been kind enough to participate in the testing can give this all a trial run.

Collaboration Store, Part XXXI

“Someone’s sitting in the shade today because someone planted a tree a long time ago.”
Warren Buffett

While we wait for additional test results to come in from the corrected Update Set for our latest iteration, it would be a good time to take a look at what we will need to do to complete the last remaining step in the application publishing process. What we have completed so far pushes all of the new application version artifacts to the Host instance. Now we need to build out the process for the Host instance to send out all of those artifacts to all of the other Client instances. Fortunately, we already have in hand most of the code to do that; we just need to clone a few things and make a number of modifications to fit the new purpose.

We have already built a Subflow to push new Client instance information out to all of the existing Client instances during the new instance registration process. Pushing out a new version of an application to those same instances is essentially the same process, so we should be able to clone that guy, and the associated Action, to create our new process. We have also created functions to push over the application record, the version record, and the attached Update Set XML file, and we should be able to copy over those guys as well to complete the process. In fact, we should probably start with the JavaScript functions, as we will need to call the root function in the Action, which will need to be created before it can be referenced in the Subflow.

The functions as written are hard-coded to send everything over to the Host instance. We will want to do basically the same thing, but this time we will be sending things out to various Client instances. Ideally, I should just refactor the code to have the destination instance and credentials passed in so that one function would work in both circumstances, but at this point it was less complicated to just make copies of each function and hack them up for their new purpose. I don’t really like having two copies of essentially the same thing, though, so one day I am going to have to go back and rework the entire mess.

But for now, this is what I came up with when I copied what was once called processPhase5 to create the new publishNewVersion function:

publishNewVersion: function(newVersion, targetInstance, attachmentId) {
	var targetGR = new GlideRecord('x_11556_col_store_member_organization');
	if (targetGR.get('instance', targetInstance)) {
		var token = targetGR.getDisplayValue('token');
		var versionGR = new GlideRecord('x_11556_col_store_member_application_version');
		if (versionGR.get(newVersion)) {
			var canContinue = true;
			var targetAppId = '';
			var mbrAppGR = versionGR.member_application.getRefRecord();
			var request  = new sn_ws.RESTMessageV2();
			request.setHttpMethod('get');
			request.setBasicAuth(this.WORKER_ROOT + targetInstance, token);
			request.setRequestHeader("Accept", "application/json");
			request.setEndpoint('https://' + targetInstance + '.service-now.com/api/now/table/x_11556_col_store_member_application?sysparm_fields=sys_id&sysparm_query=provider.instance%3D' + mbrAppGR.getDisplayValue('provider.instance') + '%5Ename%3D' + encodeURIComponent(mbrAppGR.getDisplayValue('name')));
			var response = request.execute();
			if (response.haveError()) {
				gs.error('CollaborationStoreUtils.publishNewVersion: Error returned from Target instance ' + targetInstance + ': ' + response.getErrorCode() + ' - ' + response.getErrorMessage());
				canContinue = false;
			} else if (response.getStatusCode() == '200') {
				var jsonString = response.getBody();
				var jsonObject = {};
				try {
					jsonObject = JSON.parse(jsonString);
				} catch (e) {
					gs.error('CollaborationStoreUtils.publishNewVersion: Unparsable JSON string returned from Target instance ' + targetInstance + ': ' + jsonString);
					canContinue = false;
				}
				if (canContinue) {
					var payload = {};
					payload.name = mbrAppGR.getDisplayValue('name');
					payload.description = mbrAppGR.getDisplayValue('description');
					payload.current_version = mbrAppGR.getDisplayValue('current_version');
					payload.active = 'true';
					request  = new sn_ws.RESTMessageV2();
					request.setBasicAuth(this.WORKER_ROOT + targetInstance, token);
					request.setRequestHeader("Accept", "application/json");
					if (jsonObject.result && jsonObject.result.length > 0) {
						targetAppId = jsonObject.result[0].sys_id;
						request.setHttpMethod('put');
						request.setEndpoint('https://' + targetInstance + '.service-now.com/api/now/table/x_11556_col_store_member_application/' + targetAppId);
					} else {
						request.setHttpMethod('post');
						request.setEndpoint('https://' + targetInstance + '.service-now.com/api/now/table/x_11556_col_store_member_application');
						payload.provider = mbrAppGR.getDisplayValue('provider.instance');
					}
					request.setRequestBody(JSON.stringify(payload, null, '\t'));
					response = request.execute();
					if (response.haveError()) {
						gs.error('CollaborationStoreUtils.publishNewVersion: Error returned from Target instance ' + targetInstance + ': ' + response.getErrorCode() + ' - ' + response.getErrorMessage());
						canContinue = false;
					} else if (response.getStatusCode() != 200 && response.getStatusCode() != 201) {
						gs.error('CollaborationStoreUtils.publishNewVersion: Invalid HTTP Response Code returned from Target instance ' + targetInstance + ': ' + response.getStatusCode());
						canContinue = false;
					} else {
						jsonString = response.getBody();
						jsonObject = {};
						try {
							jsonObject = JSON.parse(jsonString);
						} catch (e) {
							gs.error('CollaborationStoreUtils.publishNewVersion: Unparsable JSON string returned from Target instance ' + targetInstance + ': ' + jsonString);
							canContinue = false;
						}
						if (canContinue) {
							targetAppId = jsonObject.result.sys_id;
						}
					}
				}
			} else {
				gs.error('CollaborationStoreUtils.publishNewVersion: Invalid HTTP Response Code returned from Target instance ' + targetInstance + ': ' + response.getStatusCode());
				canContinue = false;
			}
			if (canContinue) {
				this.publishVersionRecord(targetInstance, token, versionGR, targetAppId, attachmentId);
			}
		} else {
			gs.error('CollaborationStoreUtils.publishNewVersion: Version record not found: ' + newVersion);
		}
	} else {
		gs.error('CollaborationStoreUtils.publishNewVersion: Target instance record not found: ' + targetInstance);
	}
},

Unlike the original application publication process, we are not bouncing back and forth between the client side and the server side, so with the successful completion of one artifact, we can simply move on to pushing over the next artifact. To create the next step in the process, the publishVersionRecord function, I started out with a copy of the original processPhase6 function. Here is the end result:

publishVersionRecord: function(targetInstance, token, versionGR, targetAppId, attachmentId) {
	var canContinue = true;
	var payload = {};
	payload.member_application = targetAppId;
	payload.version = versionGR.getDisplayValue('version');
	var request  = new sn_ws.RESTMessageV2();
	request.setBasicAuth(this.WORKER_ROOT + targetInstance, token);
	request.setRequestHeader("Accept", "application/json");
	request.setHttpMethod('post');
	request.setEndpoint('https://' + targetInstance + '.service-now.com/api/now/table/x_11556_col_store_member_application_version');
	request.setRequestBody(JSON.stringify(payload, null, '\t'));
	var response = request.execute();
	if (response.haveError()) {
		gs.error('CollaborationStoreUtils.publishVersionRecord: Error returned from Target instance ' + targetInstance + ': ' + response.getErrorCode() + ' - ' + response.getErrorMessage());
		canContinue = false;
	} else if (response.getStatusCode() != 201) {
		gs.error('CollaborationStoreUtils.publishVersionRecord: Invalid HTTP Response Code returned from Target instance ' + targetInstance + ': ' + response.getStatusCode());
		canContinue = false;
	} else {
		var jsonString = response.getBody();
		var jsonObject = {};
		try {
			jsonObject = JSON.parse(jsonString);
		} catch (e) {
			gs.error('CollaborationStoreUtils.publishVersionRecord: Unparsable JSON string returned from Target instance ' + targetInstance + ': ' + jsonString);
			canContinue = false;
		}
		if (canContinue) {
			var targetVerId = jsonObject.result.sys_id;
			this.publishVersionAttachment(targetInstance, token, targetVerId, attachmentId);
		}
	}
},

The last step in the process, then, is to send over the Update Set XML file attachment. For that one, I started out with the processPhase7 function and ended up with this:

publishVersionAttachment: function(targetInstance, token, targetVerId, attachmentId) {
	var gsa = new GlideSysAttachment();
	var sysAttGR = new GlideRecord('sys_attachment');
	if (sysAttGR.get(attachmentId)) {
		var url = 'https://';
		url += targetInstance;
		url += '.service-now.com/api/now/attachment/file?table_name=x_11556_col_store_member_application_version&table_sys_id=';
		url += targetVerId;
		url += '&file_name=';
		url += sysAttGR.getDisplayValue('file_name');
		var request  = new sn_ws.RESTMessageV2();
		request.setBasicAuth(this.WORKER_ROOT + targetInstance, token);
		request.setRequestHeader('Content-Type', sysAttGR.getDisplayValue('content_type'));
		request.setRequestHeader('Accept', 'application/json');
		request.setHttpMethod('post');
		request.setEndpoint(url);
		request.setRequestBody(gsa.getContent(sysAttGR));
		var response = request.execute();
		if (response.haveError()) {
			gs.error('CollaborationStoreUtils.publishVersionAttachment: Error returned from Target instance ' + targetInstance + ': ' + response.getErrorCode() + ' - ' + response.getErrorMessage());
		} else if (response.getStatusCode() != 201) {
			gs.error('CollaborationStoreUtils.publishVersionAttachment: Invalid HTTP Response Code returned from Target instance ' + targetInstance + ': ' + response.getStatusCode());
		}
	} else {
		gs.error('CollaborationStoreUtils.publishVersionAttachment: Invalid attachment record sys_id: ' + attachmentId);
	}
},

So that takes care of the functions. Now we have to create an Action in the Flow Designer that will call the function, and then produce a new Subflow that will leverage that action. That sounds like a good subject for our next installment.

Special Note to Testers

If you want to test the set-up process, you can do that with a single instance. You just have to select the Host option during the set-up process. If you want to test the Client set-up process, you will need at least two instances, one to serve as the Host, which you will need to reference when you set up the Client (you cannot be a Client of your own Host). But if you really want to test out the full registration process, including the Subflow that informs all existing instances of the new instance, you will need at least three participating instances: 1) the Host instance, 2) the new Client instance, and 3) an existing Client instance that will be notified of the new member of the community.

The same holds true for the application publishing process. You can test a portion of the publishing process by publishing an app on a single Host instance, but if you want to test out all of the parts and pieces, you will want to publish the app from a Client instance and make sure that it makes its way all the way back to the Host. Once we complete the application distribution process, full testing will require at least three instances to verify that the Host distributes the new version of the application to other Clients.

Flow Variables vs. Scratchpad Variables, Revisited

“Every great mistake has a halfway moment, a split second when it can be recalled and perhaps remedied.”
Pearl S. Buck

A while back, I took a quick look at the new Flow Variables feature of the Flow Designer in the hopes that the new feature might eliminate the need for my earlier effort to develop a Flow Designer Scratchpad. Unfortunately, I wasn’t able to figure out how to make it work in the way that it was described in the Blog Entry that I was attempting to follow. I tried a few things on my own, but that did not work for me, either. Since I was not smart enough to figure out how to make it do what I was trying to do, I just gave up.

Since that time, though, I have upgraded my Personal Developer Instance to Rome, so I thought that it might be a good time to go back and see if anything had changed since the last time that I tried it. As it turns out, not much has changed, but during my second look I realized that if I had just kept reading the original Blog Entry that I had been following, I would have come across the way to actually pull this off. The secret is in this little icon that I never bothered to click on when I was looking at things the first time.

Flow variable transform function icon

I still cannot simply add the +1 after the Flow Variable pill the way that blog entry demonstrates, but by clicking on that little icon, I can bring up the transform options and select Math -> Add from the Suggested Transforms menu. That brings up a brief dialog box where I can enter 1 as the number that I would like to add to the current value.

Transform pop-up dialog

Unlike my earlier attempt at manually scripting the math, this actually worked when I ran a test execution.

Sample Flow test results

This is great news. Now all that I have to do is hunt down all of the places where I have used Scratchpad variables and see if I can replace them with Flow Variables. One of the Actions that I created using the Scratchpad was the Array Iterator, which was built back when the For Each Flow Logic only supported GlideRecords. Now that For Each can be used on Arrays as well, there is no longer a need for the custom built Array Iterator. I also used the Scratchpad for a Counter, something else that could now be replaced with a Flow Variable (now that I know how to make it work!). I should really hunt down the entire lot and replace them all.

That sounds like a lot of work, though, so I will probably take that on as I crack things open for other maintenance. Still, it is good to finally be able to use the features of the product rather than these homemade concoctions. They definitely served their purpose at the time, but now that the platform has caught up with my needs, it’s definitely time to move on.

Collaboration Store, Part XIII

“First, solve the problem. Then, write the code.”
John Johnson

Today we need to build out the Subflow that we referenced in the Script Include function that we built last time out. To create a new Subflow, open up the Flow Designer, click on the New button to bring up the selection list, and select Subflow from the list.

Creating a new Subflow

When the initial form pops up, all you really need to enter is the name of the Subflow, which we already referred to in our Script Include function as New_Collaboration_Store_Instance.

New Subflow properties form

Once you submit the initial properties form, the new Subflow will appear in the list of Subflows, and from there you can bring it up in full edit mode. In edit mode, we can add the one Input to the Subflow, the name of the new instance.

Subflow Inputs and Outputs

Since we will be launching this Subflow to run on its own in the background, there is no need for any Outputs.

Once we configure the Inputs and Outputs, we can move on to the steps of the Subflow. Our first step will be to gather up all of the records in the table of instances except for two: the Host instance, which has already been updated, and the new instance, which already knows about itself.

Database query step

We select the Look Up Records Action, select the Member Organization table, and then define two Conditions to get the records that we want: 1) the host value is false, and 2) the instance is not the instance that we brought in as Input. It really doesn’t matter for our purposes what sequence the records come in, but I went ahead and sorted the records by instance, just so that we will always work through them in the same order.

Our next step, then, is to loop through the records. We do that with a For Each Item Flow Control step, setting the items to the records obtained in the first step.

Looping through the retrieved records

Now we have the basic structure of the Subflow; we just need to perform the tasks necessary to notify each existing instance of the new instance, and notify the new instance of each existing instance, within the For Each Item loop. This could all be done with additional Subflow steps, but I took the easy way out here and just built another function in our Script Include to handle the REST API calls to the instances. In fact, I built two functions, one to make the call, and another to call that function twice: once to tell the existing instance about the new instance, and then again to inform the new instance of the existing instance. To run that script, I had to create a custom Action, which is just a simple Script Action that calls the function, passing in the names of the two instances (existing, from the current record, and new, from the Subflow input). Once I built the custom Action, I was able to select it from the list and then configure it.

Custom Script Action step
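The image of that step isn’t included here, but based on the copy of this Action that appears in the Part XXXII excerpt above, the script portion is presumably just a thin wrapper around the Script Include call, along these lines (the input names and the function name are taken from that later excerpt, so treat this as a sketch rather than a faithful reproduction):

(function execute(inputs, outputs) {
	var csu = new CollaborationStoreUtils();
	csu.publishNewInstance(inputs.new_instance, inputs.target_instance);
})(inputs, outputs);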

That completes the Subflow, but once again, we have referenced a function in our Script Include that does not exist, so we will have to get into that in our next installment.

Collaboration Store, Part XII

“The slogan ‘press on’ has solved and always will solve the problems of the human race.”
Calvin Coolidge

In the previous installment in this series, we created a new Scripted REST API Resource and referenced another nonexistent function in our Script Include. Now it is time to create that function, which will perform some of the work required to register a new client instance and then hand off the remaining tasks to an asynchronous Subflow so that the function can return the results without waiting for all of the other instances to be notified of the new client instance. The only thing to be done in the function will be to insert the new client instance into the database and kick off the Subflow. But before we do that, we need to first check to make sure that the Client has not already registered with the Host.

var result = {body: {error: {}, status: 'failure'}};

var mbrGR = new GlideRecord('x_11556_col_store_member_organization');
if (mbrGR.get('instance', data.instance)) {
	result.status = 400;
	result.body.error.message = 'Duplicate registration error';
	result.body.error.detail = 'This instance has already been registered with this store.';
} else {
	...

As we did before, we construct our result object with the expectation of failure, since there are more error conditions than the one successful path through the logic. In the case of an instance that has already been registered, we respond with a 400 Bad Request HTTP Response Code and accompanying error details. If the instance is not already in the database, then we attempt to insert it.

mbrGR.initialize();
mbrGR.name = data.name;
mbrGR.instance = data.instance;
mbrGR.email = data.email;
mbrGR.description = data.description;
mbrGR.token = data.sys_id;
mbrGR.active = true;
mbrGR.host = false;
mbrGR.accepted = new GlideDateTime();
if (mbrGR.insert()) {
	result.status = 202;
	delete result.body.error;
	result.body.info = {};
	result.body.info.message = 'Registration complete';
	result.body.info.detail = 'This instance has been successfully registered with this store.';
	result.body.status = 'success';
	...

If the new record was inserted successfully, then we respond with a 202 Accepted HTTP Response Code, indicating that the registration was accepted, but that the complete registration process (notifying all of the other instances) is not yet finished. At this point, all we have left to do is to initiate the Subflow to handle the rest of the process. We haven’t built the Subflow just yet, but for the purposes of this exercise, we can just assume that it is out there and build it out later. There are a couple of different ways to launch an asynchronous Subflow in the background: the original way, and the newer, preferred method. Both methods use the Scripted Flow API. Here is the way that we used to do this:

sn_fd.FlowAPI.startSubflow('New_Collaboration_Store_Instance', {new_instance: data.instance});

… and here is the way that ServiceNow would like you to do it now:

sn_fd.FlowAPI.getRunner()
	.subflow('New_Collaboration_Store_Instance')
	.inBackground()
	.withInputs({new_instance: data.instance})
	.run();

Right now, both methods will work, and I’m still using the older, simpler way, but one day I’m going to need to switch over.

There should never be a problem inserting the new record, but just in case, we make that a conditional, and if for some reason it fails, we respond with a 500 Internal Server Error HTTP Response Code.

result.status = 500;
result.body.error.message = 'Internal server error';
result.body.error.detail = 'There was a problem processing this registration request.';

That’s it for all of the little parts and pieces. Here is the entire function, all put together.

processRegistrationRequest: function(data) {
	var result = {body: {error: {}, status: 'failure'}};

	var mbrGR = new GlideRecord('x_11556_col_store_member_organization');
	if (mbrGR.get('instance', data.instance)) {
		result.status = 400;
		result.body.error.message = 'Duplicate registration error';
		result.body.error.detail = 'This instance has already been registered with this store.';
	} else {
		mbrGR.initialize();
		mbrGR.name = data.name;
		mbrGR.instance = data.instance;
		mbrGR.email = data.email;
		mbrGR.description = data.description;
		mbrGR.token = data.sys_id;
		mbrGR.active = true;
		mbrGR.host = false;
		mbrGR.accepted = new GlideDateTime();
		if (mbrGR.insert()) {
			result.status = 202;
			delete result.body.error;
			result.body.info = {};
			result.body.info.message = 'Registration complete';
			result.body.info.detail = 'This instance has been successfully registered with this store.';
			result.body.status = 'success';
			sn_fd.FlowAPI.startSubflow('New_Collaboration_Store_Instance', {new_instance: data.instance});
		} else {
			result.status = 500;
			result.body.error.message = 'Internal server error';
			result.body.error.detail = 'There was a problem processing this registration request.';
		}
	}

	return result;
}

Now we have completed the nonexistent function that was referenced in the REST API Resource, but we have also now referenced a new nonexistent Subflow that we will need to build out before things are complete. That sounds like a good subject for our next installment.

Fun with Webhooks, Part VII

“The question isn’t who is going to let me; it’s who is going to stop me.”
Ayn Rand

Now that we have proven that the essential elements of our Subflow all work as intended, it’s time to finish out the remainder of the flow’s activities, which include logging the HTTP POST and Response details as well as reporting any undesired results to Event Management. Let’s start with logging the activity.

The first thing that we will need in order to record our Webhook POSTs is a table in which to store the data. As we did with our original Webhook Registry table, we can navigate to System Definition -> Tables and click on the New button to bring up the Table definition screen.

New Webhook Log table

And once again we will give it a Label and let the system generate the associated Name. We will also want to uncheck the Create module checkbox again to prevent the generation of a number of artifacts for which we have no use. Once the table has been defined, we can start adding fields, and the first field that we will want to add is a Reference to the Webhook Registry table. Every log record will be linked to the registry for which the activity was POSTed, so we will want to establish that relationship with a Reference field that we can label Registry.

The other Reference field that we will want is a link back to the original Incident that is the subject of the POST. Since we set things up in a way that would allow us to support tables other than Incident, we will want to do this with a Document ID field rather than a direct reference to the Incident table. This time, when we configure the Dependent Field, we can dot walk through the registry reference to get to the table name column in that related table. This will save us from having to have a column on the log table for the name of the table that holds the record associated with the Document ID, and it will ensure that all of the Document IDs related to each Registry will only come from the table associated with that Registry.

Selecting the Registry record’s Table column as the Dependent Field

The rest of the fields in the log table are just what we sent over, and what we received in response:

  • Payload – The data that we will be POSTing
  • URL – The URL to which we will be POSTing our Payload
  • Status – The HTTP Response Code returned by the target server
  • Body – The body of the message returned by the target server
  • Error – The error flag
  • Error Code – The error code
  • Error Message – The error message
  • Parse Error – Any error that occurred while parsing the body of the response

After defining all of the fields on the table, I brought up the table’s form and arranged all of the fields on the screen in a manner that I thought was most appropriate.

Webhook Log form layout

Those of you who are paying close attention will also have noticed that I added the JSON View Dictionary Attribute to both the Payload and Body fields, just to make reading the JSON content a little easier.

Now that we have a table, we can start putting records into it. We will do this in our Subflow, right after we POST the payload. This is just a simple, out-of-the-box Create Record action that we can configure using data pills from various other steps.

Logging the Webhook POST and Response

In addition to capturing everything related to each POST, the other thing that we wanted to do was to capture any issues that might come up during this process. We are already aware of two possible issues, one being a bad response and the other being a good response, but with an unparsable response body. After we do the POST and log the result, we can throw in a few more conditionals to pick those up, and then add a Log Event Action to the flow for each.

Complete Subflow with Event logging for error conditions

Whenever we log an Event, we will want to capture as much information as we can about what went wrong. In this case, however, we have already logged everything about the transaction to the Webhook Log table, so really all we will need to provide is a way to find that record. Putting the sys_id in the additional_info field should do the trick. Here is how I populated all of the data for the Event:

Event Log data

That should complete the Subflow, at least for now. We may end up adding some other features in the future, but for now, this accomplishes everything that we set out to do. We still need to do a lot more testing to verify that all of these various branches in the tree work out as we are hoping, but the building part should be done now, at least for this portion.

As far as the remaining development goes, we still have to build out the My Webhooks portal widget and we also need to go back into the Script Include and add support for Basic Authentication. We also need to add code, and possibly additional fields in the registry record, for any other authentication protocols that we would like to support. So once again we find ourselves at a crossroads: we can either jump into the Service Portal world and start working on our widget, or we can turn our attentions to authentication and finish things up in that area. There is no need to make any decision on that today, though. We’ll figure all of that out when we meet again.

Fun with Webhooks, Part VI

“The most exciting phrase to hear in science, the one that heralds discoveries, is not ‘Eureka!’ but ‘Now that’s funny …'”
Isaac Asimov

Even though we are not quite finished building out our new Subflow just yet, we have enough elements in place that we can try out what we have so far, and see if it actually works. The simplest way to do that would be to create a Webhook Registry that watches a single Incident, and then go make a qualifying change to that Incident and see what happens. Let’s go find an open Incident and then we can set up our registry.

In my instance, INC0000002 seemed like a likely candidate, so I pulled up the list of Webhook Registrations, clicked on the New button, and created the following new Webhook Registry entry to track changes to that specific Incident.

New Webhook Registry entry

For the URL, I used that same webhook.site address that I had used before so that I could see the result on the other end of the transaction. Once I had saved my new Webhook Registry entry, all I needed to do in order to test it out was to make a change to the Incident. For my first test, I simply changed the Assignment Group from Network to Hardware, which also then removed the Assigned to field value, since the original person specified in that field was not a member of the new Assignment Group. After making the change, I went over to webhook.site to see if anything showed up as a result of that modification. Sure enough, a new POST had just been received by that site shortly after I saved the record.

New Webhook posting after updating an Incident

Although that proves that the process does indeed work as intended, I did notice one thing that is going to require a little bit of rework. Take a look at the second line of the message property:

Assigned To changed from Howard Johnson to  on INC0000002.

The Assigned to field changed from having a value to not having a value. The code that we have in our buildPayload function doesn’t really handle that circumstance in a way that produces the appropriate text to explain that in plain English. Here is the logic that creates that portion of the message:

if (current.assigned_to != previous.assigned_to) {
	payload.assigned_to = current.assigned_to;
	if (previous.assigned_to) {
		payload.message += separator + 'Assigned To changed from ' + previous.assigned_to + ' to ' + current.assigned_to + ' on ' + payload.id + '.';
	} else {
		payload.message += separator + 'Assigned To set to ' + current.assigned_to + ' on ' + payload.id + '.';
	}
	separator = '\n\n';
}

There is a check to see if the previous value was blank, but there is no check to see if the new value is blank. It wouldn’t take much to throw that in there, though, so let’s go ahead and do that now while we are thinking about it.

if (current.assigned_to != previous.assigned_to) {
	payload.assigned_to = current.assigned_to;
	if (previous.assigned_to) {
		if (current.assigned_to) {
			payload.message += separator + 'Assigned To changed from ' + previous.assigned_to + ' to ' + current.assigned_to + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'Assigned To ' + previous.assigned_to + ' removed.';
		}
	} else {
		payload.message += separator + 'Assigned To set to ' + current.assigned_to + ' on ' + payload.id + '.';
	}
	separator = '\n\n';
}

That’s better. While we are at it, we might as well make the same change to the code for the Assignment group (sketched below), as that is pretty much a line-for-line copy of the code for the Assigned to field. Everything else looks to be OK. The State is mandatory, so it will never have a new blank value, and the Comments and Work Notes fields are based on new entries and not on changes to existing entries, so if either of those is ever blank, we won’t be doing anything with them at all.
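For completeness, here is roughly what that same tweak would look like for the Assignment group block, assuming its property names simply mirror the Assigned to block (this is a sketch, not the actual code from the function):

if (current.assignment_group != previous.assignment_group) {
	payload.assignment_group = current.assignment_group;
	if (previous.assignment_group) {
		if (current.assignment_group) {
			payload.message += separator + 'Assignment Group changed from ' + previous.assignment_group + ' to ' + current.assignment_group + ' on ' + payload.id + '.';
		} else {
			payload.message += separator + 'Assignment Group ' + previous.assignment_group + ' removed.';
		}
	} else {
		payload.message += separator + 'Assignment Group set to ' + current.assignment_group + ' on ' + payload.id + '.';
	}
	separator = '\n\n';
}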

Still, other than that one little annoying language problem, the test was actually quite successful. Our Business Rule fired, the change was detected, our new Subflow was launched, our payload was generated, and the payload was successfully POSTed to the appropriate URL. All in all, I would have to say that that was a pretty good first test of the process. Obviously, we need to do a lot more testing, but this was a great start.

At this point, all I really wanted to do was to make sure that everything was working so far, which I think we accomplished. Before we do too much more testing, though, I think we need to go back and finish up the rest of the Subflow. We still have to add in the step that logs the activity, and we also have to add any error handling for when things don’t go as planned. Once we build all of that out, then we can do a lot more testing to validate the entire process from end to end. It was good to take a quick break and make sure that everything was working up to this point, but now that we know that it is, we need to get back to work on finishing up our development efforts.