Collaboration Store, Part LII

“Long is the road from conception to completion.”
Molière

Last time, we finished up the Update Set Preview process and it looked like all that was left was to code out the Commit process and we would be done with the last major component of this long, drawn-out project. Unfortunately, that’s not entirely true. Before we can move on to the Commit process, we have to deal with the fact that the Preview process may have uncovered some issues with the Update Set. In the manual process, these issues are reported to the operator, and the operator is required to deal with them all before the Commit option is available. Not only do we need to address that possibility, we also have to add code to update the application and version records to reflect the version that was just installed and to link the newly installed application with the application record. So we have a little more work to do beyond just launching the Commit process before we can declare project completion.

First of all, we need to decide what to do with any Preview issues that may have been detected. Ideally, you would want to give the operator the opportunity to review these issues and make the appropriate decisions based on their knowledge of their instance and the application. However, since we are trying to make this first version as automated as possible, I have decided to have the software make arbitrary decisions about each reported problem, at least for now. In some future version, I may want to pop up a dialog and ask the operator whether they want to do their own review or trust the system to do it for them, but for now, that’s a little more sophisticated than I am ready to tackle. This may not be the best approach, but it is the simplest, and I am trying to wrap up the work on this initial version.

My plan is to add yet another client-callable function to our existing ApplicationInstaller Script Include that will hunt down all of the problems and resolve them. The problem records have a field called available_actions that contains a list of all of the actions available for the problem, so I am going to use that as a guide to Accept Remote Update if I can, or Skip Remote Update if I cannot. I also want to keep track of the number of problems found, the number of updates accepted, and the number of updates skipped so that I can report that information back to the caller. In reviewing the code behind the UI Actions that accept and skip updates, I found a call to a global component called GlidePreviewProblemAction, but when I tried to access that component in my scoped Script Include, I got a security violation error. To work around that, I had to add the following new function to our global utilities, where I could make the call without error.

fixRemoteUpdateIssue: function(remUpdGR) {
	var resolution = 'accepted';
	var ppa = new GlidePreviewProblemAction(gs.action, remUpdGR);
	// this sys_id identifies the Accept Remote Update action; accept the update if that action is available
	if (remUpdGR.available_actions.contains('43d7d01a97b00100f309124eda2975e4')) {
		ppa.ignoreProblem();
	} else {
		ppa.skipUpdate();
		resolution = 'skipped';
	}
	return resolution;
}

With that out of the way, I was able to put the rest of the code where it belonged, and just called out to the global component for the part that I was unable to do in the scoped component.

evaluatePreview: function() {
	var answer = {problems: 0, accepted: 0, skipped: 0};
	var sysId = this.getParameter('remote_update_set_id');
	if (sysId) {
		var problemId = [];
		var remUpdGR = new GlideRecord('sys_update_preview_problem');
		remUpdGR.addQuery('remote_update_set', sysId);
		remUpdGR.query();
		while (remUpdGR.next()) {
			problemId.push(remUpdGR.getUniqueValue());
			answer.problems++;
		}
		var csgu = new global.CollaborationStoreGlobalUtils();
		for (var i=0; i<problemId.length; i++) {
			remUpdGR.get(problemId[i]);
			var resolution = csgu.fixRemoteUpdateIssue(remUpdGR);
			if (resolution == 'accepted') {
				answer.accepted++;
			} else {
				answer.skipped++;
			}
		}
	}
	return JSON.stringify(answer);
}

Now we just need to make the GlideAjax call to that function from the client side before we attempt to launch the Commit process. Right now, when the Preview process is complete, a Close button appears on the progress dialog, and when you click on the Close button, our new UI Page reloads and starts all over again because the script that we lifted from the UI Action on the Update Set form was set up to reload that form. For our purposes, we do not want our own page reloaded, and in fact, we don’t even want a Close button; we just want to move on to the process of reviewing the results of the Preview. The relevant portion of the script that we stole looks like this:

dd.on("executionComplete", function(trackerObj) {
	var cancelBtn = $("sysparm_button_cancel");
	if (cancelBtn)
		cancelBtn.remove();
         
	var closeBtn = $("sysparm_button_close");
	if (closeBtn) {
		closeBtn.onclick = function() {
			dd.destroy();
		};
	}
});
     
dd.on("beforeclose", function() {
	reloadWindow(window);
});

Since we do not want to wait for operator action, we can short-cut this entire operation and just move on as soon as execution has been completed. I replaced all of the above with the following:

dd.on("executionComplete", function(trackerObj) {
	dd.destroy();
	checkPreviewResults();
});

Since the Preview process is now complete at this point, and we are now looking at the results, I decided to wrap the original message on the page with a span that had an id attribute so that I could change the message as things moved along. That line of HTML now looks like this:

<span id="status_text">Previewing Uploaded Update Set ...</span>

With that in place, I was able to update the message with the new status before I made the Ajax call to our new Script Include function.

function checkPreviewResults() {
	document.getElementById('status_text').innerHTML = 'Evaluating Preview Results ...';
	var ga = new GlideAjax('ApplicationInstaller');
	ga.addParam('sysparm_name', 'evaluatePreview');
	ga.addParam('remote_update_set_id', updateSetId);
	ga.getXMLAnswer(commitUpdateSet);
}

function commitUpdateSet(answer) {
	alert(answer);
}

I’m not ready to take on the Commit process just yet, so I stubbed out the commitUpdateSet function with a simple alert of the response from our Ajax call. That was enough to let me know that everything was working up to this point, which is what I needed to know before I attempted to move on.

Now that we have dealt with the possibility of Preview problems, we can finally take a look at what it will take to Commit the Update Set. That’s obviously a bit of work, so we’ll leave all of that for our next episode.

Collaboration Store, Part LI

“Plodding wins the race.”
Aesop

Last time, we ended with yet another unresolved fork in the road, whether to launch the Preview process from the upload.do page or to build yet another new page specific to the application installation process. At the time, it seemed as if there were equal merits to either option, but today I have decided that building a new page would be the preferable alternative. For one thing, that keeps the artifacts involved within the scope of our application (our global UI Script to repurpose the upload.do page had to be in the global scope), and it keeps the alterations to upload.do to the bare minimum.

Before we go off and build a new page, though, we will need to figure out how we are going to get there without the involvement of the operator (we want this whole process to be as automatic as possible). Digging through the page source of the original upload.do page, I found something that looks as if it might be relevant to our needs:

<input value="sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set" name="sysparm_referring_url" type="hidden"></input>

Now, the name of this element is sysparm_referring_url, which sounds an awful lot like it would be the URL from which we came; however, this is actually the URL where we end up after the Update Set XML file is uploaded, so I am thinking that if we replaced this value with a link to our own page, maybe we would end up there instead. Only one way to find out …

Those of you following along at home may recall that this value, which appears in the HTML source, actually disappeared somehow before the form was submitted, so I had to add this line of code to our script to put it back:

document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';

Assuming that we create a new UI Page for the remainder of the process and that we want to pass to it the attachment ID, we should be able to replace that line with something like this:

document.getElementsByName('sysparm_referring_url')[0].value = 'ui_page.do?sys_id=<sys_id of our new page>&sysparm_id=' + window.location.search.substring(15);

Now all we need to do is create the page, put something on it, and then add the code that we stole from the UI Action that launches the Update Set Preview. After we hacked up the upload.do page, the end result turned out looking like this:

Modified upload.do page

To keep things looking consistent, we can steal some of the HTML from that page and make our new page look something like this:

New page layout

To make that happen, we can snag most of the HTML from a quick look at the frame source and then format it and stuff it into a new UI Page called install_application:

<?xml version="1.0" encoding="utf-8" ?>
<j:jelly trim="true" xmlns:j="jelly:core" xmlns:g="glide" xmlns:g2="null">

<div>
  <nav class="navbar navbar-default" role="navigation">
    <div class="container-fluid">
      <div class="navbar-header">
        <button class="btn btn-default icon-chevron-left navbar-btn" onclick="history.back();">
          <span class="sr-only">Back</span>
        </button>
        <h1 style="display:inline-block;" class="navbar-title">Install Application</h1>
      </div>
    </div>
  </nav>
  <div class="section-content">
    <div id="output_messages" class="outputmsg_container outputmsg_hide">
      <button aria-label="Close Messages" id="close-messages-btn" class="btn btn-icon close icon-cross" onclick="GlideUI.get().clearOutputMessages(this); return false;"></button>
      <div class="outputmsg_div" aria-live="polite" role="region" data-server-messages="false"></div>
    </div>
    <div class="row">
      <div class="col-sm-12">
        <h4 style="padding: 30px;">
          &#160;
          <img src="/images/loading_anim4.gif" height="18" width="18"/>
          &#160;
          Previewing Uploaded Update Set ...
        </h4>
      </div>
    </div>
  </div>
</div>
</j:jelly>

That takes care of how the page looks. Now we need to deal with how it works. To Preview an uploaded Update Set, you need the Remote Update Set’s sys_id. We have a URL parameter that contains the sys_id of the Update Set XML file attachment, but that’s not the sys_id that we need at this point. We will have to build a process that uses the attachment sys_id to locate and return the sys_id that we will need. We can just add another function to our existing ApplicationInstaller Script Include.

getRemoteUpdateSetId: function(attachmentId) {
	var sysId = '';

	var sysAttGR = new GlideRecord('sys_attachment');
	if (sysAttGR.get(attachmentId)) {
		var versionGR = new GlideRecord(sysAttGR.getDisplayValue('table_name'));
		if (versionGR.get(sysAttGR.getDisplayValue('table_sys_id'))) {
			var updateSetGR = new GlideRecord('sys_remote_update_set');
			updateSetGR.addQuery('application_name', versionGR.getDisplayValue('member_application'));
			updateSetGR.addQuery('application_scope', versionGR.getDisplayValue('member_application.scope'));
			updateSetGR.addQuery('application_version', versionGR.getDisplayValue('version'));
			updateSetGR.addQuery('state', 'loaded');
			updateSetGR.query();
			if (updateSetGR.next()) {
				sysId = updateSetGR.getUniqueValue();
			}
		}
	}

	return sysId;
}

Basically, we use the passed attachment record sys_id to get the attachment record, use data found on the attachment record to get the version record, use data found on the version record and associated application record to get the remote update set record, and then pull the sys_id that we need from there. Those of you who have been paying close attention may notice that one of the application record fields being used to find the remote update set is scope. The scope of the application was never included in the original list of data fields for the application record, so I had to go back and add it everywhere in the system where an application record was referenced, modified, or moved between instances. That was a bit of work, and hopefully I have found all of those places.

Anyway, now we have a way to turn an attachment record sys_id into a remote update set record sys_id, so we need to add some code to our UI Page to snag the attachment record sys_id from the URL, use it to get the sys_id that we need, and then stick that value on the page somewhere so that it can be picked up by the client-side code. At the top of the HTML for the page, I added this:

<g2:evaluate jelly="true">

var ai = new ApplicationInstaller();
var attachmentId = gs.action.getGlideURI().get('sysparm_id');
var sysId = ai.getRemoteUpdateSetId(attachmentId);

</g2:evaluate>

Then in the body of the page, just under the text, I added this hidden input element:

<input type="hidden" id="remote_update_set_id" value="$[sysId]"/>

That took care of things on the server side. Now we need to build some client-side code that will run when the page is loaded. We can do that with an addLoadEvent like so:

addLoadEvent(function() {  
	onLoad();
});

Our onLoad function can then grab the value from the hidden field and pass it on to the function that we lifted from the Preview Update Set UI Action earlier (which we need to paste into the client code section of our new UI Page).

function onLoad() {
	var sysId = document.getElementById('remote_update_set_id').value;
	if (sysId) {
		previewRemoteUpdateSet(sysId);
	}
}

That’s all there is to that. The entire Client script portion of the new UI Page, including the code that we lifted from the UI Action, now looks like this:

function onLoad() {
	var sysId = document.getElementById('remote_update_set_id').value;
	if (sysId) {
		previewRemoteUpdateSet(sysId);
	}
}

addLoadEvent(function() {  
	onLoad();
});

function previewRemoteUpdateSet(sysId) {
	var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
	var MESSAGE_KEY_CLOSE_BUTTON = "Close";
	var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
	var MESSAGE_KEY_CONFIRMATION = "Confirmation";
	var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
	var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
	var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
	var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");

	dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
	dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
	dd.setPreference('sysparm_ajax_processor_function', 'preview');
	dd.setPreference('sysparm_ajax_processor_sys_id', sysId);
	dd.setPreference('sysparm_renderer_expanded_levels', '0');
	dd.setPreference('sysparm_renderer_hide_drill_down', true);
	dd.setPreference('focusTrap', true);
	dd.setPreference('sysparm_button_close', map["Close"]);
    dd.on("executionStarted", function(response) {
		var trackerId = response.responseXML.documentElement.getAttribute("answer");

		var cancelBtn = new Element("button", {
			'id': 'sysparm_button_cancel',
			'type': 'button',
			'class': 'btn btn-default',
			'style': 'margin-left: 5px; float:right;'
		}).update(map[MESSAGE_KEY_CANCEL_BUTTON]);

        cancelBtn.onclick = function() {
			var dialog = new GlideModal('glide_modal_confirm', true, 300);
			dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
			dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
			dialog.setPreference('focusTrap', true);
			dialog.setPreference('callbackParam', trackerId);
			dialog.setPreference('defaultButton', 'ok_button');
			dialog.setPreference('onPromptComplete', function(param) {
				var cancelBtn2 = $("sysparm_button_cancel");
				if (cancelBtn2)
					cancelBtn2.disable();
				var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
				ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
				ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
				ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
			});
			dialog.render();
			dialog.on("bodyrendered", function() {
				var okBtn = $("ok_button");
				if (okBtn) {
					okBtn.className += " btn-destructive";
				}
			});
        };

		var _handleCancelPreviewResponse = function(answer) {
			var cancelBtn = $("sysparm_button_cancel");
			if (cancelBtn)
				cancelBtn.remove();
		};

        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
			buttonsPanel.appendChild(cancelBtn);
	});

	dd.on("executionComplete", function(trackerObj) {
		var cancelBtn = $("sysparm_button_cancel");
		if (cancelBtn)
			cancelBtn.remove();
		
		var closeBtn = $("sysparm_button_close");
		if (closeBtn) {
			closeBtn.onclick = function() {
				dd.destroy();
			};
		}
	});
	
	dd.on("beforeclose", function() {
		reloadWindow(window);
	});
	
	dd.render();
}

Now all we need to do is pull up the old version record and push that Install button one more time, which I did.

So, there is good news and there is bad news. The good news is that it actually worked! That is to say that clicking on the Install button pulls down the Update Set XML file data, posts it back to the server via the modified upload.do page, and then goes right into previewing the newly created Update Set. That part is very cool, and something that I wasn’t sure that I was going to be able to pull off when I first started thinking about doing this. The bad news is that, once the Preview is complete, the stock code reloads the page and the whole Preview process starts all over again. That’s not good! However, that seems like a minor issue that we should be able to deal with relatively easily. All in all, then, it seems like mostly good news.

Of course, we are still not there yet. Once an Update Set has been Previewed, it still has to be Committed before the application is actually installed. Rather than continuously reloading the page then, our version of the UI Action code is going to need to launch the Commit process. We should be able to examine the Commit UI Action as we did the Preview UI Action and steal some more code to make that happen. That sounds like a little bit of work, though, so let’s save all of that for our next installment.

Collaboration Store, Part L

“Time is what keeps everything from happening at once.”
Ray Cummings

Welcome to installment #50 of this seemingly never-ending series! That’s a milestone to which we have never even come close on this site. But then, we have never taken on a project of this magnitude before, either. Still, you would think that we would have been done with this endeavor long before now. That’s the way these things go, though. When you strike out into the darkness with just a vague idea of where you want to go, you never really know where you will end up or how long it will take. There are those who would tell you, though, that it’s all about the journey, not the destination! Still, I try to stay focused on the destination. I think we are getting close.

Last time, we wrapped up the coding on our global UI Script that allowed us to repurpose the upload.do page for installing a version of an application. We never really tested it all the way through, though, so we should probably do that before we attempt to go any further. Just to back up a bit, the way that we try this thing out is to pull up a version record for an application and click on the Install button that we added a few episodes back.

Using the Install button to test out the installation process

That should launch the upload.do page, and with the added URL parameter for the attachment sys_id, that should trigger our UI Script, which should then turn that page into this:

Altered upload.do page

Meanwhile, the script should call back to the server for the Update Set XML file information, update the form on the page using that information, and then submit the form. After the form has been submitted, the natural process related to the upload.do page takes you here:

End result of hijacking the upload.do page, a Loaded Update Set from our XML file

So, it looks like it all works, which is good. Unfortunately, the application has still not been installed. From here it is a manual process to first Preview the Update Set, and then Commit it. We don’t really want that to be a manual process, though, so let’s see what we can do to make that all happen without the operator having to click on anything or take any action to move things along. To begin, we should probably take a look at how it is done manually, which should help guide us into how we might be able to do it programmatically. If you click on the Update Set in the above screen to bring up the details, you will see a form button, which is just another UI Action, called Preview Update Set.

Preview Update Set UI Action

Using the hamburger menu, we can select Configure -> UI Actions to pull up the list of UI Actions related to this form, and then select the Preview Update Set action and take a peek under the hood. It looks like all of the work is done on the client side with the following script:

function previewRemoteUpdateSet(control) {
	var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
	var MESSAGE_KEY_CLOSE_BUTTON = "Close";
	var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
	var MESSAGE_KEY_CONFIRMATION = "Confirmation";
	var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
	var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
	var sysId = typeof g_form != 'undefined' && g_form != null ? g_form.getUniqueValue() : null;
	var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
	var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");

	dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
	dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
	dd.setPreference('sysparm_ajax_processor_function', 'preview');
	dd.setPreference('sysparm_ajax_processor_sys_id', sysId);
	dd.setPreference('sysparm_renderer_expanded_levels', '0'); // collapsed root node by default
	dd.setPreference('sysparm_renderer_hide_drill_down', true);
	dd.setPreference('focusTrap', true);

	dd.setPreference('sysparm_button_close', map["Close"]);
	// response from UpdateSetPreviewAjax.previewAgain is the progress worker id
    dd.on("executionStarted", function(response) {
		var trackerId = response.responseXML.documentElement.getAttribute("answer");

		var cancelBtn = new Element("button", {
			'id': 'sysparm_button_cancel',
			'type': 'button',
			'class': 'btn btn-default',
			'style': 'margin-left: 5px; float:right;'
		}).update(map[MESSAGE_KEY_CANCEL_BUTTON]);

        cancelBtn.onclick = function() {
			var dialog = new GlideModal('glide_modal_confirm', true, 300);
			dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
			dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
			dialog.setPreference('focusTrap', true);
			dialog.setPreference('callbackParam', trackerId);
			dialog.setPreference('defaultButton', 'ok_button');
			dialog.setPreference('onPromptComplete', function(param) {
				var cancelBtn2 = $("sysparm_button_cancel");
				if (cancelBtn2)
					cancelBtn2.disable();
				var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
				ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
				ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
				ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
			});
			dialog.render();
			dialog.on("bodyrendered", function() {
				var okBtn = $("ok_button");
				if (okBtn) {
					okBtn.className += " btn-destructive";
				}
			});
        };

		var _handleCancelPreviewResponse = function(answer) {
			var cancelBtn = $("sysparm_button_cancel");
			if (cancelBtn)
				cancelBtn.remove();
		};

        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
        	buttonsPanel.appendChild(cancelBtn);
	});

	dd.on("executionComplete", function(trackerObj) {
		var cancelBtn = $("sysparm_button_cancel");
		if (cancelBtn)
			cancelBtn.remove();
		
		var closeBtn = $("sysparm_button_close");
		if (closeBtn) {
			closeBtn.onclick = function() {
				dd.destroy();
			};
		}
	});
	
	dd.on("beforeclose", function() {
		reloadWindow(window);
	});
	
	dd.render();
}

I’m not going to attempt to pretend that I understand all that is going on here. I will say, though, that it looks to me as if we could steal this entire script and launch it from a location of our own choosing without having to have the operator click on any buttons. The one line that I see that would need to be modified is the one that gets the sys_id of the Update Set.

var sysId = typeof g_form != 'undefined' && g_form != null ? g_form.getUniqueValue() : null;

I think to start with, I would just delete that line entirely and pass the sys_id in as an argument to the function. Right now, a variable called control is passed in to the function, but I don’t see where that is used anywhere, so I think that I would just change this:

function previewRemoteUpdateSet(control) {

… to this:

function previewRemoteUpdateSet(sysId) {

… and see where that might take us. Maybe that will work and maybe it won’t, but you never know until you try. Of course, not everyone is a big proponent of that “Let’s pull the lever and see what happens” approach; I was once told that the last words spoken on Earth will be something like “Gee, I wonder what this button does.” Still, it’s just my nature to try things and see how it all turns out. But first we have to figure out where we can put our stolen script.

I can see two ways to go here: 1) we can just add it to our hack of the upload.do page and keep everything all in one place, or 2) since the upload.do page has done its job at this point and we don’t want to hack up a stock component any more than is absolutely necessary, let’s create a UI Page of our own and put the rest of the process in there where we can control everything and keep it within the scope of the application. There are, as usual, pros and cons for both approaches. I don’t know if one way is any better than the other, but we don’t have to decide right this minute. Let’s save that for our next installment.

Collaboration Store, Part XLIX

“Don’t tell me the sky’s the limit when there are footprints on the moon.”
Paul Brandt

Last time, we got started on the global UI Script that will run on the upload.do page to take over the page and repurpose it for our needs. Our interest is to convert an Update Set XML file back into an actual Update Set so that we can apply the Update Set, installing a shared Scoped Application. The upload.do page will help set us on that path, but we need our script to implement just a few little modifications. We got as far as launching the GlideAjax process which will fetch the Update Set XML file details from the server side, and now we need to build the function that will process the results coming back and do something with them. The “answer” returned will be a JSON string, so we just need to turn that back into an object so that we can extract the values. We can do just that much and verify the results by popping an alert using one of the values that should be found in the resulting object, the name of the XML file.

function submitForm(answer) {
	var app = {};
	try {
		app = JSON.parse(answer);
	} catch (e) {
		alert('Error parsing JSON response from server: ' + e);
	}
	alert(app.fileName);
}

There is not much here, but we can push the old Install button on the version page, just to verify that all is well so far.

Verification of the server side code, Ajax call, and JSON parsing

Although that wasn’t much in the way of code, it did verify that the server side Script Include that we built a while back does seem to work, as well as the Ajax call that we built last time and the JSON parsing that we just added today. At this point, we have built a UI Action that sends us over to the upload.do page, taken over the page for our own purposes (hiding the original content and adding content of our own), called back to the server side for the XML file information, and demonstrated that the XML file information has indeed been transferred over to the client side. Now that we have it in hand, we have to use it to emulate a file on the local system and send that faux file back over to the server side as an element of a form post. This is where things get a little tricky.

While digging around trying to find a way to do this, I came across the DataTransfer object. This object contains a list of File objects, and you can add to the list using the add() method of the items property. These two lines of code create a new DataTransfer object and add a new file to the empty list using the data that we retrieved from the Ajax call.

var fileList = new DataTransfer();
fileList.items.add(new File([app.xml], app.fileName, {type: 'application/xml'}));

Now that we have our “file” in a file list, we can populate the files attribute of the input element using the files attribute of our DataTransfer object.

document.getElementById('attachFile').files = fileList.files;

Now we just have to submit the form and see what happens. Actually, I did that, and nothing happened. It seems that there are a couple of other form fields that also need to be valued. What seems weird to me is that, if you look at the source code for the page, those fields do start out with a value, but somewhere along the line those values were removed before the form was posted, so I had to add a couple more lines to put those values back.

document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';
document.getElementsByName('sysparm_target')[0].value = 'sys_remote_update_set';

Now we can submit the form, which is just one more line of code.

document.forms[0].submit();

All together, our new submitForm function looks like this:

function submitForm(answer) {
	var app = {};
	try {
		app = JSON.parse(answer);
	} catch (e) {
		alert('Error parsing JSON response from server: ' + e);
	}
	var fileList = new DataTransfer();
	fileList.items.add(new File([app.xml], app.fileName, {type: 'application/xml'}));
	document.getElementById('attachFile').files = fileList.files;
	document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';
	document.getElementsByName('sysparm_target')[0].value = 'sys_remote_update_set';
	document.forms[0].submit();
}

And that completes (for now) our new global UI Script. Here is the entire script, including all of the work that we did last time out.

if (window.location.pathname == '/upload.do' && window.location.search.startsWith('?attachment_id=')) {
	waitForPageLoad();
}

function waitForPageLoad() {
	if (document.getElementById('attachFile')) {
		installApplication();
	} else {
		setTimeout(waitForPageLoad, 100);
	}
}

function installApplication() {
	var originalContent = document.getElementsByClassName('section-content')[0];
	originalContent.style.visibility = 'hidden';
	var newContent = document.createElement('div');
	newContent.innerHTML = '<h4 style="padding: 30px;">&nbsp;<img src="/images/loading_anim4.gif" height="18" width="18">&nbsp;Uploading Update Set XML file ...</h4>';
	originalContent.parentNode.insertBefore(newContent, originalContent);
	var attachmentId = window.location.search.substring(15);
	var ga = new GlideAjax('x_11556_col_store.ApplicationInstaller');
	ga.addParam('sysparm_name', 'getXML');
	ga.addParam('attachment_id', attachmentId);
	ga.getXMLAnswer(submitForm);
}

function submitForm(answer) {
	var app = {};
	try {
		app = JSON.parse(answer);
	} catch (e) {
		alert('Error parsing JSON response from server: ' + e);
	}
	var fileList = new DataTransfer();
	fileList.items.add(new File([app.xml], app.fileName, {type: 'application/xml'}));
	document.getElementById('attachFile').files = fileList.files;
	document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';
	document.getElementsByName('sysparm_target')[0].value = 'sys_remote_update_set';
	document.forms[0].submit();
}

At this point, all that we have accomplished is to load the Update Set. We still have not installed anything. The Update Set still has to be Previewed and then Committed before the version is actually installed. The ultimate goal will be for the operator to be able to just click on that Install button and have everything else take care of itself, including marking the version record as Installed (and any other version records of the app as not installed). Whether or not we can do all of that without human intervention has yet to be determined, but we have at least accomplished that first step of turning the XML file back into an Update Set. Next time, we will see where we can go from here.

Collaboration Store, Part XLVIII

“Most times, the way isn’t clear, but you want to start anyway. It is in starting that other steps become clearer.”
Israelmore Ayivor

Last time, we created a process to retrieve the Update Set XML data from the server side and then built a UI Action to launch the installation process. At the time that we left off, I was vacillating back and forth between hacking up the original upload.do page and creating a customized copy of my own. Since that time, though, I have decided that I am much too lazy to try to build one of my own, so I am just going to attempt to hack up the one that already exists with as little intervention as I can muster. The one way that I know how to do that is to create a global UI Script that modifies the page on the fly without actually altering the source of the page itself. We have already used this technique with our earlier incident email hack, so at least we know that this approach is one that will work.

Unfortunately, you cannot create global UI Scripts in a Scoped Application; the script has to be in the global scope, so this component will be yet another addition to our global components Update Set. I don’t really like having all of these parts outside of the application, but that’s just the way that these things go sometimes. These global scripts run on every single page load in the system, so to be as minimally intrusive as possible, the very first thing that you want to check is whether or not you are running on a page in which this code is needed. For our purposes, we only want this code to run on the upload.do page, and only if our attachment_id parameter is present in the URL.

if (window.location.pathname == '/upload.do' && window.location.search.startsWith('?attachment_id=')) {
	alert('So far, so good ...');
}

We can test this out by going into a version record and clicking on the new Install form button.

First test of the new global UI Script

OK, that works. In fact, that also proves out the code on the UI Action that we created last time. As the alert says, so far, so good. One thing that you will notice, however, is that there is nothing on the underlying screen. This code runs as soon as it is loaded, and the rest of the page has yet to be delivered. Since our plan is to tinker with that page, we really don’t want our code to be running just yet. We will need to wait to make sure that the rest of the page is there as well before we attempt to alter it. We can accomplish that with a little recursive loop that will look for an important field such as the file to be uploaded, and until that element is present, just loop back and check again. Here is a modified version of the script that will accomplish that.

if (window.location.pathname == '/upload.do' && window.location.search.startsWith('?attachment_id=')) {
	waitForPageLoad();
}

function waitForPageLoad() {
	if (document.getElementById('attachFile')) {
		installApplication();
	} else {
		setTimeout(waitForPageLoad, 100);
	}
}

function installApplication() {
	alert('So far, so good ...');
}

If that works as intended, the alert should not pop until at least the parts of the page in which we are interested have arrived.

Second test of the new global UI Script

That’s better. Now at least the stuff that we want to play with is all present in the DOM. The first thing that we will want to do is to hide the original form and then replace it with some kind of message indicating that things are happening in the background and there is nothing for the operator to do right at the moment. Here is a little code that will find the DIV that contains the major components, hide it, and replace it with something else.

var originalContent = document.getElementsByClassName('section-content')[0];
originalContent.style.visibility = 'hidden';
var newContent = document.createElement('div');
newContent.innerHTML = '<h4 style="padding: 30px;">&nbsp;<img src="/images/loading_anim4.gif" height="18" width="18">&nbsp;Uploading Update Set XML file ...</h4>';
originalContent.parentNode.insertBefore(newContent, originalContent);

There are a couple of things to note in the above code. For one, DOM manipulation is frowned upon in the ServiceNow environment. You will get tagged for that in an Instance Scan as a bad practice, and you should really try to avoid doing things like that if at all possible. Still, sometimes you have to break the rules to get something done; there is a reason that this site is called ServiceNow Hackery and not ServiceNow By The Book. Sometimes you have to step outside of the lines in order to do what you want to do. But again, this should be a last resort and not adopted as a routine way of doing things. The other thing to note is the use of the innerHTML property. The preferred way of doing things would be to create each DOM node individually, set all of the appropriate values on each node, and then link them all up to each other before inserting them into the active DOM. That’s the way that it should be done, but I was just too lazy to go through all of that and I took the easy way out instead. But that’s another thing to which folks might take exception in certain circles.
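
For those who would rather color inside the lines, here is roughly what the node-by-node version of that snippet would look like (same end result, just more verbose; this is illustrative only, not the code that I actually used):

// node-by-node equivalent of the innerHTML shortcut above
var newContent = document.createElement('div');
var heading = document.createElement('h4');
heading.style.padding = '30px';
var spinner = document.createElement('img');
spinner.src = '/images/loading_anim4.gif';
spinner.height = 18;
spinner.width = 18;
heading.appendChild(document.createTextNode('\u00a0'));
heading.appendChild(spinner);
heading.appendChild(document.createTextNode('\u00a0Uploading Update Set XML file ...'));
newContent.appendChild(heading);
originalContent.parentNode.insertBefore(newContent, originalContent);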

To test all of this out, we can go back to our version page and click on the new Install button one more time.

Third test of the new global UI Script

With all of that basic housekeeping out of the way, we can now focus on what we are here for. The first thing that we need to do in order to accomplish our goal is to pull down the Update Set details using GlideAjax to access the Script Include that we created last time. Before we do that, though, we need to snag the attachment record sys_id from the URL parameter. With that in hand, we can then make our Ajax call.

var attachmentId = window.location.search.substring(15);
var ga = new GlideAjax('x_11556_col_store.ApplicationInstaller');
ga.addParam('sysparm_name', 'getXML');
ga.addParam('attachment_id', attachmentId);
ga.getXMLAnswer(submitForm);

Now we just need to build a submitForm function that will parse the returned JSON string to access the file name and file contents, and then somehow use that as if it were a file on the local system so that we can submit the form. That sounds like a bit of work in and of itself, and I’m still not exactly sure how I am going to pull that off, so let’s save that exercise for our next exciting installment.

Fun with Outbound REST Events

“A good programmer is someone who always looks both ways before crossing a one-way street.”
Doug Linder

A while back I mentioned that ServiceNow Event Management can be used within ServiceNow itself. I explained how all of that could work, but I never really came up with a real-world Use Case that would demonstrate the value of starting to wander down that road. I have code to generate Events in a lot of places; most of it never gets executed, but it is still there, just in case. One place where unwanted things do tend to happen, though, is when interacting with outside resources such as JDBC imports or external REST calls. Here, you are dependent on some outside database or system being up and available, and that’s not always the case, so you need to build in processes that can gracefully handle that. This is an excellent place for Event Management to step in and capture the details of the issue and log it for some kind of investigation or resolution.
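
In a nutshell, the pattern is just to wrap the external call and, whenever the response is not what we expected, toss out an Event with the details. A stripped-down sketch of that idea might look something like this (the REST Message name, the method name, and the Event name are made-up placeholders here, not the ones we will end up using, and this assumes the Outbound REST Message defines variables for the address fields in its endpoint):

// sketch only: wrap an outbound REST call and queue an Event when things go wrong
lookupAddress: function(street, city, state, zipcode) {
	var candidates = [];
	try {
		var rm = new sn_ws.RESTMessageV2('US Street Address', 'get'); // placeholder names
		rm.setStringParameterNoEscape('street', street);
		rm.setStringParameterNoEscape('city', city);
		rm.setStringParameterNoEscape('state', state);
		rm.setStringParameterNoEscape('zipcode', zipcode);
		var response = rm.execute();
		if (response.getStatusCode() == 200) {
			candidates = JSON.parse(response.getBody());
		} else {
			// pass a relevant GlideRecord instead of null if one is in hand
			gs.eventQueue('x_rest.address_lookup_failed', null, String(response.getStatusCode()), response.getBody());
		}
	} catch (e) {
		gs.eventQueue('x_rest.address_lookup_failed', null, 'exception', e.toString());
	}
	return candidates;
}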

So, I thought I should come up with something that anyone can also play around with on their own, that isn’t tied to some proprietary database or internal web service. I started searching for some kind of public REST API, and I stumbled across the Public APIs web site. There is actually a lot of cool stuff here, but I was looking for something relatively simple, and also something that would seem to have some relation to things that go on inside of ServiceNow. After browsing around a bit, I found the US Street Address API, which looked like something that I could use to validate street addresses in the User Profile. That seemed simple enough and applicable enough to serve my purpose, so that settled that.

There are quite a few parts and pieces to do everything that I want to do, so we will just take them on one at a time. Here is the initial list of the things that I think that I will need to accomplish all that I would like to do:

  • Create an Outbound REST Message using the US Street Address API as the end point,
  • Create a Script Include that will encapsulate all of the functions necessary to make the REST call, evaluate the response, log an Event (if needed), and return the results,
  • Create a Client Script on the sys_user table to call the Script Include if any component of the User’s address changes and display an error message if the address is not valid,
  • Create an Alert Management Rule to produce an Incident whenever the new Event spawns an Alert,
  • Test everything to make sure that it all works under normal circumstances, and then
  • Intentionally mangle the REST end point to produce a failure, thereby testing the creation of the Event, Alert, and Incident.

The first thing to do, then, will be to create the Outbound REST Message, but before we do that, let’s explore the web service just a little bit to understand what we are working with. To do that, there is a handy little API tester here. This will allow us to try a few things out and see what happens. First, let’s just run one of their provided test cases:

https://us-street.api.smartystreets.com/street-address?auth-id=21102174564513388&candidates=10&match=invalid&street=3901%20SW%20154th%20Ave&street2=&city=Davie&state=FL&zipcode=33331

The API is just a simple HTTP GET, and the response is a JSON object:

[
  {
    "input_index": 0,
    "candidate_index": 0,
    "delivery_line_1": "3901 SW 154th Ave",
    "last_line": "Davie FL 33331-2613",
    "delivery_point_barcode": "333312613014",
    "components": {
      "primary_number": "3901",
      "street_predirection": "SW",
      "street_name": "154th",
      "street_suffix": "Ave",
      "city_name": "Davie",
      "default_city_name": "Fort Lauderdale",
      "state_abbreviation": "FL",
      "zipcode": "33331",
      "plus4_code": "2613",
      "delivery_point": "01",
      "delivery_point_check_digit": "4"
    },
    "metadata": {
      "record_type": "S",
      "zip_type": "Standard",
      "county_fips": "12011",
      "county_name": "Broward",
      "carrier_route": "C006",
      "congressional_district": "23",
      "rdi": "Commercial",
      "elot_sequence": "0003",
      "elot_sort": "A",
      "latitude": 26.07009,
      "longitude": -80.35535,
      "precision": "Zip9",
      "time_zone": "Eastern",
      "utc_offset": -5,
      "dst": true
    },
    "analysis": {
      "dpv_match_code": "Y",
      "dpv_footnotes": "AABB",
      "dpv_cmra": "N",
      "dpv_vacant": "N",
      "active": "Y"
    }
  }
]

It looks like the response comes in the form of a JSON Array of JSON Objects, and the JSON Objects contain a number of properties, some of which are JSON Objects themselves. This will be useful information when we attempt to parse out the response in our Script Include. Now we should see what happens if we send over an invalid address, but before we do that, we should take a quick peek at the documentation to better understand what may affect the response. One input parameter in particular, match, controls what happens when you send over a bad address. There are two options:

  • strict: The API will return detailed output only if a valid match is found. Otherwise the API response will be an empty array.
  • invalid: The API will return detailed output for both valid and invalid addresses. To find out if the address is valid, check the dpv_match_code. Values of Y, S, or D indicate a valid address.

The default value in the provided tester is invalid, and that seems to be the appropriate setting for our purposes. Assuming that we will always use that mode, we will need to look for one of the following values in the dpv_match_code property to determine if our address is valid:

Y — Confirmed; entire address is present in the USPS data. To be certain the address is actually deliverable, verify that the dpv_vacant field has a value of N. You may also want to verify that the active field has a value of Y. However, the USPS is often months behind in updating this data point, so use with caution. Some users may prefer not to base any decisions on the active status of an address.
S — Confirmed by ignoring secondary info; the main address is present in the USPS data, but the submitted secondary information (apartment, suite, etc.) was not recognized.
D — Confirmed but missing secondary info; the main address is present in the USPS data, but it is missing secondary information (apartment, suite, etc.).
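
Translated into code, that validity test boils down to something like this (the function name is just for illustration; it also covers the strict-mode case where the response array comes back empty):

// returns true only if the first candidate reports a dpv_match_code of Y, S, or D
function isValidAddress(candidates) {
	if (!candidates || candidates.length == 0) {
		return false;
	}
	var analysis = candidates[0].analysis || {};
	var code = analysis.dpv_match_code || '';
	return code == 'Y' || code == 'S' || code == 'D';
}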

So, let’s give that a shot and see what happens. Let’s drop the state and zipcode from our original query and give that tester another try.

https://us-street.api.smartystreets.com/street-address?auth-id=21102174564513388&candidates=10&match=invalid&street=3901%20SW%20154th%20Ave&street2=&city=Davie

… which gives us this JSON Array in response:

[
  {
    "input_index": 0,
    "candidate_index": 0,
    "delivery_line_1": "3901 SW 154th Ave",
    "last_line": "Davie",
    "components": {
      "primary_number": "3901",
      "street_predirection": "SW",
      "street_name": "154",
      "street_suffix": "Ave",
      "city_name": "Davie"
    },
    "metadata": {
      "precision": "Unknown"
    },
    "analysis": {
      "dpv_footnotes": "A1",
      "active": "Y",
      "footnotes": "C#"
    }
  }
]

This result doesn’t even have a dpv_match_code property, which is actually kind of interesting, but that would still fail a positive test for the values Y, S, or D, so that would make it invalid, which is what we wanted to see.

OK, I think we know enough now about the way things work that we can start building out our list of components. This is probably a good place to wind things up for this episode, as we can start out next time with the construction process, beginning with our first component, the Outbound REST Message.

Static Monthly Calendar, Part III

“Mistakes are the portals of discovery.”
James Joyce

While experimenting with a number of different configurations for my Static Monthly Calendar, I ran into a number of issues that led me to make a few adjustments to the code, and eventually, to actually build a few new parts that I am hoping might come in handy in some future effort. The first problem that I ran into was when I tried to configure a content provider from a scoped app. The code that I was using to instantiate a content provider using the name was this:

var ClassFromString = this[options.content_provider];
contentProvider = new ClassFromString();

This works great for a global Script Include, but for a scoped component, you end up with this:

var ClassFromString = this['my_scope.MyScriptInclude'];

… when what you really need is this:

var ClassFromString = this['my_scope']['MyScriptInclude'];

I started to fix that by adding code to the widget, but then I decided that it was code that would probably be useful in other circumstances, so I ended up creating a separate global component to turn an object name into an instance of that object. That code turned out to look like this:

var Instantiator = Class.create();
Instantiator.prototype = {
	initialize: function() {
	},

	_root: null,

	setRoot: function(root) {
		this._root = root;
	},

	getInstance: function(name) {
		var instance;

		var scope;
		var parts = name.split('.');
		if (parts.length == 2) {
			scope = parts[0];
			name = parts[1];
		}
		var ClassFromString;
		try {
			if (scope) {
				ClassFromString = this._root[scope][name];
			} else {
				ClassFromString = this._root[name];
			}
			instance = new ClassFromString();
		} catch(e) {
			gs.error('Unable to instantiate instance named "' + name + '": ' + e);
		}

		return instance;
	},

	type: 'Instantiator'
};

This handles both global and scoped components, and also simplified the code in the widget, which turned out to be just this:

contentProvider = instantiator.getInstance(options.content_provider);
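
(The instantiator variable itself gets created earlier in the widget’s server script; I have not reproduced that part here, but it would presumably be something along the lines of the following, with the widget handing over its own execution context as the root object, just as the original this[options.content_provider] code did.)

// assumed set-up for the line above (not shown in the original widget excerpt)
var instantiator = new Instantiator();
instantiator.setRoot(this);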

Another issue that I ran into was when I tried to inject content that allowed the user to click on an event to bring up some additional details about the event in a modal pop-up. I created a function called showDetails to handle the modal pop-up, and then added an ng-click to the enclosing DIV of the HTML provided by my example content provider to call this new function. Unfortunately, the ng-click, which was added to the page with the rest of the provided content, was inserted using an ng-bind-html attribute, which simply copies in the raw HTML and doesn’t actually compile the AngularJS code. I tried various approaches to compiling the code myself, but I was never able to get any of those to work. Then I came across this, which seemed like just the thing that I needed. I thought about installing it in my instance, but then I thought that I had better check first, because it’s entirely possible that it is already in there. Sure enough, I came across the Angular Provider scBindHtmlCompile, which seemed like a version of the very same thing. So I attached it to my widget and replaced my ng-bind-html with sc-bind-html-compile.

Unfortunately, that just put the compiler into an endless loop, which ultimately resulted in filling up the Javascript console with quite a few of these error messages:

Error: [$rootScope:infdig] 10 $digest() iterations reached. Aborting!

I searched around for a solution to that problem, but nothing that I tried would get around it. I ended up going in the opposite direction and swapping out the ng-click for an onclick, which doesn’t need to be compiled. Of course, the onclick can’t see any of the functions inside the scope of the app, so I had to write a stand-alone UI Script to include with a script tag in order to have a function to call. That function is outside of the scope of the app as well, so I ended up turning the script into yet another generic part that uses the element to get you back to the widget:

function functionBroker(id, func, arg1, arg2, arg3, arg4) {
	var scope = angular.element(document.getElementById(id)).scope();
	scope.$apply(function() {
		scope[func](arg1, arg2, arg3, arg4);
	});
}

You pass it the ID of your HTML element, the name of a function that is in scope, and up to four independent arguments that you would like to pass to the function. It uses the element to locate the scope, and then uses the scope to find your desired function and passes in the arguments. After saving the new generic script, I went back into the widget and added a script tag to the widget’s HTML to pull the script onto the page.

<script type="text/javascript" src="/function_broker.jsdbx"></script>

Then I added a function to pop open a modal dialog based on a configuration object passed into the function.

$scope.showDetails = function(modalConfig) {
	spModal.open(modalConfig);
};

Now, I just needed something to pop up to see if it all worked. Not too long ago I made a simple widget to show off my rating type form field, and that looked like a good candidate to use just to see if everything was going to work out the way that it should. I pulled up the ExampleContentProvider that I created earlier, and added one more event in the middle of the month that would bring up this repurposed widget when clicked.

if (dd == 15) {
	response += '<div class="event" id="event15" onclick="functionBroker(\'event15\', \'showDetails\', {title: \'Fifteenth of the Month Celebration\', widget:\'feedback-example\', size: \'lg\'});" style="cursor: pointer;">\n';
	response += '  <div class="event-desc">\n';
	response += '    Fifteenth of the Month Celebration\n';
	response += '  </div>\n';
	response += '  <div class="event-time">\n';
	response += '    Party Time\n';
	response += '  </div>\n';
	response += '</div>\n';
}

The whole thing is kind of a Rube Goldberg operation, but it should work, so let’s light things up and give it a try.

Modal pop-up from clicking on an Event

After all of the failed attempts at making this happen, it’s nice to see the modal dialog actually appear on the screen! It still seems like there has got to be a simpler way to make this work, but until I figure that out, this will do. If you’d like to play around with it yourself, here’s an Update Set that I hope includes all of the right pieces. There are still a few little things that I would like to add one day, so this may not quite be the last you will see of this one.

Fun with Highcharts, Part VI

“If you’re going through Hell, keep going.”
Someone other than Winston Churchill

Well, it turns out that it wasn’t as bad as I had originally imagined. I converted my server side GenericChartUtil Script Include into a client side UI Script, then created a Widget Dependency referencing the UI Script, and then finally associated the Widget Dependency with my Generic Chart widget. That pushed the chart object generation from the server side to the client side (where I would no longer lose any functions in the chart object), but to make it all work, I needed to pass the chart data and chart type around rather than the completed chart object.
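
I will not reproduce the entire UI Script here, but structurally it is just a plain client-side object exposing the same getChartObject function that the old Script Include did. A bare-bones skeleton of it might look something like this (the internals shown are illustrative only, not the actual implementation):

// skeleton of the client-side genericChartUtil UI Script (illustrative only)
var genericChartUtil = {

	getChartObject: function(chartData, chartType) {
		var chartObject = {};
		if (chartType == 'workload') {
			chartObject = this.getWorkloadChart(chartData);
		}
		// ... other chart types would be handled here ...
		return chartObject;
	},

	getWorkloadChart: function(chartData) {
		// builds the full Highcharts configuration, including the plotOptions
		// click handler shown later in this post
		return {
			title: {text: chartData.title},
			subtitle: {text: chartData.subtitle},
			xAxis: {categories: chartData.labels},
			series: [
				{name: 'Received', data: chartData.received},
				{name: 'Completed', data: chartData.completed},
				{name: 'Backlog', data: chartData.backlog}
			]
		};
	}
};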

On my Generic Chart widget, I removed the chartObject option and replaced it with two new widget options, chartType and chartData. Then I added a line of code in the client side script to pass the chartData and chartType to the new client side UI Script to generate the chartObject. The client side code for the widget now looks like this:

function($scope, $rootScope, $location) {
	var c = this;
	if (c.data.chartData) {
		c.data.chartData.location = $location;
		$scope.chartOptions = genericChartUtil.getChartObject(c.data.chartData, c.data.chartType);
	}
	if (c.options.listen_for) {
		$rootScope.$on(c.options.listen_for, function (event, config) {
			if (config.chartData) {
				config.chartData.location = $location;
				$scope.chartOptions = genericChartUtil.getChartObject(config.chartData, config.chartType);
			}
		});
	}
}

On my Workload Chart widget, I removed all references to the deleted Script Include on the server side, and then modified the broadcast message on the client side to pass the chartData and chartType rather than the entire generated chartObject. That code now looks like this:

function($scope, $rootScope) {
	var c = this;
	$scope.updateChart = function() {
		c.server.update().then(function(response) {
			c.data.config = response.config;
			c.data.group = response.group;
			c.data.type = response.type;
			c.data.frequency = response.frequency;
			c.data.ending = response.ending;
			c.data.chartData = response.chartData;
			$rootScope.$broadcast('refresh-workload', {chartData: c.data.chartData, chartType: 'workload'});
		});
	}
}

That solved my earlier problem of losing the functions built into the chart objects that were generated on the server side. Now I could get back to what I was trying to do in the first place, which was to set things up so that you could click on any given data point on the chart and pull up a list of the records that were represented by that value. This code was now working as it should:

plotOptions: {
	series: {
		cursor: 'pointer',
		point: {
			events: {
				click: function () {
					alert('Category: ' + this.category + ', value: ' + this.y);
				}
			}
		}
	}
},

For my link URL to show the list of records represented by the chart item clicked, I was going to need the name of the table and the filter used in the GlideAggregate that calculated the value. Neither one of those was currently passed in the chart data, so the first thing that I needed to do was to modify the code that generated the chartData object to include those values. That code now looks like this:

function gatherChartData() {
	var task = new GlideAggregate(data.type);
	var periodData = getPeriodData();
	var chartData = {};
	chartData.table = data.type;
	chartData.filter = {Received: [], Completed: [], Backlog: []};
	var filter = '';
	chartData.title = task.getPlural() + ' assigned to ' + findOption(data.config.groupOptions, data.group).label;
	chartData.subtitle = periodData.frequencyInfo.label + ' through ' + periodData.endingDateInfo.label;
	chartData.labels = periodData.labels;
	chartData.received = [];
	chartData.completed = [];
	chartData.backlog = [];
	for (var i=1; i<periodData.endDate.length; i++) {
		// received
		filter = 'assignment_group=' + data.group + '^opened_at>' + periodData.endDate[i-1] + '^opened_at<=' + periodData.endDate[i];
		task.initialize();
		task.addAggregate('COUNT');
		task.addEncodedQuery(filter);
		task.query();
		task.next();
		chartData.received.push(task.getAggregate('COUNT') * 1);
		chartData.filter.Received.push(filter);
		// completed
		filter = 'assignment_group=' + data.group + '^closed_at>' + periodData.endDate[i-1] + '^closed_at<=' + periodData.endDate[i];
		task.initialize();
		task.addAggregate('COUNT');
		task.addEncodedQuery(filter);
		task.query();
		task.next();
		chartData.completed.push(task.getAggregate('COUNT') * 1);
		chartData.filter.Completed.push(filter);
		// backlog
		filter = 'assignment_group=' + data.group + '^opened_at<=' + periodData.endDate[i] + '^closed_at>' + periodData.endDate[i] + '^ORclosed_atISEMPTY';
		task.initialize();
		task.addAggregate('COUNT');
		task.addEncodedQuery(filter);
		task.query();
		task.next();
		chartData.backlog.push(task.getAggregate('COUNT') * 1);
		chartData.filter.Backlog.push(filter);
	}
	return chartData;
}

Since I needed the filter value for multiple purposes, I first assigned it to a variable, and then used that variable wherever it was needed. Inside of Highcharts, my only reference back to each series is its name, which is why the chartData.filter properties have those odd capitalized keys (the series names appear as labels on the chart, so they are capitalized). These changes gave me the data to work with so that I could modify the onclick function to look like this:

plotOptions: {
	series: {
		cursor: 'pointer',
		point: {
			events: {
				click: function (evt) {
					var s = {id: 'snh_list', table: chartData.table, filter: chartData.filter[this.series.name][this.index]};
					var newURL = chartData.location.search(s);
					spAriaFocusManager.navigateToLink(newURL.url());
				}
			}
		}
	}
},

In order for that to work, I needed to pass the $location object to Highcharts as well, so my client side Generic Chart widget code ended up looking like this:

function($scope, $rootScope, $location) {
	var c = this;
	if (c.data.chartData) {
		c.data.chartData.location = $location;
		$scope.chartOptions = genericChartUtil.getChartObject(c.data.chartData, c.data.chartType);
	}
	if (c.options.listen_for) {
		$rootScope.$on(c.options.listen_for, function (event, config) {
			if (config.chartData) {
				config.chartData.location = $location;
				$scope.chartOptions = genericChartUtil.getChartObject(config.chartData, config.chartType);
			}
		});
	}
}

To get back to the chart itself once you have clicked on a data point to see the underlying records, I added my dynamic breadcrumbs widget to the top of the chart page and to the top of a new list page that I created for this purpose. Now it was time to test things out …

Well, as often seems to be the case with these things, there is good news and bad news. The good news is that clicking on the data points on the chart actually does bring up the list of records, which is very cool. Even though I had to do a considerable amount of restructuring of my initial concept, everything now seems to work. And I like the feature. Clicking on a bar or point on the line now takes you to a list of the records that make up the value for that data point. And when you are done, you can click on the breadcrumb for the chart and get back to the chart itself. All of that works beautifully.

Unfortunately, when you get back to the chart page, it reverts to the original default settings for all of the options. If you used any of the four selections at the top of the page to get to a specific chart configuration, and then clicked on a data point to see the underlying records, when you return to the chart you are no longer looking at the chart that you had selected; you are back to the original, default values for all of the selections. The page basically starts over from the beginning. That, I do not like at all.

The primary reason for that behavior is that your chart option selections are not part of the URL, so they are not preserved when you click on the breadcrumb to return. Making them part of the URL would mean another complete reconfiguration of the way in which that chart works, but something is going to have to be done. I don’t like it the way that it works now. Originally, I was going to release a new Update Set with all of these changes, but I’m not happy with the way things are working right now. I’m going to have to do a little bit more work before I’m ready to release another version. Hopefully, I can do that next time out.
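
Just to illustrate the idea (this is not part of the widget yet, and the parameter names below are entirely hypothetical), the fix would amount to writing the current selections to the URL with $location.search() whenever the chart is refreshed, and reading them back when the widget starts up:

function($scope, $rootScope, $location) {
	var c = this;

	// restore any selections carried in the URL (hypothetical parameter names)
	var params = $location.search();
	if (params.chart_group) {
		c.data.group = params.chart_group;
		c.data.type = params.chart_type;
		c.data.frequency = params.chart_frequency;
		c.data.ending = params.chart_ending;
	}

	$scope.updateChart = function() {
		c.server.update().then(function(response) {
			// (other response fields omitted from this sketch)
			c.data.chartData = response.chartData;
			// record the current selections in the URL so that returning via
			// the breadcrumb restores the same chart configuration
			$location.search('chart_group', c.data.group);
			$location.search('chart_type', c.data.type);
			$location.search('chart_frequency', c.data.frequency);
			$location.search('chart_ending', c.data.ending);
			$rootScope.$broadcast('refresh-workload', {chartData: c.data.chartData, chartType: 'workload'});
		});
	};
}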

Incident Email Hack Revisited

“The greatest performance improvement of all is when a system goes from not working to working.”
John Ousterhout

The other day I was showing off my Incident email hack, and to my surprise, the thing did not work. I was reminded of something my old boss used to tell me whenever we had blown a demonstration to a potential customer. “There are only two kinds of demos,” he would say, “Those that don’t count and those that don’t work.” But my email hack had been working flawlessly for quite some time, so I couldn’t imagine why it wasn’t working that day. Then I realized that I couldn’t remember trying it since I upgraded my instance to Madrid. Something was different now, and I needed to figure out what that was.

It didn’t take much of an investigation to locate the failing line of code. As it turns out, it wasn’t in anything that I had written, but in an existing function that I had leveraged to populate the selected email addresses. That’s not to suggest that the source of the problem was not my fault; it just meant that I had to do a little more digging to get down to the heart of the issue. The function, addEmailAddressToList, required an INPUT element as one of the arguments, but for my usage, there was no such INPUT element. But when I looked at the code inside the function, the only reference to the INPUT element was to access the value property. So, I just created a simple object and set the value to the email address that I wanted to add, and then passed that in to the function. That worked just fine at the time, but that was the old version of this function.
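
In other words, the original workaround was nothing more than a simple object with a value property (the recipient[i].email reference comes from the loop in the existing script):

// bare-bones stand-in for the real INPUT element; the pre-Madrid version of
// addEmailAddressToList only ever read the value property
var input = {value: recipient[i].email};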

In the updated version that comes with Madrid, there is new code to access the ac property of the INPUT element and run a function of the ac object called getAddressFilterIds. My little fake INPUT element had no ac property, and thus, no getAddressFilterIds function, so that’s where things broke down. No problem, though. If I can make a fake INPUT element, I can add a fake ac object to it, and give that fake ac object a fake getAddressFilterIds function. I would need to know what the function does, or more importantly, what it is supposed to return, but that was easy enough to figure out as well. In the end, all I really needed to do to get past that error was add these lines to the incident_email_client_hack UI Script:

input.ac = {};
input.ac.getAddressFilterIds = function() {return '';};

Unfortunately, things still didn’t work after that. Once I got past that first error, I ran right into another similar error, as it was trying to run yet another missing function of the ac object called resetInputField. So, I added yet another line of code:

input.ac.resetInputField = function() {return '';};

Voila! Now we were back in action. I did a little more testing, just to be sure, but as far as I can tell, that solved the issue for this version. And since all I did was add bloat to the fake INPUT element that would never be referenced in the old version, it should be backwards compatible as well, and work just fine in either version. Still, now that all the parts were laid out, I decided that I could clean the whole mess up a little bit by defining the fake INPUT element in a single statement:

var input = {
	value: recipient[i].email,
	ac: {
		getAddressFilterIds: function() {
			return '';
		},
		resetInputField: function() {
			return '';
		}
	}
};

There, that’s better! Now, instead of adding three new lines of code, I actually ended up removing a line. For those of you playing along at home, I gathered up all of the original parts and pieces along with the updated version of this script and uploaded a new version of the Update Set for this hack.

Testing ServiceNow Event Utilities

“Testing leads to failure, and failure leads to understanding.”
Burt Rutan

Now that we have put together a basic ServiceNow Event utility and added a few enhancements, it’s time to try it out and see what happens. There are actually two reasons that we would want to do this: 1) to verify that the code performs as intended, and 2) to see what happens to these reported Events once they are generated. We will want to test both the server side process and the client side process, so we will want a simple tool that will allow us to invoke both. One way to do that would be with a basic UI Page that contains a few input fields for Event data and a couple of buttons, one to report the Event via the server side function and another to report the Event using the client side function.

For the sake of simplicity, let’s just collect the description value from the user input and hard code all of the rest of the values. We could provide more options for input fields, but we’re just testing here, so this will be good enough to prove that everything works. We can always add more later. But for now, maybe just something like this:

Simple Event utility tester

The first thing that we will need is some HTML to lay out the page:

<?xml version="1.0" encoding="utf-8" ?>
<j:jelly trim="false" xmlns:j="jelly:core" xmlns:g="glide" xmlns:j2="null" xmlns:g2="null">
<script src="client_event_util.jsdbx"></script>
<div>
 <g:ui_form>
  <h4>Event Tester</h4>
  <label for="description">Enter some text for the Event details:</label>
  <textarea id="description" name="description" class="form-control"></textarea>
  <div style="text-align: center; padding: 10px;">
    <input class="btn" name="submit" type="submit" value="Client Side Test" onclick="clientSideTest();"/>
	 
    <input class="btn" name="submit" type="submit" value="Server Side Test"/>
  </div>
 </g:ui_form>
</div>
</j:jelly>

There’s really nothing too special here; just a single textarea and a couple of submit buttons, one for the client side and one for the server side. On the client side button we add an onclick attribute so that we can run the client side script. On the server side button, we just let the form submit to the server, and then run the server side script when we get to the other side. The client side script is similarly very simple stuff:

function clientSideTest() {
	ClientEventUtil.logEvent('event_tester', 'None', 'Client Event Test', 3, document.getElementById('description').value);
	alert('Event generated via Client Side function');
}

… as is the server side script:

if (submit == "Server Side Test") {
	new ServerEventUtil().logEvent('event_tester', 'None', 'Server Event Test', 3, description);
	gs.addInfoMessage('Event generated via Server Side function');
}

Now all we have to do is hit that Try It button on the UI page, enter some description text, and then click one of the submit buttons to see what happens. On the client side:

Client side Event test

… and on the server side:

Server side Event test

Now that we have generated the Events, we can verify that they were created by going into the Event Management section of the menu and selecting the All Events option. By inspecting the individual Events, you can also see that each Event triggered an Alert, and by setting up Alert Management Rules, those Alerts could drive subsequent actions such as creating an Incident or initiating some automated recovery activity. But now we are getting into the whole Event Management subsystem, which is way outside of the scope of this discussion. My only intent here was to demonstrate that your ServiceNow components can leverage the Event Management infrastructure built into the ServiceNow platform, and can do so quite easily once you have created a few simple utility modules to handle all of the heavy lifting. Hopefully, that objective has been achieved.
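
If you would rather confirm the results from a quick background script instead of the All Events list, a simple query against the Event table (assuming the standard em_event table and its source field) will show the test Events:

// quick verification from a background script: count the Events generated
// by the tester (assumes the standard em_event table and source field)
var eventGR = new GlideRecord('em_event');
eventGR.addQuery('source', 'event_tester');
eventGR.query();
gs.info(eventGR.getRowCount() + ' Event(s) found with a source of event_tester');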

Just in case anyone might be interested in playing around with this code, I bundled the two scripts and the test page together into an Update Set.