Periodic Review, Part IX

“Hide not your Talents, they for Use were made. What’s a Sun-Dial in the shade!”
Benjamin Franklin

Last time, we wrapped up most of the work on the script that will handle the review process right up to the point where we need to send out the notice to the recipient. Today we will look at one way to send out an email notification and then build the notice that we will want to send out.

One of the easiest ways to trigger an outbound email is through the use of a System Event, not to be confused with an Event Management Event, which is an entirely different animal. And neither one of those is related in any way to a ServiceNow Event, but now we are really getting off track. To create a new Event, we will navigate to the Event Registry and then click on the New button.

New System Event

Once we have created our new event, we can create an Email Notification and have the notification triggered by this event. To create our new Email Notification, we will navigate to All > System Notification > Email > Notifications and click on the New button. At this point, let’s not worry too much about the content of the message and let’s just do enough so that we can test things out and make sure that it all works. Once we establish that the email is actually sent out, we can go back in and create the message body that will work for our requirements.

New Email Notification

Under the When to send tab, we select Event is fired from the Send when options and then we select our new event from the Event name options. Then on the Who will receive tab, we check the box labeled Event parm 1 contains recipient, which will allow us to send in the recipient as one of the event parameters.

Identifying the intended recipient

In the What it will contain tab, we will just put the word Testing in the subject and body for now and then save the record so that we can run a test. Now we need to modify our Script Include to initiate the event, passing in the appropriate parameters, namely the notification record and the intended recipient. We will replace this line that we added for earlier testing:

gs.info('This is where we would send a notice to ' + noticeGR.getDisplayValue('recipient'));

… with this new code to add a new instance of the event to the queue:

// now you need to send out the notice, passing in the notice record for variables
gs.eventQueue('x_11556_periodic_r.ReviewNotice', noticeGR, noticeGR.recipient, noticeGR.getUniqueValue());
noticeCt++;
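
For reference, and just so there is no confusion about how those arguments line up with the notification configuration, here is the general shape of that call; the annotations below are mine and are not part of the actual script:

// gs.eventQueue(eventName, record, parm1, parm2);
//   eventName - the Event name that we registered in the Event Registry
//   record    - the GlideRecord associated with the event (our notice record)
//   parm1     - the recipient, since we checked the Event parm 1 contains recipient box
//   parm2     - anything else worth passing along; here, the sys_id of the notice record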

After we save that change to the Script Include, we can pop back over to Scripts – Background and see if all of this results in some email being sent out.

New test results

Well, that looks pretty good, but let’s take a look at the email logs and see if we actually sent out some notices.

Notification emails generated

OK, that works! Now that we know that our process will send out the notices to the designated recipients, the next thing that we will need to do is to come up with the content of the notice. That sounds like a good project for our next installment.

Collaboration Store, Part XVI

“It’s hard enough to find an error in your code when you’re looking for it; it’s even harder when you’ve assumed your code is error-free.”
Steve McConnell

Now that we have completed all of the parts for the initial set-up process of our new Scoped Application, it’s time to take a step back and see where things stand. On the one hand, after 16 captivating installments, you would think that we would be much further along in this process than just the initial set-up. On the other hand, this is a fairly complex endeavor, and it’s good to get this necessary administrative function out of the way so that we can focus on the actual purpose of the application. But before we jump right into that, we should first have a quick look at what we have and what we don’t have at this point.

What we have is an initial version of the set-up process for both the Host instance and the Client instances. Now all of this needs to be fully tested in multiple scenarios, but even if we manage to kill all of the bugs that are undoubtedly baked in there at this stage of the game, as it is written, it assumes for the most part that all will go well every time. What I mean by that is that there isn’t a whole lot of error recovery built into the process right now. Everything seems to work if all of the instances are up and running when contacted. That’s not really good enough for prime time, though, as it is always possible that one or more instances might be unavailable or off-line for some reason. At some point, we will have to build in some processes to monitor for that and to deal with it in some way. Right now, if you fail to get some kind of update from the Host, you just don’t get it. That’s not really good enough in the long run, but my approach is always to get things working first, and then add such features later in a future version. Maybe we will even handle that using Event Management, although not everyone has that feature activated, so maybe that’s not a good plan after all.

There are other features that I would like to add as well. For example, it would be nice if each participating instance had some form of logo or image that would visually identify them and all of the items that they have shared with the community. Things like that are nice-to-haves, though, so again, we’ll deal with that later. At this point, I just want to make sure that what we have put together so far actually works the way that it was intended before we go any further.

I also want all of the menu options hidden until set-up is complete, and then once set-up has been completed, I would like the set-up option to be hidden. I haven’t thrown that in there just yet, either, but that’s something that I don’t want to forget to do once I am sure that everything is working as it should be.

Not too long ago, I had an offer to assist with the testing of this particular project. Normally, I like to do all of my own testing, but they say that programmers are the worst testers of their own code, so I’m going to break with tradition and go ahead and put out an Update Set for this app that is clearly not finished and basically not good for anything of value at this point. If anyone wants to participate in this effort, all that I ask is that you post any defects that you uncover to the comments section so that I can see if I can’t get them resolved and put out a new version with the corrections.

So, here’s the deal: gather up your friends and neighbors and come up with some strategy to see who draws the short straw and serves as the Host instance, set up the Host first, and then everyone else can jump in and set up their Client instances by referencing the Host. This can work with just two instances, but to see the existing instance updates for any new instance, you will need at least three (one for the Host, one for the new Client, and at least one for an existing Client). Four or more would be even better, but three will at least test all of the current features. When all is said and done, everyone’s list of member instances should match, unless something went terribly wrong along the way. And if you really want to put yourself out there, you can set up a Host instance and put your instance ID in the comments so that other people that you don’t even know can attempt to connect to your instance. Your call.

To install this version of the Collaboration Store (we’ll call it version 0.1), you will need this Update Set, which contains the Scoped Application, and you will also need the latest version of snh-form-fields, which you can find here. Install the form fields Update Set first, and then install the Scoped Application. At that point, you should be good to go and should be able to click on the set-up menu option at any time. I’ll let this sit out here for a while and see if anything comes of it. Thanks in advance for helping a guy out. It’s very much appreciated.

Fun with Webhooks, Part X

“Control is for beginners.”
Ane Størmer

I’ve been playing around with our little Incident Webhook subsystem to make sure that everything works, and to make sure that I had finally developed all of the pieces that I had intended to build. For the most part, I’m quite happy with what we have put together during this exercise, but like most end users who finally get their hands on something that they have ordered, now that I have a working model in my hands and have tried to use it for various things, I can envision a number of different enhancements that would make things even better. Still, what we have is pretty nice all on its own, although I did break down and make just a few minor adjustments.

One thing that I had thought about doing earlier, but never did, was to skip the confirmation pop-up on the custom Webhook Registry page’s Cancel button when no changes had been made to the form. Clicking through that unnecessary confirmation a few times was enough to motivate me to finally put that check in there, and I like this version much better. While I was in there, I also built a goBack() function to house the code for returning to the previous page, and then called that function wherever it was appropriate. This didn’t really save much in the way of code, since the current goBack() logic is only one line itself, but it consolidates the logic in a single place in case I ever want to wire in support for something like my Dynamic Breadcrumbs. The entire client-side code for the Webhook Registry widget now looks like this:

function WebhookRegistry($scope, $location, spModal) {
	var c = this;

	$scope.cancel = function() {
		if ($scope.form1.$dirty) {
			spModal.confirm('Abandon your changes and return to your Webhooks?').then(function(confirmed) {
				if (confirmed) {
					goBack();
				}
			});
		} else {
			goBack();
		}
	};

	$scope.save = function() {
		if ($scope.form1.$valid) {
			c.server.update().then(function(response) {
				goBack();
			});
		} else {
			$scope.form1.$setSubmitted(true);
		}
	};

	function goBack() {
		$location.search('id=my_webhooks');
	}
}

One other thing that I noticed when attempting to integrate with various other targets is that many sites are looking for a property named text as opposed to message. I ended up renaming my message field to text to be more compatible with this convention, but it would really be nice to be able to pick and choose what properties you would like to have in your payload, as well as being able to specify what you wanted them to be named. That’s on my wish list for a future version for sure.
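
Just to illustrate the point, the difference amounts to nothing more than the name of that one property in the posted payload. This is only a rough sketch, not the actual widget or Business Rule code:

// rough sketch only: the payload property is now named text instead of message
var payload = {
	text: 'Incident ' + current.getDisplayValue('number') + ' has been updated: ' + current.getDisplayValue('short_description')
};
var requestBody = JSON.stringify(payload);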

Something that I meant to include in this version, but forgot to do, was to emulate the Test URL UI Action on the Webhook Registry widget so that Service Portal users could have that same capability on that portal page. That was definitely on my plan to include, but I just spaced it out when I was putting that all together. I definitely want to be sure to include that at some point in the near future. I would do it now, but I already built the Update Set and I’m just too lazy to go back and fix it now.

One other thing that is on my wish list for some future version is the ability to set this up for more than just the Incident table. I thought about just switching over to the Task table, which includes Incident as well as quite a few other things derived from Task, but the base Task table does not include the Incident’s Caller or the Request’s Requested for, so there would have to be some special considerations included to cover that. The Task table has Opened by, but that’s not really the same thing when you are dealing with folks calling in and dealing with an Agent entering their information. I thought about adding some additional complexity to cover that, but in the end I just put all of that on my One Day … list and left well enough alone.

Based on what I first set out to do, I think it all came out OK, though. Yes, there are quite a few more things that we could add to make it applicable to a broader domain, and there are a number of things that we could do to make it more flexible, user-friendly, and user-customizable, but it’s a decent start. Certainly good enough to warrant the release of an initial version, which you can download here. Since this is a scoped app, I did not bundle any of the dependencies in the Update Set, so if you want to try this out in your own instance as is, you will need to also grab the latest version of SNH Form Fields and SNH ServiceNow Events, which you can find here. All in all, I am happy with the way that it came out, but I am also looking forward to making it even better one day, after I have spent some time attempting to use it as it is today.

Update: There is a better (improved) version here.

Event Management for ServiceNow, Revisited

“True prevention is not waiting for bad things to happen, it’s preventing things from happening in the first place.”
Don McPherson

Some time ago we built some utility functions to support reporting Events within the ServiceNow Platform. That was before the Flow Designer, though, so that effort did not include any support for that environment. We already have the script to do all of the heavy lifting from our earlier work, so it wouldn’t take much to create a Flow Designer Action that called that script to report an Event that occurred during Flow processing. We can call our new Action Log Event, and set up Action Inputs for all of the usual suspects.

Log Event Action Inputs

For our script step, we will basically set up the same inputs and then source them directly from the primary Action Inputs.

Script step inputs mapped to Action Inputs

Those of you who are paying attention will notice that we defined the additional_info field as a String even though it needs to be an Object when we make the call to our existing script. The assumption here is that the caller will provide a valid JSON String, and then we can turn it into an Object in our script before we make the call. Here is the script to convert the String and then make the call.

(function execute(inputs, outputs) {
	if (inputs.additional_info) {
		try {
			inputs.additional_info = JSON.parse(inputs.additional_info);
		} catch(e) {
			// if parsing fails, just pass the original string along unchanged
		}
	}
	var seu = new ServerEventUtil();
	seu.logEvent(inputs.source, inputs.resource, inputs.metric_name, inputs.severity, inputs.description, inputs.additional_info);
})(inputs, outputs);

There are no outputs from this process, so this is the entire Action. Once we Save and Publish it, it will be available from the Action selection list, and then we can add Log Event steps anywhere in our Flows and Subflows where we want to report an Event. That was fairly quick, easy, and relatively painless. For those of you who would like to try it out on your own, here is an Update Set.

Fun with Webhooks

“Good ideas are common – what’s uncommon are people who’ll work hard enough to bring them about.”
Ashleigh Brilliant

There is quite a bit of Webhook stuff in various IntegrationHub spokes, but it all seems to be oriented towards consuming incoming events from different external event publishers. I want to actually be the publisher, and send out information based on some preferences selected by the consumer. That may be hidden somewhere in the Now Platform already, but I can’t seem to find it, so I have decided that I would try to develop a Scoped Application to do just that. This may very well be recreating something that already exists in the platform today, but it sounds like a fun exercise, so I am going to give it the old college try.

As always, I will attempt to start out with the most basic of offerings, and then incrementally expand to add more and better features. My approach is to treat this feature as somewhat analogous to a Watch List, in that you sign up to follow certain events, but instead of sending a notification to a User when the event occurs, the result will be that the information is posted to a specified URL. This can apply to any number of things, but to start off, I am going to focus on some very specific changes to one particular table (Incident), and then later expand from there.

To make this work, there will need to be some kind of Webhook Registry where a consumer would sign up to receive these posts. When registering your webhook, you would enter the URL to which you want the data posted along with the specifics of what type or types of events you would like to have included. I’m thinking about linking them directly to an owner, and having some kind of My Webhooks Portal Page where you could manage your existing registrations and add new ones. When adding a new one, you should be able to enter and test your URL, and for our first iteration, that may be the only choice that you get. Later on, we will want to add the ability to choose what you want to follow, which specific updates should trigger a new post, and even what you would like to have included in the payload. But we will also want to start out as simple as possible, so the initial registry may turn out to be quite barren as far as input fields go.

Once registered, there will need to be some process to actually send out the posts as requested in the registration. This could be a Business Rule on the source table, or maybe something created in the Flow Designer. Either way, the process should scan the registry for any condition matches and then send out a post for each match. Each post and response should be logged in some kind of Webhook Activity Log, and any bad HTTP Response Codes should be reported to Event Management. A robust service would attempt to repost any failures up to a certain limit before giving up completely, but all of that can be delegated to some Alert Management Rule at some later time. Again, we will want to start out simple, so our initial focus will just be on making that initial post attempt. Everything else can be pushed off until later on in the process.
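
Just to make that second function a little more concrete, here is a very rough, hypothetical sketch of what that posting routine might look like if we went the Business Rule route; none of the table or field names here are final, and the real version will get worked out once we start building:

// hypothetical outline only -- the table and field names are placeholders
var registryGR = new GlideRecord('x_webhook_registry');
// ... limit the query to registrations whose conditions match this Incident update ...
registryGR.query();
while (registryGR.next()) {
	var request = new sn_ws.RESTMessageV2();
	request.setEndpoint(registryGR.getValue('url'));
	request.setHttpMethod('post');
	request.setRequestHeader('Content-Type', 'application/json');
	request.setRequestBody(JSON.stringify({text: 'Incident ' + current.getDisplayValue('number') + ' was updated'}));
	var response = request.execute();
	// ... record the post and the response in the activity log table ...
	if (response.getStatusCode() >= 400) {
		// ... report the failure to Event Management ...
	}
}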

Those would seem to be the two major functions: registering the webhook and sending out the posts. We may want some other things at some point, such as the ability to review the logs or to manually repost or to clone an existing registration, but for now, just those two things should get the ball rolling. We may also want to set up a sample receiver for testing purposes, but in practice, the receivers would be other products and outside the scope of this development exercise. There is actually an existing service out on the Internet called Webhook.site that might turn out to be just what I need in order to do a little testing. We should check that out when we get to that point.

For our parts list, then, I can see the need for the following artifacts:

  • A table to hold the webhook registrations,
  • A my_webhooks portal widget to list all webhooks owned by the user,
  • A webhook portal widget for editing a single webhook registration,
  • A Business Rule or Flow to send out the posts,
  • A log table to record the posts and responses, and possibly
  • A Script Include to contain some common functions.

Of course, before we create any of that, we will have to create the Scoped Application itself, so that should be where we start next time when we initiate the actual construction phase of this effort.

Fun with Outbound REST Events, Part X

“No one has a problem with the first mile of a journey. Even an infant could do fine for a while. But it isn’t the start that matters. It’s the finish line.”
Julien Smith

After our last installment in this series, our Events now spawn Incidents that are pretty much just what we would like to see. The only remaining challenge at this point is to create a meaningful Description field value. Although we have set things up to produce this Description in a Script Include function, I should point out right here at the outset that everything that we are about to do in our script could also be accomplished in the Flow Designer itself. In fact, it probably should be done using the Flow Designer if we are to fully embrace the whole no-code future towards which we all seem to be herded. I’m still an old coder at heart, though, so it seems easier to me to scratch out another quick function than it does to build out all of those action steps using input forms. Still, it would probably be a worthwhile exercise to replace this script with a subflow one day; today is just not that day. Today we code!

Although we are passing the Alert to our function as an argument, much of the data we need is actually in the Event that spawned the Alert, so the first thing that we are going to want to do is go out and get that guy. That’s pretty basic GlideRecord stuff.

// get initial Event
var eventGR = new GlideRecord('em_event');
eventGR.addQuery('alert', alertGR.getUniqueValue());
eventGR.orderBy('sys_created_on');
eventGR.query();
eventGR.next();

Since it is possible that there could be more than one Event associated with our Alert, we include an orderBy directive to ensure that we get the very first Event out of the bunch. Once we have our Event in hand, we will have access to the additional_info JSON string, which we will want to convert to a Javascript object so that we can reference all of the various component parts.

// get addition info from Event
var additionalInfo = {};
try {
	additionalInfo = JSON.parse(eventGR.getValue('additional_info'));
} catch (e) {
	gs.info('Unable to parse additional_info from Event ' + eventGR.number);
}

We also have access to the Event resource, which in our case is a User’s user_name. We can use that to get the sys_user record for that user, much in the same way that we retrieved the Event record.

// get affected User record
var userGR = new GlideRecord('sys_user');
userGR.get('user_name', alertGR.getValue('resource'));

This assumes, of course, that the only place where we are using our address validation capability is on the User Profile page. If we ever expand its use to other places — say on the Building or Location form — then we would need to have some way to know whether the resource was a User or a Location or a Building or some other entity with an address to validate. Based on that information, we might be retrieving a Building record or a Location record instead of a User record. For now, though, we can safely assume that the resource is a User.
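
If we ever do go down that road, the lookup might branch on some sort of resource type indicator along these lines; to be clear, the resource_type property in this sketch is purely hypothetical and does not exist in anything that we have built so far:

// hypothetical sketch only -- resource_type is not something that we currently pass
var targetGR;
if (additionalInfo.resource_type == 'building') {
	targetGR = new GlideRecord('cmn_building');
	targetGR.get('name', alertGR.getValue('resource'));
} else if (additionalInfo.resource_type == 'location') {
	targetGR = new GlideRecord('cmn_location');
	targetGR.get('name', alertGR.getValue('resource'));
} else {
	targetGR = new GlideRecord('sys_user');
	targetGR.get('user_name', alertGR.getValue('resource'));
}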

Now that we have gathered up all of the data that we need, we can start building out our Description. To begin, let’s start out with something that will be universal to all of our Incidents, regardless of the problem being reported.

// format description
var alertDesc = alertGR.getDisplayValue('description');
var section = '\n========================================\n';
var subsection = '\n----------------------------------------\n';
var desc = additionalInfo.user.name;
desc += ' attempted to update the address on the user profile for user ';
desc += userGR.getDisplayValue('name');
desc += ', but was unable to verify the address using the US Address Validation service due to the following error:\n\n';
desc += alertDesc;
desc += '\n\nIncident Details:';
desc += section;

Beyond this point, we are going to want to be a little more specific based on what actually happened to trigger the Event. We can do that by introducing some conditional code based on the known values found in the Alert’s description field.

if (alertDesc.startsWith('The response code')) {
	// bad response code language will go here
} else if (alertDesc.startsWith('The response object')) {
	// bad response object language will go here
} else if (alertDesc.startsWith('The response content')) {
	// bad response content language will go here
} else {
	// we should never get here, but just in case ...
}

Since most of the Events that we have triggered up to this point have been of the bad response code variety, let’s do those first.

if (!additionalInfo.response.code) {
	desc += 'No response was received from the service, which could be an indication that the service is unavailable or unreachable. Check the status of the external service as well as the status of your connection to the Internet.';
} else if (additionalInfo.response.code == 401) {
	desc += 'A Response Code of 401 indicates an authentication error of some kind. Verify that your account credentials are correct and that your account is in good standing with the service provider.';
} else {
	desc += 'The service returned a Response Code of ';
	desc +=  additionalInfo.response.code;
	desc += '. Additional information on the meaning of this response may be found in the Response Body. Also, you can check with the service provider for further clarification on the appropriate handling of this response.';
	desc += '\n\nDetailed information on the ';
	desc += additionalInfo.response.code;
	desc += ' Response Code can be found on the web at https://httpstatuses.com/';
	desc += additionalInfo.response.code;
}

This gives us specialized language for no response code at all, and a response code of 401. Everything else is handled in a more generalized section that covers any other bad response code. As more knowledge of the potential response codes becomes available through experience with the service, more specialized language can be added that can be more specific to other known response codes.

Now let’s take a look at what we can do for bad response objects.

desc += 'The service returned a valid Response Code and a parsable response, but the response did not contain certain expected elements necessary to determine the validity of the address. Review the response received and check with the service provider to see if anything has changed with the API specifications.';

That one is about as simple as you can get; everyone gets the same language. For the bad response content issues, things are a little bit more sophisticated. Everyone still gets the same language, but there is a possibility for an exception with this group, so we include code to handle that as well.

desc += 'The service returned a valid Response Code, but the response was either empty or ill-formatted. Review the response received and check with the service provider to see if the service is experiencing problems, or if anything has changed with the API specifications.';
if (additionalInfo.exception) {
	desc += '\n\nException Details:';
	desc += subsection;
	desc += '   Exception: ';
	desc += additionalInfo.exception;
	desc += '\n   Stack Trace:\n';
	desc += additionalInfo.stackTrace;
}

Once we complete all of the conditional logic, we wrap things up with some more universal code that applies to everyone. This just serves to include the user’s input and the service’s response at the end of the body of the Description field for reference.

desc += '\n\nAddress Details:';
desc += subsection;
desc += '   Street: ';
desc += additionalInfo.input.street;
desc += '\n   City: ';
desc += additionalInfo.input.city;
desc += '\n   State: ';
desc += additionalInfo.input.state;
desc += '\n   Zip Code: ';
desc += additionalInfo.input.zip;
desc += '\n\nResponse Details:';
desc += subsection;
desc += '   Response Code: ';
desc += additionalInfo.response.code;
desc += '\n   Response Body: ';
desc += additionalInfo.response.content;
if (additionalInfo.response.object) {
	desc += '\n   Response Object: ';
	desc += JSON.stringify(additionalInfo.response.object, null, '\t');
}

There is still more helpful information that we could add, such as links to the service provider’s documentation, or in the case of the 401 error, the names of the system properties that contain the credentials, but this is good enough for a sample. Let’s just save what we have and then trigger another Event and see what comes out the other side.

Incident with updated description from the new Script Include function

Well, that’s much, much better than the description that the original Create Incident flow was producing. It’s not perfect, but I think it does provide the person receiving the Incident enough details about both what happened and what might be done about it that they can get to work on the ticket right away without a whole lot of research. Obviously, it can be fine-tuned over time, but this is a good foundation upon which to build for this particular use case.

That pretty much wraps up all that I had hoped to accomplish with this series. It took us 10 installments to get here, but much of that was due to the fact that we had to build out our own address validation infrastructure before we could use it to demonstrate applying Event Management tools and techniques to internal ServiceNow features and functions. For those of you who like to play along at home, I have bundled what I hope are all of the relevant parts and pieces into an Update Set that you are welcome to pull down and import into your own environment.

Fun with Outbound REST Events, Part IX

“What we hope ever to do with ease, we must first learn to do with diligence.”
Samuel Johnson

Last time, we were able to have our Alert produce an Incident, but it wasn’t exactly the Incident that we wanted. Today, we are going to fix that. Since we don’t want to alter the out-of-the-box Create Incident subflow that we are currently using to create our Incidents, we will want to make a copy of the subflow so that we can customize it for our own purposes. To do that, pull up the Create Incident subflow in the Flow Designer, click on the vertical ellipses in the upper right-hand corner, and select Copy subflow to create a new copy of the subflow.

Copy the Create Incident subflow

A pop-up dialog box will appear where you can enter the name of your new subflow, which we will call Create Address Issue Incident.

Enter the name of the new subflow

After entering the name, click on the Copy button to create your new subflow from the original. This should open up your new subflow for editing.

Your new Create Address Issue Incident subflow

Now that we have our own copy, we can make whatever modifications that we would like to make without disturbing the original. All of the changes that we will want to make are in the Create Task step, so let’s open that up and see what we can do to produce Incidents that include the detail that we would like to provide to the technician working the ticket. Let’s get rid of the Description value entirely, as that’s not the description that we want. That very same text is repeated in the Additional Comments field, anyway. The rest of the values that are there seem to be OK for now, but let’s add a few more using the +Add Field Value button at the bottom of the field list.

Let’s set the State to Assigned, the Assignment Group to ITSM Engineering, the Category to Software, and the Subcategory to Internal Application. Some of that may not be exactly right, but this is just an example of the kinds of things that you can do. For the new Description value, which is going to be conditional depending on the nature of the issue that triggered this Incident, let’s use an inline script. That can be done by clicking on the little f(x) button to the right of the field value.

Create Task step expanded

At this point, we don’t necessarily need to build the entire script, but we will want to stub it out enough to keep things functional for testing. Since the script might get a little complex, I like to push all of the logic out to a Script Include and then limit the code in the subflow to just a call to a function in the Script Include. That keeps the clutter out of the subflow itself, and also allows us to refine the output of the process by just editing the Script Include without having to publish a new version of the subflow. We already have a Script Include devoted to the address verification process, so let’s just add a simple function to that existing artifact so that we have something that we can call in the subflow.

formatIncidentDescription: function(alertGR) {
	return 'Test description for ' + alertGR.number;
},

There isn’t much to this at this point, but there is enough here to verify that we are receiving the Alert, which we will want as a reference when we start building out the actual description that we want. Getting back to our subflow, the script to invoke this new function will look like this:

var avu = new AddressValidationUtils();
return avu.formatIncidentDescription(fd_data.subflow_inputs.ah_alertgr);

Figuring out that fd_data.subflow_inputs.ah_alertgr was the correct syntax for referencing the Alert was not all that intuitive. I have worked with the Flow Designer long enough now to know about the fd_data object, but I couldn’t find much in the way of documentation on identifying the names of the various properties of that object. Fortunately, I did come across some documentation on the type-ahead feature, which finally led me to the information for which I had been searching. Typing a single dot after the fd_data brings up a nice pick list of choices, and another one after selecting the choice does the same for that object as well.

fd_data properties pick list

With our script in place for the Description value, all that is left is to Save and Publish the new subflow and we are done with the Flow Designer. To use our new subflow, we will need to go back into our Alert Management Rule, open up the Action section, and replace the Create Incident subflow with our modified copy.

Modifying our rule to use the newly created subflow

At this point, we should be good to test again and see what kind of Incident gets generated now. Just remember to select a different person so that our Event is not assigned to an existing Alert that has already been processed. We want to be sure to create a brand new Alert to activate our modified rule. Once we force a new Event, we can pull it up and take a look at it to see what values have been set in the Event record.

Newly generated Event record

From the Event, we can navigate to the Alert, and from the Alert we can then navigate to the Incident. Let’s check out the Incident.

Incident generated from our new subflow

This is an improvement over our initial effort, as the ticket has now been properly categorized and routed to an Assignment Group for resolution. We still need a much better Description, but the presence of the Alert ID in the current Description value verifies that we are indeed passing the Alert record to our stubbed-out function, which we can now use to produce a more detailed and informative description value. Scripting that out for all of the various possibilities will be a bit of an effort, though, so let’s just make that the focus of our next installment.

Fun with Outbound REST Events, Part VIII

“Computer science education cannot make anybody an expert programmer any more than studying brushes and pigment can make somebody an expert painter.”
Eric S. Raymond

Now that we have completed our address verification feature, added Event logging, and tested the creation of those Events, it’s time to actually do something with the Events when they come out. Well, actually, we won’t be doing anything with the Events themselves; we will be doing something with the Alerts that come out as a result of the Events. To process those Alerts, we will need to create a new Alert Management Rule.

To create a new rule, pull up the list of Alert Management Rules and click on the New button at the top of the list. The form is divided into three sections and the first section is the Alert info section. In that section, you will want to enter the name of the Alert, a description of the Alert, and you will want to set the Multiple alert rules field to Stop search for additional rules. This will prevent additional rules from evaluating or taking action on your Alert, as your rule should handle everything that needs to be done and no further rules should be applied.

Alert Info section of Alert Management Rule form

After completing the Alert info section, use the progress bar at the top of the form to move on to the next section of the form, the Alert Filter section. This is where you specify which Alerts you would like to process with this rule. In our case, we want to handle anything that comes out of our Script Include, which is identified in the Alert in the Source field. That makes our filter quite simple, as we only want to process Alerts where the Source is AddressValidationUtils:

Alert Filter section of Alert Management Rule form

After completing the Alert Filter section, use the progress bar at the top of the form once again to move on to the next section of the form, the Action section. This is where you specify what action should be taken whenever a new Alert is created that meets your filter criteria. There are quite a lot of things that you can do here, but we want to create an Incident. Fortunately for us, there is a built-in, out-of-the-box Subflow already developed that does exactly that. To select this Subflow, called Create Incident, double-click on the Insert new row … line to open up a new row and then double-click on the Subflow column of the newly inserted row to select the Create Incident subflow from the selection list.

Action section of Alert Management Rule form

After completing all three sections of the form, click on the Submit button to save your new rule. Once the rule has been created and saved, it is now active in the system, and the next time any Events are logged by our Script Include, the rule will be triggered. Let’s go ahead and do that now, just to see what happens.

We want to trigger an Event, so we can mangle our credentials again and then update someone’s address, which should do the trick. We will want to select a different person for this test, just to make sure that we trigger a brand new Alert, and not just have our Event associated with an existing Alert from any of our previous testing. Once we submit the address change, we can find our Event, and from there, navigate to the Alert.

Alert resulting from address service Event

Unlike all of our previous Alerts, this new one now has an Incident number in the Task field. This is the Incident that was generated from the execution of our new Alert Management Rule. Click on the info icon to the right of the Task field and then click on the Open Record button in the resulting pop-up window to bring up the Incident.

Incident generated from the Alert Management Rule

This may not be exactly the Incident that we would like to see, but we are taking things one step at a time, and we just produced an Incident from our Alert, which is a huge step in and of itself. Now let’s take a look at this Incident and see where we might make things a little better.

One of the first things that you might notice is that there is no Assignment Group, so it has not been routed to anyone for resolution. We should know to whom this Incident should be assigned, which might be the ServiceNow support team or at a minimum, the Service Desk, so we should populate that field right from the start. If we do that, then we should also set the State to Assigned rather than New.

We should also use a more appropriate Category, but the biggest improvement that we could make would be in the Description. When you generate an Incident via Event Management, you want to do as much as you can to explain both what happened and what can be done about it to the technician who will end up having to work the Incident. The Description that we are generating right now really doesn’t do that at all. We can do much better.

All of these fields are populated in the Create Incident flow that we assigned to our rule. Since that’s an out-of-the-box generic flow, we don’t really want to modify it, but we can make our own copy of it and then make whatever changes we want to make to our copy. That sounds like a bit of a project, though, so let’s make that exercise the subject of our next installment.

Fun with Outbound REST Events, Part VII

“It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”
Theodore Roosevelt

Now that we have added the code to log all of our potential Events, we need to test that code out to make sure that it actually works. The only way to do that is for something to happen to trigger the logging of the Event. Some errors are easier to produce than others, so we might as well start out with an easy one first.

Probably the easiest of all, particularly since we have already done this in our earlier testing, is to force an invalid HTTP Response Code. We accomplished that when we were testing our Outbound REST Message by having the wrong credentials for the service. That got us a 401 response code instead of the desired 200. Since we are storing our credential values in System Properties, all we need to do in order to force a 401 response code is to change the value of one or both of those properties. Let’s do that now.

Updating the credentials properties
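
If you would rather make that change from a script than from the property form, a quick line in Scripts – Background should do the same thing; just be sure to put the real value back once your testing is complete:

// temporarily corrupt the token so that the service returns a 401 (restore the real value after testing)
gs.setProperty('us.address.service.auth.token', 'bogus-value-for-testing');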

Now all that we need to do is make an address change on some User Profile and see what happens. Since our approach to service failures was to allow the update to proceed without address validation, you won’t really see anything when you update the user’s record. To find out if an Event was actually generated from the issue, we will have to take a peek at the Events table. The easiest way to do that is to select the All Events option from the left-hand navigation. Sure enough, our new Event is now sitting out there. Let’s take a look.

Event generated from address service failure

Everything looks to be in order, and thanks to the Event logging utility that we were able to leverage, there is data populated in the Event that we did not have to pass in ourselves. The JSON data in the Additional Info field is a little hard to read, but we have already gone over a quick fix for that. We should go ahead and do that same thing here.

Additional Info formatted using the JSON View Dictionary Attribute

That’s much better.

One other thing that you may have noticed is that logging this Event generated an Alert. Let’s take a look at the Alert now by clicking on the little info icon on the right side of the Alert field and then clicking on the Open Record button in the pop-up window.

The Alert generated from logging the Event

One of the things that you may have noticed is that ServiceNow generated a Message Key for our Event by combining a number of other Event properties. The generated message key for this Event is:

AddressValidationUtils_ServiceNow_ServiceNow_alene.rabeck_Invalid Response Code

If you do not supply a Message Key of your own, then one will be generated for you by combining the Source, Node, Type, Resource, and Metric Name. ServiceNow collects all Events with the same Message Key under a single Alert. This prevents multiple actions from being initiated for the same issue. For example, if a user attempted to update the profile of the same User multiple times, an Event would be logged for every failed attempt to reach the address validation service. However, all of those Events would be associated with a single Alert, so only one remediation action would be invoked. On the other hand, if an update was attempted for a different User, any Events logged as a result of that activity would be consolidated under a different Alert, as the Resource (the User, in our example) would be different, which would generate a different Message Key.

Another thing that you may have noticed is that there is no Task associated with this Alert. Tasks can be generated from Alerts using Alert Management Rules, but there are currently no rules in place that apply to this Alert, so no further action was taken. Before we are through with this exercise, we will be building a rule to spawn Incidents from our Alerts, but that’s not today’s concern. Today I want to focus on the testing of our Events.

We added code to our Script Include to log 4 different kinds of Events, and so far, we have only tested one of those, the invalid HTTP Response Code. The other three all have something to do with the response content returned from the service, which makes it a little more difficult to test, since we have no control over the response returned from the service. To test these other three, we will need to add some temporary code to alter the response that comes back from the service to something that will trigger each of our other Events. We can add that code right after we get the actual response from the service and then alter it to force an error for testing purposes. Here is the original line of code that grabs the response content along with our alterations to produce an error condition:

var body = resp.getBody();
// temporary test code (remove after testing)
body = '[';
// end temporary test code

That value should trigger the unparsable response error. Now, all we need to do to test it is to issue an address change and then check the Events table for the resulting Event. To trigger the invalid response content error, you can change the inserted line to this:

body = '[]';

Now the response is parsable, but it is empty, which should take us to our third error condition. To get to the fourth, we can alter it again to this:

body = '[{}]';

Now the response is parsable and the array contains a single element, so that should get us past the earlier two issues. Since the object does not have an analysis property, though, that should drop us into our fourth error condition, which should log yet a different Event.

Once you complete all of your testing, you will want to go back into the code and remove all of the lines we added for testing purposes, and then test one more time, just to make sure that everything is now back working as it should. With that out of the way, we have now completed the testing for all of our recent changes.

Now that we are successfully logging all of these Events, we are going to want to do something with them. That process deserves an installment devoted exclusively to that effort, so we will leave that exercise for our next time out.

Fun with Outbound REST Events, Part VI

“Quality is never an accident. It is always the result of intelligent effort.”
John Ruskin

Now that we have completed our address verification feature, we can finally turn our full and complete attention to the actual purpose of this entire adventure, which is to explore the use of ServiceNow Event Management practices on the internal workings of ServiceNow itself. When we last left our Script Include, we had identified a number of places in the script where things could potentially go wrong. As a temporary measure, we just put a simple gs.info statement in each one of those places. Now we want to replace those with Event logging so that we can leverage the built-in power of the ServiceNow Event Management infrastructure.

To make that easier, we built a utility a while back to handle much of the heavy lifting of logging an Event. We can take advantage of that utility and minimize the code that we will need to add to our Script Include. Each gs.info statement will need to be replaced with something like this:

var seu = new ServerEventUtil();
seu.logEvent(source, resource, metric_name, severity, description, additional_info);

Now we just need to figure out what values to send for each of those function arguments. Let’s take them one at a time.

source

This is the source of the Event, which in our case is the Script Include that is logging the Event. Since the name of the Script Include is always stored in an internal property called type, I just like to pass this.type for this argument, which works in all Script Includes without modification.

resource

This is a reference to the thing that you were working on when the problem occurred. In our case, this would be a User, but in the current configuration, we do not have a handle on the User record that is being updated. We could use the address here, just to have some kind of unique value, but when we turn this Event into an Incident, it would be good to know which User was being updated. The solution to that would be to have the calling script pass some reference to the User record as an additional argument to the function. That’s a little more work, but it will be worth it in the long run.

metric_name

This is basically the problem that occurred, and we will end up with a different value here for different issues such as an unparsable JSON string or a bad HTTP Response Code.

severity

These are just your standard severity values, and for our purposes, I think we will just pass a hard-coded 3 (Moderate) here.

description

As the name implies, this is just a text description of what happened. Ours will be unique to the problem that occurred.

additional_info

This is an open-ended JSON object into which you can stuff basically anything that you might want to know about what happened that isn’t already in a defined property. The Event logging utility automatically adds some standard things to this object such as user information and a stack trace, but we will want to add some additional information as well such as what was sent to the service and what came back. It takes a bit of code to construct the additional info object, so I like to build a function for that so that it can be called from wherever it is needed instead of duplicating the code everywhere. Here is the one that we will add for this exercise:

buildAdditionalInfo: function(input, response, respObject, exception) {
	var additionalInfo = {input: {}, response: {}};

	additionalInfo.input.street = input.street;
	additionalInfo.input.city = input.city;
	additionalInfo.input.state = input.state;
	additionalInfo.input.zip = input.zip;
	additionalInfo.response.code = response.getStatusCode();
	additionalInfo.response.content = response.getBody();
	additionalInfo.response.headers = response.getHeaders();
	if (respObject) {
		additionalInfo.response.object = respObject;
	}
	if (exception) {
		additionalInfo.exception = exception.toString();
		additionalInfo.stackTrace = exception.stack;
	}

	return additionalInfo;
},

Using a function for this not only consolidates the code into a single place, it also ensures some consistency between the various Events, which makes it easier to pull the data back out when you want to use it for things like formatting the description of a resulting Incident.

Now that we know how we are going to populate these arguments, let’s go down through the code and replace each of our gs.info statements with Event logging. The first one that we come across is the JSON parsing exception.

try {
	respArray = JSON.parse(body);
} catch (e) {
	seu.logEvent(
		this.type,
		user,
		'Unparsable response',
		3,
		'The response content received from the US Address validation service could not be parsed.',
		this.buildAdditionalInfo(response, resp, null, e));
}

At this point in the process, we do not have a response object, but we do have an exception, so we pass null as the response object to the function that builds out the additional info. All of the others will be very similar, so we don’t have to go through each one individually. Here is the complete function, with all of the gs.info statements replaced and the user identifier added as a function argument:

validateAddress: function(user, street, city, state, zip) {
	var response = {result: 'failure', street: street, city: city, state: state, zip: zip};

	var seu = new ServerEventUtil();
	var rest = new RESTMessage('US Street Address API', 'get');
	rest.setStringParameter('authid', gs.getProperty('us.address.service.auth.id'));
	rest.setStringParameter('authToken', gs.getProperty('us.address.service.auth.token'));
	rest.setStringParameter('street', encodeURIComponent(street));
	rest.setStringParameter('city', encodeURIComponent(city));
	rest.setStringParameter('state', encodeURIComponent(state));
	rest.setStringParameter('zip', encodeURIComponent(zip));
	var resp = rest.execute();
	var body = resp.getBody();
	if (resp.getStatusCode() == 200) {
		var eventLogged = false;
		var respArray = [];
		try {
			respArray = JSON.parse(body);
		} catch (e) {
			seu.logEvent(
				this.type,
				user,
				'Unparsable response',
				3,
				'The response content received from the US Address validation service could not be parsed.',
				this.buildAdditionalInfo(response, resp, null, e));
			eventLogged = true;
		}
		if (respArray && respArray.length > 0) {
			var respObj = respArray[0];
			if (typeof respObj.analysis == 'object') {
				var validity = respObj.analysis.dpv_match_code;
				if (validity == 'Y' || validity == 'S' || validity == 'D') {
					response.result = 'valid';
					response.street = respObj.delivery_line_1;
					response.city = respObj.components.city_name;
					response.state = respObj.components.state_abbreviation;
					response.zip = respObj.components.zipcode;
					if (respObj.components.plus4_code) {
						response.zip += '-' + respObj.components.plus4_code;
					}
				} else {
					response.result = 'invalid';
				}
			} else {
				seu.logEvent(
					this.type,
					user,
					'Invalid Response Object',
					3,
					'The response object received from the US Address validation service was not valid.',
					this.buildAdditionalInfo(response, resp, respObj));
			}
		} else {
			if (!eventLogged) {
				seu.logEvent(
					this.type,
					user,
					'Invalid Response Content',
					3,
					'The response content received from the US Address validation service was not valid.',
					this.buildAdditionalInfo(response, resp));
			}
		}
	} else {
		seu.logEvent(
			this.type,
			user,
			'Invalid Response Code',
			3,
			'The response code received from the US Address validation service was not valid.',
			this.buildAdditionalInfo(response, resp));
	}

	return response;
},

The one place where we had to add a little bit of extra logic was the Event that is triggered when the respArray is empty. One possible reason for that array to be empty would be if we failed to successfully parse the JSON string. When that happens, we have already logged an Event, so we would not want to now log a second one for the same issue. To prevent that from happening, we added the eventLogged variable, and then we only log an Event later on if that variable is still set to false. Other than that one special circumstance, all of these are pretty much the same other than the unique values that are specific to the particular problem triggering the Event.

That completes the modifications necessary to support Event logging, but since we added the user identifier to the list of function arguments, we still have a little work to do to carry that change forward through all of the other components. To begin, we will have to collect the user value from the Ajax parameters and pass that on to the primary function. That client callable function now looks like this:

validateAddressViaClient: function() {
	var user = this.getParameter('sysparm_user');
	var street = this.getParameter('sysparm_street');
	var city = this.getParameter('sysparm_city');
	var state = this.getParameter('sysparm_state');
	var zip = this.getParameter('sysparm_zip');
	return JSON.stringify(this.validateAddress(user, street, city, state, zip));
},

Not much change here; we pull one more parameter into a variable and then add that variable to the function call arguments. Of course, none of that will do any good if we don’t send that extra parameter with the Ajax call, so we will need to modify our Client Script as well. Again, there is not much to change here, but we need to make the change. Our code to value the parameters now has one additional line:

ga.addParam('sysparm_name', 'validateAddressViaClient');
ga.addParam('sysparm_user', g_form.getValue('user_name'));
ga.addParam('sysparm_street', street);
ga.addParam('sysparm_city', city);
ga.addParam('sysparm_state', state);
ga.addParam('sysparm_zip', zip);

That completes the changes that we need to make in order to log an Event whenever something unexpected occurs. We still need to test everything to make sure that it all works, but to do that, we are going to have to force some kind of error to occur. That sounds like a project in and of itself, so this seems like a good stopping place for now. We’ll figure out all of that testing stuff in our next installment.