Periodic Review, Part III

“We conquer by continuing.”
George Matheson

Last time, we created our app, added a few properties, and built our first table. Today we are going to work with that table to configure the layout of both the list and the form, and then maybe do a few other things before we move on to the rest of the tables. Let’s start with the list.

To edit the fields that will show up on the list view, we can bring up the list view and then select Configure -> List Layout from the context menu.

Configuring the list layout

Using the slush bucket, we can select Number, Short description, Item label and Description from the available fields on the table.

Selecting the fields to appear on the list view

That will give us a list view that looks like this.

Newly configured list view

Using basically the same method, we can arrange the fields on the form view.

Configuring the form layout

The form is a bit more complicated than the list, so to help organize things, we can divide the form into sections. After we lay out the main section of the form, we can scroll down to the Section list and click on the New… option, which brings up a small dialog box where we can give our new section a name.

Creating a new section on the form

Once we have created the new section, we can drag in and arrange all of the fields that we would like to see in that section of the form.

Populating the fields in our new section

Once that has been completed, we can take a look at our new form.

The new form layout

We are still not quite done with the form just yet, though. All of the columns that reference fields on the selected table should have a selection list that is limited to just the fields on that table. To accomplish that, we need to pull up the dictionary record for each of those fields and set up a dependency. To do that, right-click on the field label and select Configure Dictionary from the resulting context menu.

Editing the dictionary record from the form

Using the Advanced view, go into the Dependent Field tab, check the Use dependent field checkbox, and select Table from the list of fields.

Setting up the dependent field

This process will need to be repeated for all of the columns that represent fields on the configured table; if there are more than a few of them, the little script sketched below can handle them all in one pass.
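Here is a minimal sketch of that approach as a background script. The table name and field list here are hypothetical and would need to be adjusted to match your actual dictionary entries, but the use_dependent_field and dependent_on_field columns are the same ones that the checkbox and selection list on the form write to.

// hypothetical table name and field list; adjust to match your own dictionary entries
var fieldsToUpdate = ['short_description_field', 'description_field', 'label_field'];
var dict = new GlideRecord('sys_dictionary');
dict.addQuery('name', 'x_periodic_review_configuration');
dict.addQuery('element', 'IN', fieldsToUpdate.join(','));
dict.query();
while (dict.next()) {
    dict.use_dependent_field = true;   // same as checking the Use dependent field checkbox
    dict.dependent_on_field = 'table'; // the Table column on this form
    dict.update();
}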

The last thing that we need to add to this form, at least for now, is the ability to test the Filter against the specified table. It would probably be more user-friendly if our Filter field was some kind of query builder, but since it is just a simple String field, the least we can do is to provide some mechanism to test out the query string once it has been entered. The easiest way to do that would be to create a UI Action called Test Filter that used the Table and the Filter fields to branch to the List view of that table. Building a link to the List view in script would look something like this:

current.table + '_list.do?sysparm_query=' + encodeURIComponent(current.filter)

Branching to that page in a UI Action script would then just be this:

action.setRedirectURL(current.table + '_list.do?sysparm_query=' + encodeURIComponent(current.filter));

Clicking on the button would then take you to the list where you could see what records would be selected using that filter. To create the UI Action, we can use the context menu on the form and select Configure -> UI Actions and then click on the New button to create a new UI Action for that form.

Creating a new UI Action to test out the entered query filter
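Putting those two pieces together, with a little defensive checking thrown in, the complete UI Action script might look something like this sketch. The table and filter field names assume the columns that we created last time, and the error message is just one way to handle a record that is not ready to be tested.

// minimal sketch of a Test Filter UI Action script; assumes the table
// and filter columns created earlier in this series
if (!current.table.nil() && !current.filter.nil()) {
    action.setRedirectURL(current.table + '_list.do?sysparm_query=' + encodeURIComponent(current.filter));
} else {
    gs.addErrorMessage('Both a Table and a Filter are required to test the filter.');
    action.setRedirectURL(current);
}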

Once our action has been configured and saved, the button should appear at the top of the form.

New form button from new UI Action configuration

That should be just about it for our first table and all of the associated fields, forms, and views. Next time, we can use our Service Account Management app as a potential first user of this app and see if we can set up the configuration for that app before we move on to creating other tables.

Fun with Webhooks, Part VII

“The question isn’t who is going to let me; it’s who is going to stop me.”
Ayn Rand

Now that we have proven that the essential elements of our Subflow all work as intended, it’s time to finish out the remainder of the flow’s activities, which include logging the HTTP POST and Response details as well as reporting any undesired results to Event Management. Let’s start with logging the activity.

The first thing that we will need in order to record our Webhook POSTs is a table in which to store the data. As we did with our original Webhook Registry table, we can navigate to System Definition -> Tables and click on the New button to bring up the Table definition screen.

New Webhook Log table

And once again we will give it a Label and let the system generate the associated Name. We will also want to uncheck the Create module checkbox again to prevent the generation of a number of artifacts for which we have no use. Once the table has been defined, we can start adding fields, and the first field that we will want to add is a Reference to the Webhook Registry table. Every log record will be linked to the registry for which the activity was POSTed, so we will want to establish that relationship with a Reference field that we can label Registry.

The other Reference field that we will want is a link back to the original Incident that is the subject of the POST. Since we set things up in a way that would allow us to support tables other than Incident, we will want to do this with a Document ID field rather than a direct reference to the Incident table. This time, when we configure the Dependent Field, we can dot-walk through the registry reference to get to the table name column on that related record. That saves us from having to keep a column on the log table for the name of the table that holds the record associated with the Document ID, and it will ensure that all of the Document IDs related to each Registry come only from the table associated with that Registry.

Selecting the Registry record’s Table column as the Dependent Field

The rest of the fields in the log table are just what we sent over, and what we received in response:

  • Payload – The data that we will be POSTing
  • URL – The URL to which we will be POSTing our Payload
  • Status – The HTTP Response Code returned by the target server
  • Body – The body of the message returned by the target server
  • Error – The error flag
  • Error Code – The error code
  • Error Message – The error message
  • Parse Error – Any error that occurred while parsing the body of the response

After defining all of the fields on the table, I brought up the table’s form and arranged all of the fields on the screen in a manner that I thought was most appropriate.

Webhook Log form layout

Those of you who are paying close attention will also have noticed that I added the JSON View Dictionary Attribute to both the Payload and Body fields, just to make reading the JSON content a little easier.

Now that we have a table, we can start putting records into it. We will do this in our Subflow, right after we POST the payload. This is just a simple, out-of-the-box Create Record action that we can configure using data pills from various other steps.

Logging the Webhook POST and Response
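For those who like to see things in script form, that Create Record action boils down to something along these lines. The table and column names here are assumptions based on the fields defined above, and the variables on the right-hand side are just stand-ins for the data pills in the flow.

// rough script equivalent of the Create Record action; column names are
// assumptions based on the fields above, and the values stand in for data pills
var log = new GlideRecord('x_webhook_log');
log.initialize();
log.registry = registry_sys_id;   // Reference to the Webhook Registry record
log.document_id = record_sys_id;  // Document ID of the record that was POSTed
log.url = post_url;
log.payload = post_payload;
log.status = response_code;
log.body = response_body;
log.error = error_flag;
log.error_code = error_code;
log.error_message = error_message;
log.parse_error = parse_error;
var logId = log.insert();         // hang on to the sys_id for Event reporting later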

In addition to capturing everything related to each POST, the other thing that we wanted to do was to capture any issues that might come up during this process. We are already aware of two possible issues: one is a bad response, and the other is a good response with an unparsable body. After we do the POST and log the result, we can throw in a few more conditionals to pick those up, and then add a Log Event Action to the flow for each.

Complete Subflow with Event logging for error conditions
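Expressed in script terms, those two new conditions amount to something like this, where the variable names are again just stand-ins for the flow’s data pills.

// the two error checks, in script form for clarity
var badResponse = (response_code < 200 || response_code > 299); // bad response
var badBody = (!badResponse && parse_error != '');              // good response, unparsable body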

Whenever we log an Event, we will want to capture as much information as we can about what went wrong. In this case, however, we have already logged everything about the transaction to the Webhook Log table, so really all we will need to provide is a way to find that record. Putting the sys_id in the additional_info field should do the trick. Here is how I populated all of the data for the Event:

Event Log data
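For reference, the script equivalent of that Log Event action would look something like the sketch below. The em_event column names are standard, but the values shown are illustrative, and the logId variable is assumed to hold the sys_id of the Webhook Log record created earlier.

// sketch of creating the Event in script; values are illustrative and
// logId is assumed to hold the sys_id of the Webhook Log record
var event = new GlideRecord('em_event');
event.initialize();
event.source = 'Webhooks';
event.node = 'ServiceNow';
event.type = 'Webhook POST failure';
event.resource = registry_name; // stand-in for the Registry data pill
event.metric_name = 'Invalid Response Code';
event.severity = 3;             // mid-range severity (1 = Critical ... 5 = Info)
event.description = 'Webhook POST did not complete normally';
event.additional_info = JSON.stringify({webhook_log: logId}); // pointer back to the log record
event.insert();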

That should complete the Subflow, at least for now. We may end up adding some other features in the future, but for now, this accomplishes everything that we set out to do. We still need to do a lot more testing to verify that all of these various branches in the tree work out as we are hoping, but the building part should be done now, at least for this portion.

As far as the remaining development goes, we still have to build out the My Webhooks portal widget, and we also need to go back into the Script Include and add support for Basic Authentication. We also need to add code, and possibly additional fields in the registry record, for any other authentication protocols that we would like to support. So once again we find ourselves at a crossroads: we can either jump into the Service Portal world and start working on our widget, or we can turn our attention to authentication and finish things up in that area. There is no need to make any decision on that today, though. We’ll figure all of that out when we meet again.

Fun with Outbound REST Events, Part VII

“It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”
Theodore Roosevelt

Now that we have added the code to log all of our potential Events, we need to test that code out to make sure that it actually works. The only way to do that is for something to happen to trigger the logging of the Event. Some errors are easier to produce than others, so we might as well start out with an easy one first.

Probably the easiest of all, particularly since we have already done this in our earlier testing, is to force an invalid HTTP Response Code. We accomplished that when we were testing our Outbound REST Message by having the wrong credentials for the service. That got us a 401 response code instead of the desired 200. Since we are storing our credential values in System Properties, all we need to do in order to force a 401 response code is to change the value of one or both of those properties. Let’s do that now.

Updating the credentials properties

Now all that we need to do is make an address change on some User Profile and see what happens. Since our approach to service failures was to allow the update to proceed without address validation, you won’t really see anything when you update the user’s record. To find out if an Event was actually generated from the issue, we will have to take a peek at the Events table. The easiest way to do that is to select the All Events option from the left-hand navigation. Sure enough, our new Event is now sitting out there. Let’s take a look.

Event generated from address service failure

Everything looks to be in order, and thanks to the Event logging utility that we were able to leverage, there is data populated in the Event that we did not have to pass in ourselves. The JSON data in the Additional Info field is a little hard to read, but we have already gone over a quick fix for that. We should go ahead and do that same thing here.

Additional Info formatted using the JSON View Dictionary Attribute

That’s much better.

One other thing that you may have noticed is that logging this Event generated an Alert. Let’s take a look at the Alert now by clicking on the little info icon on the right side of the Alert field and then clicking on the Open Record button in the pop-up window.

The Alert generated from logging the Event

Looking at this Alert, you may have noticed that ServiceNow generated a Message Key for our Event by combining a number of other Event properties. The generated Message Key for this Event is:

AddressValidationUtils_ServiceNow_ServiceNow_alene.rabeck_Invalid Response Code

If you do not supply a Message Key of your own, then one will be generated for you by combining the Source, Node, Type, Resource, and Metric Name. ServiceNow collects all Events with the same Message Key under a single Alert, which prevents multiple actions from being initiated for the same issue. For example, if someone attempted to update the same User’s profile multiple times, an Event would be logged for every failed attempt to reach the address validation service, but all of those Events would be associated with a single Alert, so only one remediation action would be invoked. On the other hand, if an update was attempted for a different User, any Events logged as a result of that activity would be consolidated under a different Alert, as the Resource (the User, in our example) would be different, which would generate a different Message Key.
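In other words, the default key appears to be assembled by simply stringing those five values together with underscores:

// how the generated Message Key appears to be put together; the
// variables stand in for the corresponding Event properties
var messageKey = [source, node, type, resource, metric_name].join('_');
// e.g. AddressValidationUtils_ServiceNow_ServiceNow_alene.rabeck_Invalid Response Code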

Another thing that you may have noticed is that there is no Task associated with this Alert. Tasks can be generated from Alerts using Alert Management Rules, but there are currently no rules in place that apply to this Alert, so no further action was taken. Before we are through with this exercise, we will be building a rule to spawn Incidents from our Alerts, but that’s not today’s concern. Today I want to focus on the testing of our Events.

We added code to our Script Include to log four different kinds of Events, and so far, we have only tested one of those: the invalid HTTP Response Code. The other three all have something to do with the response content returned from the service, which makes them a little more difficult to test, since we have no control over what the service sends back. To test these other three, we will need to add some temporary code that alters the response from the service into something that will trigger each of our other Events. We can add that code right after we get the actual response from the service and then alter it to force an error for testing purposes. Here is the original line of code that grabs the response content, along with our alterations to produce an error condition:

var body = resp.getBody();
// temporary test code (remove after testing)
body = '[';
// end temporary test code

That value should trigger the unparsable response error. Now, all we need to do to test it is to issue an address change and then check the Events table for the resulting Event. To trigger the invalid response content error, you can change the inserted line to this:

body = '[]';

Now the response is parsable, but it is empty, which should take us to our third error condition. To get to the fourth, we can alter it again to this:

body = '[{}]';

Now the response is parsable and the array contains a single element, so that should get us past the earlier two issues. Since the object does not have an analysis property, though, that should drop us into our fourth error condition, which should log yet a different Event.

Once you complete all of your testing, you will want to go back into the code and remove all of the lines that we added for testing purposes, and then test one more time, just to make sure that everything is back to working as it should. With that out of the way, we have now completed the testing for all of our recent changes.

Now that we are successfully logging all of these Events, we are going to want to do something with them. That process deserves an installment devoted exclusively to that effort, so we will leave that exercise for our next time out.

Fun with Outbound REST Events, Part III

“If it’s a good idea, go ahead and do it. It is much easier to apologize than it is to get permission.”
Rear Admiral Grace Murray Hopper (“Amazing Grace”)

In the last installment in this series, we set up an Outbound REST Message and announced the intention of devoting this installment to the creation of a Script Include that would utilize the new REST message. In testing out that new Script Include, however, I ended up making a few tweaks to that REST message, so we should probably go over those first, just to bring everyone up to speed on what the REST message looks like today.

The first thing that I ended up changing was the name of the address variable. The web service uses the word street for that parameter, but for some reason I was thinking that ServiceNow called the property on the sys_user table address, so I wanted to adopt their name and not the one used by the web service. As it turns out, however, ServiceNow uses the word street as well, so I went into the REST message definition and changed it back to street in the endpoint URL and in the variable test value list.

The other thing that I ended up doing was adding a completely new variable, called authToken, to both the URL and the variable list. When we tested the GET method, we were using the auth-id from the website’s testing tool, and we expected to get the 401 HTTP Response that we received. However, when testing the new Script Include with my own auth-id, I got the same thing. After rooting around in the documentation on the service’s website, I discovered that server-side calls to the service require both an auth-id and an auth-token, both of which you receive when you sign up for their services. Once I figured that out, I added an auth-token parameter to the URL, added an authToken variable, and added a new System Property called us.address.service.auth.token to store the value of the auth-token.
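Inside the Script Include, pulling that property value into the REST call would look something like this sketch; the REST message and method names here are assumptions, since we have not settled on the final Script Include code just yet.

// minimal sketch of supplying the auth-token variable; the REST message
// and method names are assumptions
var request = new sn_ws.RESTMessageV2('US Street Address', 'get');
request.setStringParameterNoEscape('street', street);
request.setStringParameterNoEscape('authToken', gs.getProperty('us.address.service.auth.token'));
var response = request.execute();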

After doing all of that, before getting back to the Script Include, I went ahead and tested the modified GET method using the Test link on the method form. That actually got me valid results, but they were a little hard to read.

Web Service Test Results

Fortunately, there is a handy little trick that you can use in ServiceNow to clean up that big, long string quite nicely. If you right click on the Response field label and select Configure Dictionary, you will be taken to the Dictionary Entry form for the Response field. Down at the bottom of the form, you will find the Related Lists. Open up the Attributes tab and then click on the New button at the top of the list. This will take you to the Dictionary Attribute form.

Dictionary Attribute form

Select JSON view from the drop-down list or pop-up selection and then type the word true in the Value field. Save the form, which will take you back to the Dictionary Entry form, and from there you can use the back arrow on the form to return to the test results. Now you should see a new little icon next to the field label, and if you click on it, a new pop-up screen will appear with the contents of the Response field formatted for much easier reading.

Formatted JSON response

That’s a much better way to look at that data. The JSON View attribute is just one of many field attributes available on the platform. When you have some time, it’s a worthwhile exercise to go back into that selection list and take a look at all of the various choices. It’s very easy to try one or two out, just to see what they do. Some of them, like this one, can be quite useful.

Well, we never got around to actually working on the Script Include, but at least we are all caught up on the changes that I made to get to this point. Since the Script Include discussion will undoubtedly consume an entire post all on its own, I think this will be a good place to stop for now. We will tackle that Script Include in the next installment in this series.