“I dream of men who take the next step instead of worrying about the next thousand steps.” — Theodore Roosevelt
Last time, we did a little bit of work on the list and form for our new table, and now we need to try it out and make sure that everything works as intended. As we mentioned earlier, we will be using our recent Service Account Management demonstration app as the test case for this project, so let’s pull up the form and start entering some data.
The first thing that we will want to do is to select our Service Account table from the list of tables. Once that has been established, all of the table field drop-down choices will be fields from that table. Now we can select the columns to be used for each of the configuration values that are fields from the specified table.
Here, we run into a little bit of a problem. The fields we need for the Short Description, Description, and Recipient columns are all values from the base table. For the Escalation recipient column, however, what we would really like to specify there is the Manager of the Owner, which is a reference field on the sys_user table called manager. We would normally write that as a dot-walked field name using the value owner.manager. Since this is a drop-down, though, we don’t have that option. We’ll have to do a little research to see if there is a way around that limitation.
Those are the only required fields, so we can save the record at this point, just to establish our new configuration. Once we do that, though, we will want to bring the record back up again, as we need to test out the Test Filter button that we added to the form to make sure that it works. For now, let’s just use active=true as our test filter and see what happens.
Once we save the configuration record with the new filter, we can click on the Test Filter button and see where we land.
Well, there is good news and bad news here. The good news is that we ended up on the Service Account table’s list page, which is where we wanted to be. The bad news is that there are no records being displayed. Fortunately, the reason for that is that I cleared out all of my sample test cases on the instance once that project was complete, so the problem is not because of any flaw in the UI Action. Still, if I want to use the Service Account Management app for testing, I am going to need to recreate some of those test cases before we get too far. Anyway, you can look at the filter display on the list and see that the filter that we entered on the configuration is accurately represented in the resulting list. So that looks good.
There are a lot more fields to enter, but some of them, such as the Resolution Script or the Resolution Subflow, will have to wait until we actually build a Script Include or a Subflow. For now, I think we have done enough to check out all of the form customizations, so now it is time to get back to building tables. Next time, we will jump into creating the table that we will use to store the execution information each time the process runs. That one should be pretty straightforward, so we may even get a chance to tackle another one after that.
Last time, we created our app, added a few properties, and built our first table. Today we are going to work with that table to configure the layout of both the list and the form, and then maybe do a few other things before we move on to the rest of the tables. Let’s start with the list.
To edit the fields that will show up on the list view, we can bring up the list view and then select Configure -> List Layout from the context menu.
Using the slush bucket, we can select Number, Short description, Item label and Description from the available fields on the table.
That will give us a list view that looks like this.
Using basically the same method, we can arrange the fields on the form view.
The form is a bit more complicated than the list, so to help organize things, we can divide the form into sections. After we lay out the main section of the form, we can scroll down to the Section list and click on the New… option, which brings up a small dialog box where we can give our new section a name.
Once we have created the new section, we can drag in and arrange all of the fields that we would like to see in that section of the form.
Once that has been completed, we can take a look at our new form.
We are still not quite done with the form just yet, though. All of the columns that reference fields on the selected table should have a selection list that is limited to just the fields on that table. To accommodate that, we need to pull up the dictionary record for each of those fields and set up a dependency. To do that, right click on the field label and select Configure Dictionary from the resulting context menu.
Using the Advanced view, go into the Dependent Field tab, check the Use dependent field checkbox, and select Table from the list of fields.
This process will need to be repeated for all of the columns that represent fields on the configured table.
The last thing that we need to add to this form, at least for now, is the ability to test the Filter against the specified table. It would probably be more user-friendly if our Filter field was some kind of query builder, but since it is just a simple String field, the least we can do is to provide some mechanism to test out the query string once it has been entered. The easiest way to do that would be to create a UI Action called Test Filter that used the Table and the Filter fields to branch to the List view of that table. Building a link to the List view in script would look something like this:
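Something along these lines should do the trick. It is sketched here as a plain helper function so the URL-building logic is easy to verify on its own; the configuration field names ('table' and 'filter') are placeholders for whatever the actual column names turn out to be, and the encodeURIComponent call is my own addition to keep the query string safe.

```javascript
// Sketch of the redirect logic for the Test Filter UI Action.
// The configuration field names ('table' and 'filter') are placeholders.
function buildListUrl(tableName, filter) {
    // Standard list view URL: /<table>_list.do?sysparm_query=<encoded query>
    return '/' + tableName + '_list.do?sysparm_query=' + encodeURIComponent(filter);
}

// In the UI Action itself, this would be used something like:
//   action.setRedirectURL(buildListUrl(current.getValue('table'), current.getValue('filter')));
```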
Clicking on the button would then take you to the list where you could see what records would be selected using that filter. To create the UI Action, we can use the context menu on the form and select Configure -> UI Actions and then click on the New button to create a new UI Action for that form.
Once our action has been configured and saved, the button should appear at the top of the form.
That should just be about it for our first table and all of the associated fields, forms, and views. Next time, we can use our Service Account Management app as a potential first user of this app and see if we can set up the configuration for that app before we move on to creating other tables.
“The power of one, if fearless and focused, is formidable, but the power of many working together is better.” — Gloria Macapagal Arroyo
Last time, we released yet another beta version of the app for testing, so now might be a good time to talk a little bit about what exactly needs to be tested, and maybe a little bit about where things stand and where we go from here. We have had a lot of good, quality feedback in the past, and I am hoping for even more from this version. Every bit helps drive out annoying errors and improves the quality of the product, so keep it coming. It is very much appreciated.
Installation
The first thing that needs to be tested, of course, is the installation itself. Just to review, you need to install three Update Sets in the appropriate order: SNH Form Fields, the primary Scoped Application, and the accompanying global components that could not be bundled with the scoped app. You can find the latest version of SNH Form Fields here, or you can simply grab the latest SNH Data Table Widgets from Share, which includes the latest version of the form field tag. Once that has been installed, you can then install collaboration_store_v0.7.5.xml, followed by the globals, collaboration_store_globals_v0.7.xml.
There are two types of installations, a brand new installation and an upgrade of an existing installation, and both have had various Preview and Commit errors reported. On a brand new installation, just accept all updates and you should be good to go. I don’t actually know why some of those errors come up on a brand new install, but if anyone knows of any way to keep that from happening, I would love to hear about it. It doesn’t seem to hurt anything, but it would be so much better if those wouldn’t come up at all. On an upgrade to an existing installation, you will want to reject any updates related to system properties. The values of the application’s properties are established during the set-up process, and if you have already gone through the set-up process, you don’t want those values overlaid by the installation of a new version. Everything else can be accepted as is. Once again, if anyone has any ideas on how to prevent that kind of thing from happening, please let us all know in the comments below.
The Set-up Process
Once you have the software installed for the first time, you will need to go through the set-up process. This is another thing that needs to be tested thoroughly, both for a Host instance and a Client instance. It needs to be tested with logo images and without, and for Client instances, you will need to check all of the other member instances in the community to ensure that the newly set-up instance now appears in each one. During the set-up process, a verification email will be sent to the email address entered on the form, and if your instance does not actually send out mail, you will need to look in the system email logs for the code needed to complete the process.
The Publishing Process
Once the software has been installed and the set-up process completed, you can now publish an application to the store. Both Client instances and Host instances can publish apps to the store. Publishing is accomplished on the system application form via a UI Action link at the bottom of the form labeled Publish to Application Store. Click on the link and follow the prompts to publish your application to the store. If you run into any issues, please report them in the comments below.
The Installation Process
Once published to the store, shared applications can be installed by any other Host or Client instance from either the version record of the version desired, or the Collaboration Store itself. Simply click on the Install button and the installation should proceed. Once again, if you run into any issues, please use the comments below to provide us with some detailed information on where things went wrong.
The Collaboration Store
The Collaboration Store page is where you should see all of the applications shared with the community, and there are a lot of things that need to be tested here. This is the newest addition to the application, so we will want to test it thoroughly. One thing that hasn’t been tested at all is the paging, as I have never shared enough apps to my own test environment to exceed the page limit. The more the merrier as far as testing is concerned, so if you can add as many Client instances as possible, and if each Client can share as many applications as possible, that would be very helpful. Several pages’ worth, in varying states, would help in testing the search widget as well as the primary store widget. And again, if you run into any problems, please report them in the comments.
The Periodic Sync Process
The periodic sync process is designed to recover from any previous errors and ensure that all Clients in the community have all of the same information that is stored in the Host. Testing the sync process is simply a matter of removing some artifacts from some Client instance and then running the sync process to see if those artifacts were restored. The sync process runs every day on the Host instance over the lunch hour, but you can also pull up the Flow and run it by hand.
Thanks in advance to those of you who have already contributed to the testing and especially to those of you who have decided to jump in at this late stage and give things a try. Your feedback continues to be quite helpful, and even if you don’t run into any issues, please leave us a comment and let us know that as well. Hopefully, we are nearing the end of this long, drawn out project, and your assistance will definitely help to wrap things up. Next time, we will talk a little bit more about where things go from here.
“You just have to keep driving down the road. It’s going to bend and curve and you’ll speed up and slow down, but the road keeps going.” — Ellen DeGeneres
Last time, we got the ability to toggle between the card/tile view and table view working as it should, and we made an initial stab at making the local apps look a little different from the apps that could be or have been pulled down from the Host. Now we need to figure out what we want to happen when the operator clicks on any of the links that are present on the tiles or table rows. Currently, the main portion of the tile is clickable, and in the footer, the version number could be clickable as well and could potentially launch the install process. In fact, let’s take a look at that first.
Right now, if you want to install a version of a store application, you go to the version record and click on the Install button. Let’s take a quick peek at that UI Action and see how that works.
var attachmentGR = new GlideRecord('sys_attachment');
attachmentGR.addQuery('table_name', 'x_11556_col_store_member_application_version');
attachmentGR.addQuery('table_sys_id', current.sys_id);
attachmentGR.addQuery('content_type', 'CONTAINS', 'xml');
attachmentGR.query();
if (attachmentGR.next()) {
    action.setRedirectURL('/upload.do?attachment_id=' + attachmentGR.getUniqueValue());
} else {
    gs.addErrorMessage('No Update Set XML file found attached to this version');
}
Basically, it just links to the stock upload.do page (which we hacked up a bit sometime back) with the attachment sys_id as a parameter. Assuming that we had the attachment sys_id included along with the rest of the row data, we could simply build an anchor tag to launch the install such as the following.
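Something like this would cover it, sketched here as a plain string builder; the function name and the idea that the attachment sys_id is already sitting in the row data are assumptions, but the URL format comes straight from the UI Action above.

```javascript
// Sketch: build the install link markup from the attachment sys_id carried
// along with the rest of the row data. Function name is illustrative.
function buildInstallAnchor(attachmentId) {
    return '<a href="/upload.do?attachment_id=' + attachmentId + '">Install</a>';
}
```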
To get the attachment sys_id, we could steal some of the UI Action code above to build a function for that purpose.
function getAttachmentId(applicationId, version) {
    var attachmentId = '';
    var versionGR = new GlideRecord('x_11556_col_store_member_application_version');
    versionGR.addQuery('member_application', applicationId);
    versionGR.addQuery('version', version);
    versionGR.query();
    if (versionGR.next()) {
        var attachmentGR = new GlideRecord('sys_attachment');
        attachmentGR.addQuery('table_name', 'x_11556_col_store_member_application_version');
        attachmentGR.addQuery('table_sys_id', versionGR.getUniqueValue());
        attachmentGR.addQuery('content_type', 'CONTAINS', 'xml');
        attachmentGR.query();
        if (attachmentGR.next()) {
            attachmentId = attachmentGR.getUniqueValue();
        }
    }
    return attachmentId;
}
For local apps, or apps that are already installed and up to date, there is no action to take. But for apps that have not been installed, or those that have, but are not on the latest version, a link to the install process would be appropriate. Two mutually exclusive tags would cover both cases.
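In the widget’s HTML template, that pair of tags might look something like the following sketch. The item property names (item.installed, item.current, item.attachmentId) are assumptions on my part, standing in for whatever the row data actually carries:

```html
<!-- Sketch only: property names are assumed, not taken from the actual widget -->
<a ng-if="!item.installed" href="/upload.do?attachment_id={{item.attachmentId}}">Install</a>
<a ng-if="item.installed && !item.current" href="/upload.do?attachment_id={{item.attachmentId}}">Update</a>
```

Since the two ng-if conditions can never both be true, at most one link is rendered for any given row.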
To test this out, we will need to publish an app on our Host instance and bring up the store on our Client instance to see how this looks.
That takes care of the install link. For the main action when clicking on the tile itself (or on the app name in the table view), we should have some way of displaying all of the details of the app. We could link to the existing form for the application table, but we might want something a little more formatted, and maybe even just a pop-up so that you don’t actually leave the shopping experience. Let’s see if we can throw something like that together next time out.
“Continuous delivery without continuous feedback is very, very dangerous.” — Colin Humphreys
Last time, we started to tackle some of the issues that were reported with the last set of Update Sets released for this project. Today we need to attempt to address a couple more things and then put out another version with all of the latest corrections. One of the issues that was reported was that the application publishing failed because the Host instance was unavailable. While technically not a problem with the software, it seems rather rude to allow the user to go through half of the process only to find out right in the middle that things cannot proceed. A better approach would be to check on the Host first, and then only proceed if the Host is up and running.
We already have a function to contact the Host and obtain its information. We should be able to leverage that existing function in a new client-callable function in our ApplicationPublisher Script Include.
verifyHost: function() {
    var hostAvailable = 'false';
    if (gs.getProperty('x_11556_col_store.host_instance') == gs.getProperty('instance_name')) {
        hostAvailable = 'true';
    } else {
        var csu = new CollaborationStoreUtils();
        var resp = csu.getStoreInfo(gs.getProperty('x_11556_col_store.host_instance'));
        if (resp.status == '200' && resp.name > '') {
            hostAvailable = 'true';
        }
    }
    return hostAvailable;
}
First we check to see if this is the Host instance, and if not, then we attempt to contact the Host instance to verify that it is currently online. To invoke this new function, we modify the UI Action that launches the application publishing process and add a new function ahead of the existing function that kicks off the process.
function verifyHost() {
    var ga = new GlideAjax('ApplicationPublisher');
    ga.addParam('sysparm_name', 'verifyHost');
    ga.getXMLAnswer(function (answer) {
        if (answer == 'true') {
            publishToCollaborationStore();
        } else {
            g_form.addErrorMessage('This application cannot be published at this time because the Collaboration Store Host is offline.');
        }
    });
}
With this code in place, if you attempt to publish a new version of the application and the Host is unreachable, the publication process will not start, and you will be notified that the Host is down.
That’s a little nicer than just throwing an error halfway through the process. This way, if the Host is out of service for any reason, the publishing process will not even begin.
I still do not have a solution for the application logo image that would not copy, but these other issues that have been resolved should be tested out, so I think it is time for a new Update Set for those folks kind enough to try things out and provide feedback. There were no changes to any of the globals, so v0.7 of those artifacts should still be good, but the Scoped Application contains a number of corrections, so here is a replacement for the earlier v0.7 that folks have been testing out.
If you have already been testing, this should just drop in on top of the old; however, if this is your first time, or you are trying to install this on a fresh instance, you will want to follow the installation instructions found here, and just replace the main Update Set with the one above. Thanks once again to all of you who have provided (or are about to provide!) feedback. It is always welcome and very much appreciated. Hopefully, this version will resolve some of those earlier issues and we can move on to discovering new issues that have yet to be detected. If we get any additional feedback, we will take a look at that next time out.
“Discovering the unexpected is more important than confirming the known.” — George E. P. Box
Last time, we wrapped up all of the modifications necessary to add the new logging feature to all of the remaining REST API calls in the application. Now we just need to run everything through its paces to make sure that it all still works before we release another Update Set to those folks willing to test this thing out. For this initial testing, I went ahead and requested a brand new PDI from the ServiceNow Developer Site. Then I installed the latest version of the SNH Data Table Widgets, mainly because it includes the snh-form-field package, which is a requirement of this app as well. Then I installed the Collaboration Store app, followed by the Collaboration Store Globals. Once everything was installed, I ran the set-up process to create a new Host instance.
After entering all of the details on the initial screen, the next step was to enter the email verification code sent to the email address entered on the form.
Once the email address was verified, the set-up process completed and sent out the final notification to the operator.
With that out of the way, I could now return to the primary development instance and clean out all of the tables to get a fresh start, then register the instance as a Client of the new Host, which basically just repeats the steps above. Once that was done, I could attempt to publish an application, which should push that application, including its logo image, over to the new Host instance. As before, I selected the Simple Webhook application for this test.
I scrolled to the bottom of the page and selected the Publish to Collaboration Store Related Link. That launched the application publishing process, the progress of which could be monitored on the resulting pop-up dialog box.
So far, so good. Now we need to bounce back over to the new Host instance and make sure that everything arrived intact.
And there it is, complete with its logo image. Excellent. The next thing to do will be to attempt to install the shared application on the Host instance. That’s a fairly straightforward process as well, but if you look closely at the image above, you will see that there is no Install button. That’s a problem. Time to stop testing and do a little debugging. Well, that’s why we test these things. I’ll see if I can figure out what’s up with that and report on the solution next time out.
Well, we’re almost there! Last time, we wrapped up the code to handle any possible Preview issues, so now it is time to finally see if we can issue a Commit and actually get the version of the application installed. As we did with the Preview process, we can hunt down the UI Action that handles the Commit and see if we can steal much, if not all, of the code. Here is what I found:
var commitInProgress = false;

function commitRemoteUpdateSet(control) {
    if (commitInProgress)
        return;
    // get remoteUpdateSetId from g_form if invoked on remote update set page
    var rusId = typeof g_form != 'undefined' && g_form != null ? g_form.getUniqueValue() : null;
    var ajaxHelper = new GlideAjax('com.glide.update.UpdateSetCommitAjaxProcessor');
    ajaxHelper.addParam('sysparm_type', 'validateCommitRemoteUpdateSet');
    ajaxHelper.addParam('sysparm_remote_updateset_sys_id', rusId);
    ajaxHelper.getXMLAnswer(getValidateCommitUpdateSetResponse);
}
function getValidateCommitUpdateSetResponse(answer) {
    try {
        if (answer == null) {
            console.log('validateCommitRemoteUpdateSet answer was null');
            return;
        }
        console.log('validateCommitRemoteUpdateSet answer was ' + answer);
        var returnedInfo = answer.split(';');
        var sysId = returnedInfo[0];
        var encodedQuery = returnedInfo[1];
        var delObjList = returnedInfo[2];
        if (delObjList !== "NONE") {
            console.log('showing data loss confirm dialog');
            showDataLossConfirmDialog(sysId, delObjList, encodedQuery);
        } else {
            console.log('skipping data loss confirm dialog');
            runTheCommit(sysId);
        }
    } catch (err) {
    }
}
function runTheCommit(sysId) {
    console.log('running commit on ' + sysId);
    commitInProgress = true;
    var ajaxHelper = new GlideAjax('com.glide.update.UpdateSetCommitAjaxProcessor');
    ajaxHelper.addParam('sysparm_type', 'commitRemoteUpdateSet');
    ajaxHelper.addParam('sysparm_remote_updateset_sys_id', sysId);
    ajaxHelper.getXMLAnswer(getCommitRemoteUpdateSetResponse);
}

var dataLossConfirmDialog;

function destroyDialog() {
    dataLossConfirmDialog.destroy();
}
function showDataLossConfirmDialog(sysId, delObjList, encodedQuery) {
    var dialogClass = typeof GlideModal != 'undefined' ? GlideModal : GlideDialogWindow;
    var dlg = new dialogClass('update_set_data_loss_commit_confirm');
    dataLossConfirmDialog = dlg;
    dlg.setTitle('Confirm Data Loss');
    if (delObjList == null) {
        dlg.setWidth(300);
    } else {
        dlg.setWidth(450);
    }
    dlg.setPreference('sysparm_sys_id', sysId);
    dlg.setPreference('sysparm_encodedQuery', encodedQuery);
    dlg.setPreference('sysparm_del_obj_list', delObjList);
    console.log('rendering data loss confirm dialog');
    dlg.render();
}
function getCommitRemoteUpdateSetResponse(answer) {
    try {
        if (answer == null)
            return;
        var map = new GwtMessage().getMessages(["Close", "Cancel", "Are you sure you want to cancel this update set?", "Update Set Commit", "Go to Subscription Management"]);
        var returnedIds = answer.split(',');
        var workerId = returnedIds[0];
        var sysId = returnedIds[1];
        var shouldRefreshNav = returnedIds[2];
        var shouldRefreshApps = returnedIds[3];
        var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
        var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
        dd.setTitle(map["Update Set Commit"]);
        dd.setPreference('sysparm_renderer_execution_id', workerId);
        dd.setPreference('sysparm_renderer_expanded_levels', '0'); // collapsed root node by default
        dd.setPreference('sysparm_renderer_hide_drill_down', true);
        dd.setPreference('sysparm_button_subscription', map["Go to Subscription Management"]);
        dd.setPreference('sysparm_button_close', map["Close"]);
        dd.on("bodyrendered", function(trackerObj) {
            var buttonsPanel = $("buttonsPanel");
            var table = new Element("table", {cellpadding: 0, cellspacing: 0, width: "100%"});
            buttonsCell = table.appendChild(new Element("tr")).appendChild(new Element("td"));
            buttonsCell.align = "right";
            buttonsPanel.appendChild(table);
            var closeBtn = $("sysparm_button_close");
            if (closeBtn)
                closeBtn.disable();
            var cancelBtn = new Element("button");
            cancelBtn.id = "sysparm_button_cancel";
            cancelBtn.type = "button";
            cancelBtn.innerHTML = map["Cancel"];
            cancelBtn.onclick = function() {
                var response = confirm(map["Are you sure you want to cancel this update set?"]);
                if (response != true)
                    return;
                var ajaxHelper = new GlideAjax('UpdateSetCommitAjax');
                ajaxHelper.addParam('sysparm_type', 'cancelRemoteUpdateSet');
                ajaxHelper.addParam('sysparm_worker_id', workerId);
                ajaxHelper.getXMLAnswer(getCancelRemoteUpdateSetResponse);
            };
            buttonsCell.appendChild(cancelBtn);
        });
        dd.on("executionComplete", function(trackerObj) {
            var subBtn = $("sysparm_button_subscription");
            var tableCount = Number(trackerObj.result.custom_table_count);
            if (tableCount > 0) {
                if (subBtn) {
                    subBtn.enable();
                    subBtn.onclick = function() {
                        window.open(trackerObj.result.inventory_uri);
                    };
                }
            } else {
                subBtn.hide();
            }
            var closeBtn = $("sysparm_button_close");
            if (closeBtn) {
                closeBtn.enable();
                closeBtn.onclick = function() {
                    dd.destroy();
                };
            }
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.hide();
        });
        dd.on("beforeclose", function() {
            if (shouldRefreshNav)
                refreshNav();
            var top = getTopWindow();
            if (shouldRefreshApps && typeof top.g_application_picker != 'undefined')
                top.g_application_picker.fillApplications();
            reloadWindow(window); // reload current form after closing the progress viewer dialog
        });
        dd.render();
    } catch (err) {
    }
}
function getCancelRemoteUpdateSetResponse(answer) {
    if (answer == null)
        return;
    // Nothing really to do here.
}
Once again, I cannot claim to understand every single thing that is going on here, but that doesn’t mean that I can’t snag the code and see if I can make it work. As with the Preview logic, there is code in there to grab the sys_id of the Remote Update Set from the form. Since our process does not run on that form, that isn’t going to work, but we have already determined the sys_id when we were doing the Preview, so we can rip that line out and use the value that we have already established. Since we are going to need that in more than one function, and there are other global variables present in this script, I decided to make that a global variable as well and not pass it in to each function as an argument. I ended up with the following list of variables and then modified our onLoad function accordingly.
var dataLossConfirmDialog;
var attachmentId = '';
var updateSetId = '';
var commitInProgress = false;

function onLoad() {
    attachmentId = document.getElementById('attachment_id').value;
    updateSetId = document.getElementById('remote_update_set_id').value;
    if (updateSetId) {
        previewRemoteUpdateSet();
    }
}
I pasted in the rest of the Commit code from the UI Action down at the bottom of the Client script of our UI Page and then deleted those global variables that were embedded amongst the various functions. Then I updated our earlier commitUpdateSet function to update the status message with the results of our earlier review of the Preview results and then launch the Commit.
function commitUpdateSet(answer) {
    var result = JSON.parse(answer);
    var message = '';
    if (result.accepted > 0) {
        if (result.accepted > 1) {
            message += result.accepted + ' Flagged Updates Accepted; ';
        } else {
            message += 'One Flagged Update Accepted; ';
        }
    }
    if (result.skipped > 0) {
        if (result.skipped > 1) {
            message += result.skipped + ' Flagged Updates Skipped; ';
        } else {
            message += 'One Flagged Update Skipped; ';
        }
    }
    message += 'Committing Update Set ...';
    document.getElementById('status_text').innerHTML = message;
    commitRemoteUpdateSet();
}
The last thing to do, then, is to modify what happens after the Commit, which in our case will be updating the Collaboration Store records to reflect the installation of this version. Once again, we do not want to wait for the operator to hit the Close button, so we can take the same approach that we took with the Preview code and modify the dd.on("executionComplete", ...) handler to be simply this:
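Here is a minimal sketch of that simplified handler. The stand-in dd object exists only to show the wiring outside of the platform; in the actual UI Page, dd is the progress viewer dialog created in getCommitRemoteUpdateSetResponse, and updateStoreData is the function we still have to write.

```javascript
// Minimal stand-in for the progress viewer dialog, just to show the wiring;
// in the real UI Page, dd is the GlideModal created when the Commit launches.
var dd = {
    handlers: {},
    on: function (evt, fn) { this.handlers[evt] = fn; },
    destroy: function () { this.destroyed = true; }
};

// Hypothetical stub; the real updateStoreData (built next time) will update
// the Collaboration Store version and application records.
var storeUpdated = false;
function updateStoreData() { storeUpdated = true; }

// The simplified handler: no Close button and no page reload; just dismiss
// the dialog and move straight on to updating the store records.
dd.on("executionComplete", function (trackerObj) {
    dd.destroy();
    updateStoreData();
});

// Simulate the commit worker finishing.
dd.handlers.executionComplete({});
```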
Of course, we will have to build an updateStoreData function, which should update the version and application records and then return the operator back to the version record form where all of this started, but that’s a job that we will have to take on in our next installment.
“Long is the road from conception to completion.” — Molière
Last time, we finished up the Update Set Preview process and it looked like all that was left was to code out the Commit process and we would be done with the last major component of this long, drawn-out project. Unfortunately, that’s not entirely true. Before we can move on to the Commit process, we have to deal with the fact that the Preview process may have uncovered some issues with the Update Set. In the manual process, these issues are reported to the operator, and the operator is required to deal with them all before the Commit option is available. Not only do we need to address that possibility, we also have to add code to update the application and version records to reflect the version that was just installed and to link the newly installed application with the application record. So we have a little more work to do beyond just launching the Commit process before we can declare project completion.
First of all, we need to decide what to do with any Preview issues that may have been detected. Ideally, you would want to give the operator the opportunity to review these issues and make the appropriate decisions based on their knowledge of their instance and the application. However, since we are trying to make this first version as automated as possible, I have decided to have the software make arbitrary decisions about each reported problem, at least for now. In some future version, I may want to pop up a dialog and ask the operator whether they want to do their own review or trust the system to do it for them, but for now, that’s a little more sophisticated than I am ready to tackle. This may not be the best approach, but it is the simplest, and I am trying to wrap up the work on this initial version.
My plan is to add yet another client-callable function to our existing ApplicationInstaller Script Include that will hunt down all of the problems and resolve them. The problem records have a field called available_actions that contains a list of all of the actions available for the problem, so I am going to use that as a guide to Accept Remote Update if I can, or Skip Remote Update if I cannot. I also want to keep track of the number of problems found, the number of updates accepted, and the number of updates skipped so that I can report that information back to the caller. In reviewing the code behind the UI Actions that accept and skip updates, I found a call to a global component called GlidePreviewProblemAction, but when I tried to access that component in my scoped Script Include, I got a security violation error. To work around that, I had to add the following new function to our global utilities, where I could make the call without error.
fixRemoteUpdateIssue: function(remUpdGR) {
    var resolution = 'accepted';
    var ppa = new GlidePreviewProblemAction(gs.action, remUpdGR);
    if (remUpdGR.available_actions.contains('43d7d01a97b00100f309124eda2975e4')) {
        ppa.ignoreProblem();
    } else {
        ppa.skipUpdate();
        resolution = 'skipped';
    }
    return resolution;
}
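Stripped of the Glide-specific calls, the accept-or-skip decision above is just a branch on whether the Accept Remote Update action appears in the problem's list of available actions. Here is a plain JavaScript sketch of that decision logic (the function name resolveProblem is my own invention for illustration; the sys_id constant is the same one used in the code above):

```javascript
// The sys_id of the "Accept Remote Update" action, per the code above.
var ACCEPT_ACTION_ID = '43d7d01a97b00100f309124eda2975e4';

// Hypothetical standalone version of the decision in fixRemoteUpdateIssue:
// given the list of actions available for a problem, pick the resolution.
function resolveProblem(availableActions) {
    return availableActions.indexOf(ACCEPT_ACTION_ID) > -1 ? 'accepted' : 'skipped';
}
```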
With that out of the way, I was able to put the rest of the code where it belonged, and just called out to the global component for the part that I was unable to do in the scoped component.
evaluatePreview: function() {
    var answer = {problems: 0, accepted: 0, skipped: 0};
    var sysId = this.getParameter('remote_update_set_id');
    if (sysId) {
        var problemId = [];
        var remUpdGR = new GlideRecord('sys_update_preview_problem');
        remUpdGR.addQuery('remote_update_set', sysId);
        remUpdGR.query();
        while (remUpdGR.next()) {
            problemId.push(remUpdGR.getUniqueValue());
            answer.problems++;
        }
        var csgu = new global.CollaborationStoreGlobalUtils();
        for (var i=0; i<problemId.length; i++) {
            remUpdGR.get(problemId[i]);
            var resolution = csgu.fixRemoteUpdateIssue(remUpdGR);
            if (resolution == 'accepted') {
                answer.accepted++;
            } else {
                answer.skipped++;
            }
        }
    }
    return JSON.stringify(answer);
}
Now we just need to make the GlideAjax call to that function from the client side before we attempt to launch the Commit process. Right now, when the Preview process is complete, a Close button appears on the progress dialog, and when you click on the Close button, our new UI Page reloads and starts all over again, because the script that we lifted from the UI Action on the Update Set form was set up to reload that form. For our purposes, we do not want our own page reloaded, and in fact, we don’t even want a Close button; we just want to move on to the process of reviewing the results of the Preview. The relevant portion of the script that we stole looks like this:
dd.on("executionComplete", function(trackerObj) {
    var cancelBtn = $("sysparm_button_cancel");
    if (cancelBtn)
        cancelBtn.remove();
    var closeBtn = $("sysparm_button_close");
    if (closeBtn) {
        closeBtn.onclick = function() {
            dd.destroy();
        };
    }
});

dd.on("beforeclose", function() {
    reloadWindow(window);
});
Since we do not want to wait for operator action, we can short-cut this entire operation and just move on as soon as execution has been completed. I replaced all of the above with the following:
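The replacement amounts to something like this sketch (not the exact code, but the shape of it: when execution completes, tear down the dialog and go straight to evaluating the results; the dd dialog object and the checkPreviewResults function are stubbed here as stand-ins so the snippet can run outside of ServiceNow):

```javascript
// Minimal stand-ins (assumptions) so this sketch runs outside of ServiceNow.
var handlers = {};
var dd = {
    on: function(evt, fn) { handlers[evt] = fn; },
    destroy: function() { this.destroyed = true; }
};
function checkPreviewResults() { checkPreviewResults.called = true; }

// No Close button and no page reload -- as soon as the Preview execution
// completes, destroy the progress dialog and move on to the evaluation.
dd.on("executionComplete", function(trackerObj) {
    dd.destroy();
    checkPreviewResults();
});

// Simulate the progress dialog firing the completion event.
handlers["executionComplete"]({});
```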
Since the Preview process is complete at this point and we are now looking at the results, I decided to wrap the original message on the page in a span with an id attribute so that I could change the message as things moved along. That line of HTML now looks like this:
<span id="status_text">Previewing Uploaded Update Set ...</span>
With that in place, I was able to update the message with the new status before I made the Ajax call to our new Script Include function.
function checkPreviewResults() {
    document.getElementById('status_text').innerHTML = 'Evaluating Preview Results ...';
    var ga = new GlideAjax('ApplicationInstaller');
    ga.addParam('sysparm_name', 'evaluatePreview');
    ga.addParam('remote_update_set_id', updateSetId);
    ga.getXMLAnswer(commitUpdateSet);
}

function commitUpdateSet(answer) {
    alert(answer);
}
I’m not ready to take on the Commit process just yet, so I stubbed out the commitUpdateSet function with a simple alert of the response from our Ajax call. That was enough to let me know that everything was working up to this point, which is what I needed to know before I attempted to move on.
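Since evaluatePreview returns its tallies as a JSON string, the eventual commitUpdateSet implementation will need to parse that answer back into an object before it can act on the counts. A minimal sketch of that step, using a sample answer string in the shape produced by the Script Include function above:

```javascript
// A sample answer string in the shape that evaluatePreview returns.
var answer = '{"problems":3,"accepted":2,"skipped":1}';

// Parse the stringified tallies back into an object before acting on them.
var results = JSON.parse(answer);
var summary = results.problems + ' problem(s) found: ' +
    results.accepted + ' accepted, ' + results.skipped + ' skipped';
```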
Now that we have dealt with the possibility of Preview problems, we can finally take a look at what it will take to Commit the Update Set. That’s obviously a bit of work, so we’ll leave all of that for our next episode.
Last time, we ended with yet another unresolved fork in the road, whether to launch the Preview process from the upload.do page or to build yet another new page specific to the application installation process. At the time, it seemed as if there were equal merits to either option, but today I have decided that building a new page would be the preferable alternative. For one thing, that keeps the artifacts involved within the scope of our application (our global UI Script to repurpose the upload.do page had to be in the global scope), and it keeps the alterations to upload.do to the bare minimum.
Before we go off and build a new page, though, we will need to figure out how we are going to get there without the involvement of the operator (we want this whole process to be as automatic as possible). Digging through the page source of the original upload.do page, I found something that looks as if it might be relevant to our needs:
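Specifically, buried in the markup of the upload form there is a hidden input that looks something like this (the value attribute shown here is illustrative — check the page source on your own instance for the exact string):

```html
<input type="hidden" name="sysparm_referring_url"
       value="sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set"/>
```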
Now, the name of this element is sysparm_referring_url, which sounds an awful lot like it would be the URL from which we came; however, this is actually the URL where we end up after the Update Set XML file is uploaded, so I am thinking that if we replaced this value with a link to our own page, maybe we would end up there instead. Only one way to find out …
Those of you following along at home may recall that this value, which appears in the HTML source, actually disappeared somehow before the form was submitted, so I had to add this line of code to our script to put it back:
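That restoration looked something like the last line of this sketch (the value shown is illustrative — it just needs to be the stock destination that the form expects; the document stub at the top is an assumption of mine so that the snippet can run outside of a browser):

```javascript
// Stand-in for the browser document and the upload.do form field (assumption),
// so this sketch can run anywhere; in the real script only the last line exists.
var document = {
    fields: { sysparm_referring_url: [{ value: '' }] },
    getElementsByName: function(name) { return this.fields[name]; }
};

// Put the referring URL value back before the form is submitted.
document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do';
```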
Assuming that we create a new UI Page for the remainder of the process and that we want to pass to it the attachment ID, we should be able to replace that line with something like this:
document.getElementsByName('sysparm_referring_url')[0].value = 'ui_page.do?sys_id=<sys_id of our new page>&sysparm_id=' + window.location.search.substring(15);
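For what it’s worth, the substring(15) trick assumes the attachment sys_id parameter always appears first in the query string and always has a name of the same length. If you wanted something a little more defensive, a small helper along these lines would do it (a sketch of my own — getQueryParam is not a ServiceNow function — written without URLSearchParams so that it works in older UIs):

```javascript
// Extract a named parameter from a query string like '?sysparm_id=abc123&x=1'.
// Written without URLSearchParams, which may not exist in older browser UIs.
function getQueryParam(search, name) {
    var pairs = search.replace(/^\?/, '').split('&');
    for (var i = 0; i < pairs.length; i++) {
        var parts = pairs[i].split('=');
        if (parts[0] === name) {
            return decodeURIComponent(parts[1] || '');
        }
    }
    return '';
}
```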
Now all we need to do is create the page, put something on it, and then add the code that we stole from the UI Action that launches the Update Set Preview. After we hacked up the upload.do page, the end result turned out looking like this:
To keep things looking consistent, we can steal some of the HTML from that page and make our new page look something like this:
To make that happen, we can snag most of the HTML from a quick View Frame Source and then format it and stuff it into a new UI Page called install_application:
That takes care of how the page looks. Now we need to deal with how it works. To Preview an uploaded Update Set, you need the Remote Update Set’s sys_id. We have a URL parameter that contains the sys_id of the Update Set XML file attachment, but that’s not the sys_id that we need at this point. We will have to build a process that uses the attachment sys_id to locate and return the sys_id that we will need. We can just add another function to our existing ApplicationInstaller Script Include.
getRemoteUpdateSetId: function(attachmentId) {
    var sysId = '';
    var sysAttGR = new GlideRecord('sys_attachment');
    if (sysAttGR.get(attachmentId)) {
        var versionGR = new GlideRecord(sysAttGR.getDisplayValue('table_name'));
        if (versionGR.get(sysAttGR.getDisplayValue('table_sys_id'))) {
            var updateSetGR = new GlideRecord('sys_remote_update_set');
            updateSetGR.addQuery('application_name', versionGR.getDisplayValue('member_application'));
            updateSetGR.addQuery('application_scope', versionGR.getDisplayValue('member_application.scope'));
            updateSetGR.addQuery('application_version', versionGR.getDisplayValue('version'));
            updateSetGR.addQuery('state', 'loaded');
            updateSetGR.query();
            if (updateSetGR.next()) {
                sysId = updateSetGR.getUniqueValue();
            }
        }
    }
    return sysId;
}
Basically, we use the passed attachment record sys_id to get the attachment record, use data found on the attachment record to get the version record, and then use data found on the version record and its associated application record to get the remote update set record, from which we pull the sys_id that we need. Those of you who have been paying close attention may notice that one of the application record fields being used to find the remote update set is scope. The scope of the application was never included in the original list of data fields for the application record, so I had to go back and add it everywhere in the system where an application record is referenced, modified, or moved between instances. That was a bit of work, and hopefully I have found them all.
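To illustrate that chain of lookups in plain terms, here is a toy version with in-memory objects standing in for the three tables (every record value here is invented, and the dot-walked member_application fields are flattened into simple properties for the sake of the example):

```javascript
// Toy in-memory records standing in for the three tables (values invented).
var attachments = { att1: { table_name: 'x_version_table', table_sys_id: 'ver1' } };
var versions = { ver1: { application_name: 'My App', application_scope: 'x_my_app', version: '1.0.0' } };
var remoteUpdateSets = [
    { sys_id: 'rus1', application_name: 'My App', application_scope: 'x_my_app',
      application_version: '1.0.0', state: 'loaded' }
];

// Walk attachment -> version -> remote update set, mirroring getRemoteUpdateSetId.
function findRemoteUpdateSetId(attachmentId) {
    var att = attachments[attachmentId];
    if (!att) {
        return '';
    }
    var ver = versions[att.table_sys_id];
    if (!ver) {
        return '';
    }
    for (var i = 0; i < remoteUpdateSets.length; i++) {
        var rus = remoteUpdateSets[i];
        if (rus.application_name === ver.application_name &&
                rus.application_scope === ver.application_scope &&
                rus.application_version === ver.version &&
                rus.state === 'loaded') {
            return rus.sys_id;
        }
    }
    return '';
}
```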
Anyway, now we have a way to turn an attachment record sys_id into a remote update set record sys_id, so we need to add some code to our UI Page to snag the attachment record sys_id from the URL, use it to get the sys_id that we need, and then stick that value on the page somewhere so that it can be picked up by the client-side code. At the top of the HTML for the page, I added this:
<g2:evaluate jelly="true">
    var ai = new ApplicationInstaller();
    var attachmentId = gs.action.getGlideURI().get('sysparm_id');
    var sysId = ai.getRemoteUpdateSetId(attachmentId);
</g2:evaluate>
Then in the body of the page, just under the text, I added this hidden input element:
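Based on the id that the client-side code looks for, that element would be something like this (the exact Jelly expression syntax — $[sysId] vs. ${sysId} — is the piece worth double-checking on your instance):

```xml
<input type="hidden" id="remote_update_set_id" value="$[sysId]"/>
```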
That took care of things on the server side. Now we need to build some client-side code that will run when the page is loaded. We can do that with an addLoadEvent like so:
addLoadEvent(function() {
    onLoad();
});
Our onLoad function can then grab the value from the hidden field and pass it on to the function that we lifted from the Preview Update Set UI Action earlier (which we need to paste into the client code section of our new UI Page).
function onLoad() {
    var sysId = document.getElementById('remote_update_set_id').value;
    if (sysId) {
        previewRemoteUpdateSet(sysId);
    }
}
That’s all there is to that. The entire Client script portion of the new UI Page, including the code that we lifted from the UI Action, now looks like this:
function onLoad() {
    var sysId = document.getElementById('remote_update_set_id').value;
    if (sysId) {
        previewRemoteUpdateSet(sysId);
    }
}

addLoadEvent(function() {
    onLoad();
});

function previewRemoteUpdateSet(sysId) {
    var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
    var MESSAGE_KEY_CLOSE_BUTTON = "Close";
    var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
    var MESSAGE_KEY_CONFIRMATION = "Confirmation";
    var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
    var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
    var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
    var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
    dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
    dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
    dd.setPreference('sysparm_ajax_processor_function', 'preview');
    dd.setPreference('sysparm_ajax_processor_sys_id', sysId);
    dd.setPreference('sysparm_renderer_expanded_levels', '0');
    dd.setPreference('sysparm_renderer_hide_drill_down', true);
    dd.setPreference('focusTrap', true);
    dd.setPreference('sysparm_button_close', map["Close"]);
    dd.on("executionStarted", function(response) {
        var trackerId = response.responseXML.documentElement.getAttribute("answer");
        var cancelBtn = new Element("button", {
            'id': 'sysparm_button_cancel',
            'type': 'button',
            'class': 'btn btn-default',
            'style': 'margin-left: 5px; float:right;'
        }).update(map[MESSAGE_KEY_CANCEL_BUTTON]);
        cancelBtn.onclick = function() {
            var dialog = new GlideModal('glide_modal_confirm', true, 300);
            dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
            dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
            dialog.setPreference('focusTrap', true);
            dialog.setPreference('callbackParam', trackerId);
            dialog.setPreference('defaultButton', 'ok_button');
            dialog.setPreference('onPromptComplete', function(param) {
                var cancelBtn2 = $("sysparm_button_cancel");
                if (cancelBtn2)
                    cancelBtn2.disable();
                var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
                ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
                ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
                ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
            });
            dialog.render();
            dialog.on("bodyrendered", function() {
                var okBtn = $("ok_button");
                if (okBtn) {
                    okBtn.className += " btn-destructive";
                }
            });
        };
        var _handleCancelPreviewResponse = function(answer) {
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.remove();
        };
        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
            buttonsPanel.appendChild(cancelBtn);
    });
    dd.on("executionComplete", function(trackerObj) {
        var cancelBtn = $("sysparm_button_cancel");
        if (cancelBtn)
            cancelBtn.remove();
        var closeBtn = $("sysparm_button_close");
        if (closeBtn) {
            closeBtn.onclick = function() {
                dd.destroy();
            };
        }
    });
    dd.on("beforeclose", function() {
        reloadWindow(window);
    });
    dd.render();
}
Now all we need to do is pull up the old version record and push that Install button one more time, which I did.
So, there is good news and there is bad news. The good news is that it actually worked! That is to say that clicking on the Install button pulls down the Update Set XML file data, posts it back to the server via the modified upload.do page, and then goes right into previewing the newly created Update Set. That part is very cool, and something that I wasn’t sure I was going to be able to pull off when I first started thinking about doing this. The bad news is that, once the Preview is complete, the stock code reloads the page and the whole Preview process starts all over again. That’s not good! However, that seems like a minor issue that we should be able to deal with relatively easily. All in all, then, it seems like mostly good news.
Of course, we are still not there yet. Once an Update Set has been Previewed, it still has to be Committed before the application is actually installed. Rather than continuously reloading the page, then, our version of the UI Action code is going to need to launch the Commit process. We should be able to examine the Commit UI Action as we did the Preview UI Action and steal some more code to make that happen. That sounds like a little bit of work, though, so let’s save all of that for our next installment.
“Time is what keeps everything from happening at once.” — Ray Cummings
Welcome to installment #50 of this seemingly never-ending series! That’s a milestone to which we have never even come close on this site. But then, we have never taken on a project of this magnitude before, either. Still, you would think that we would have been done with this endeavor long before now. That’s the way these things go, though. When you strike out into the darkness with just a vague idea of where you want to go, you never really know where you will end up or how long it will take. There are those who would tell you, though, that it’s all about the journey, not the destination! Still, I try to stay focused on the destination. I think we are getting close.
Last time, we wrapped up the coding on our global UI Script that allowed us to repurpose the upload.do page for installing a version of an application. We never really tested it all the way through, though, so we should probably do that before we attempt to go any further. Just to back up a bit, the way that we try this thing out is to pull up a version record for an application and click on the Install button that we added a few episodes back.
That should launch the upload.do page, and with the added URL parameter for the attachment sys_id, that should trigger our UI Script, which should then turn that page into this:
Meanwhile, the script should call back to the server for the Update Set XML file information, update the form on the page using that information, and then submit the form. After the form has been submitted, the natural process related to the upload.do page takes you here:
So, it looks like it all works, which is good. Unfortunately, the application has still not been installed. From here it is a manual process to first Preview the Update Set, and then Commit it. We don’t really want that to be a manual process, though, so let’s see what we can do to make it all happen without the operator having to click on anything or take any action to move things along. To begin, we should probably take a look at how it is done manually, which should help guide us into how we might be able to do it programmatically. If you click on the Update Set in the above screen to bring up the details, you will see a form button, which is just another UI Action, called Preview Update Set.
Using the hamburger menu, we can select Configure -> UI Actions to pull up the list of UI Actions related to this form, and then select the Preview Update Set action and take a peek under the hood. It looks like all of the work is done on the client side with the following script:
function previewRemoteUpdateSet(control) {
    var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
    var MESSAGE_KEY_CLOSE_BUTTON = "Close";
    var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
    var MESSAGE_KEY_CONFIRMATION = "Confirmation";
    var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
    var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
    var sysId = typeof g_form != 'undefined' && g_form != null ? g_form.getUniqueValue() : null;
    var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
    var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
    dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
    dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
    dd.setPreference('sysparm_ajax_processor_function', 'preview');
    dd.setPreference('sysparm_ajax_processor_sys_id', sysId);
    dd.setPreference('sysparm_renderer_expanded_levels', '0'); // collapsed root node by default
    dd.setPreference('sysparm_renderer_hide_drill_down', true);
    dd.setPreference('focusTrap', true);
    dd.setPreference('sysparm_button_close', map["Close"]);
    // response from UpdateSetPreviewAjax.previewAgain is the progress worker id
    dd.on("executionStarted", function(response) {
        var trackerId = response.responseXML.documentElement.getAttribute("answer");
        var cancelBtn = new Element("button", {
            'id': 'sysparm_button_cancel',
            'type': 'button',
            'class': 'btn btn-default',
            'style': 'margin-left: 5px; float:right;'
        }).update(map[MESSAGE_KEY_CANCEL_BUTTON]);
        cancelBtn.onclick = function() {
            var dialog = new GlideModal('glide_modal_confirm', true, 300);
            dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
            dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
            dialog.setPreference('focusTrap', true);
            dialog.setPreference('callbackParam', trackerId);
            dialog.setPreference('defaultButton', 'ok_button');
            dialog.setPreference('onPromptComplete', function(param) {
                var cancelBtn2 = $("sysparm_button_cancel");
                if (cancelBtn2)
                    cancelBtn2.disable();
                var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
                ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
                ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
                ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
            });
            dialog.render();
            dialog.on("bodyrendered", function() {
                var okBtn = $("ok_button");
                if (okBtn) {
                    okBtn.className += " btn-destructive";
                }
            });
        };
        var _handleCancelPreviewResponse = function(answer) {
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.remove();
        };
        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
            buttonsPanel.appendChild(cancelBtn);
    });
    dd.on("executionComplete", function(trackerObj) {
        var cancelBtn = $("sysparm_button_cancel");
        if (cancelBtn)
            cancelBtn.remove();
        var closeBtn = $("sysparm_button_close");
        if (closeBtn) {
            closeBtn.onclick = function() {
                dd.destroy();
            };
        }
    });
    dd.on("beforeclose", function() {
        reloadWindow(window);
    });
    dd.render();
}
I’m not going to attempt to pretend that I understand all that is going on here. I will say, though, that it looks to me as if we could steal this entire script and launch it from a location of our own choosing without having to have the operator click on any buttons. The one line that I see that would need to be modified is the one that gets the sys_id of the Update Set.
I think to start with, I would just delete that line entirely and pass the sys_id in as an argument to the function. Right now, a variable called control is passed in to the function, but I don’t see where that is used anywhere, so I think that I would just change this:
function previewRemoteUpdateSet(control) {
… to this:
function previewRemoteUpdateSet(sysId) {
… and see where that might take us. Maybe that will work and maybe it won’t, but you never know until you try. Of course, not everyone is a big proponent of that “Let’s pull the lever and see what happens” approach; I was once told that the last words spoken on Earth will be something like “Gee, I wonder what this button does.” Still, it’s just my nature to try things and see how it all turns out. But first we have to figure out where we can put our stolen script.
I can see two ways to go here: 1) we can just add it to our hack of the upload.do page and keep everything all in one place, or 2) since the upload.do page has done its job at this point and we don’t want to hack up a stock component any more than is absolutely necessary, we can create a UI Page of our own and put the rest of the process in there, where we can control everything and keep it within the scope of the application. There are, as usual, pros and cons to both approaches. I don’t know if one way is any better than the other, but we don’t have to decide right this minute. Let’s save that for our next installment.