“It is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself.” — Charles Darwin
Last time, we introduced the idea of building a simple little app to manage Service Accounts. Today, we are going to jump into the App Engine Studio and see if we can throw something together using that tool. To be completely honest, I have been more than a little resistant to trying anything with that tool since it first came out, mainly because I am quite comfortable (and quite productive) doing things the way that I always have, and quite frankly, I never really understood the benefit. Still, it’s what all of the cool kids are doing these days, so what the heck … let’s give it a shot.
To begin, let’s click on that big Let’s Go button on the initial splash screen, and then click on the Create App button up in the top corner of the App Engine Studio home page.
On this initial app creation screen we can enter the name of our app, a brief description, and upload an image that we snagged from someone else’s web site. Then we can hit that Continue button and see what comes next.
The default roles that were generated by the process seem to be sufficient for now, so let’s just click that Continue button again and see what’s next.
At this point, it looks like it has enough information to actually create the artifacts for the app, so we will watch this screen for a bit before it automatically jumps into the next page.
At this point, we are done with the initial app creation, so the only thing left to do is to click on the Go to app dashboard button to continue working on developing the app.
From here, it looks like we can do all kinds of things, but the first thing that we need to do is to create our tables, so let’s click on the Add link next to the word Data.
This brings up a couple of choices, and since we don’t have any data to import, we want to select the first option and create a new table.
There is not much to do here on this screen, so we will just click on the Begin button to continue.
Now we are presented with a few options, but again we have no external source for our data and the entire thing will be net new, so let’s select the Create from scratch option and Continue.
Now we are at a point where we can name our new table and set up the auto numbering for our records. The first table that we want to build is the master table of accounts, so we will call it Service Account, and give it a number prefix of SA. Once we complete the form, we can click on the Continue button to proceed.
This gets us to the permissions settings, and for now, let’s give all permissions to everyone, and then click on the Continue button.
After watching the progress screen for a bit, we are finally taken to the confirmation screen, completing the table creation process.
At this point, our new table is created. We have not defined any fields as yet, but every ServiceNow table gets a set of stock fields, and since we set up auto numbering, we will get a number field as well. The rest will obviously have to be done after the creation of the table. One of those new fields, however, will be a reference field pointing to our new technology type control table. That table has yet to be created though, so before we start adding fields to our account master table, we should probably repeat this exercise for the control table. There is no point in going through the entire exercise, but here is the screen shot with the relevant information for our second table.
Now that we have created both of our new tables, we can start defining all of the fields, including the reference field that defines the relationship between the two. That’s probably a bit of work in and of itself, so let’s save all of that for our next installment.
As most of you are aware, a Service Account is a special kind of user account that does not actually represent a physical human being. Most user accounts are for real users, but for automated interfaces and other integrations, many times you need credentials on external systems for use in your automation, and these accounts are not tied to any specific user. They are what are commonly referred to as Service Accounts. In any given IT organization, there can be any number of Service Accounts for quite a wide range of services and technologies. Managing all of those accounts can become quite a burdensome task. There are a number of third-party apps that provide tools for managing Service Accounts, but it occurred to me that the concepts are not all that sophisticated and it might be relatively simple to build a Scoped Application to handle the basic management of such accounts.
Here is what I am thinking: A single custom table could be built to store the account details. A second table could be used to store the various types of Service Accounts that could be stored in the main table, and one or more Service Catalog items could be used to manage the accounts. That’s not a huge number of artifacts, so it would appear that you could throw something together rather quickly. You know what I always ask myself: How hard could it be?
So let’s take a look at the lifecycle of an account. To create an account, you would request the account through the Service Catalog, providing all of the necessary information needed to set up the account on the appropriate technology. Let’s assume that the request has to be approved through the normal Service Request approval process, and that the requested item also has to be approved by the technology owner based on the type of account requested. Information on that second level of approval would be stored in the technology type table. Once approved, a fulfillment workflow would initiate, and once again we would turn to the technology type table to determine what specific fulfillment workflow should be executed to fulfill the request. Generally speaking, there would be two types of fulfillment: automated and manual. If the account creation and notification can be handled through automation or integration, then the entire process would be handled without human intervention. If not, one or more tasks would be generated and routed to the appropriate technical support resources. Either way, once the account was created and the requester notified, the requested item would be closed.
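Although we will save the actual implementation for later, the routing idea can be sketched in just a few lines. This is purely a hypothetical illustration; the table and field names below are placeholders I have invented for the sketch, not the final design:

```javascript
// Hypothetical sketch: route fulfillment based on the technology type table.
// 'x_sa_technology_type' and its field names are invented for illustration.
function getFulfillmentWorkflow(typeSysId) {
    var typeGR = new GlideRecord('x_sa_technology_type');
    if (typeGR.get(typeSysId)) {
        // each type record names its own type-specific fulfillment workflow
        return typeGR.getValue('fulfillment_workflow');
    }
    return null; // unknown type; fall back to manual fulfillment tasks
}
```

The point of the sketch is simply that the generic catalog workflow never needs to know anything about any specific technology; it just asks the type table which workflow to launch.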
Creating the account is only half of the account management process. It is never a good idea to just allow these things to live forever once they are created. There needs to be another catalog request process to manually terminate the account, and there should also be some process by which the requester reaffirms the need for the account on some predetermined interval. This Periodic Attestation process could, in fact, be another entirely different stand-alone app that could be used not only for this use case, but quite a number of others. That, however, would be an entirely different collection of blog entries, so we will leave that for another time. Still, we need a way to create an account, alter an account, get rid of an account, and every so often, revisit the need for the account to continue to exist.
For our purpose, which is simply to demonstrate the concept, we can have two example technologies, one that we can automate, and one that we cannot (or more accurately, one that we choose to handle with a manual process). That will at least give us a demonstration of how things would work in either use case. Putting this all together, we will need to create the following:
A Scoped Application
A Master account table
A technology type control table
One or more Service Catalog items to create, alter, and terminate accounts
A generic workflow for the catalog item(s)
A type-specific workflow for each type of account in the type table
Some kind of periodic workflow to ensure that the account is still needed
It’s not a huge list of parts, but there is still a little bit there, so we will just point ourselves in that general direction, start pushing ahead, and see what we run into along the way. Maybe we will use the App Engine Studio for this one and see how far we can get with that. Regardless, it should be a fun little project, so we will jump right into it next time out.
“Ideas don’t happen in isolation. You must embrace opportunities to broadcast and then refine your ideas through the energy of those around you.” — Scott Belsky
Now that we have released the latest version of the Collaboration Store Update Sets for testing purposes, we should take a quick look ahead to see where things might go from here. Obviously, the first order of business is to validate the existing build and work to put out that 1.0 version with the current feature set. However, it won’t hurt to take a moment and look a little bit beyond that to see what may lie ahead for this app. This first version accomplishes the three main goals of providing the ability to set up an instance, publish an app, and install an app, but there are a lot of things missing that would be helpful and/or really nice to have. So let’s take a look at that list.
Images
One of the things that has disturbed me for quite some time is the fact that the logo image for an app is not included in the Update Set when you select the Publish to Update Set… option on the Custom Application form. We use that very same function to move an app into the Store, so our process suffers from the same shortcoming. We should be able to add some extra processing to our workflow, however, that could work around this issue. One of the things that I thought might provide a solution would be to copy the image file attached to the app and attach the copy to the Collaboration Store application record. Then during the installation process, we could copy the image from the application record over to the installed app once the application installation process was completed. That would also provide the opportunity to display the application logo on the application form, which could be done in much the same way as the user photo is displayed on the User Profile form.
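As a rough sketch of that idea, ServiceNow’s GlideSysAttachment API can copy attachments from one record to another. The function name and the store application table name below are my assumptions for illustration, and one likely wrinkle is that image fields are actually stored in sys_attachment under a ‘ZZ_YY’-prefixed table name, so the real solution may need to account for that:

```javascript
// Hypothetical sketch: copy the attachments (including the logo image) from
// the installed sys_app record to our Collaboration Store application record.
// The function name and target table name are assumptions for illustration.
function copyAppLogo(sysAppId, storeAppId) {
    new GlideSysAttachment().copy('sys_app', sysAppId, 'x_11556_col_store_member_application', storeAppId);
}
```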
Speaking of images, I have also thought from the beginning that each instance in the community should have its own image as well, and that all of the applications shared by that instance should include the providing instance’s image as well as the specific image of the application. This would provide additional visual clues when searching through the apps in the store.
Conditionals and UI Policies
What you can and cannot do with certain records and fields needs to be tied to the original owner of the artifact. For example, the Install button should not appear on version records for your own applications. Conversely, the Publish to Collaboration Store link should not appear on applications that are not your own. And all of the form fields on records that did not originate in your instance should be protected, as you should not be allowed to make changes to records that are not your own. Also, there are certain fields that should never be updated under any circumstances (such as the version number), as these are determined by background processes and should never be editable. There is a lot of work that needs to be done in this area, and it probably should be done before the initial 1.0 version is released to the general public.
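As one small example of the sort of condition involved, the Install button on a version record might be limited to apps that came from somewhere else. This is just a hedged sketch; the instance field name on the application record is an assumption, although instance_name is a standard system property:

```javascript
// Hypothetical UI Action condition for the Install button: only show the
// button when the application was provided by some other instance.
// The 'instance' field on the application record is an assumed name.
current.member_application.instance.getDisplayValue() != gs.getProperty('instance_name')
```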
Shopping
Version 1.0 was all about getting the mechanics to work, making sure that a scoped app on one instance could be shared, stored, distributed, and installed on some other instance. Once testing establishes that all of those things work as they should, though, it will be time to start looking into setting up a much better experience for locating an app that looks interesting or meets a specific purpose. Images will help create a more visual, catalog shopping type of searching, but so will keywords and categories and even marketing tools and programmatic hints such as “People who installed this app also installed these apps:”. User ratings, comments, error reports, and other feedback would also help improve the shopping experience. Also, statistics on how many instances have installed the app and which versions were in use would be useful information when looking at available options. So many possibilities; so little time.
Activity Tracking
From the outset of this project I have always thought that there should be some kind of running log of the activities within each instance, particularly the Host. I set up a table for this early on when I was first setting up the tables for the app, but there is no code in this version to write to this table for any reason. The next major release, if there is such a thing, should really complete this functionality and log every movement of data between the Host and the Clients. Maybe a simple logActivity function could be crafted to accept certain arguments and then all you would have to do to add logging would be to add a single line of code to call that shared function. That sounds like a project, though, but it needs to be done at some point.
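To make the idea a little more concrete, here is a minimal sketch of what such a shared function might look like. The activity table name and its columns are placeholders, since the actual table structure is not spelled out here:

```javascript
// Hypothetical sketch of a shared activity logging function. The table name
// and column names are placeholders, not the app's actual schema.
function logActivity(action, description) {
    var logGR = new GlideRecord('x_11556_col_store_activity');
    logGR.initialize();
    logGR.action = action;           // e.g. 'publish', 'distribute', 'install'
    logGR.description = description; // free-form detail about the event
    return logGR.insert();           // sys_id of the new log record
}
```

With something like that in place, adding logging anywhere in the app really would be a single line, such as logActivity('publish', 'Sent a new version to the Host').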
Code Consolidation
A while back I created a series of shared functions to replace a collection of cloned functions so that I would not have to clone them all for yet a third time. In my new use case, I utilized the shared functions that were designed to work in all three places, but I was too lazy to go back and refactor the original two sections of code where I had done the initial cloning. I really need to take the time to go back and rework the original and that first copy so that both of them use the redesigned functions for shipping instance data, application data, version data, and XML Update Set attachments from one instance to another. It works the way that it is right now, but for future maintenance (and just decent coding standards), that all needs to be addressed.
Additional Artifacts
This initial version allows you to share Scoped Applications between unrelated instances in the same community (Clients of the same Host). That is a good start, but it would also be nice to be able to share single artifacts or global Update Sets or any other component or collection developed on the Now Platform. That was a little more than I was willing to take on at the outset of this project, but now that the sharing of apps between instances seems to be a reality, it would be nice to branch out a little and start adding more stuff that could be shared. Once again, I seem to have more ideas than I do free time. Still, it would be nice to do more than just whole apps.
First Things First
Still, the original goal was to see if we could just make this work, and although it seems like everything is functioning as it should, every little thing needs to be tested out. Hopefully, a few brave souls are actually busy doing that very thing while I type this out, so maybe we will get some much needed feedback relatively soon and see if this thing actually works for anyone other than myself! As always, any feedback of any kind is always appreciated. Thanks in advance to those of you who have actually pulled this down and given things a go.
“A person who tries has an advantage over the person who wishes.” — Utibe Samuel Mbom
Theoretically, all of the artifacts are complete for this initial version of the Collaboration Store, but now that we have crossed that threshold, we need to really shake everything out before we can build that final 1.0 version that will be solid enough for public consumption. Before we do that, though, we should probably back up just a bit and explain the general purpose of the app, and go through a little bit of a user guide for the folks that might be joining this party a little more recently.
The concept for the initial version of the app is pretty simple. It allows developers from one ServiceNow instance to share Scoped Applications with developers on a different instance. There are three primary functions included in the app: 1) the initial set-up process, 2) the application publishing (sharing) process, and 3) the application installation process. One instance in the community needs to be designated as the Host instance, and all other instances are considered Client instances. The Host instance needs to be set up first, as the set-up process for a Client instance requires communication with the identified Host. Once an application has been pushed to the Host, the Host instance then distributes that application to all of the Clients in the community. Once any instance has a version of a shared application, developers can then install that application on their own instance.
Initial Set-up
Once all of the artifacts have been installed, the first thing that you need to do on your instance is to run the set-up process. To enter the set-up process, select Collaboration Store Set-up from the Collaboration Store section of the left-hand nav. You will need to be in the Collaboration Store scope, and if you are not already in that scope, you will be prompted to make the switch.
The form is fairly self-explanatory, but you need to make sure that you have selected the correct Installation Type, and if you are setting up a Client instance, you will want to make sure that you have entered the correct Host instance ID, which is the first element of the URL of the instance, the unique portion that precedes the .service-now.com portion of the instance address. Also, you will want to enter a valid email address to which you have access, as an email will be sent to that address with a verification code, which you will need to enter on the next screen.
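Since the instance ID is just the portion of the address before .service-now.com, a quick way to pull it out of a full URL looks something like this (a generic sketch, not code taken from the actual set-up widget):

```javascript
// Extract the instance name from an instance address, with or without the
// protocol, e.g. 'https://mycompany.service-now.com/' -> 'mycompany'
function instanceNameFromUrl(url) {
    return url.replace(/^https?:\/\//, '').split('.')[0];
}
```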
Once the email has been validated, the set-up process commences, and the final screen appears when it is complete.
Application Publishing
Once your instance has been set up, you should now be able to publish applications to the store. This is accomplished through a new link that the application has added to the Scoped Application form. A while ago I created a scoped app called Simple Webhook when I was playing around with outbound Webhooks, and we can use that as an example to give this thing a go. To publish the app to the store, pull up the application form and select the Publish to Collaboration Store link down at the bottom of the form.
Clicking on this link will pop up the Publish to Collaboration Store dialog where you will enter the details of this version of the application.
Clicking on the Publish button will then bring up the Export to XML progress bar while the Update Set is converted to an XML file.
Clicking on the Done button will then bring up another dialog box where you can see the progress of all of the other steps involved in creating the Collaboration Store records, attaching the XML file, and sending all of the artifacts over to the Host instance.
Clicking on the Done button here will close the dialog and reload the form, completing the process. Once the last artifact reaches the Host, that will trigger a background process that will distribute the app to all of the other instances in the community. You don’t need to worry about that, though; that all goes on behind the scenes and takes care of itself. If you are testing, however, and hopefully you are (the more, the merrier!), you will want to check all of the other instances in the community to ensure that all of the artifacts arrived safely and in good condition. That’s the whole point of the app, obviously, so we want to make sure that it all works as intended.
Now if things do take a wrong turn somewhere along the line, there is another background job that runs on the Host to check with all of the Clients each day and sends over any missing artifacts to keep everything in sync. That’s another thing that we will want to test out thoroughly, which may require introducing some intentional errors just to see if the daily sync process finds and corrects them.
Application Installation
Once another instance has shared an app, you should be able to install it on your own instance, which is accomplished in this version of the app by navigating to the version record for the version that you want to install and clicking on the Install button. This should kick off a series of events starting with the conversion of the XML file attachment back into an Update Set, Previewing that Update Set, correcting any errors detected in the Preview, then Committing the Update Set and updating the Collaboration Store data to reflect the installation of the app. Once again, using our Simple Webhook app as an example, let’s pull up the version record on a different instance and click on that Install button.
After clicking on the Install button, you will first see the XML file being uploaded to the server.
Once the XML file has been uploaded and converted back into an Update Set, you will see the Preview progress bar.
Once the Preview has been completed, you will see the Commit progress bar, which looks very similar to the Preview progress bar.
Once the Commit has been completed, the Collaboration Store records will be updated and then you will be returned to the version record, where the Install button will no longer appear, since this version has now been installed.
Also, if you preview the associated application record, you will see that the Collaboration Store application has been linked to the installed application and the installed application’s version matches the latest version of the app.
Let the Testing Begin!
OK, that’s it: the initial set-up process, the application publishing process, and the application installation process. Seems pretty simple for something that took 55 episodes to complete, but there is a lot going on under the hood behind all of those little pop-up screens. Of course, if you want to do any testing, you will need something to test, so here are the artifacts that you will need to install, in the order in which they need to be installed, if you want to take this out for a spin:
If you have not done so already, you will need to install the most recent version of the snh-form-field tag, which is needed for the initial set-up widget. You can find that here.
And then once the application has been installed, you can install the Update Set for the additional global components that could not be included in the Scoped Application.
Once you have everything installed, you can go through the initial set-up process and then you should be good to publish and install shared applications. As usual, all feedback is welcome and encouraged. Please pull it down and give it a try, and please let me know what you find, good, bad, or indifferent. If we get any comments, we will take a look at those next time out.
“Real happiness lies in the completion of work using your own brains and skills.” — Soichiro Honda
Last time, we finally got to the point where we were able to actually Commit the Update Set, installing the requested version of the application. That was a major milestone, but we are not done just yet. We still have to update the version and application records with the fact that this version has now been installed. That’s a server-side operation, so we will need to add yet one more client-callable function to our installation utilities Script Include. As we did before, we will use the attachment ID to locate the version record and the associated application record and then update them both. We will also need to hunt down the installed application record so that we can link it to the application record in the Collaboration Store database. We can also use data extracted from the version and application records to build a final status message informing the operator that the installation process is now complete.
We’ll call our new function recordInstallation, and start out by creating a response object, defaulting the success property to false, and then grabbing the attachment ID parameter that was passed in the Ajax call.
recordInstallation: function() {
    var answer = {success: false};
    var sysId = this.getParameter('attachment_id');
    if (sysId) {
        ...
    } else {
        answer.error = 'Missing required parameter: attachment_id';
    }
    return JSON.stringify(answer);
}
Assuming that we have a sys_id, we will use it to get the attachment record, and then use the data in the attachment record to fetch the version and application records.
var sysAttGR = new GlideRecord('sys_attachment');
if (sysAttGR.get(sysId)) {
    var versionGR = new GlideRecord(sysAttGR.getDisplayValue('table_name'));
    if (versionGR.get(sysAttGR.getDisplayValue('table_sys_id'))) {
        answer.versionId = versionGR.getUniqueValue();
        var applicationGR = versionGR.member_application.getRefRecord();
        ...
    } else {
        answer.error = 'Version record not found for sys_id ' + sysAttGR.getDisplayValue('table_sys_id');
    }
} else {
    answer.error = 'Attachment record not found for sys_id ' + sysId;
}
Once we have the version record and the application record in hand, we will need to hunt down the newly installed system application record.
var sysAppGR = new GlideRecord('sys_app');
sysAppGR.addQuery('scope', applicationGR.getValue('scope'));
sysAppGR.query();
if (sysAppGR.next()) {
    ...
} else {
    answer.error = 'System application record not found for scope ' + applicationGR.getValue('scope');
}
Once we have the system application record, then we know that the application has been installed, and we can format the final status message.
We also need to mark the version as being installed.
versionGR.installed = true;
versionGR.update();
One other thing that we will need to do is to go through any other version records associated with this application and make sure that none of those are still marked as installed.
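In code, that step reuses the same versionGR variable: we initialize it, query for every other version of the same application, and clear the installed flag on each one.

```javascript
versionGR.initialize();
versionGR.addQuery('member_application', applicationGR.getUniqueValue());
versionGR.addQuery('sys_id', '!=', answer.versionId);
versionGR.query();
while (versionGR.next()) {
    versionGR.installed = false;
    versionGR.update();
}
```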
That completes the updates to the Collaboration Store records, so the only thing left to do at this point is to override our initial response object success value with true.
answer.success = true;
Putting it all together, the entire function looks like this:
recordInstallation: function() {
    var answer = {success: false};
    var sysId = this.getParameter('attachment_id');
    if (sysId) {
        var sysAttGR = new GlideRecord('sys_attachment');
        if (sysAttGR.get(sysId)) {
            var versionGR = new GlideRecord(sysAttGR.getDisplayValue('table_name'));
            if (versionGR.get(sysAttGR.getDisplayValue('table_sys_id'))) {
                answer.versionId = versionGR.getUniqueValue();
                var applicationGR = versionGR.member_application.getRefRecord();
                var sysAppGR = new GlideRecord('sys_app');
                sysAppGR.addQuery('scope', applicationGR.getValue('scope'));
                sysAppGR.query();
                if (sysAppGR.next()) {
                    answer.statusMessage = 'Version <strong>' + versionGR.getDisplayValue('version') + '</strong> of application <strong>' + applicationGR.getDisplayValue('name') + '</strong> installed.';
                    applicationGR.application = sysAppGR.getUniqueValue();
                    applicationGR.update();
                    versionGR.installed = true;
                    versionGR.update();
                    versionGR.initialize();
                    versionGR.addQuery('member_application', applicationGR.getUniqueValue());
                    versionGR.addQuery('sys_id', '!=', answer.versionId);
                    versionGR.query();
                    while (versionGR.next()) {
                        versionGR.installed = false;
                        versionGR.update();
                    }
                    answer.success = true;
                } else {
                    answer.error = 'System application record not found for scope ' + applicationGR.getValue('scope');
                }
            } else {
                answer.error = 'Version record not found for sys_id ' + sysAttGR.getDisplayValue('table_sys_id');
            }
        } else {
            answer.error = 'Attachment record not found for sys_id ' + sysId;
        }
    } else {
        answer.error = 'Missing required parameter: attachment_id';
    }
    return JSON.stringify(answer);
}
That takes care of the server-side code; now we have to update the Client script in our UI Page to make the GlideAjax call to the function and then return the operator to the original version record where the Install button was first selected. At the end of the Commit process, we referenced an updateStoreData function, but did not create it. We will need to create that function now, and then we can make the call within it. Before we do that, though, I wanted to note a slight modification that I made to the HTML for that page to add an id attribute to the H4 element that includes our status message. The reason that I wanted to do that was so that the final message would not only replace the wording, but also the loading image that indicated an ongoing process. Once we get to this point, the processing is over, so I did not want to leave that spinning image up on the screen. Here is the modified version of that portion of the HTML:
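The HTML snippet itself did not survive the trip into this write-up, but based on the element ids referenced in the Client script, the modified portion presumably looked something like this (the image source and initial wording are guesses; only the status_text and final_status_text ids are taken from the actual code):

```html
<h4 id="final_status_text">
  <img src="/images/loading_anim4.gif"/>
  <span id="status_text">Uploading XML file ...</span>
</h4>
```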
Now we can create our new updateStoreData function.
function updateStoreData() {
    document.getElementById('status_text').innerHTML = 'Updating Collaboration Store Database ...';
    var ga = new GlideAjax('ApplicationInstaller');
    ga.addParam('sysparm_name', 'recordInstallation');
    ga.addParam('attachment_id', attachmentId);
    ga.getXMLAnswer(finalizeInstallation);
}
Now we have referenced a finalizeInstallation function, so we will need to create that as well. This function simply adds that final status message to the page and then returns the operator back to the original version form page.
function finalizeInstallation(answer) {
    var result = JSON.parse(answer);
    document.getElementById('final_status_text').innerHTML = result.statusMessage;
    window.location.href = '/x_11556_col_store_member_application_version.do?sys_id=' + result.versionId;
}
Here is the full Client script for the UI Page from top to bottom.
var dataLossConfirmDialog;
var attachmentId = '';
var updateSetId = '';
var commitInProgress = false;
function onLoad() {
    attachmentId = document.getElementById('attachment_id').value;
    updateSetId = document.getElementById('remote_update_set_id').value;
    if (updateSetId) {
        previewRemoteUpdateSet();
    }
}

addLoadEvent(function() {
    onLoad();
});
function previewRemoteUpdateSet() {
    var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
    var MESSAGE_KEY_CLOSE_BUTTON = "Close";
    var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
    var MESSAGE_KEY_CONFIRMATION = "Confirmation";
    var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
    var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
    var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
    var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
    dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
    dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
    dd.setPreference('sysparm_ajax_processor_function', 'preview');
    dd.setPreference('sysparm_ajax_processor_sys_id', updateSetId);
    dd.setPreference('sysparm_renderer_expanded_levels', '0');
    dd.setPreference('sysparm_renderer_hide_drill_down', true);
    dd.setPreference('focusTrap', true);
    dd.setPreference('sysparm_button_close', map["Close"]);
    dd.on("executionStarted", function(response) {
        var trackerId = response.responseXML.documentElement.getAttribute("answer");
        var cancelBtn = new Element("button", {
            'id': 'sysparm_button_cancel',
            'type': 'button',
            'class': 'btn btn-default',
            'style': 'margin-left: 5px; float:right;'
        }).update(map[MESSAGE_KEY_CANCEL_BUTTON]);
        cancelBtn.onclick = function() {
            var dialog = new GlideModal('glide_modal_confirm', true, 300);
            dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
            dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
            dialog.setPreference('focusTrap', true);
            dialog.setPreference('callbackParam', trackerId);
            dialog.setPreference('defaultButton', 'ok_button');
            dialog.setPreference('onPromptComplete', function(param) {
                var cancelBtn2 = $("sysparm_button_cancel");
                if (cancelBtn2)
                    cancelBtn2.disable();
                var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
                ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
                ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
                ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
            });
            dialog.render();
            dialog.on("bodyrendered", function() {
                var okBtn = $("ok_button");
                if (okBtn) {
                    okBtn.className += " btn-destructive";
                }
            });
        };
        var _handleCancelPreviewResponse = function(answer) {
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.remove();
        };
        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
            buttonsPanel.appendChild(cancelBtn);
    });
    dd.on("executionComplete", function(trackerObj) {
        dd.destroy();
        checkPreviewResults();
    });
    dd.render();
}
function checkPreviewResults() {
    document.getElementById('status_text').innerHTML = 'Evaluating Preview Results ...';
    var ga = new GlideAjax('ApplicationInstaller');
    ga.addParam('sysparm_name', 'evaluatePreview');
    ga.addParam('remote_update_set_id', updateSetId);
    ga.getXMLAnswer(commitUpdateSet);
}
function commitUpdateSet(answer) {
    var result = JSON.parse(answer);
    var message = '';
    if (result.accepted > 0) {
        if (result.accepted > 1) {
            message += result.accepted + ' Flagged Updates Accepted; ';
        } else {
            message += 'One Flagged Update Accepted; ';
        }
    }
    if (result.skipped > 0) {
        if (result.skipped > 1) {
            message += result.skipped + ' Flagged Updates Skipped; ';
        } else {
            message += 'One Flagged Update Skipped; ';
        }
    }
    message += 'Committing Update Set ...';
    document.getElementById('status_text').innerHTML = message;
    commitRemoteUpdateSet();
}
function commitRemoteUpdateSet() {
    if (commitInProgress) {
        return;
    }
    var ajaxHelper = new GlideAjax('com.glide.update.UpdateSetCommitAjaxProcessor');
    ajaxHelper.addParam('sysparm_type', 'validateCommitRemoteUpdateSet');
    ajaxHelper.addParam('sysparm_remote_updateset_sys_id', updateSetId);
    ajaxHelper.getXMLAnswer(getValidateCommitUpdateSetResponse);
}
function getValidateCommitUpdateSetResponse(answer) {
    try {
        if (answer == null) {
            console.log('validateCommitRemoteUpdateSet answer was null');
            return;
        }
        console.log('validateCommitRemoteUpdateSet answer was ' + answer);
        var returnedInfo = answer.split(';');
        var sysId = returnedInfo[0];
        var encodedQuery = returnedInfo[1];
        var delObjList = returnedInfo[2];
        if (delObjList !== "NONE") {
            console.log('showing data loss confirm dialog');
            showDataLossConfirmDialog(sysId, delObjList, encodedQuery);
        } else {
            console.log('skipping data loss confirm dialog');
            runTheCommit(sysId);
        }
    } catch (e) {
        console.log(e);
    }
}
function runTheCommit(sysId) {
    console.log('running commit on ' + sysId);
    commitInProgress = true;
    var ajaxHelper = new GlideAjax('com.glide.update.UpdateSetCommitAjaxProcessor');
    ajaxHelper.addParam('sysparm_type', 'commitRemoteUpdateSet');
    ajaxHelper.addParam('sysparm_remote_updateset_sys_id', sysId);
    ajaxHelper.getXMLAnswer(getCommitRemoteUpdateSetResponse);
}
function destroyDialog() {
    dataLossConfirmDialog.destroy();
}
function showDataLossConfirmDialog(sysId, delObjList, encodedQuery) {
    var dialogClass = typeof GlideModal != 'undefined' ? GlideModal : GlideDialogWindow;
    var dlg = new dialogClass('update_set_data_loss_commit_confirm');
    dataLossConfirmDialog = dlg;
    dlg.setTitle('Confirm Data Loss');
    if (delObjList == null) {
        dlg.setWidth(300);
    } else {
        dlg.setWidth(450);
    }
    dlg.setPreference('sysparm_sys_id', sysId);
    dlg.setPreference('sysparm_encodedQuery', encodedQuery);
    dlg.setPreference('sysparm_del_obj_list', delObjList);
    console.log('rendering data loss confirm dialog');
    dlg.render();
}
function getCommitRemoteUpdateSetResponse(answer) {
    try {
        if (answer == null) {
            return;
        }
        var map = new GwtMessage().getMessages(["Close", "Cancel", "Are you sure you want to cancel this update set?", "Update Set Commit", "Go to Subscription Management"]);
        var returnedIds = answer.split(',');
        var workerId = returnedIds[0];
        var sysId = returnedIds[1];
        var shouldRefreshNav = returnedIds[2];
        var shouldRefreshApps = returnedIds[3];
        var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
        var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
        dd.setTitle(map["Update Set Commit"]);
        dd.setPreference('sysparm_renderer_execution_id', workerId);
        dd.setPreference('sysparm_renderer_expanded_levels', '0');
        dd.setPreference('sysparm_renderer_hide_drill_down', true);
        dd.setPreference('sysparm_button_subscription', map["Go to Subscription Management"]);
        dd.setPreference('sysparm_button_close', map["Close"]);
        dd.on("bodyrendered", function(trackerObj) {
            var buttonsPanel = $("buttonsPanel");
            var table = new Element("table", {cellpadding: 0, cellspacing: 0, width : "100%"});
            buttonsCell = table.appendChild(new Element("tr")).appendChild(new Element("td"));
            buttonsCell.align = "right";
            buttonsPanel.appendChild(table);
            var closeBtn = $("sysparm_button_close");
            if (closeBtn) {
                closeBtn.disable();
            }
            var cancelBtn = new Element("button");
            cancelBtn.id = "sysparm_button_cancel";
            cancelBtn.type = "button";
            cancelBtn.innerHTML = map["Cancel"];
            cancelBtn.onclick = function() {
                var response = confirm(map["Are you sure you want to cancel this update set?"]);
                if (response != true) {
                    return;
                }
                var ajaxHelper = new GlideAjax('UpdateSetCommitAjax');
                ajaxHelper.addParam('sysparm_type', 'cancelRemoteUpdateSet');
                ajaxHelper.addParam('sysparm_worker_id', workerId);
                ajaxHelper.getXMLAnswer(getCancelRemoteUpdateSetResponse);
            };
            buttonsCell.appendChild(cancelBtn);
        });
        dd.on("executionComplete", function(trackerObj) {
            dd.destroy();
            updateStoreData();
        });
        dd.render();
    } catch (e) {
        console.log(e);
    }
}
function getCancelRemoteUpdateSetResponse(answer) {
    if (answer == null) {
        return;
    }
}
function updateStoreData() {
    document.getElementById('status_text').innerHTML = 'Updating Collaboration Store Database ...';
    var ga = new GlideAjax('ApplicationInstaller');
    ga.addParam('sysparm_name', 'recordInstallation');
    ga.addParam('attachment_id', attachmentId);
    ga.getXMLAnswer(finalizeInstallation);
}
function finalizeInstallation(answer) {
    var result = JSON.parse(answer);
    document.getElementById('final_status_text').innerHTML = result.statusMessage;
    window.location.href = '/x_11556_col_store_member_application_version.do?sys_id=' + result.versionId;
}
That completes the installation process, the third and final major component of the Collaboration Store application. All that is left now is to release a new Update Set to the testers and see what kinds of bugs we can shake out of this thing before we actually produce an official 1.0 version of the app. If you have not had an opportunity to participate in the testing just yet, now might be a good time to jump in and see what you can find. Feedback of any kind is always welcome. We’ll get more into all of that next time out.
Well, we’re almost there! Last time, we wrapped up the code to handle any possible Preview issues, so now it is time to finally see if we can issue a Commit and actually get the version of the application installed. As we did with the Preview process, we can hunt down the UI Action that handles the Commit and see if we can steal much, if not all, of the code. Here is what I found:
var commitInProgress = false;
function commitRemoteUpdateSet(control) {
    if (commitInProgress)
        return;
    // get remoteUpdateSetId from g_form if invoked on remote update set page
    var rusId = typeof g_form != 'undefined' && g_form != null ? g_form.getUniqueValue() : null;
    var ajaxHelper = new GlideAjax('com.glide.update.UpdateSetCommitAjaxProcessor');
    ajaxHelper.addParam('sysparm_type', 'validateCommitRemoteUpdateSet');
    ajaxHelper.addParam('sysparm_remote_updateset_sys_id', rusId);
    ajaxHelper.getXMLAnswer(getValidateCommitUpdateSetResponse);
}
function getValidateCommitUpdateSetResponse(answer) {
    try {
        if (answer == null) {
            console.log('validateCommitRemoteUpdateSet answer was null');
            return;
        }
        console.log('validateCommitRemoteUpdateSet answer was ' + answer);
        var returnedInfo = answer.split(';');
        var sysId = returnedInfo[0];
        var encodedQuery = returnedInfo[1];
        var delObjList = returnedInfo[2];
        if (delObjList !== "NONE") {
            console.log('showing data loss confirm dialog');
            showDataLossConfirmDialog(sysId, delObjList, encodedQuery);
        } else {
            console.log('skipping data loss confirm dialog');
            runTheCommit(sysId);
        }
    } catch (err) {
    }
}
function runTheCommit(sysId) {
    console.log('running commit on ' + sysId);
    commitInProgress = true;
    var ajaxHelper = new GlideAjax('com.glide.update.UpdateSetCommitAjaxProcessor');
    ajaxHelper.addParam('sysparm_type', 'commitRemoteUpdateSet');
    ajaxHelper.addParam('sysparm_remote_updateset_sys_id', sysId);
    ajaxHelper.getXMLAnswer(getCommitRemoteUpdateSetResponse);
}
var dataLossConfirmDialog;
function destroyDialog() {
    dataLossConfirmDialog.destroy();
}
function showDataLossConfirmDialog(sysId, delObjList, encodedQuery) {
    var dialogClass = typeof GlideModal != 'undefined' ? GlideModal : GlideDialogWindow;
    var dlg = new dialogClass('update_set_data_loss_commit_confirm');
    dataLossConfirmDialog = dlg;
    dlg.setTitle('Confirm Data Loss');
    if (delObjList == null) {
        dlg.setWidth(300);
    } else {
        dlg.setWidth(450);
    }
    dlg.setPreference('sysparm_sys_id', sysId);
    dlg.setPreference('sysparm_encodedQuery', encodedQuery);
    dlg.setPreference('sysparm_del_obj_list', delObjList);
    console.log('rendering data loss confirm dialog');
    dlg.render();
}
function getCommitRemoteUpdateSetResponse(answer) {
    try {
        if (answer == null)
            return;
        var map = new GwtMessage().getMessages(["Close", "Cancel", "Are you sure you want to cancel this update set?", "Update Set Commit", "Go to Subscription Management"]);
        var returnedIds = answer.split(',');
        var workerId = returnedIds[0];
        var sysId = returnedIds[1];
        var shouldRefreshNav = returnedIds[2];
        var shouldRefreshApps = returnedIds[3];
        var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
        var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
        dd.setTitle(map["Update Set Commit"]);
        dd.setPreference('sysparm_renderer_execution_id', workerId);
        dd.setPreference('sysparm_renderer_expanded_levels', '0'); // collapsed root node by default
        dd.setPreference('sysparm_renderer_hide_drill_down', true);
        dd.setPreference('sysparm_button_subscription', map["Go to Subscription Management"]);
        dd.setPreference('sysparm_button_close', map["Close"]);
        dd.on("bodyrendered", function(trackerObj) {
            var buttonsPanel = $("buttonsPanel");
            var table = new Element("table", {cellpadding: 0, cellspacing: 0, width : "100%"});
            buttonsCell = table.appendChild(new Element("tr")).appendChild(new Element("td"));
            buttonsCell.align = "right";
            buttonsPanel.appendChild(table);
            var closeBtn = $("sysparm_button_close");
            if (closeBtn)
                closeBtn.disable();
            var cancelBtn = new Element("button");
            cancelBtn.id = "sysparm_button_cancel";
            cancelBtn.type = "button";
            cancelBtn.innerHTML = map["Cancel"];
            cancelBtn.onclick = function() {
                var response = confirm(map["Are you sure you want to cancel this update set?"]);
                if (response != true)
                    return;
                var ajaxHelper = new GlideAjax('UpdateSetCommitAjax');
                ajaxHelper.addParam('sysparm_type', 'cancelRemoteUpdateSet');
                ajaxHelper.addParam('sysparm_worker_id', workerId);
                ajaxHelper.getXMLAnswer(getCancelRemoteUpdateSetResponse);
            };
            buttonsCell.appendChild(cancelBtn);
        });
        dd.on("executionComplete", function(trackerObj) {
            var subBtn = $("sysparm_button_subscription");
            var tableCount = Number(trackerObj.result.custom_table_count);
            if (tableCount > 0) {
                if (subBtn) {
                    subBtn.enable();
                    subBtn.onclick = function() {
                        window.open(trackerObj.result.inventory_uri);
                    };
                }
            } else {
                subBtn.hide();
            }
            var closeBtn = $("sysparm_button_close");
            if (closeBtn) {
                closeBtn.enable();
                closeBtn.onclick = function() {
                    dd.destroy();
                };
            }
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.hide();
        });
        dd.on("beforeclose", function() {
            if (shouldRefreshNav)
                refreshNav();
            var top = getTopWindow();
            if (shouldRefreshApps && typeof top.g_application_picker != 'undefined')
                top.g_application_picker.fillApplications();
            reloadWindow(window); //reload current form after closing the progress viewer dialog
        });
        dd.render();
    } catch (err) {
    }
}
function getCancelRemoteUpdateSetResponse(answer) {
    if (answer == null)
        return;
    // Nothing really to do here.
}
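One thing worth noting in the stolen code above is that the validate step hands back a single semicolon-delimited string rather than JSON. As a standalone sketch (the `parseValidateAnswer` helper is hypothetical, not part of the actual script), the parsing at the top of getValidateCommitUpdateSetResponse amounts to this:

```javascript
// Sketch: parse the answer returned by validateCommitRemoteUpdateSet.
// The answer arrives as "sysId;encodedQuery;delObjList", where delObjList
// is the literal string "NONE" when no data loss was detected.
function parseValidateAnswer(answer) {
    var parts = answer.split(';');
    return {
        sysId: parts[0],
        encodedQuery: parts[1],
        delObjList: parts[2],
        needsConfirmation: parts[2] !== 'NONE'
    };
}
```

When `needsConfirmation` is true, the data loss dialog is shown; otherwise the commit runs straight away.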
Once again, I cannot claim to understand every single thing that is going on here, but that doesn’t mean that I can’t snag the code and see if I can make it work. As with the Preview logic, there is code in there to grab the sys_id of the Remote Update Set from the form. Since our process does not run on that form, that isn’t going to work, but we have already determined the sys_id when we were doing the Preview, so we can rip that line out and use the value that we have already established. Since we are going to need that in more than one function, and there are other global variables present in this script, I decided to make that a global variable as well and not pass it in to each function as an argument. I ended up with the following list of variables and then modified our onLoad function accordingly.
var dataLossConfirmDialog;
var attachmentId = '';
var updateSetId = '';
var commitInProgress = false;
function onLoad() {
    attachmentId = document.getElementById('attachment_id').value;
    updateSetId = document.getElementById('remote_update_set_id').value;
    if (updateSetId) {
        previewRemoteUpdateSet();
    }
}
I pasted in the rest of the Commit code from the UI Action down at the bottom of the Client script of our UI Page and then deleted those global variables that were embedded amongst the various functions. Then I updated our earlier commitUpdateSet function to update the status message with the results of our earlier review of the Preview results and then launch the Commit.
function commitUpdateSet(answer) {
    var result = JSON.parse(answer);
    var message = '';
    if (result.accepted > 0) {
        if (result.accepted > 1) {
            message += result.accepted + ' Flagged Updates Accepted; ';
        } else {
            message += 'One Flagged Update Accepted; ';
        }
    }
    if (result.skipped > 0) {
        if (result.skipped > 1) {
            message += result.skipped + ' Flagged Updates Skipped; ';
        } else {
            message += 'One Flagged Update Skipped; ';
        }
    }
    message += 'Committing Update Set ...';
    document.getElementById('status_text').innerHTML = message;
    commitRemoteUpdateSet();
}
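If you want to convince yourself that the pluralization logic above does the right thing, it can be factored into a pure function and exercised outside of ServiceNow (this `buildCommitMessage` helper is just a sketch for testing, not part of the actual UI Page):

```javascript
// Sketch: the status-message logic from commitUpdateSet as a pure function.
// The result object mirrors the JSON returned by the evaluatePreview function.
function buildCommitMessage(result) {
    var message = '';
    if (result.accepted > 0) {
        message += result.accepted > 1
            ? result.accepted + ' Flagged Updates Accepted; '
            : 'One Flagged Update Accepted; ';
    }
    if (result.skipped > 0) {
        message += result.skipped > 1
            ? result.skipped + ' Flagged Updates Skipped; '
            : 'One Flagged Update Skipped; ';
    }
    return message + 'Committing Update Set ...';
}
```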
The last thing to do, then, is to modify what happens after the Commit, which in our case will be updating the Collaboration Store records to reflect the installation of this version. Once again, we do not want to wait for the operator to hit the Close button, so we can take the same approach that we took with the Preview code and modify the dd.on("executionComplete") handler to be simply this:
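The simplified handler (matching the final version of the script shown earlier in this post) is just:

```javascript
dd.on("executionComplete", function(trackerObj) {
    dd.destroy();
    updateStoreData();
});
```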
Of course, we will have to build an updateStoreData function, which should update the version and application records and then return the operator back to the version record form where all of this started, but that’s a job that we will have to take on in our next installment.
“Long is the road from conception to completion.” — Molière
Last time, we finished up the Update Set Preview process, and it looked like all that was left was to code out the Commit process and we would be done with the last major component of this long, drawn-out project. Unfortunately, that's not entirely true. Before we can move on to the Commit process, we have to deal with the fact that the Preview process may have uncovered some issues with the Update Set. In the manual process, these issues are reported to the operator, and the operator is required to deal with them all before the Commit option is available. Not only do we need to address that possibility, we also have to add code to update the application and version records to reflect the version that was just installed and to link the newly installed application with the application record. So we have a little more work to do beyond just launching the Commit process before we can declare project completion.
First of all, we need to decide what to do with any Preview issues that may have been detected. Ideally, you would want to give the operator the opportunity to review these issues and make the appropriate decisions based on their knowledge of their instance and the application. However, since we are trying to make this first version as automated as possible, I have decided to have the software make arbitrary decisions about each reported problem, at least for now. In some future version, I may want to pop up a dialog and ask the operator whether they want to do their own review or trust the system to do it for them, but for now, that's a little more sophisticated than I am ready to tackle. This may not be the best approach, but it is the simplest, and I am trying to wrap up the work on this initial version.
My plan is to add yet another client-callable function to our existing ApplicationInstaller Script Include that will hunt down all of the problems and resolve them. The problem records have a field called available_actions that contains a list of all of the actions available for the problem, so I am going to use that as a guide: Accept Remote Update if I can, or Skip Remote Update if I cannot. I also want to keep track of the number of problems found, the number of updates accepted, and the number of updates skipped so that I can report that information back to the caller. In reviewing the code behind the UI Actions that accept and skip updates, I found a call to a global component called GlidePreviewProblemAction, but when I tried to access that component in my scoped Script Include, I got a security violation error. To work around that, I had to add the following new function to our global utilities, where I could make the call without error.
fixRemoteUpdateIssue: function(remUpdGR) {
    var resolution = 'accepted';
    var ppa = new GlidePreviewProblemAction(gs.action, remUpdGR);
    if (remUpdGR.available_actions.contains('43d7d01a97b00100f309124eda2975e4')) {
        ppa.ignoreProblem();
    } else {
        ppa.skipUpdate();
        resolution = 'skipped';
    }
    return resolution;
}
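The decision itself boils down to a membership test on the available_actions list. As a standalone sketch (the `resolveProblem` helper is hypothetical; the sys_id is the one hard-coded in the function above for the Accept Remote Update action):

```javascript
// Sketch: the accept-or-skip decision from fixRemoteUpdateIssue as a pure
// function, where availableActions is the list from the problem record
// rendered as a comma-separated string of action sys_ids.
var ACCEPT_ACTION_ID = '43d7d01a97b00100f309124eda2975e4';

function resolveProblem(availableActions) {
    return availableActions.indexOf(ACCEPT_ACTION_ID) > -1 ? 'accepted' : 'skipped';
}
```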
With that out of the way, I was able to put the rest of the code where it belonged, and just called out to the global component for the part that I was unable to do in the scoped component.
evaluatePreview: function() {
    var answer = {problems: 0, accepted: 0, skipped: 0};
    var sysId = this.getParameter('remote_update_set_id');
    if (sysId) {
        var problemId = [];
        var remUpdGR = new GlideRecord('sys_update_preview_problem');
        remUpdGR.addQuery('remote_update_set', sysId);
        remUpdGR.query();
        while (remUpdGR.next()) {
            problemId.push(remUpdGR.getUniqueValue());
            answer.problems++;
        }
        var csgu = new global.CollaborationStoreGlobalUtils();
        for (var i=0; i<problemId.length; i++) {
            remUpdGR.get(problemId[i]);
            var resolution = csgu.fixRemoteUpdateIssue(remUpdGR);
            if (resolution == 'accepted') {
                answer.accepted++;
            } else {
                answer.skipped++;
            }
        }
    }
    return JSON.stringify(answer);
}
Now we just need to make the GlideAjax call to that function from the client side before we attempt to launch the Commit process. Right now, when the Preview process is complete, a Close button appears on the progress dialog, and when you click on the Close button, our new UI Page reloads and starts all over again, because the script that we lifted from the UI Action on the Update Set form was set up to reload that form. For our purposes, we do not want our own page reloaded, and in fact, we don't even want a Close button; we just want to move on to the process of reviewing the results of the Preview. The relevant portion of the script that we stole looks like this:
dd.on("executionComplete", function(trackerObj) {
    var cancelBtn = $("sysparm_button_cancel");
    if (cancelBtn)
        cancelBtn.remove();
    var closeBtn = $("sysparm_button_close");
    if (closeBtn) {
        closeBtn.onclick = function() {
            dd.destroy();
        };
    }
});
dd.on("beforeclose", function() {
    reloadWindow(window);
});
Since we do not want to wait for operator action, we can short-cut this entire operation and just move on as soon as execution has been completed. I replaced all of the above with the following:
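The replacement (which drops the beforeclose handler entirely and matches the final version of the script shown earlier in this post) is just:

```javascript
dd.on("executionComplete", function(trackerObj) {
    dd.destroy();
    checkPreviewResults();
});
```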
Since the Preview process is now complete at this point, and we are now looking at the results, I decided to wrap the original message on the page with a span that had an id attribute so that I could change the message as things moved along. That line of HTML now looks like this:
<span id="status_text">Previewing Uploaded Update Set ...</span>
With that in place, I was able to update the message with the new status before I made the Ajax call to our new Script Include function.
function checkPreviewResults() {
    document.getElementById('status_text').innerHTML = 'Evaluating Preview Results ...';
    var ga = new GlideAjax('ApplicationInstaller');
    ga.addParam('sysparm_name', 'evaluatePreview');
    ga.addParam('remote_update_set_id', updateSetId);
    ga.getXMLAnswer(commitUpdateSet);
}
function commitUpdateSet(answer) {
    alert(answer);
}
I’m not ready to take on the Commit process just yet, so I stubbed out the commitUpdateSet function with a simple alert of the response from our Ajax call. That was enough to let me know that everything was working up to this point, which is what I needed to know before I attempted to move on.
Now that we have dealt with the possibility of Preview problems, we can finally take a look at what it will take to Commit the Update Set. That’s obviously a bit of work, so we’ll leave all of that for our next episode.
Last time, we ended with yet another unresolved fork in the road, whether to launch the Preview process from the upload.do page or to build yet another new page specific to the application installation process. At the time, it seemed as if there were equal merits to either option, but today I have decided that building a new page would be the preferable alternative. For one thing, that keeps the artifacts involved within the scope of our application (our global UI Script to repurpose the upload.do page had to be in the global scope), and it keeps the alterations to upload.do to the bare minimum.
Before we go off and build a new page, though, we will need to figure out how we are going to get there without the involvement of the operator (we want this whole process to be as automatic as possible). Digging through the page source of the original upload.do page, I found something that looks as if it might be relevant to our needs:
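The element in question (reconstructed here from the stock upload.do markup, so treat the exact value attribute as an approximation) looks something like this:

```html
<input type="hidden" name="sysparm_referring_url"
       value="sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set"/>
```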
Now, the name of this element is sysparm_referring_url, which sounds an awful lot like it would be the URL from which we came; however, this is actually the URL where we end up after the Update Set XML file is uploaded, so I am thinking that if we replaced this value with a link to our own page, maybe we would end up there instead. Only one way to find out …
Those of you following along at home may recall that this value, which appears in the HTML source, actually disappeared somehow before the form was submitted, so I had to add this line of code to our script to put it back:
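That line (reconstructed from the stock upload.do behavior described above; the exact target URL is an assumption) was roughly:

```javascript
document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';
```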
Assuming that we create a new UI Page for the remainder of the process and that we want to pass to it the attachment ID, we should be able to replace that line with something like this:
document.getElementsByName('sysparm_referring_url')[0].value = 'ui_page.do?sys_id=<sys_id of our new page>&sysparm_id=' + window.location.search.substring(15);
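Note that the substring(15) call assumes the query string is exactly `?sysparm_id=...` with nothing in front of it. A slightly more defensive sketch (the `getAttachmentIdFromSearch` helper is hypothetical; URLSearchParams is available in all modern browsers) would be:

```javascript
// Sketch: extract the attachment sys_id without depending on the exact
// position of the sysparm_id parameter in the query string.
function getAttachmentIdFromSearch(search) {
    return new URLSearchParams(search).get('sysparm_id') || '';
}
```

The referring URL could then be built with `getAttachmentIdFromSearch(window.location.search)` in place of `window.location.search.substring(15)`.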
Now all we need to do is create the page, put something on it, and then add the code that we stole from the UI Action that launches the Update Set Preview. After we hacked up the upload.do page, the end result turned out looking like this:
To keep things looking consistent, we can steal some of the HTML from that page and make our new page look something like this:
To make that happen, we can snag most of the HTML from a quick view frame source and then format it and stuff it into a new UI Page called install_application:
That takes care of how the page looks. Now we need to deal with how it works. To Preview an uploaded Update Set, you need the Remote Update Set's sys_id. We have a URL parameter that contains the sys_id of the Update Set XML file attachment, but that's not the sys_id that we need at this point. We will have to build a process that uses the attachment sys_id to locate and return the sys_id that we will need. We can just add another function to our existing ApplicationInstaller Script Include.
getRemoteUpdateSetId: function(attachmentId) {
    var sysId = '';
    var sysAttGR = new GlideRecord('sys_attachment');
    if (sysAttGR.get(attachmentId)) {
        var versionGR = new GlideRecord(sysAttGR.getDisplayValue('table_name'));
        if (versionGR.get(sysAttGR.getDisplayValue('table_sys_id'))) {
            var updateSetGR = new GlideRecord('sys_remote_update_set');
            updateSetGR.addQuery('application_name', versionGR.getDisplayValue('member_application'));
            updateSetGR.addQuery('application_scope', versionGR.getDisplayValue('member_application.scope'));
            updateSetGR.addQuery('application_version', versionGR.getDisplayValue('version'));
            updateSetGR.addQuery('state', 'loaded');
            updateSetGR.query();
            if (updateSetGR.next()) {
                sysId = updateSetGR.getUniqueValue();
            }
        }
    }
    return sysId;
}
Basically, we use the passed attachment record sys_id to get the attachment record, then use data found on the attachment record to get the version record, and then use data found on the version record and associated application record to get the remote update set record, and then pull the sys_id that we need from there. Those of you who have been paying close attention may notice that one of the application record fields being used to find the remote update set is scope. The scope of the application was never included in the original list of data fields for the application record, so I had to go back and add it everywhere in the system where an application record was referenced, modified, or moved between instances. That was a bit of work, and hopefully I have found them all, but I think that was everything.
Anyway, now we have a way to turn an attachment record sys_id into a remote update set record sys_id, so we need to add some code to our UI Page to snag the attachment record sys_id from the URL, use it to get the sys_id that we need, and then stick that value on the page somewhere so that it can be picked up by the client-side code. At the top of the HTML for the page, I added this:
<g2:evaluate jelly="true">
    var ai = new ApplicationInstaller();
    var attachmentId = gs.action.getGlideURI().get('sysparm_id');
    var sysId = ai.getRemoteUpdateSetId(attachmentId);
</g2:evaluate>
Then in the body of the page, just under the text, I added this hidden input element:
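The element (reconstructed to match the onLoad code below, which reads an element with the id remote_update_set_id; the Jelly expression syntax is an assumption) looks something like this:

```html
<input type="hidden" id="remote_update_set_id" value="$[sysId]"/>
```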
That took care of things on the server side. Now we need to build some client-side code that will run when the page is loaded. We can do that with an addLoadEvent like so:
addLoadEvent(function() {
    onLoad();
});
Our onLoad function can then grab the value from the hidden field and pass it on to the function that we lifted from the Preview Update Set UI Action earlier (which we need to paste into the client code section of our new UI Page).
function onLoad() {
    var sysId = document.getElementById('remote_update_set_id').value;
    if (sysId) {
        previewRemoteUpdateSet(sysId);
    }
}
That’s all there is to that. The entire Client script portion of the new UI Page, including the code that we lifted from the UI Action, now looks like this:
function onLoad() {
    var sysId = document.getElementById('remote_update_set_id').value;
    if (sysId) {
        previewRemoteUpdateSet(sysId);
    }
}
addLoadEvent(function() {
    onLoad();
});
function previewRemoteUpdateSet(sysId) {
    var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
    var MESSAGE_KEY_CLOSE_BUTTON = "Close";
    var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
    var MESSAGE_KEY_CONFIRMATION = "Confirmation";
    var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
    var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
    var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
    var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
    dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
    dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
    dd.setPreference('sysparm_ajax_processor_function', 'preview');
    dd.setPreference('sysparm_ajax_processor_sys_id', sysId);
    dd.setPreference('sysparm_renderer_expanded_levels', '0');
    dd.setPreference('sysparm_renderer_hide_drill_down', true);
    dd.setPreference('focusTrap', true);
    dd.setPreference('sysparm_button_close', map["Close"]);
    dd.on("executionStarted", function(response) {
        var trackerId = response.responseXML.documentElement.getAttribute("answer");
        var cancelBtn = new Element("button", {
            'id': 'sysparm_button_cancel',
            'type': 'button',
            'class': 'btn btn-default',
            'style': 'margin-left: 5px; float:right;'
        }).update(map[MESSAGE_KEY_CANCEL_BUTTON]);
        cancelBtn.onclick = function() {
            var dialog = new GlideModal('glide_modal_confirm', true, 300);
            dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
            dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
            dialog.setPreference('focusTrap', true);
            dialog.setPreference('callbackParam', trackerId);
            dialog.setPreference('defaultButton', 'ok_button');
            dialog.setPreference('onPromptComplete', function(param) {
                var cancelBtn2 = $("sysparm_button_cancel");
                if (cancelBtn2)
                    cancelBtn2.disable();
                var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
                ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
                ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
                ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
            });
            dialog.render();
            dialog.on("bodyrendered", function() {
                var okBtn = $("ok_button");
                if (okBtn) {
                    okBtn.className += " btn-destructive";
                }
            });
        };
        var _handleCancelPreviewResponse = function(answer) {
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.remove();
        };
        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
            buttonsPanel.appendChild(cancelBtn);
    });
    dd.on("executionComplete", function(trackerObj) {
        var cancelBtn = $("sysparm_button_cancel");
        if (cancelBtn)
            cancelBtn.remove();
        var closeBtn = $("sysparm_button_close");
        if (closeBtn) {
            closeBtn.onclick = function() {
                dd.destroy();
            };
        }
    });
    dd.on("beforeclose", function() {
        reloadWindow(window);
    });
    dd.render();
}
Now all we need to do is pull up the old version record and push that Install button one more time, which I did.
So, there is good news and there is bad news. The good news is that it actually worked! That is to say that clicking on the Install button pulls down the Update Set XML file data, posts it back to the server via the modified upload.do page, and then goes right into previewing the newly created Update Set. That part is very cool, and something that I wasn't sure that I was going to be able to pull off when I first started thinking about doing this. The bad news is that, once the Preview is complete, the stock code reloads the page and the whole Preview process starts all over again. That's not good! However, that seems like a minor issue with which we should be able to deal relatively easily. All in all, then, it seems like mostly good news.
Of course, we are still not there yet. Once an Update Set has been Previewed, it still has to be Committed before the application is actually installed. Rather than continuously reloading the page, then, our version of the UI Action code is going to need to launch the Commit process. We should be able to examine the Commit UI Action as we did the Preview UI Action and steal some more code to make that happen. That sounds like a little bit of work, though, so let's save all of that for our next installment.
“Time is what keeps everything from happening at once.” — Ray Cummings
Welcome to installment #50 of this seemingly never-ending series! That’s a milestone to which we have never even come close on this site. But then, we have never taken on a project of this magnitude before, either. Still, you would think that we would have been done with this endeavor long before now. That’s the way these things go, though. When you strike out into the darkness with just a vague idea of where you want to go, you never really know where you will end up or how long it will take. There are those who would tell you, though, that it’s all about the journey, not the destination! Still, I try to stay focused on the destination. I think we are getting close.
Last time, we wrapped up the coding on our global UI Script that allowed us to repurpose the upload.do page for installing a version of an application. We never really tested it all the way through, though, so we should probably do that before we attempt to go any further. Just to back up a bit, the way that we try this thing out is to pull up a version record for an application and click on the Install button that we added a few episodes back.
That should launch the upload.do page, and with the added URL parameter for the attachment sys_id, that should trigger our UI Script, which should then turn that page into this:
Meanwhile, the script should call back to the server for the Update Set XML file information, update the form on the page using that information, and then submit the form. After the form has been submitted, the natural process related to the upload.do page takes you here:
So, it looks like it all works, which is good. Unfortunately, the application has still not been installed. From here it is a manual process to first Preview the Update Set, and then Commit it. We don’t really want that to be a manual process, though, so let’s see what we can do to make that all happen without the operator having to click on anything or take any action to move things along. To begin, we should probably take a look at how it is done manually, which should help guide us toward how we might be able to do it programmatically. If you click on the Update Set in the above screen to bring up the details, you will see a form button, which is just another UI Action, called Preview Update Set.
Using the hamburger menu, we can select Configure -> UI Actions to pull up the list of UI Actions related to this form, and then select the Preview Update Set action and take a peek under the hood. It looks like all of the work is done on the client side with the following script:
function previewRemoteUpdateSet(control) {
    var MESSAGE_KEY_DIALOG_TITLE = "Update Set Preview";
    var MESSAGE_KEY_CLOSE_BUTTON = "Close";
    var MESSAGE_KEY_CANCEL_BUTTON = "Cancel";
    var MESSAGE_KEY_CONFIRMATION = "Confirmation";
    var MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE = "Are you sure you want to cancel this update set preview?";
    var map = new GwtMessage().getMessages([MESSAGE_KEY_DIALOG_TITLE, MESSAGE_KEY_CLOSE_BUTTON, MESSAGE_KEY_CANCEL_BUTTON, MESSAGE_KEY_CONFIRMATION, MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
    var sysId = typeof g_form != 'undefined' && g_form != null ? g_form.getUniqueValue() : null;
    var dialogClass = window.GlideModal ? GlideModal : GlideDialogWindow;
    var dd = new dialogClass("hierarchical_progress_viewer", false, "40em", "10.5em");
    dd.setTitle(map[MESSAGE_KEY_DIALOG_TITLE]);
    dd.setPreference('sysparm_ajax_processor', 'UpdateSetPreviewAjax');
    dd.setPreference('sysparm_ajax_processor_function', 'preview');
    dd.setPreference('sysparm_ajax_processor_sys_id', sysId);
    dd.setPreference('sysparm_renderer_expanded_levels', '0'); // collapsed root node by default
    dd.setPreference('sysparm_renderer_hide_drill_down', true);
    dd.setPreference('focusTrap', true);
    dd.setPreference('sysparm_button_close', map["Close"]);
    // response from UpdateSetPreviewAjax.previewAgain is the progress worker id
    dd.on("executionStarted", function(response) {
        var trackerId = response.responseXML.documentElement.getAttribute("answer");
        var cancelBtn = new Element("button", {
            'id': 'sysparm_button_cancel',
            'type': 'button',
            'class': 'btn btn-default',
            'style': 'margin-left: 5px; float:right;'
        }).update(map[MESSAGE_KEY_CANCEL_BUTTON]);
        cancelBtn.onclick = function() {
            var dialog = new GlideModal('glide_modal_confirm', true, 300);
            dialog.setTitle(map[MESSAGE_KEY_CONFIRMATION]);
            dialog.setPreference('body', map[MESSAGE_KEY_CANCEL_CONFIRM_DIALOG_TILE]);
            dialog.setPreference('focusTrap', true);
            dialog.setPreference('callbackParam', trackerId);
            dialog.setPreference('defaultButton', 'ok_button');
            dialog.setPreference('onPromptComplete', function(param) {
                var cancelBtn2 = $("sysparm_button_cancel");
                if (cancelBtn2)
                    cancelBtn2.disable();
                var ajaxHelper = new GlideAjax('UpdateSetPreviewAjax');
                ajaxHelper.addParam('sysparm_ajax_processor_function', 'cancelPreview');
                ajaxHelper.addParam('sysparm_ajax_processor_tracker_id', param);
                ajaxHelper.getXMLAnswer(_handleCancelPreviewResponse);
            });
            dialog.render();
            dialog.on("bodyrendered", function() {
                var okBtn = $("ok_button");
                if (okBtn) {
                    okBtn.className += " btn-destructive";
                }
            });
        };
        var _handleCancelPreviewResponse = function(answer) {
            var cancelBtn = $("sysparm_button_cancel");
            if (cancelBtn)
                cancelBtn.remove();
        };
        var buttonsPanel = $("buttonsPanel");
        if (buttonsPanel)
            buttonsPanel.appendChild(cancelBtn);
    });
    dd.on("executionComplete", function(trackerObj) {
        var cancelBtn = $("sysparm_button_cancel");
        if (cancelBtn)
            cancelBtn.remove();
        var closeBtn = $("sysparm_button_close");
        if (closeBtn) {
            closeBtn.onclick = function() {
                dd.destroy();
            };
        }
    });
    dd.on("beforeclose", function() {
        reloadWindow(window);
    });
    dd.render();
}
I’m not going to pretend that I understand everything that is going on here. I will say, though, that it looks to me as if we could steal this entire script and launch it from a location of our own choosing without having to have the operator click on any buttons. The one line that I see that would need to be modified is the one that gets the sys_id of the Update Set.
I think to start with, I would just delete that line entirely and pass the sys_id in as an argument to the function. Right now, a variable called control is passed in to the function, but I don’t see where that is used anywhere, so I think that I would just change this:
function previewRemoteUpdateSet(control) {
… to this:
function previewRemoteUpdateSet(sysId) {
… and see where that might take us. Maybe that will work and maybe it won’t, but you never know until you try. Of course, not everyone is a big proponent of that Let’s pull the lever and see what happens approach; I was once told that the last words spoken on Earth will be something like “Gee, I wonder what this button does.” Still, it’s just my nature to try things and see how it all turns out. But first we have to figure out where we can put our stolen script.
I can see two ways to go here: 1) we can just add it to our hack of the upload.do page and keep everything all in one place, or 2) since the upload.do page has done its job at this point, and we don’t want to hack up a stock component any more than is absolutely necessary, we can create a UI Page of our own and put the rest of the process in there, where we can control everything and keep it within the scope of the application. There are, as usual, pros and cons for both approaches. I don’t know if one way is any better than the other, but we don’t have to decide right this minute. Let’s save that for our next installment.
“Don’t tell me the sky’s the limit when there are footprints on the moon.” — Paul Brandt
Last time, we got started on the global UI Script that will run on the upload.do page to take over the page and repurpose it for our needs. Our interest is to convert an Update Set XML file back into an actual Update Set so that we can apply the Update Set, installing a shared Scoped Application. The upload.do page will help set us on that path, but we need our script to implement just a few little modifications. We got as far as launching the GlideAjax process which will fetch the Update Set XML file details from the server side, and now we need to build the function that will process the results coming back and do something with them. The “answer” returned will be a JSON string, so we just need to turn that back into an object so that we can extract the values. We can do just that much and verify the results by popping an alert using one of the values that should be found in the resulting object, the name of the XML file.
function submitForm(answer) {
    var app = {};
    try {
        app = JSON.parse(answer);
    } catch (e) {
        alert('Error parsing JSON response from server: ' + e);
    }
    alert(app.fileName);
}
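One behavior of that try/catch is worth noting: when the parse fails, app simply remains an empty object, so the follow-up alert shows undefined rather than throwing an error. Isolating the parse logic makes that easy to see (the parseAnswer helper name here is just for illustration; it is not part of the actual script):

```javascript
// Same parse-with-fallback pattern used in submitForm, pulled out on its
// own; parseAnswer is a hypothetical name used only for this sketch.
function parseAnswer(answer) {
    var app = {};
    try {
        app = JSON.parse(answer);
    } catch (e) {
        // a bad response just leaves app as an empty object
    }
    return app;
}

console.log(parseAnswer('{"fileName":"my_app_v1.0.xml"}').fileName); // my_app_v1.0.xml
console.log(parseAnswer('not valid json').fileName); // undefined
```

That silent-fallback behavior is fine for a quick smoke test, but it does mean a server-side failure would surface as an "undefined" alert rather than the parse error itself.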
There is not much here, but we can push the old Install button on the version page, just to verify that all is well so far.
Although that wasn’t much in the way of code, it did verify that the server side Script Include that we built a while back does seem to work, as well as the Ajax call that we built last time and the JSON parsing that we just added today. At this point, we have built a UI Action that sends us over to the upload.do page, taken over the page for our own purposes (hiding the original content and adding content of our own), called back to the server side for the XML file information, and demonstrated that the XML file information has indeed been transferred over to the client side. Now that we have it in hand, we have to use it to emulate a file on the local system and send that faux file back over to the server side as an element of a form post. This is where things get a little tricky.
While digging around trying to find a way to do this, I came across the DataTransfer object. This object contains a list of File objects, and you can add to the list using the add() method of the items property. These two lines of code create a new DataTransfer object and add a new file to the empty list using the data that we retrieved from the Ajax call.
var fileList = new DataTransfer();
fileList.items.add(new File([app.xml], app.fileName, {type: 'application/xml'}));
Now that we have our “file” in a file list, we can populate the files attribute of the input element using the files attribute of our DataTransfer object.
Now we just have to submit the form and see what happens. Actually, I did that, and nothing happened. It seems that there are a couple of other form fields that also need to have values. What seems weird to me is that, if you look at the source code for the page, those fields do start out with a value, but somewhere along the line those values were removed before the form was posted, so I had to add a couple more lines to put those values back.
Now we can submit the form, which is just one more line of code.
document.forms[0].submit();
All together, our new submitForm function looks like this:
function submitForm(answer) {
    var app = {};
    try {
        app = JSON.parse(answer);
    } catch (e) {
        alert('Error parsing JSON response from server: ' + e);
    }
    var fileList = new DataTransfer();
    fileList.items.add(new File([app.xml], app.fileName, {type: 'application/xml'}));
    document.getElementById('attachFile').files = fileList.files;
    document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';
    document.getElementsByName('sysparm_target')[0].value = 'sys_remote_update_set';
    document.forms[0].submit();
}
And that completes (for now) our new global UI Script. Here is the entire script, including all of the work that we did last time out.
if (window.location.pathname == '/upload.do' && window.location.search.startsWith('?attachment_id=')) {
    waitForPageLoad();
}

function waitForPageLoad() {
    if (document.getElementById('attachFile')) {
        installApplication();
    } else {
        setTimeout(waitForPageLoad, 100);
    }
}

function installApplication() {
    var originalContent = document.getElementsByClassName('section-content')[0];
    originalContent.style.visibility = 'hidden';
    var newContent = document.createElement('div');
    newContent.innerHTML = '<h4 style="padding: 30px;"> <img src="/images/loading_anim4.gif" height="18" width="18"> Uploading Update Set XML file ...</h4>';
    originalContent.parentNode.insertBefore(newContent, originalContent);
    var attachmentId = window.location.search.substring(15);
    var ga = new GlideAjax('x_11556_col_store.ApplicationInstaller');
    ga.addParam('sysparm_name', 'getXML');
    ga.addParam('attachment_id', attachmentId);
    ga.getXMLAnswer(submitForm);
}

function submitForm(answer) {
    var app = {};
    try {
        app = JSON.parse(answer);
    } catch (e) {
        alert('Error parsing JSON response from server: ' + e);
    }
    var fileList = new DataTransfer();
    fileList.items.add(new File([app.xml], app.fileName, {type: 'application/xml'}));
    document.getElementById('attachFile').files = fileList.files;
    document.getElementsByName('sysparm_referring_url')[0].value = 'sys_remote_update_set_list.do?sysparm_fixed_query=sys_class_name=sys_remote_update_set';
    document.getElementsByName('sysparm_target')[0].value = 'sys_remote_update_set';
    document.forms[0].submit();
}
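As an aside, one fragile spot in the script above is window.location.search.substring(15), which only works as long as attachment_id is the first and only URL parameter. The standard URLSearchParams API is a more defensive way to pull out the value; this is just a suggested alternative sketch, not what the script currently does, and the sys_id values shown are sample data:

```javascript
// Extract attachment_id from a query string without relying on a
// hard-coded character offset; works regardless of parameter order.
function getAttachmentId(search) {
    return new URLSearchParams(search).get('attachment_id');
}

console.log(getAttachmentId('?attachment_id=abc123'));          // abc123
console.log(getAttachmentId('?foo=bar&attachment_id=abc123'));  // abc123
console.log(getAttachmentId('?foo=bar'));                       // null
```

URLSearchParams is available in every browser that runs the rest of this script, so swapping it in would be a low-risk hardening change if the URL format ever grows additional parameters.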
At this point, all that we have accomplished is to load the Update Set. We still have not installed anything. The Update Set still has to be Previewed and then Committed before the version is actually installed. The ultimate goal will be for the operator to be able to just click on that Install button and have everything else take care of itself, including marking the version record as Installed (and any other version records of the app as not installed). Whether or not we can do all of that without human intervention has yet to be determined, but we have at least accomplished that first step of turning the XML file back into an Update Set. Next time, we will see where we can go from here.