Collaboration Store, Part XXXVII

“Stay committed to your decisions, but stay flexible in your approach.”
Tony Robbins

After abandoning my earlier plans to jump right into the application installation process, I started to take a look at the missing error recovery needs for all of these inter-instance interactions. One thing that I had always planned to do at some point was to create an activity log of all transactions between instances. My idea was not to sync the logs across all instances, but to have a record in each instance of the things that went on with that particular instance. On the Host instance, such a log could provide the basis for some form of periodic recovery process that scanned the log for failed transactions and attempted to push them through again. After giving that some thought, though, I decided that a better approach would be to just scan each instance’s lists of instances, applications, and versions, and then attempt to push over anything that was missing compared to what was present on the Host. I still want to build an activity log at some point, but I don’t think that I want to use it as the basis for error recovery. It would be more straightforward to simply compare the lists of records on each instance with the master lists on the Host and then try to correct any deficiencies.
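
Just to sketch out the general shape of that idea (nothing here has been built yet, and all of the names are placeholders), the comparison itself amounts to little more than a set difference between what the Host has and what the other instance has:

// Placeholder sketch of the planned recovery comparison; none of this exists in the app yet
function findMissing(hostKeys, instanceKeys) {
	// both arguments are arrays of unique keys (instance names, app sys_ids, version numbers, etc.)
	return hostKeys.filter(function(key) {
		return instanceKeys.indexOf(key) == -1;	// present on the Host, but missing from the other instance
	});
}

Anything that came back from a comparison like that would then just get pushed over to the instance that was missing it, using the same functions that already send those records around today.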

That’s the plan today, anyway, but plans do have a way of changing over time, so who knows how things will come out once I start putting all of the pieces together. Right now, though, this seems like the better way to go.

One little thing that I have been wanting to do for some time was to add another field to the version record to indicate the version of ServiceNow that was used to build the Update Set. There is a System Property that contains this information, so all I really needed to do was to add the field and then add a line of code to the version record creation function to pull that value out of the property and stuff it into the new field. The name of the property is glide.buildtag.last, and the name of the field that I added is built_on.

New built_on field added to the version record

Once I added the new field to the version record, I opened up the ApplicationPublisher Script Include, which creates the version record, and added the following line to the function that builds the record:

versionGR.setValue('built_on', gs.getProperty('glide.buildtag.last'));
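
For a little context, and not as an exact copy of what is actually in the Script Include, that line just gets added alongside the rest of the setValue calls wherever the version record gets built, something roughly like this (the table name below is only a placeholder):

// Illustrative sketch only; the real function in ApplicationPublisher sets quite a few more fields
var versionGR = new GlideRecord('x_collab_store_version'); // placeholder table name
versionGR.initialize();
// ... all of the other version record fields get set here ...
versionGR.setValue('built_on', gs.getProperty('glide.buildtag.last'));
versionGR.insert();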

I also had to modify the function that sends the version record over to other instances so that it passes on the new value. Since I copied that code instead of consolidating the logic into a single, reusable function, I had to do that in two places, once in the ApplicationPublisher and then again in the CollaborationStoreUtils (I really need to collapse that code into a single set of functions that will work for both cases). In both places, I added the following line:

payload.built_on = versionGR.getDisplayValue('built_on');

This was not any kind of a major change, but it was something that I had been meaning to do for a long time, and since I was in there looking at the code anyway, I decided to just go ahead and do it. Next time, we will focus on some real work and start building out some kind of error recovery process so that we can ensure that all of the instances in the community are always kept up to date, even if they miss an update in real time for whatever reason.

Collaboration Store, Part XXXVI

“The definition of flexibility is being constantly open to the fact that you might be on the wrong track.”
Brian Tracy

Although it is long past time to start some serious work on the third major component of this little(?) project, the application installation process, I have been rather hesitant to officially kick that off. Last time, we addressed the last of the reported defects in the first two processes, the initial set-up and the application publishing process. Now, it would seem, would be the time to jump into wrestling with that last remaining primary function and put that to bed before turning our attentions to the list of secondary features that will complete the initial release of the full product. At least, that would appear to be the next logical step.

The reason for my reluctance, however, is that I have taken a cursory look at the various approaches available to accomplish my goal, and quite frankly, I don’t really like any of them. When we converted our Update Set to XML, we were able to fish out enough parts and pieces from the stock product to cobble together a reasonable solution with a minimal amount of questionable hackery. To go in the other direction, to convert the XML back to an Update Set, the only stock component that appears to provide this functionality is bound tightly with the process of uploading a local file from the user’s file system. The /upload.do page, or more specifically, the /sys_upload.do process to which the form on that page is posted, handles both the importing of the XML file and the conversion of that file to an Update Set. There is no way to extract the process that turns the XML into an Update Set, since everything is encapsulated into that one all-encompassing process. For our purposes, we do not have a local file on the user’s machine to upload; our file is an attachment already present on the server, so invoking this process, which seems the only way to go, involves much more than we really need.

To invoke the one and only process that I have found (so far!) to make this conversion, then, we will have to post a multi-part form to /sys_upload.do that includes our XML data along with all of the other fields, headers, and cookies expected by the server-side form processor. On the server side, we should be able to accomplish this with an outbound REST message, or on the client side, we should be able to do this by somehow sending our XML instead of a local file when submitting the form. Each approach has its own merits, but they also each have their own issues, and no matter which way you go, it’s a rather complicated mess.

The Server Side Approach

Posting a multi-part form on the server side is actually relatively simple as far as the form data goes. We can construct a valid body for a standard multipart/form-data POST using our XML data and related information and then send it out using an outbound REST message. That’s the easy part. We just need to add an appropriate Content-Type header, including some random boundary value:

Content-Type: multipart/form-data; boundary=somerandomvalue

Then we can build up the body by including all of the hidden fields on the form, and then add our XML in a file segment that would look something like this:

--somerandomvalue
Content-Disposition: form-data; name="attachFile"; filename="app_name_v1.0.xml"
Content-Type: application/xml

<... insert XML data here ...>

--somerandomvalue--

In addition to the XML file component, you would also need to send all of the other expected form field values, some of which are preloaded on the form when it is delivered. To obtain those values, you would have to first issue an HTTP GET request for the /upload.do page and pick those values out of the resulting HTML. This can be accomplished with a little regex magic and the JavaScript string .match() method. Here is a simple function to which you can pass the HTML and a pattern to return the value found in the HTML based on the pattern:

// Returns the first capture group matched by the pattern in the HTML, or an empty string if there is no match
function extractValue(html, pattern) {
	var response = '';
	var result = html.match(pattern);
	if (result && result.length > 1) {
		response = result[1];
	}
	return response;
}

For example, one of the form fields found on the /upload.do page is sysparm_ck. The INPUT element for this hidden field looks like this:

<input name="sysparm_ck" id="sysparm_ck" type="hidden" value="68fa4eee2fa401104425fcecf699b646939f52c6787c23fff22b124fcf58f713235b7478"></input>

To snag the value of that field, you would just pass the HTML for the page and the following pattern to our extractValue function:

id="sysparm_ck" type="hidden" value="(.*?)"
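
So, for example, snagging that token would end up looking something like this, assuming the html variable contains the body of the earlier GET response:

// html contains the full source of the /upload.do page retrieved earlier
var sysparmCk = extractValue(html, 'id="sysparm_ck" type="hidden" value="(.*?)"');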

Once you have obtained the value, you can use it to build another “part” in the multi-part form body:

--somerandomvalue
Content-Disposition: form-data; name="sysparm_ck"

68fa4eee2fa401104425fcecf699b646939f52c6787c23fff22b124fcf58f713235b7478

--somerandomvalue
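
To pull all of those pieces together, a rough sketch of what the outbound call might look like is shown below; the function name, the hard-coded boundary, and the formFields parameter are all just illustrative assumptions on my part, not working product code:

// Rough sketch only: builds a multipart/form-data body and POSTs it to /sys_upload.do
// on another instance; the boundary, field list, and parameters are illustrative assumptions
function postUpdateSetXML(targetInstance, xml, fileName, formFields) {
	var boundary = 'somerandomvalue';
	var body = '';

	// one part for each ordinary form field scraped from the /upload.do page (sysparm_ck, etc.)
	for (var name in formFields) {
		body += '--' + boundary + '\r\n';
		body += 'Content-Disposition: form-data; name="' + name + '"\r\n\r\n';
		body += formFields[name] + '\r\n';
	}

	// the file part carrying the Update Set XML
	body += '--' + boundary + '\r\n';
	body += 'Content-Disposition: form-data; name="attachFile"; filename="' + fileName + '"\r\n';
	body += 'Content-Type: application/xml\r\n\r\n';
	body += xml + '\r\n';
	body += '--' + boundary + '--\r\n';

	// send it out with an outbound REST message (session handling intentionally not shown here)
	var request = new sn_ws.RESTMessageV2();
	request.setEndpoint('https://' + targetInstance + '.service-now.com/sys_upload.do');
	request.setHttpMethod('post');
	request.setRequestHeader('Content-Type', 'multipart/form-data; boundary=' + boundary);
	request.setRequestBody(body);
	return request.execute();
}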

All of that is pretty easy to do, and would work great except for one thing: this POST would only be accepted as part of an authenticated session, and cannot just be sent in on its own. Theoretically, we could create an authenticated session by doing a GET of the /login.do page and then a POST of some authoritative user’s credentials, but that would mean knowing and sending the username and password of a powerful user, which is a dangerous thing with which to start getting involved. For that reason, and that reason alone, this does not seem to be a good way to go.

The Client Side Approach

On the client side, you are already involved in an authenticated session, so that’s not any kind of an issue at all. What you do not have is the XML, so to do anything on the client side, we will first need to create some kind of GlideAjax service that will deliver the XML over to the client. Once we have the XML that we would like to upload in place of the normal local file, we will have to perform some kind of magic trick to update the form on the page with our data in the place of a file from the local computer. To do that, we will have to either create our own copy of the /upload.do page or add a global script that will only run on that page, and only if there is some kind of URL parameter indicating that this is one of our processes and not just a normal user-initiated upload. We did this once before with a global script that only ran on the email client, so I know that I can run a conditional script on a stock page if I do not create a page of my own, but the trick will be getting the XML data to be sent back to the server with the rest of the input form.
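
For that first piece, delivering the XML down to the client, a bare-bones client-callable Script Include might look something like the sketch below; the class name and the parameter name are just placeholders, not something that exists in the app today:

// Hypothetical client-callable Script Include; the name and parameter are placeholders only
var VersionXMLAjax = Class.create();
VersionXMLAjax.prototype = Object.extendsObject(global.AbstractAjaxProcessor, {

	// returns the Update Set XML attached to the requested version record
	getXML: function() {
		var xml = '';
		var attachmentGR = new GlideRecord('sys_attachment');
		attachmentGR.addQuery('table_sys_id', this.getParameter('sysparm_version_id'));
		attachmentGR.query();
		if (attachmentGR.next()) {
			xml = new GlideSysAttachment().getContent(attachmentGR);
		}
		return xml;
	},

	type: 'VersionXMLAjax'
});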

After nosing around a bit for available options, it appears that you might be able to leverage the DataTransfer object to build a fake file, link it to the form, and then submit the form using something like this:

function uploadXML(xml, fileName) {
	// build a FileList containing our XML as though it had been chosen from the local file system
	var fileList = new DataTransfer();
	fileList.items.add(new File([xml], fileName, {type: 'application/xml'}));
	// point the form's file input at our fabricated file and submit the form
	document.getElementById('attachFile').files = fileList.files;
	document.forms[0].submit();
}
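
Tying that together with the hypothetical Script Include sketched above, the client-side call might end up looking something like this (the scoped name here is just a guess for the sake of illustration):

// Hypothetical client script: fetches the XML via GlideAjax and hands it off to uploadXML()
function fetchAndUpload(versionId, fileName) {
	var ga = new GlideAjax('x_collab_store.VersionXMLAjax'); // placeholder scoped name
	ga.addParam('sysparm_name', 'getXML');
	ga.addParam('sysparm_version_id', versionId);
	ga.getXMLAnswer(function(xml) {
		uploadXML(xml, fileName);
	});
}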

Of course, there will be a lot more to it than that, as you will need to get a handle on the Update Set created and then try to Preview and Commit it programmatically as well, but this looks like a possibility. Still, you have to move the entire Update Set XML all the way down to the client just to push it all the way back up to the server again, which seems like quite a waste. Plus, with any client-side functionality, there are always browser compatibility issues that would need to be tested and resolved. Maybe this would work, but I still don’t like it. It seems like quite a bit of complexity and more than a few opportunities for things to go south. I’m still holding out hope that there is a better way.

Now what?

So … given that I don’t like any of the choices that I have come up with so far, I have decided to set that particular task aside for now in the hopes that a better alternative will come to me before I invest too much effort into a solution with which I am not all that thrilled. There is no shortage of things to do here, so my plan is to just focus on other issues and then circle back to this particular effort when a better idea reveals itself, or I run out of other things to do. Technically, once you have obtained the XML for a particular version from the Host, you can still manually install it by downloading the attachment yourself and importing it like any other XML Update Set. That’s not really how I intend all of this to function, but it does work, so it should be OK to set this aside for a time.

Next time, then, instead of forging ahead with this third major component as I had originally planned, I will pick something else out of the pile and we will dig into that instead.

Collaboration Store, Part XXXV

“The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do.”
James Barton

Well, the test results are starting to trickle in, and one of the issues looks rather important, so we need to take a look at that before we jump into our next major effort. The problem with the scoped System Properties came up earlier, and I thought that I had found a way to deal with the issue, but obviously there is still a problem where the initial set-up is concerned. Basically, if you are not in the Collaboration Store scope, then the set-up fails because the updates to the scoped System Properties are not allowed. My idea of moving the update logic to a global component did not resolve this issue, as the problem is related to the active scope of the user rather than the scope of the component.

So, it seems that the answer to the problem is to ensure that the user is in the correct scope before allowing them to proceed with the set-up. Fortunately, there is already an area in ServiceNow where that very check is made, and there is even an option in the error message to switch over to the correct scope. You can see that when you bring up a scoped app while you are not in the scope of that application.

Sample error message for incorrect scope on the scoped application form

So it would seem that we could snag the HTML for that message and throw it up at the top of the HTML for the initial set-up widget with some kind of ng-hide so that it only appears if you are in the wrong scope. The first thing that we would need to do, then, is to figure out how to detect the user’s scope and then set up some boolean variable in the widget to indicate whether or not the user is in the correct scope to proceed with the set-up. Looking at the GlideSession API, it seems like the getCurrentApplicationId() method is just what we need. So I added this line to the top of the server script of the widget:

data.validScope = (gs.getCurrentApplicationId() == '5b9c19242f6030104425fcecf699b6ec');

Then, to prevent the user from submitting the form when in the wrong scope, I modified the ng-disabled tag on all of the submit buttons from this:

ng-disabled="!(form1.$valid)"

… to this:

ng-disabled="!(form1.$valid) || !c.data.validScope"

Now all I needed to do was to grab a copy of the HTML source from the application form and paste it in at the top of the HTML for the widget and see if the link would still work. Unfortunately, the link in the existing code relies on the GlideURL object, which is not available in Service Portal widgets, so we are going to have to hack that up a little bit to get around that problem. Here is the existing onclick script:

window.location.href=new GlideURL('change_current_app.do').addParam('app_id', '5b9c19242f6030104425fcecf699b6ec').addParam('referrer', window.location.pathname + window.location.search + window.location.hash).getURL();

Basically, this code just builds a URL, so it seems as if we could just go ahead and build the URL manually and hard-code the link. After doing a little tinkering around to see just what the URL actually was, I came up with this value for our circumstances:

change_current_app.do?app_id=5b9c19242f6030104425fcecf699b6ec&referrer=%24sp.do%3Fid%3Dcollaboration_store_setup
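
For the record, that encoded referrer value is nothing more than the portal URL for the set-up page run through encodeURIComponent; something like this would produce the same string:

// builds the same hard-coded link shown above
var referrer = encodeURIComponent('$sp.do?id=collaboration_store_setup');
var url = 'change_current_app.do?app_id=5b9c19242f6030104425fcecf699b6ec&referrer=' + referrer;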

So, my final HTML, including the ng-hide based on my new widget variable, turned out to be this:

<div id="nav_message" class="outputmsg_nav" ng-hide="c.data.validScope">
  <img role="presentation" src="images/icon_nav_info.png">
  <span class="outputmsg_nav_inner">
    &nbsp;
    The <strong>Collaboration Store Set-up</strong> cannot be completed because <strong>Collaboration Store</strong> is not selected in your application picker.
    <a onclick="window.location.href='change_current_app.do?app_id=5b9c19242f6030104425fcecf699b6ec&referrer=%24sp.do%3Fid%3Dcollaboration_store_setup'">
      Switch to <strong>Collaboration Store</strong>
    </a>.
  </span>
</div>

Now all I needed to do was to bring up the set-up widget while in the wrong scope and see how it looked and how the new link worked out once I gave it a try.

Initial set-up screen with error message and link when in the incorrect scope

With the hard-coded link in the onclick function, everything seems to work as intended. The user is placed in the correct scope, the form is reloaded, and the error message goes away. This should finally resolve this annoying issue once and for all (let’s hope!).

The other issue that was reported was that the provider field on the application record was not populated when publishing a new version of an application. It was not clear from the comment whether the missing data was on the Client instance, the Host instance, or both, but I was unable to recreate any of those conditions in any of my testing. I am going to need a little more detail on this one before I can address it, so if anyone encounters this error in any of their testing, please leave a detailed message in the comments so that we can get it resolved.

There is still the potential for more feedback to come, but this particular issue seems to be rather critical, so it’s time to release a new Update Set. While I am at it, I think I will take a different approach on the global components and make an actual Update Set for that as well instead of just exporting the XML for the one global Script Include. This way, I can also include the app’s logo image, which is always missing from the XML generated for a scoped application. Here are the two new Update Sets:

As always, all feedback is welcome, and not just to report issues. If you give this a try and everything actually works, I would love to hear about that as well. Next time out, we will either deal with any more issues that come to light, or we will jump into the next big challenge, which will be to install an app published to the store.

Special Note to Testers

Set-up Process – If you want to test the set-up process, you will need to set up the Host instance first. You cannot set up a Client instance until there is a valid Host instance, as the Client instance set-up process requires access to a valid Host instance. What to look for: check to make sure that every instance in the community ends up with the same list of instances; on each instance, select Collaboration Store -> Member Organizations to pull up the list of instances and verify that it matches the list on every other instance in the community.

Application Publishing Process – If you want to test the application publishing process, you will need to go through the set-up process first, and then you can publish an application to the store. This can be done with a single Host instance, but to fully test all of the functionality, you will need at least two Client instances in addition to the Host. What to look for: once you publish the application, check to make sure that every instance in the community has the newly published version of the application, including the XML Update Set attachment; on each instance, select Collaboration Store -> Member Organizations to pull up the list of instances, check the Related List of applications under the publishing instance, and then click on the app to check the Related List of versions under the application.

Application Installation Process – If you want to test the application installation process, you have jumped ahead of the class, as that portion of the app has not yet been developed. However, even though the development of the official installation process has not even been started, if you really want to install an app that has been shared with your instance, you can always download the XML attachment on the version record, import it back into your instance, and go through the normal XML Update Set installation process that you would go through for any other imported Update Set. But again, that’s getting a little ahead of where things stand right at the moment with the project’s development.

One More Thing

As mentioned earlier, there is currently no error recovery built into the system. This is definitely something near the top of the we-really-need-to-do-this list, but it is not there right now. What that means is that if you are doing any kind of testing and one or more of the instances in your community is off-line or unavailable for some reason, things will fail and those instances will not get the needed updates. One day we will definitely need to fix that, but for now, if that happens, that’s not a bug in the software to be reported; it’s to be expected until we build some kind of error recovery into the product.

Also, if you end up testing things and don’t have any issues to report, then by all means, report that, too! You don’t have to have encountered a problem to post your results. If everything worked out as expected, we would definitely love to hear that as well. As always, all feedback of any kind is very much appreciated.

Collaboration Store, Part XXXIV

“The key is not to prioritize what’s on your schedule, but to schedule your priorities.”
Stephen Covey

So far, we have completed the first two of the three primary components of the project, the initial set-up process and the application publication process. The last of the three major pieces will be the process that will allow you to install an application obtained from the store. Before we dive straight into that, though, we should pause to take a quick look at what we have, and what still needs to be done in order to make this a viable product. At this point, you can install all of the prerequisites and then grab the latest Update Set, install it, and go through the set-up process to create either a Host or Client instance. Once you get through all of that, you are ready to publish any local Scoped Application to the store, which will then be shared with all other instances in your Collaboration Store community.

What you cannot do, just yet, is to find an application published to the store by some other instance and install it on your own instance. That’s the missing third leg of the stool that we will need to take on next. But that is not all that is left to be done. Once we get the basics to work, there are quite a number of other things to address before one could consider this to be truly usable. Some things are just annoyances, but others are definite features that you would have to consider essential for a complete product.

Speaking of annoyances, one of the things that I really don’t like is that when you publish an app to XML for distribution, the resulting Update Set XML does not include the app’s logo image. Clearly it is a part of the app, and if you push an app to an internal store and pull it down into another instance, it comes complete with the logo, so why they left that out of the XML is a mystery to me. I don’t like that for a couple of reasons: 1) when you pull down the XML for this app, you do not get the logo, and 2) when we use the XML to publish an app to the store, the logo is missing there as well. I have seen people complain about this, but I have not, as yet, seen a solution. I would really like to address that, both for my own apps as well as for the process that we are using in this one.

Speaking of logos, another feature that I would like to have is to provide the ability for each instance to have its own distinctive logo image, so that everything from that particular instance could be tagged with that image as a way to visually identify where the app originated. That’s not a critical feature, which is why I did not include it initially, but it has always been something that I felt should be a part of the process, particularly when you start thinking about ways to browse the store and find what you are looking for. That’s definitely on the We-will-get-around-to-it-one-day list.

Browsing the store is another thing that will need some attention at some point. Right now, we just want to prove that we can set up the app, publish an application, and install an application published by someone else. Those are the fundamental elements of the app. But once we get all of that worked out, being able to hunt through the store to find what you want will be another important capability to build out. We’re not done with the fundamentals just yet, so we don’t want to put too much energy into that issue right at the moment, but at some point, we will need to create a user-friendly way to easily find what you need.

That, of course, leads into things like searchable keywords, tags, user ratings, reviews, and the like, but now is not the time to head down that rabbit hole. Still, there are a lot of possibilities here, and this could turn into a life-long project in and of itself. That’s probably not a good thing, though!

Anyway, we won’t get anything done if we don’t focus, so we need to stay on task and figure out the application installation process. Once again, there are several options available, but the nicest one seems to be the process that you go through to install an app from an internal store. That’s basically a one-click operation and then the app is installed. Unfortunately, that particular page is neither Jelly nor AngularJS, so you can’t just peek under the hood and see the magic like you can with so many other things on the Now Platform. Another option would be to hack up a copy of the Import XML Action on the Update Set list page to push in the attached XML from a published app version, but that only takes things so far; you still have to Preview the Update Set, resolve any issues, and then manually issue the Commit. It would be much nicer if we could just push a button and have the app installation process run in the background and notify you when it was completed. Obviously, we have some work to do here to come up with the best way to go about this, and we had better figure that out relatively soon. Next time, if we are not dealing with test results from the last release, we will need to start building this out.