“Ideas are of themselves extraordinarily valuable, but an idea is just an idea. Almost any one can think up an idea. The thing that counts is developing it into a practical product.” — Henry Ford
When we wrapped up the Service Account Management project, we left out a critical part of the complete life cycle of a service account: the periodic review of the account to ensure that it is still needed. We did that intentionally because it was our opinion that this function was best left to a generic third-party product that could handle such a requirement for any number of use cases beyond just the management of service accounts. Virtually anything that is created, deployed, or installed for a temporary purpose should be reviewed on occasion to make sure that it is still needed, and if it is determined that it is no longer needed, some action should be taken to revoke, deactivate, or uninstall the item, for a number of reasons including security and resource utilization. Regardless of the nature of the item, the process should basically be the same.
To have some generic product that would work for just about anything, there would have to be some kind of registration or set-up process to be used for each specific type of item that you wanted to review. And of course, there would have to be some meaningful name for these instances or use cases and they would need to be stored in some appropriately named table. For our purposes we could refer to these implementations of the product as Reviewed Artifacts, and we could create a table of that name that contained all of the information needed to run the review process for that particular implementation.
In practice, there would be some scheduled job that would run every day and refer to this table to see if there was any work to be done that day, and if there was, process each artifact’s workload in turn, sending out notices to the appropriate individuals informing them of the need to take some action to reaffirm the need for the items in question. Another table could keep track of these runs, and yet another could track the individual items associated with each run. Rather than send multiple notices to a single individual who might be responsible for more than one item, though, it would probably be better to consolidate all of the items for a specific individual onto a single notice, and so it might be better to have a table of notices sent out, and then a subordinate table of the items associated with that notice. In that case, the item table would point to the notice table, the notice table would point to the run table, and the run table would then point to the master configuration record for that particular reviewed artifact.
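Just to make that a bit more concrete, here is a rough sketch of what the script behind such a daily job might look like. Every table and field name here is an assumption made up for the illustration; the real names would come out of the actual build.

var now = new GlideDateTime();
// find any review configurations that are due to run today
var configGR = new GlideRecord('x_review_config'); // assumed name for the Reviewed Artifact table
configGR.addQuery('active', true);
configGR.addQuery('next_run', '<=', now); // assumed next-run date field
configGR.query();
while (configGR.next()) {
    // record the fact that a review run was kicked off for this artifact
    var runGR = new GlideRecord('x_review_run'); // assumed name for the run table
    runGR.initialize();
    runGR.setValue('configuration', configGR.getUniqueValue());
    runGR.insert();
    // ... from here, gather the items from the configured source table,
    // consolidate them by recipient, create one notice per recipient,
    // and send out the notifications
}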
Upon receiving the notice of action required, you would want the recipient to then indicate whether or not each item on the notice was still required. For that, the notice could provide a link to a page that would display the list of items and provide a series of check boxes for various resolutions. To maximize flexibility, the possible resolutions could be customized for each reviewed artifact, and those options would be configured as part of the set-up for each new reviewed artifact and stored in yet another related table.
Once the recipient made their selections and submitted the response, the system could then update the item records within the system and also send the responses to some configured Script Include or Flow that would take the appropriate actions on the source records based on those responses.
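Purely as an illustration of that hand-off, the Script Include flavor of a response handler might look something like the skeleton below. The class name, method signature, field names, and resolution value are all invented for the example; the actual interface would be defined as part of the build.

var ServiceAccountReviewHandler = Class.create();
ServiceAccountReviewHandler.prototype = {
    initialize: function() {
    },

    // called once for each item on the notice after the recipient responds
    processResponse: function(itemGR, resolution) {
        if (resolution == 'no_longer_needed') {
            // take the appropriate action on the source record; here we
            // simply deactivate it (table and field names are assumptions)
            var sourceGR = new GlideRecord(itemGR.getValue('source_table'));
            if (sourceGR.get(itemGR.getValue('source_id'))) {
                sourceGR.setValue('active', false);
                sourceGR.update();
            }
        }
    },

    type: 'ServiceAccountReviewHandler'
};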
To set all of this up for a new reviewed artifact, then, you would need to provide the source table containing the artifacts to be reviewed, the fields on the table that contain various bits of information such as the recipient of the notice and the description of the item, the frequency of the review, some artifact-specific verbiage for the notices, the options to be provided on the response entry page, and some artifact-specific process to handle the responses. Once we get into things, we may find that we will need other data points as well, but this should get us started.
It seems like a lot, but we will just take things on one piece at a time and see how it goes. Next time out we will get to work and create a Scoped Application and start throwing together some tables.
“Everything ends; you just have to figure out a way to push to the finish line.” — Jesse Itzler
Last time, we wrapped up the work on the example Service Account dashboard, although we did leave off a few potential enhancements that could improve its value. There is always more that could be done, such as the addition of an Admin Perspective showing all of the accounts and requests or an Expiring State showing all of the accounts that are coming up for review. Since this is just an example, we don’t need to invest the time in building all of those ideas out; some things should be left as an exercise for those who would like to pull this down and play around with it.
What we should do now, though, is take a quick step back and see what we have so far and what might be left to do before we can call this good enough to push out. When we first set out to do this, we identified the following items that would need to be developed:
One or more Service Catalog items to create, alter, and terminate accounts
A generic workflow for the catalog item(s)
A type-specific workflow for each type of account in the type table
Some kind of periodic workflow to ensure that the account is still needed
We have basically created everything on our list except for that last item, but we have also indicated that the process to check back every so often and see if the account was still needed is something that could be handled by a stand-alone generic product that could perform that function for all kinds of things that would benefit from a periodic review. If we assume that we will turn that process over to a third party, then we would seem to have just about everything that we need.
There is one other thing that would be helpful, though, and we neglected to include it on our original list. It would be nice to have some kind of menu item to launch all of these processes that we have built, so let’s put that together real quick and get that out of the way. I am thinking of something like this:
Service Accounts
New Service Account
My Service Accounts
Service Accounts
Service Account Types
The first item would initiate a request for the Service Account Catalog Item, the second would bring up the dashboard, and the last two would just bring up the list view of our two tables. Those last two would also be limited to admins only and the rest would be open to everyone. Here is the high-level menu entry.
… and here are the four submenu options for this high-level menu item:
Which produces a menu that looks like this:
So that’s about it for this little example project. Again, this is not intended to be a fully functional product that you would simply install and start using. This is just an example with enough working parts to get things started for anyone who might want to try to create something along these lines. Obviously, you would have your own list of types, your own implementation workflows for each type, your own approval structure for each type, and your own language in all of the notices, so it’s not as if someone could build all of that out in a way that would work for everyone. But for anyone who would like a set of parts to play with to get things started, here is an Update Set that contains everything that we have put together during this exercise.
“Beginning in itself has no value; it is an end which makes beginning meaningful; we must end what we begun.” — Amit Kalantri
Last time, we added the Requested Item table to our Service Account dashboard so that we could see the pending requests, but we left off with a field name error and the desire to add a few item variables to the table using some Scripted Value Columns. Today, we will fix up that little error, and add some columns to both tables, hopefully wrapping things up, at least for this version of the dashboard.
In our field list for the new table, we had included the field name opened, when in actuality, the correct field name for the opened date/time is opened_at. That’s an easy fix, and now our field list looks like this:
number,opened_at,request.requested_for,stage
While we are in the configuration editor updating field lists, let’s also add the new link to the original request to the field list for the Service Account table, which will now look like this:
Also, since that new column will be a link to the sc_req_item table, let’s map that table to the ticket page by adding a new entry to the reference map.
That should take care of the errors and oversights. Now let’s take a look at adding some item variables to the pending request view. We put some catalog item variables on an example table not too long ago, so let’s just follow that same approach and maybe steal a little code from that guy so that we don’t end up reinventing an existing wheel. Here is the script that we built for that exercise.
var ScriptedCatalogValueProvider = Class.create();
ScriptedCatalogValueProvider.prototype = {
    initialize: function() {
    },

    // maps dashboard column names to the sys_ids of their catalog item variables
    questionMap: {
        cpu: 'e46305fbc0a8010a01f7d51642fd6737',
        memory: 'e463064ac0a8010a01f7d516207cd5ab',
        drive: 'e4630669c0a8010a01f7d51690673603',
        os: 'e4630688c0a8010a01f7d516f68c1504'
    },

    // called by the data table widget for each row/column combination
    getScriptedValue: function(item, config) {
        var response = '';
        var column = config.name;
        if (this.questionMap[column]) {
            response = this.getVariableValue(this.questionMap[column], item.sys_id);
        }
        return response;
    },

    // fetches the value of the variable attached to the requested item
    getVariableValue: function(questionId, itemId) {
        var response = '';
        var mtomGR = new GlideRecord('sc_item_option_mtom');
        mtomGR.addQuery('request_item', itemId);
        mtomGR.addQuery('sc_item_option.item_option_new', questionId);
        mtomGR.query();
        if (mtomGR.next()) {
            var value = mtomGR.getDisplayValue('sc_item_option.value');
            if (value) {
                response = this.getDisplayValue(questionId, value);
            }
        }
        return response;
    },

    // translates a stored choice value into its display text
    getDisplayValue: function(questionId, value) {
        var response = '';
        var choiceGR = new GlideRecord('question_choice');
        choiceGR.addQuery('question', questionId);
        choiceGR.addQuery('value', value);
        choiceGR.query();
        if (choiceGR.next()) {
            response = choiceGR.getDisplayValue('text');
        }
        return response;
    },

    type: 'ScriptedCatalogValueProvider'
};
We can make a copy of this script and call ours ServiceAccountDashboardValueProvider. Most of this appears to be salvageable, but we will want to build our own questionMap using the columns that we will want to use for our use case. To find the sys_ids for the variables that we will want to use, we can pull up the Catalog Item to get to the list of variables, and then pull up each variable and use the context menu to snag the sys_id for each one.
Once we gather up all of the sys_ids, we will have a new map that looks like this:
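questionMap: {
    account_id: '59fe77a4971311100362bfb6f053afcc',
    type: 'f98b24a4971711100362bfb6f053afa0',
    group: '3d4fbba4971311100362bfb6f053afe3'
},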
That should be enough to make things work; however, in our case the types of variables involved will return the display value directly, so we do not need to go through that secondary process to look up the display value from the value. We can simply delete that unneeded function and return the value directly in this instance. That will make our new script look like this:
var ServiceAccountDashboardValueProvider = Class.create();
ServiceAccountDashboardValueProvider.prototype = {
    initialize: function() {
    },

    // maps dashboard column names to the sys_ids of their catalog item variables
    questionMap: {
        account_id: '59fe77a4971311100362bfb6f053afcc',
        type: 'f98b24a4971711100362bfb6f053afa0',
        group: '3d4fbba4971311100362bfb6f053afe3'
    },

    // called by the data table widget for each row/column combination
    getScriptedValue: function(item, config) {
        var response = '';
        var column = config.name;
        if (this.questionMap[column]) {
            response = this.getVariableValue(this.questionMap[column], item.sys_id);
        }
        return response;
    },

    // fetches the display value of the variable attached to the requested item;
    // for these variable types, no secondary choice look-up is needed
    getVariableValue: function(questionId, itemId) {
        var response = '';
        var mtomGR = new GlideRecord('sc_item_option_mtom');
        mtomGR.addQuery('request_item', itemId);
        mtomGR.addQuery('sc_item_option.item_option_new', questionId);
        mtomGR.query();
        if (mtomGR.next()) {
            response = mtomGR.getDisplayValue('sc_item_option.value');
        }
        return response;
    },

    type: 'ServiceAccountDashboardValueProvider'
};
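As with the original, the data table widget should instantiate this provider and call getScriptedValue once per row for each configured scripted value column, passing in the row’s record and the column configuration, so nothing more should be needed beyond referencing this script in the column configuration.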
Now all we need to do is to pull up the dashboard under the new configuration and see how it all looks. First, let’s take a look at the new column that we added for the original request.
There is only data there for the most recent test, but that’s just because that field did not exist on the table until recently. Now let’s click on the Pending state and see how our item variables came out.
Very nice! OK, I think that about does it for this version of the sample dashboard. There is still some work that we could do on the Fulfiller perspective, and it might be nice to add an Admin perspective that showed everything, but since this is just an example of what might be done, I will leave that as an exercise for those who might want to play around with things a bit. Next time, let’s take a look at what we have up to this point, and at what might be left to do before we can wrap this one up and call it done.
“There are no big problems, there are just a lot of little problems.” — Henry Ford
Last time, we wrapped up the initial table configuration for our Service Account dashboard and tested everything out to make sure that it all worked as intended. We also identified the need to add a second table to the configuration so that we can see the pending requests that have not yet created a record in the Service Account table. Before we do that, though, I decided that it would be useful to add a link to the original request on the Service Account record so that you could easily pull up the request from the account record.
To populate the field during the creation of the Service Account record, I pulled the Service Account Request Fulfillment flow up in the App Engine Studio and added an entry to drag in the data pill from the original request in the trigger.
With that out of the way, we can return our attention to adding the new table to the dashboard configuration. To do that, we go back to the Content Selector Configuration Editor that we recently updated to correct a few issues related to Scoped Applications. Before we do that, though, let’s pull up the list of Requested Items and build ourselves a filter that we can use to show all of the open items for Service Accounts requested by the current operator.
We are looking for active items requesting the Service Account catalog item requested by the currently logged on user. The filter can be found in the URL under the sysparm_query parameter.
Of course, we have to do a few find-and-replace passes to get rid of all of the double encoding present in this version, but once we do that, we will have a workable filter for the Pending state on our newly added table.
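Your query will differ based on the sys_id of your own catalog item, of course, but once decoded, the result should come out looking something like this (the sys_id here is just a placeholder):

active=true^cat_item=<sys_id of the Service Account catalog item>^request.requested_for=javascript:gs.getUserID()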
Now let’s jump into the editor and add our new table.
For now, let’s assume that we don’t want anything to appear in the Active and Retired states, and we can use the same technique that we used on the original table when we didn’t want anything to appear for that table in the Pending state. We’ll set the field list to simply number, and set the filter to number=0.
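In the generated script, the entries for those two states should come out looking something like this (any other generated properties are omitted here):

active: {
    fields: 'number',
    filter: 'number=0'
},
retired: {
    fields: 'number',
    filter: 'number=0'
}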
For the Pending state, we add a few more relevant fields and use the filter we snagged from the list URL earlier.
We can do a little more with this, but let’s save what we have for now and take it out for a spin, just to make sure that everything is still in working order. Saving the changes should take us to the generated script, which now looks like this.
Now all we need to do is to pull up the dashboard with the modified configuration, click on the Pending state, and take a quick peek.
Well, that’s not too bad. Looks like we screwed up on the field name for the open date, but other than that, things look pretty good. I want to add a few more columns from the catalog item variables anyway, which we can do by configuring some Scripted Value Columns, so let’s fix our little error and deal with those new fields in our next installment.
“On your darkest days do not try to see the end of the tunnel by looking far ahead. Focus only on where you are right now. Then carefully take one step at a time, by placing just one foot in front of the other. Before you know it, you will turn that corner.” — Anthon St. Maarten
Last time, we threw together the beginnings of a configuration script for the Service Account dashboard using the Content Selector Configuration Editor. Now that we have a viable script, we need to create a Service Portal Page that will utilize that configuration. To begin, we will pull up the list of Portal Pages and click on the New button to create a new page.
We will call our new page Service Account Dashboard and give it an ID of service_account_dashboard. Once we submit the form we can pull it back up and use the link down at the bottom of the form to bring it up in Service Portal Designer. Onto the blank canvas we will drag a 12-wide container, and beneath that one, we will drag in a 3/9 container. Into the upper 12-wide container, we will drag in the Dynamic Service Portal Breadcrumbs widget, and into the 3 portion of the 3/9 container, we will drag in the Content Selector widget. In the 9 portion of the 3/9 container, we will pull in the SNH Data Table from URL Definition widget. Now that we have placed all of the widgets, we will need to edit them, starting with the Content Selector.
Here is where we enter the full name of the configuration script that we created last time. Since this is a Scoped application, we need to include the scope with the name so that it can be successfully located. That’s all there is to configuring that widget, as most of the configuration information is contained in the referenced script. Configuring the Data Table widget is a little more involved.
Here we give it a title of Service Accounts and select an appropriate Glyph image. We check the Use Instance Title checkbox to get our title to show up, and we leave all of the rest of them unchecked. Once we save that and save the page, we should be ready to try it out, which we can do easily enough with the View page in new tab button up in the upper right-hand corner.
So far, so good. The default selection is active Service Accounts from the requester’s perspective, and you can see all of the account records from our failed and successful test submissions. I went ahead and retired one of them so that we could test the Retired state. Let’s click on the Retired button and see how that comes out.
That looks good as well. Now let’s try the Pending state, which should come up empty for the Service Account table, as pending requests have not gotten far enough along in the process to have created the record in that table yet.
Well, that’s not right! But you knew things were going too well at this point, and it was about time for something to go horribly wrong. This is just a problem with our Filter, though, and should be easily remedied. We used the filter 1=0, which obviously did not work; since 1 is not an actual field on the table, the platform appears to simply ignore the invalid condition and return every record. So let’s try using an actual field from the table and do something like this in our config file:
filter: 'number=0',
Before we add that to all of the pending configurations, let’s pull up the dashboard again and see how that looks.
That’s better. Of course, to actually see the pending Service Accounts, we will need to add another table to our configuration. We can go back into the Content Selector Configuration Editor to do that, and then go back to the dashboard and check it out. That sounds like a good exercise for our next installment.
Last time, we were about to throw together a little dashboard of Service Account information when we ran into a little problem with the Content Selector Configuration Editor. Actually, it turned out to be a problem with the snh-form-field tag, but now that we have taken the time to fix that, we should be able to get back to where we were and continue on. So let’s get back into the configurator tool and try one more time to create a new configuration script.
Well, that’s much better! Now we can see all of the fields again in the modal pop-up as well as both of the buttons, so things are back to normal with the newer version. After creating the Requester perspective, we go through the process again to create the Fulfiller perspective.
Now, we could have used slightly different names, such as Owner and Provider, but again, this is just a sample of what could be; your mileage may vary. One thing that we did do on the Fulfiller perspective, though, was to add the itil role so that only actual fulfillers would have access to that portion of the dashboard.
Next, we need to add some states, and for our purpose, the states of Active, Retired, and Pending should suffice.
With that out of the way, now we can start completing the Tables section. Clicking on the Add a new Table button in the Requester tab will bring up the modal Table Selector pop-up.
Once the Table has been added, we can fill in all of the rest of the configuration data.
For the Active state, we will use the following fields:
For the Retired tab, we will just change the above filter from active=true to active=false. Everything else can remain the same. For accounts in the pending state, there will be no record on the Service Account table just yet, so we can just set the filter to 1=0, which should always find no records. To see the pending accounts, we will need to add another table. We can deal with that later, though, so for now let’s just focus on the Service Account table and then see how it all comes out.
Basically, we go through pretty much the same process for the Fulfiller tab, and once we save all of our input, we end up with the following configuration script.
Now all we need to do is to create a Portal Page that will use the configuration script and we can take it out for a spin. That sounds like a good project for our next installment.
“Inside of every problem lies an opportunity.” — Robert Kiyosaki
Last time, we wrapped up all of the work on the Service Account requisition process, although we left a number of things that could have improved the process to some mythical future effort. Unlike the SNH Form Fields or SNH Data Table Widgets, which were intended to be used as is, without the need to touch any of the provided code, the Service Account Management app is more of a sample of what could be done, with the understanding that implementers would want to craft their own account types, notice templates, and fulfillment Flows based on their unique requirements. Since it is just an example, and more of a concept than a product, we don’t need to solve every issue or build out every conceived improvement. We created enough pieces to demonstrate that it works, and that pretty much addresses the intent of the project.
So what’s left? One thing that we will want to do once a Service Account has been delivered will be to check back every so often and ensure that it is still needed. This is actually a common practice for a number of IT artifacts, and the applications for such a process go well beyond the realm of Service Accounts. Periodically checking to see if a thing is still needed is something that could be applied to servers deployed for a development project, or access granted to vendors or contractors, or laptops issued to temporary staff, or communications links established with outside entities, or any number of other items that are deployed, granted, or procured for a limited purpose. That actually could be a stand-alone product that one could use out of the box, and one that could potentially have quite a few applications in an IT organization. In fact, for our little exercise here, let us assume that such a product exists, and we won’t bother to build out that capability here. However, one day we might tackle the development of such a product, and if we do, we can then use our Service Account Management app as one example of how such a product might be employed. But that’s not today’s worry.
So, again, what is left for us to do? So far, we have created our Scoped Application, built out all of our database tables, and created a process through which someone can request a new Service Account. If we assume that the periodic process through which these accounts will be reviewed will be handled by a separate product, then what is left for our product?
One thing that would be useful would be a series of table views from both the requester’s perspective and the fulfiller’s perspective. Each might want to see a list of active accounts, retired accounts, and pending accounts. This is something for which the SNH Data Table Widgets were designed, so let’s see if we can use that product to produce a single dashboard that would contain all of these interrelated lists.
To begin, we will select Content Selector Configurator from the Tools menu (which is only there because we have previously installed SNH Data Table Widgets).
On the initial screen of the configurator, we will want to click on the Create a new Content Selector Configurator button.
We will call our new script ServiceAccountDashboardConfig and use the Add a new Perspective button to create two perspectives, Requester and Fulfiller. We may want to add a third at some point for an Admin view, but for now, let’s just focus on the two primary user perspectives. Once we click on the Add a new Perspective button, a modal dialog pops up where we can enter the details of our new perspective.
Well, that’s not right! The modal pop-up box is supposed to have several input fields and it is only tall enough to show the first one. There doesn’t seem to be any way to drag down the bottom of the box to reveal the rest of the form, either, so you can’t even get to the buttons that should be down at the bottom. This is probably another Tokyo thing that I am going to have to resolve, so it looks like it is time to set this project aside and dig into this little issue.
OK, well, I will see if I can’t figure out what is going on here, and once I do, we will get back to this in a future installment. Hopefully, this will be a quick break!
“It’s a bad plan that admits of no modification.” — Publilius Syrus
Last time, we wrapped up changes to our primary new Service Account request fulfillment Flow, and our second example Subflow for manually creating a new Active Directory account. Now we need to test everything all over again to make sure that we didn’t break anything, and to see if we finally have a working approach to the manual fulfillment process. Once again, let’s start with the automated example first, as that is the one that was working previous to all of these changes.
… and it looks like that one still works, even with all of the alterations that we just did.
Now let’s try the same thing with the manual fulfillment example.
Under this scenario, the Requested Item will remain open until someone closes the task issued to the fulfillment group.
So let’s hunt down the task, pretend that we created the account manually, and then close it and see what happens.
Now the Requested Item shows completed, which also closes out the Request.
So now both the automated and manual example Subflows appear to do what they are supposed to do, which should complete the work on the Service Account creation process. Before we go, however, we should check the password emails sent out for both, just to make sure that those are working as they should as well. Here is the email sent out for the automated example.
And here is the email for the manual example.
Both look good, so it would appear that we have a successful strategy for fulfilling the requests for new Service Accounts of any type. To create a new type, you would just need to add the new record for the type to the table and then specify the appropriate Subflow on the new type record. This would make the new type appear on the catalog item drop-down list and the primary fulfillment Flow would then launch the Subflow specified in the type record during fulfillment.
As mentioned earlier, there is some work that we could do to streamline this process with a little bit of refactoring, but since it does work, we will leave that for another day at this point and move on to other things. In fact, those other things will be the subject of our next installment.
“Take chances, make mistakes. That’s how you grow.” — Mary Tyler Moore
Last time, we modified and tested the automated example Subflow using our new approach, so now we have to do the same thing with our manual example. Because we are now creating our Catalog Task in the primary Flow, much of the work that was previously done in the earlier version of the Subflow has now been moved to the primary Flow, so there won’t be much left to do in the manual version of the Subflow. In fact, it may even be possible to create a generic Subflow that would handle all manual implementations. For now, though, let’s just create one specifically for our example and we can explore that possibility at some later point.
Under our new approach with two parallel tracks (one for the task creation and the other for launching the type-specific Subflow), we cannot put any post-task-closure logic in our Subflow since the wait logic is in one track and the Subflow is running independently in another. In our original example, once the task closed we pulled the password out of a catalog variable and then removed that value from the variable so it would not be left on the records after processing. We still need to do this, but since it cannot be done in the Subflow, we will have to move these steps to the primary Flow. And since the password was originally sent back in the Subflow outputs, which will continue for all automated implementations, we need to set up a Flow Variable that can be populated from either track to make all of this work for both types.
To create a Flow Variable in the App Engine Studio, pull up the primary Flow and use the ellipsis menu in the upper right-hand corner to select Flow Variables.
We will call our new variable password and set the type to String. It would be nice to set the type to Password or some other masked data type, but in the current version, that does not appear to be an option.
Now that we have our variable defined, we can populate it in both parallel branches where appropriate, and then use in the outgoing email instead of using the Subflow outputs. Under the branch that creates the Catalog Task, we can add a new step to pull the password value out of the variables linked to the task.
Now, the one thing that we do not want to do is to overwrite a value provided by the other branch running the Subflow, so before we use this value to populate our new variable, we will want to make sure that a value is actually there. To do that, we add a simple If condition.
Once we know that a password was provided, indicating a manually fulfilled request, we then use that value to populate our Flow Variable.
That should take care of the first parallel branch. Now we need to do something similar for the second branch. After we run our Subflow, we can pull the password value returned by the Subflow and use it to populate our new variable.
Now we just need to go into the step that sends out the email containing the password and change the source of the email body from the Subflow outputs to the new variable.
That should take care of all of the changes for the primary Flow. In fact, virtually all of the work that was done in the original manual example Subflow has now been moved to the primary Flow, so there isn’t much left to do in the Subflow. Let’s pull that guy up now and strip out everything that we are now handling in the Flow and see what’s left.
As you can see, the only thing left in the Subflow is to assign the Subflow outputs. This would indicate that we could do yet another redesign and have either a generic manual Subflow, or make the Subflow optional and only include one if the fulfillment is automated. We could add the owner instructions to the type record, fetch both the password and the success/failure from the task, and then even the automated Subflows would have no need for outputs. They would communicate to the primary Flow via the task record. But before we get too far ahead of ourselves, let’s see if all of this works as it is. Now that we have modified the primary Flow, we will need to retest both the automated example and the manual example to see if they both work. That sounds like a good subject for our next installment.
“Challenges are what makes life interesting and overcoming them is what makes life meaningful.” — Joshua J. Marine
Last time, we finished up with the modifications to our primary fulfillment Flow, so now we need to modify one of our example Subflows so that we can test out this new approach to the design. Since we already had the first example working under the old design, let’s start with that one, the one that builds ServiceNow Service Accounts. Even though we did not build the Subflows in the App Engine Studio, they do appear there once they are created, so we can edit them while in the studio without having to resort to the older Flow Designer.
Next, we will need to alter the source of the Requested Item record in the first step. Now that we are using the Catalog Task as input, we will have to pull the Requested Item record from the appropriate Catalog Task property.
The remainder of the existing workflow should be OK as it is, but under our new approach, our automated Subflow has one additional responsibility: we need to close the Catalog Task, both for a successful completion and for any kind of failure. Let’s handle the success story first by inserting an Update Record action just before we assign the Subflow outputs.
On the failure branch, we need to do the same thing, with different values, right before we assign the Subflow outputs at the end of that process.
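For those who like to think in script, the rough equivalent of those two Update Record actions would be something like the snippet below; in the actual Subflow this is all done declaratively, and the task’s sys_id arrives as a Subflow input.

// success branch: close the catalog task as Closed Complete
var catalogTaskId = '<sys_id of the Catalog Task>'; // supplied by the Subflow input in practice
var taskGR = new GlideRecord('sc_task');
if (taskGR.get(catalogTaskId)) {
    taskGR.setValue('state', 3); // 3 = Closed Complete (standard task state)
    taskGR.update();
}
// the failure branch performs the same update with state 4 (Closed Incomplete)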
At this point, all we need to do is to Save and Activate the Subflow and take it out for a spin. The easiest way to try it out would be to jump into the Service Catalog and place another order.
Looking at the resulting Requested Item reveals that the Flow and Subflow executed successfully and the account was created.
So the Subflow that was working before is now working again using our new approach. Now we have to modify the manual example that wasn’t working before and see if that guy works under our new approach as well. Let’s jump into that next time out.