“Ideas are of themselves extraordinarily valuable, but an idea is just an idea. Almost any one can think up an idea. The thing that counts is developing it into a practical product.” — Henry Ford
When we wrapped up the Service Account Management project, we intentionally left out a critical part of the complete life-cycle of a service account: the periodic review of the account to ensure that it is still needed. We did that because it was our opinion that this function was best left to a generic third-party product that could handle such a requirement for any number of use cases beyond just the management of service accounts. Virtually anything that is created, deployed, or installed for a temporary purpose should be reviewed on occasion to make sure that it is still needed, and if it is determined that it is no longer needed, some action should be taken to revoke, deactivate, or uninstall the item, for a number of reasons including security and resource utilization. Regardless of the nature of the item, the process should basically be the same.
To have some generic product that would work for just about anything, there would have to be some kind of registration or set-up process to be used for each specific type of item that you wanted to review. And of course, there would have to be some meaningful name for these instances or use cases and they would need to be stored in some appropriately named table. For our purposes we could refer to these implementations of the product as Reviewed Artifacts, and we could create a table of that name that contained all of the information needed to run the review process for that particular implementation.
In practice, there would be some scheduled job that would run every day and refer to this table to see if there was any work to be done that day, and if there was, process each artifact’s workload in turn, sending out notices to the appropriate individuals informing them of the need to take some action to reaffirm the need for the items in question. Another table could keep track of these runs, and yet another could track the individual items associated with each run. Rather than send multiple notices to a single individual who might be responsible for more than one item, though, it would probably be better to consolidate all of the items for a specific individual onto a single notice, and so it might be better to have a table of notices sent out, and then a subordinate table of the items associated with that notice. In that case, the item table would point to the notice table, the notice table would point to the run table, and the run table would then point to the master configuration record for that particular reviewed artifact.
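Just to make that a little more concrete, here is a rough sketch of what that daily scheduled job might look like. Keep in mind that nothing has been built yet, so every table and field name below (x_rev_reviewed_artifact, x_rev_review_run, frequency_days, and so on) is simply a placeholder for illustration and not a final design.
var artifactGR = new GlideRecord('x_rev_reviewed_artifact'); // placeholder table name
artifactGR.addActiveQuery();
artifactGR.addQuery('next_review', '<=', new GlideDateTime());
artifactGR.query();
while (artifactGR.next()) {
    // record this run so that the notices and their items can point back to it
    var runGR = new GlideRecord('x_rev_review_run');
    runGR.initialize();
    runGR.artifact = artifactGR.getUniqueValue();
    runGR.run_date = new GlideDateTime();
    runGR.insert();
    // gather the items from the artifact's source table, group them by
    // recipient, and create one notice per recipient with its related items
    // ... notice and item creation would go here ...
    // push the next review date out based on the configured frequency
    var next = new GlideDateTime();
    next.addDaysLocalTime(parseInt(artifactGR.getValue('frequency_days'), 10));
    artifactGR.next_review = next;
    artifactGR.update();
}
The real job would obviously need quite a bit more than that, but it should give you a sense of how the configuration table drives the whole process.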
Upon receiving the notice of action required, you would want the recipient to then indicate whether or not each item on the notice was still required. For that, the notice could provide a link to a page that would display the list of items and provide a series of check boxes for various resolutions. To maximize flexibility, the possible resolutions could be customized for each reviewed artifact, and those options would be configured as part of the set-up for each new reviewed artifact and stored in yet another related table.
Once the recipient made their selections and submitted the response, the system could then update the item records within the system and also send the responses to some configured Script Include or Flow that would take the appropriate actions on the source records based on those responses.
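What that artifact-specific process actually does is entirely up to whoever sets up the artifact, but just as a rough sketch, a Script Include handler for our service account use case might look something like the example below. The name ServiceAccountReviewHandler, the processResponses method, and the table and field names are all assumptions made purely for illustration.
var ServiceAccountReviewHandler = Class.create();
ServiceAccountReviewHandler.prototype = {
    initialize: function() {
    },
    // responses: array of objects such as {itemId: '<sys_id>', resolution: 'no_longer_needed'}
    processResponses: function(responses) {
        for (var i = 0; i < responses.length; i++) {
            var accountGR = new GlideRecord('x_sam_service_account'); // placeholder table name
            if (accountGR.get(responses[i].itemId)) {
                if (responses[i].resolution == 'no_longer_needed') {
                    // kick off whatever termination process applies to this account
                    accountGR.state = 'retirement_pending';
                } else {
                    // still needed; just record the reaffirmation
                    accountGR.last_reviewed = new GlideDateTime();
                }
                accountGR.update();
            }
        }
    },
    type: 'ServiceAccountReviewHandler'
};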
To set all of this up for a new reviewed artifact, then, you would need to provide the source table containing the artifacts to be reviewed, the fields on the table that contain various bits of information such as the recipient of the notice and the description of the item, the frequency of the review, some artifact-specific verbiage for the notices, the options to be provided on the response entry page, and some artifact-specific process to handle the responses. Once we get into things, we may find that we will need other data points as well, but this should get us started.
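Pulled together, the configuration data for a single reviewed artifact might end up looking something like the object below. Again, every property name here is just a placeholder meant to illustrate the kind of information involved, not a finished data model.
// Illustrative shape of one Reviewed Artifact configuration (all names hypothetical)
var serviceAccountReviewConfig = {
    name: 'Service Accounts',
    source_table: 'x_sam_service_account',        // table containing the items to be reviewed
    recipient_field: 'owned_by',                  // who receives the notice
    description_field: 'account_id',              // how each item is described on the notice
    frequency_days: 90,                           // how often the review should run
    notice_subject: 'Service Account review required',
    notice_body: 'The following service accounts are due for their periodic review.',
    resolution_options: ['Still required', 'No longer needed'],
    response_handler: 'ServiceAccountReviewHandler' // Script Include or Flow that processes the responses
};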
It seems like a lot, but we will just take things on one piece at a time and see how it goes. Next time out we will get to work and create a Scoped Application and start throwing together some tables.
“Everything ends; you just have to figure out a way to push to the finish line.” — Jesse Itzler
Last time, we wrapped up the work on the example Service Account dashboard, although we did leave off a few potential enhancements that could improve its value. There is always more that could be done, such as the addition of an Admin Perspective showing all of the accounts and requests or an Expiring State showing all of the accounts that are coming up for review. Since this is just an example, we don’t need to invest the time in building all of those ideas out; some things should be left as an exercise for those who would like to pull this down and play around with it.
What we should do now, though, is take a quick step back and see what we have so far and what might be left to do before we can call this good enough to push out. When we first set out to do this, we identified the following items that would need to be developed:
One or more Service Catalog items to create, alter, and terminate accounts
A generic workflow for the catalog item(s)
A type-specific workflow for each type of account in the type table
Some kind of periodic workflow to ensure that the account is still needed.
We have basically created everything on our list except for that last item, but we have also indicated that the process to check back every so often and see if the account was still needed is something that could be handled by a stand-alone generic product that could perform that function for all kinds of things that would benefit from a periodic review. If we assume that we will turn that process over to a third party, then we would seem to have just about everything that we need.
There is one other thing that would be helpful, though, and we neglected to include it on our original list. It would be nice to have some kind of menu item to launch all of these processes that we have built, so let’s put that together real quick and get that out of the way. I am thinking of something like this:
Service Accounts
New Service Account
My Service Accounts
Service Accounts
Service Account Types
The first item would initiate a request for the Service Account Catalog Item, the second would bring up the dashboard, and the last two would just bring up the list view of our two tables. Those last two would also be limited to admins only and the rest would be open to everyone. Here is the high-level menu entry.
… and here are the four submenu options for this high-level menu item:
Which produces a menu that looks like this:
So that’s about it for this little example project. Again, this is not intended to be a fully functional product that you would simply install and start using. This is just an example with enough working parts to get things started for anyone who might want to try to create something along these lines. Obviously, you would have your own list of types, your own implementation workflows for each type, your own approval structure for each type, and your own language in all of the notices, so it’s not as if someone could build all of that out in a way that would work for everyone. But for anyone who would like a set of parts to play with to get things started, here is an Update Set that contains everything that we have put together during this exercise.
“Baby steps count, as long as you are going forward. You add them all up, and one day you look back and you’ll be surprised at where you might get to.” — Chris Gardner
Version 2.5 is essentially the exact same bundle as the previous version (2.4.1), with the only change being the inclusion of the corrected configuration editor. Still, it does address the issues related to scoped configuration scripts, so it’s probably worth pulling down and installing it, just to avoid running into those annoying problems one day in the future. There are no new features or components in this new version, but it does now include the latest of everything, so this is the one that you will want.
“Beginning in itself has no value; it is an end which makes beginning meaningful; we must end what we begun.” — Amit Kalantri
Last time, we added the Requested Item table to our Service Account dashboard so that we could see the pending requests, but we left off with a field name error and the desire to add a few item variables to the table using some Scripted Value Columns. Today, we will fix up that little error, and add some columns to both tables, hopefully wrapping things up, at least for this version of the dashboard.
In our field list for the new table, we had included the field name opened, when in actuality, the correct field name for the opened date/time is opened_at. That’s an easy fix, and now our field list looks like this:
number,opened_at,request.requested_for,stage
While we are in the configuration updating field lists, let’s also add the new link back to the original request to the field list for the Service Account table, which will now look like this:
Also, since that new column will be a link to the sc_req_item table, let’s map that table to the ticket page by adding a new entry to the reference map.
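Assuming the reference map is just an object in the configuration script keyed by table name, the new entry might look something like the fragment below; the property name and the page id shown here may differ depending on how your configuration is put together.
refmap: {
    sc_req_item: 'ticket'   // open links to requested items on the ticket page
},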
That should take care of the errors and oversights. Now let’s take a look at adding some item variables to the pending request view. We put some catalog item variables on an example table not too long ago, so let’s just follow that same approach and maybe steal a little code from that guy so that we don’t end up reinventing an existing wheel. Here is the script that we built for that exercise.
var ScriptedCatalogValueProvider = Class.create();
ScriptedCatalogValueProvider.prototype = {
    initialize: function() {
    },
    questionMap: {
        cpu: 'e46305fbc0a8010a01f7d51642fd6737',
        memory: 'e463064ac0a8010a01f7d516207cd5ab',
        drive: 'e4630669c0a8010a01f7d51690673603',
        os: 'e4630688c0a8010a01f7d516f68c1504'
    },
    getScriptedValue: function(item, config) {
        var response = '';
        var column = config.name;
        if (this.questionMap[column]) {
            response = this.getVariableValue(this.questionMap[column], item.sys_id);
        }
        return response;
    },
    getVariableValue: function(questionId, itemId) {
        var response = '';
        var mtomGR = new GlideRecord('sc_item_option_mtom');
        mtomGR.addQuery('request_item', itemId);
        mtomGR.addQuery('sc_item_option.item_option_new', questionId);
        mtomGR.query();
        if (mtomGR.next()) {
            var value = mtomGR.getDisplayValue('sc_item_option.value');
            if (value) {
                response = this.getDisplayValue(questionId, value);
            }
        }
        return response;
    },
    getDisplayValue: function(questionId, value) {
        var response = '';
        var choiceGR = new GlideRecord('question_choice');
        choiceGR.addQuery('question', questionId);
        choiceGR.addQuery('value', value);
        choiceGR.query();
        if (choiceGR.next()) {
            response = choiceGR.getDisplayValue('text');
        }
        return response;
    },
    type: 'ScriptedCatalogValueProvider'
};
We can make a copy of this script and call ours ServiceAccountDashboardValueProvider. Most of this appears to be salvageable, but we will want to build our own questionMap using the columns that we will want to use for our use case. To find the sys_ids for the variables that we will want to use, we can pull up the Catalog Item to get to the list of variables, and then pull up each variable and use the context menu to snag the sys_id for each one.
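If you would rather not open up each variable one at a time, a quick background script can also list them all out; just swap in the sys_id of your own Catalog Item in the query below.
// List the name and sys_id of every variable attached to a Catalog Item
var varGR = new GlideRecord('item_option_new');
varGR.addQuery('cat_item', '<catalog item sys_id>');
varGR.orderBy('order');
varGR.query();
while (varGR.next()) {
    gs.info(varGR.getValue('name') + ': ' + varGR.getUniqueValue());
}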
Once we gather up all of the sys_ids, we will have a new map that looks like this:
That should be enough to make things work; however, in our case the types of variables involved will return the display value directly, so we do not need to go through that secondary process to look up the display value from the value. We can simply delete that unneeded function and return the value directly in this instance. That will make our new script look like this:
var ServiceAccountDashboardValueProvider = Class.create();
ServiceAccountDashboardValueProvider.prototype = {
    initialize: function() {
    },
    questionMap: {
        account_id: '59fe77a4971311100362bfb6f053afcc',
        type: 'f98b24a4971711100362bfb6f053afa0',
        group: '3d4fbba4971311100362bfb6f053afe3'
    },
    getScriptedValue: function(item, config) {
        var response = '';
        var column = config.name;
        if (this.questionMap[column]) {
            response = this.getVariableValue(this.questionMap[column], item.sys_id);
        }
        return response;
    },
    getVariableValue: function(questionId, itemId) {
        var response = '';
        var mtomGR = new GlideRecord('sc_item_option_mtom');
        mtomGR.addQuery('request_item', itemId);
        mtomGR.addQuery('sc_item_option.item_option_new', questionId);
        mtomGR.query();
        if (mtomGR.next()) {
            response = mtomGR.getDisplayValue('sc_item_option.value');
        }
        return response;
    },
    type: 'ServiceAccountDashboardValueProvider'
};
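To actually put this to work, the scripted value columns in the dashboard configuration need to point at the new provider by name. The fragment below shows the general idea, but the exact property names will depend on the version of the configuration script that you are working with, so treat it as a sketch rather than the definitive format.
svcarray: [
    {name: 'account_id', label: 'Account ID', heading: 'Account ID', script: 'ServiceAccountDashboardValueProvider'},
    {name: 'type', label: 'Type', heading: 'Type', script: 'ServiceAccountDashboardValueProvider'},
    {name: 'group', label: 'Group', heading: 'Group', script: 'ServiceAccountDashboardValueProvider'}
],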
Now all we need to do is to pull up the dashboard under the new configuration and see how it all looks. First, let’s take a look at the new column that we added for the original request.
There is only data there for the most recent test, but that’s just because that field did not exist on the table until recently. Now let’s click on the Pending state and see how our item variables came out.
Very nice! OK, I think that about does it for this version of the sample dashboard. There is still some work that we could do on the Fulfiller perspective, and it might be nice to add an Admin perspective that showed everything, but since this is just an example of what might be done, I will leave that as an exercise for those who might want to play around with things a bit. Next time, let’s take a look at what we now have up to this point, and at what there might be left to do before we can wrap this one up and call it done.