<article>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#article09_06_16_1630230</id>
	<title>Ideal, and Actual, IT Performance Metrics?</title>
	<author>timothy</author>
	<datestamp>1245171600000</datestamp>
	<htmltext>An anonymous reader writes <i>"Recently it was revealed that our company measures IT performance by the time it takes to close trouble tickets. I consider IT's primary goal to be as transparent to the user as possible, thus this metric was rather troubling to me. Shouldn't we be focused on reducing calls, rather than simply closing them quickly?

My question is: How is your IT performance measured, and how do you think it should be measured?"</i></htmltext>
<tokentext>An anonymous reader writes " Recently it was revealed that our company measures IT performance by the time it takes to close trouble tickets .
I consider IT 's primary goal to be as transparent to the user as possible , thus this metric was rather troubling to me .
Should n't we be focused on reducing calls , rather than simply closing them quickly ?
My question is : How is your IT performance measured , and how do you think it should be measured ?
"</tokentext>
<sentencetext>An anonymous reader writes "Recently it was revealed that our company measures IT performance by the time it takes to close trouble tickets.
I consider IT's primary goal to be as transparent to the user as possible, thus this metric was rather troubling to me.
Shouldn't we be focused on reducing calls, rather than simply closing them quickly?
My question is: How is your IT performance measured, and how do you think it should be measured?
"</sentencetext>
</article>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350291</id>
	<title>Metrics on help desk tickets</title>
	<author>kilodelta</author>
	<datestamp>1245177360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I've had some experience with this. Here's what ends up happening.
<br> <br>
You set up a system so that a ticket is assigned, the person assigned gets into the ticket and adds comments. Technically they've answered the ticket at that point. The thing that lots of people don't want to realize is that some tickets will go on for months on end.</htmltext>
<tokentext>I 've had some experience with this .
Here 's what ends up happening .
You set up a system so that a ticket is assigned , the person assigned gets into the ticket and adds comments .
Technically they 've answered the ticket at that point .
The thing that lots of people do n't want to realize is that some tickets will go on for months on end .</tokentext>
<sentencetext>I've had some experience with this.
Here's what ends up happening.
You set up a system so that a ticket is assigned, the person assigned gets into the ticket and adds comments.
Technically they've answered the ticket at that point.
The thing that lots of people don't want to realize is that some tickets will go on for months on end.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352069</id>
	<title>:O</title>
	<author>Anonymous</author>
	<datestamp>1245183900000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You should determine performance by how often you're yelled at. If you're not getting yelled at for something, ur doin' it wrong.</p></htmltext>
<tokentext>You should determine performance by how often you 're yelled at .
If you 're not getting yelled at for something , ur doin ' it wrong .</tokentext>
<sentencetext>You should determine performance by how often you're yelled at.
If you're not getting yelled at for something, ur doin' it wrong.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350711</id>
	<title>Re:No cnt++</title>
	<author>Anonymous</author>
	<datestamp>1245178680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Our Motto</p><p>Where not satisfied until you're not satisfied...</p></htmltext>
<tokentext>Our Motto Where not satisfied until you 're not satisfied ...</tokentext>
<sentencetext>Our Motto Where not satisfied until you're not satisfied...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351515</id>
	<title>Re:My two cents</title>
	<author>QuantumRiff</author>
	<datestamp>1245181740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I would combine the metrics for IT and OPerations.

Service Calls: s
Service calls resolved: h

(s/h(IT)) + (s/h(op) = beating the slashdot lame filter!</htmltext>
<tokentext>I would combine the metrics for IT and Operations .
Service Calls : s Service calls resolved : h ( s/h ( IT ) ) + ( s/h ( op ) ) = beating the slashdot lame filter !</tokentext>
<sentencetext>I would combine the metrics for IT and Operations.
Service Calls: s
Service calls resolved: h

(s/h(IT)) + (s/h(op)) = beating the slashdot lame filter!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350311</id>
	<title>Re:count tickets never opened</title>
	<author>molecular</author>
	<datestamp>1245177420000</datestamp>
	<modclass>Redundant</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>True! And my measure of being a good husband is how many affairs I DIDN'T have!</p><p>A) How do you count that?<br>B) Dude, even SKYNET had an IT department.</p></div><p>A) Don't know how to count that, but initial poster asked for both actual \_and\_ <b>ideal</b> metrics</p><p>B) Following a goal even when it's unreachable can still be fruitful by leading you in the right direction.</p>
	</htmltext>
<tokentext>True !
And my measure of being a good husband is how many affairs I DID N'T have ! A ) How do you count that ? B ) Dude , even SKYNET had an IT department . A ) Do n't know how to count that , but initial poster asked for both actual \ _and \ _ ideal metrics B ) Following a goal even when it 's unreachable can still be fruitful by leading you in the right direction .</tokentext>
<sentencetext>True!
And my measure of being a good husband is how many affairs I DIDN'T have! A) How do you count that? B) Dude, even SKYNET had an IT department. A) Don't know how to count that, but initial poster asked for both actual \_and\_ ideal metrics B) Following a goal even when it's unreachable can still be fruitful by leading you in the right direction.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349859</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353467</id>
	<title>Re:When testing a new blade server install...</title>
	<author>QRDeNameland</author>
	<datestamp>1245146100000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think that a large part of the problem with creating usable IT performance metrics begins with a basic problem with human nature: namely that we tend to notice flaws far more than we notice the absence of flaws.  Things that run smoothly tend to get taken for granted and thus get forgotten, while the squeaky wheel gets the grease.

</p><p>I see this in the quality of most of the big money "enterprise solutions" that I have to support and integrate on my job.  When a software vendor relies on long term support contracts for their revenue, as most enterprise solutions do, there is a disincentive for such a vendor to ensure their software is either easy to use, deploy, or administrate, since such ease erodes the need for support.  Likewise, in my experience, it is difficult to convince corporate IT customers to pay a premium for higher quality solutions, which is what forces the vendors into the support model in the first place.

</p><p>Until the day that there are good, widely-used metrics for assessing the value of when things *don't* go wrong, I suspect the flawed metrics like the submitter's are going to prevail.</p></htmltext>
<tokentext>I think that a large part of the problem with creating usable IT performance metrics begins with a basic problem with human nature : namely that we tend to notice flaws far more than we notice the absence of flaws .
Things that run smoothly tend to get taken for granted and thus get forgotten , while the squeaky wheel gets the grease .
I see this in the quality of most of the big money " enterprise solutions " that I have to support and integrate on my job .
When a software vendor relies on long term support contracts for their revenue , as most enterprise solutions do , there is a disincentive for such a vendor to ensure their software is either easy to use , deploy , or administrate , since such ease erodes the need for support .
Likewise , in my experience , it is difficult to convince corporate IT customers to pay a premium for higher quality solutions , which is what forces the vendors into the support model in the first place .
Until the day that there are good , widely-used metrics for assessing the value of when things * do n't * go wrong , I suspect the flawed metrics like the submitter 's are going to prevail .</tokentext>
<sentencetext>I think that a large part of the problem with creating usable IT performance metrics begins with a basic problem with human nature: namely that we tend to notice flaws far more than we notice the absence of flaws.
Things that run smoothly tend to get taken for granted and thus get forgotten, while the squeaky wheel gets the grease.
I see this in the quality of most of the big money "enterprise solutions" that I have to support and integrate on my job.
When a software vendor relies on long term support contracts for their revenue, as most enterprise solutions do, there is a disincentive for such a vendor to ensure their software is either easy to use, deploy, or administrate, since such ease erodes the need for support.
Likewise, in my experience, it is difficult to convince corporate IT customers to pay a premium for higher quality solutions, which is what forces the vendors into the support model in the first place.
Until the day that there are good, widely-used metrics for assessing the value of when things *don't* go wrong, I suspect the flawed metrics like the submitter's are going to prevail.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350367</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351459</id>
	<title>Re:Sounds good to me.</title>
	<author>RingDev</author>
	<datestamp>1245181500000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I have been recently working on a system to do just this. We already had an audit system that some of our apps were using, but after generating a set of usage reports showing the volume of usage we managed to get some more buy-in and it is now being used by almost all of our apps.</p><p>To go even further, we are trying to get access to the help desk's database so that we can generate reports comparing usage to tickets.</p><p>Metrics like these can still pose some risk for misinterpretation. It'll definitely be more useful to view them as a trend line with notations as to deployment dates and business factors that lead to peaks in errors per usage. But the overall trend should be a downward slope as the software matures.</p><p>I don't know if I would directly compare numbers from one app to another, but after a sufficient period of data collection, you should be able to identify goals in error per use performance and determine if one application is ahead or behind the curve.</p><p>The goal in my case is to try to negotiate the replacement of old VB6 applications that have higher error per usage rates than their modern cousins. But I won't have the data to back up my theory for a while yet<nobr> <wbr></nobr>:(</p><p>-Rick</p></htmltext>
<tokentext>I have been recently working on a system to do just this .
We already had an audit system that some of our apps were using , but after generating a set of usage reports showing the volume of usage we managed to get some more buy-in and it is now being used by almost all of our apps . To go even further , we are trying to get access to the help desk 's database so that we can generate reports comparing usage to tickets . Metrics like these can still pose some risk for misinterpretation .
It 'll definitely be more useful to view them as a trend line with notations as to deployment dates and business factors that lead to peaks in errors per usage .
But the overall trend should be a downward slope as the software matures . I do n't know if I would directly compare numbers from one app to another , but after a sufficient period of data collection , you should be able to identify goals in error per use performance and determine if one application is ahead or behind the curve . The goal in my case is to try to negotiate the replacement of old VB6 applications that have higher error per usage rates than their modern cousins .
But I wo n't have the data to back up my theory for a while yet : ( -Rick</tokentext>
<sentencetext>I have been recently working on a system to do just this.
We already had an audit system that some of our apps were using, but after generating a set of usage reports showing the volume of usage we managed to get some more buy-in and it is now being used by almost all of our apps. To go even further, we are trying to get access to the help desk's database so that we can generate reports comparing usage to tickets. Metrics like these can still pose some risk for misinterpretation.
It'll definitely be more useful to view them as a trend line with notations as to deployment dates and business factors that lead to peaks in errors per usage.
But the overall trend should be a downward slope as the software matures. I don't know if I would directly compare numbers from one app to another, but after a sufficient period of data collection, you should be able to identify goals in error per use performance and determine if one application is ahead or behind the curve. The goal in my case is to try to negotiate the replacement of old VB6 applications that have higher error per usage rates than their modern cousins.
But I won't have the data to back up my theory for a while yet :( -Rick</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350345</id>
	<title>Re:Sounds good to me.</title>
	<author>The Moof</author>
	<datestamp>1245177540000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext>Every time accounting asks "Why are we paying these guys, they don't seem to do anything," you get 5 points.</htmltext>
<tokentext>Every time accounting asks " Why are we paying these guys , they do n't seem to do anything , " you get 5 points .</tokentext>
<sentencetext>Every time accounting asks "Why are we paying these guys, they don't seem to do anything," you get 5 points.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349677</id>
	<title>First metric...</title>
	<author>Anonymous</author>
	<datestamp>1245175440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...timeliness of the TSP reports</p></htmltext>
<tokentext>...timeliness of the TSP reports</tokentext>
<sentencetext>...timeliness of the TSP reports</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351105</id>
	<title>This is my take on success...</title>
	<author>Anonymous</author>
	<datestamp>1245180180000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Let's all set aside the PEBCAK attitude that we all have, yes myself included; when I hear the same group of people complaining about the same stuff I want to throw the largest Compaq rack server I can find right at their head.</p><p>All that being said, I think we must all look at IT and MIS performance from 2 sides: 1 is up time and the other is user interaction.</p><p>1. Up-time is easy: what percentage of the time is the system up or down. How often does it go down.</p><p>2. Every time there is an interaction between the IT dep. and the end users it means that something went wrong. Now instead of looking at tickets or phone calls let's look instead at the \% of problems solved in some reasonable amount of time, like say the same day or within 24 hrs.</p><p>This is how I measure my personal success.</p><p>On a completely different side of this issue is the problem of underfunded, underpaid IT workers. And make no mistake, we are almost all underpaid. There is no business that can't benefit from listening to its IT workers. We usually find problems before they even exist, and then when we form a solution we are told "no, it is too expensive, wait till it crashes," then we are told in a panic by someone who can barely turn on a computer "I NEED IT WORKING NOW. What do you mean it will be down for 24 - 48 hours waiting for parts? Can't you just go to Best Buy or Walmart and get one...." OMFG, if this happens again I will scream...</p><p>Oh, and to all the people who run IT staffs: remember, if you do it right the first time you will not have the expense of doing it a second time....</p></htmltext>
<tokentext>Let 's all set aside the PEBCAK attitude that we all have , yes myself included when I hear the same group of people complaining about the same stuff I want to throw the largest Compaq rack server I can find right at their head . All that being said I think we must all look at IT and MIS performance from 2 sides 1 is up time and the other is user interaction . 1 .
Up-time is easy what percentage of the time is the system up or down .
How often does it go down . 2 .
Every time there is an interaction between the IT dep .
and the end users it means that something went wrong .
Now instead of looking at tickets or phone calls let 's look instead at the \ % of problems solved in some reasonable amount of time like say the same day or within 24 hrs . This is how I measure my personal success . On a completely different side of this issue is the problem of underfunded underpaid IT workers .
And make no mistake we are almost all underpaid .
There is no business that ca n't benefit from listening to its IT workers .
We usually find problems before they even exist and then when we form a solution we are told " no it is too expensive wait till it crashes " then we are told in a panic by someone who can barely turn on a computer " I NEED IT WORKING NOW what do you mean it will be down for 24 - 48 hours waiting for parts .
Ca n't you just go to Best Buy or Walmart and get one ... " OMFG if this happens again I will scream ... Oh and to all the people who run IT staffs remember if you do it right the first time you will not have the expense of doing it a second time ...</tokentext>
<sentencetext>Let's all set aside the PEBCAK attitude that we all have, yes myself included when I hear the same group of people complaining about the same stuff I want to throw the largest Compaq rack server I can find right at their head. All that being said I think we must all look at IT and MIS performance from 2 sides: 1 is up time and the other is user interaction. 1.
Up-time is easy: what percentage of the time is the system up or down.
How often does it go down. 2.
Every time there is an interaction between the IT dep.
and the end users it means that something went wrong.
Now instead of looking at tickets or phone calls let's look instead at the \% of problems solved in some reasonable amount of time, like say the same day or within 24 hrs. This is how I measure my personal success. On a completely different side of this issue is the problem of underfunded, underpaid IT workers.
And make no mistake, we are almost all underpaid.
There is no business that can't benefit from listening to its IT workers.
We usually find problems before they even exist, and then when we form a solution we are told "no, it is too expensive, wait till it crashes," then we are told in a panic by someone who can barely turn on a computer "I NEED IT WORKING NOW. What do you mean it will be down for 24 - 48 hours waiting for parts?
Can't you just go to Best Buy or Walmart and get one...." OMFG, if this happens again I will scream... Oh, and to all the people who run IT staffs: remember, if you do it right the first time you will not have the expense of doing it a second time....</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355793</id>
	<title>Re:ITIL</title>
	<author>liamoshan</author>
	<datestamp>1245158160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I thought implementing ITIL just involved renaming the Help Desk to the Service Desk, then carrying on business as usual?</htmltext>
<tokentext>I thought implementing ITIL just involved renaming the Help Desk to the Service Desk , then carrying on business as usual ?</tokentext>
<sentencetext>I thought implementing ITIL just involved renaming the Help Desk to the Service Desk, then carrying on business as usual?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349925</id>
	<title>Metrics keeping Managers employed since ....</title>
	<author>enterprisearchitect</author>
	<datestamp>1245176220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>80\% of Users are a PIA, 20\% never call the help desk. Metrics can be construed in many different ways to represent a positive or negative interpretation.</htmltext>
<tokentext>80 \ % of Users are a PIA , 20 \ % never call the help desk .
Metrics can be construed in many different ways to represent a positive or negative interpretation .</tokentext>
<sentencetext>80\% of Users are a PIA, 20\% never call the help desk.
Metrics can be construed in many different ways to represent a positive or negative interpretation.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352257</id>
	<title>I've been on the receiving end...</title>
	<author>meburke</author>
	<datestamp>1245184560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I worked for a large datacenter/hosting company for a while a few years back. One of the most tedious, lengthy troubleshooting processes is e-mail failures, and I sort of specialized in those, therefore I didn't close as many tickets as the other TS guys. On the other hand, once the problem was fixed the customer had no need to call back. Eventually I ended up leaving the company, partly over pay and partly over dissatisfaction with the job. Unfortunately, there was no scoring system that adequately measured my contribution to customer satisfaction, so the company wasn't totally pleased with my performance either.</p><p>Ultimately, the goal of Tech support is to collect data that can be used to correct problems upstream and prevent the customer from ever having to call tech support. That is a very lofty goal, and probably unreachable in reality, but it is useful as an ideal.</p><p>Customer problems caused by features or policies in the company's offering should definitely be corrected by the company. Work-arounds should be made available as soon as the problem is detected and handled, and that information should be shared with everyone.</p><p>These types of problems should be classified as to their importance, difficulty, and lapsed time. A numerical scale can be used to score these problems. If a customer calls back with the same problem, the ticket should be re-opened. This creates an incentive to close a problem completely rather than closing incompletely-solved tickets to rack up a higher closing rate. Since more than one tech may be working on a ticket over multiple shifts, time spent on the ticket ought to be credited, and the score distributed accordingly. Common problems ought to have a troubleshooting tree or decision table for testing and resolution. These tools could be made web-available so the customer can work their own problem or work cohesively with a tech. 
(Once a problem has been solved, it should not need to be solved again; only administered.)</p><p>Customer tutoring will always be important. This type of tech support should not be scored at all, since customer understanding will vary the closing time of the ticket.</p><p>I propose that this allows a program of incentives to get support techs to be working in the areas they are most effective. A good tutor with good understanding of the product and good language skills should be evaluated on the time spent tutoring, and the troubleshooters should be scored on the points they earn solving a variety of problems. Obviously, some techs are going to figure out how to "Work" the system so they get more points, so there ought to be a peer score applied to determine any bonuses.</p><p>The ultimate goal should be customer satisfaction with the process. (Dell? Quickbooks? Are you LISTENING?)</p><p>The first measure of output ought to be the customer's satisfaction. However, measuring progress requires a SYSTEM. I strongly suggest a system like Kepner-Tregoe. It works well for individuals and teams, progress is easily determined, and even management can analyze the results.</p><p>I recommend, "The New Rational Manager" by Kepner and Tregoe ( <a href="http://www.kepner-tregoe.com/webstore/webstore-Pub-Software-PUB.cfm#RatMan" title="kepner-tregoe.com">http://www.kepner-tregoe.com/webstore/webstore-Pub-Software-PUB.cfm#RatMan</a> [kepner-tregoe.com] ), and, "The Thinkers Toolkit" by Morgan Jones ( <a href="http://www.amazon.com/Thinkers-Toolkit-Powerful-Techniques-Problem/dp/0812928083/ref=sr\_1\_3?ie=UTF8&amp;s=books&amp;qid=1245180924&amp;sr=1-3" title="amazon.com">http://www.amazon.com/Thinkers-Toolkit-Powerful-Techniques-Problem/dp/0812928083/ref=sr\_1\_3?ie=UTF8&amp;s=books&amp;qid=1245180924&amp;sr=1-3</a> [amazon.com] ).</p></htmltext>
<tokentext>I worked for a large datacenter/hosting company for a while a few years back .
One of the most tedious , lengthy troubleshooting processes is e-mail failures , and I sort of specialized in those , therefore I did n't close as many tickets as the other TS guys .
On the other hand , once the problem was fixed the customer had no need to call back .
Eventually I ended up leaving the company , partly over pay and partly over dissatisfaction with the job .
Unfortunately , there was no scoring system that adequately measured my contribution to customer satisfaction , so the company was n't totally pleased with my performance either . Ultimately , the goal of Tech support is to collect data that can be used to correct problems upstream and prevent the customer from ever having to call tech support .
That is a very lofty goal , and probably unreachable in reality , but it is useful as an ideal . Customer problems caused by features or policies in the company 's offering should definitely be corrected by the company .
Work-arounds should be made available as soon as the problem is detected and handled , and that information should be shared with everyone . These types of problems should be classified as to their importance , difficulty , and lapsed time .
A numerical scale can be used to score these problems .
If a customer calls back with the same problem , the ticket should be re-opened .
This creates an incentive to close a problem completely rather than closing incompletely-solved tickets to rack up a higher closing rate .
Since more than one tech may be working on a ticket over multiple shifts , time spent on the ticket ought to be credited , and the score distributed accordingly .
Common problems ought to have a troubleshooting tree or decision table for testing and resolution .
These tools could be made web-available so the customer can work their own problem or work cohesively with a tech .
( Once a problem has been solved , it should not need to be solved again ; only administered . )
Customer tutoring will always be important .
This type of tech support should not be scored at all , since customer understanding will vary the closing time of the ticket . I propose that this allows a program of incentives to get support techs to be working in the areas they are most effective .
A good tutor with good understanding of the product and good language skills should be evaluated on the time spent tutoring , and the troubleshooters should be scored on the points they earn solving a variety of problems .
Obviously , some techs are going to figure out how to " Work " the system so they get more points , so there ought to be a peer score applied to determine any bonuses . The ultimate goal should be customer satisfaction with the process .
( Dell ? Quickbooks ?
Are you LISTENING ?
) The first measure of output ought to be the customer 's satisfaction .
However , measuring progress requires a SYSTEM .
I strongly suggest a system like Kepner-Tregoe .
It works well for individuals and teams , progress is easily determined , and even management can analyze the results . I recommend , " The New Rational Manager " by Kepner and Tregoe ( http : //www.kepner-tregoe.com/webstore/webstore-Pub-Software-PUB.cfm # RatMan [ kepner-tregoe.com ] ) , and , " The Thinkers Toolkit " by Morgan Jones ( http : //www.amazon.com/Thinkers-Toolkit-Powerful-Techniques-Problem/dp/0812928083/ref = sr \ _1 \ _3 ? ie = UTF8&amp;s = books&amp;qid = 1245180924&amp;sr = 1-3 [ amazon.com ] ) .</tokentext>
<sentencetext>I worked for a large datacenter/hosting company for a while a few years back.
One of the most tedious, lengthy troubleshooting processes is e-mail failures, and I sort of specialized in those, therefore I didn't close as many tickets as the other TS guys.
On the other hand, once the problem was fixed the customer had no need to call back.
Eventually I ended up leaving the company, partly over pay and partly over dissatisfaction with the job.
Unfortunately, there was no scoring system that adequately measured my contribution to customer satisfaction, so the company wasn't totally pleased with my performance either. Ultimately, the goal of Tech support is to collect data that can be used to correct problems upstream and prevent the customer from ever having to call tech support.
That is a very lofty goal, and probably unreachable in reality, but it is useful as an ideal. Customer problems caused by features or policies in the company's offering should definitely be corrected by the company.
Work-arounds should be made available as soon as the problem is detected and handled, and that information should be shared with everyone. These types of problems should be classified as to their importance, difficulty, and lapsed time.
A numerical scale can be used to score these problems.
If a customer calls back with the same problem, the ticket should be re-opened.
This creates an incentive to close a problem completely rather than closing incompletely-solved tickets to rack up a higher closing rate.
Since more than one tech may be working on a ticket over multiple shifts, time spent on the ticket ought to be credited, and the score distributed accordingly.
Common problems ought to have a troubleshooting tree or decision table for testing and resolution.
These tools could be made web-available so the customer can work their own problem or work cohesively with a tech.
(Once a problem has been solved, it should not need to be solved again; only administered.)
Customer tutoring will always be important.
This type of tech support should not be scored at all, since customer understanding will vary the closing time of the ticket. I propose that this allows a program of incentives to get support techs working in the areas where they are most effective.
A good tutor with good understanding of the product and good language skills should be evaluated on the time spent tutoring, and the troubleshooters should be scored on the points they earn solving a variety of problems.
Obviously, some techs are going to figure out how to "work" the system so they get more points, so there ought to be a peer score applied to determine any bonuses. The ultimate goal should be customer satisfaction with the process.
(Dell? Quickbooks?
Are you LISTENING?
) The first measure of output ought to be the customer's satisfaction.
However, measuring progress requires a SYSTEM.
I strongly suggest a system like Kepner-Tregoe.
It works well for individuals and teams, progress is easily determined, and even management can analyze the results. I recommend "The New Rational Manager" by Kepner and Tregoe ( http://www.kepner-tregoe.com/webstore/webstore-Pub-Software-PUB.cfm#RatMan [kepner-tregoe.com] ) and "The Thinker's Toolkit" by Morgan Jones ( http://www.amazon.com/Thinkers-Toolkit-Powerful-Techniques-Problem/dp/0812928083/ref=sr_1_3?ie=UTF8&amp;s=books&amp;qid=1245180924&amp;sr=1-3 [amazon.com] ).</sentencetext>
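The scoring scheme described in this comment (a numerical score from importance, difficulty, and elapsed time; re-opened tickets voiding credit for a premature close; credit split across techs in proportion to time spent) could be sketched as below. This is an editor's illustration only; the function names and weights are hypothetical, not from the comment.

```python
def score_ticket(importance, difficulty, elapsed_hours, reopened):
    """Score one ticket on the comment's three axes.

    A reopened ticket earns nothing, removing the incentive to close
    incompletely-solved tickets just to raise a closing rate.
    The weights (2, 3, 0.5) are illustrative.
    """
    base = importance * 2 + difficulty * 3 - elapsed_hours * 0.5
    return 0.0 if reopened else max(base, 0.0)


def distribute_credit(score, minutes_by_tech):
    """Split a ticket's score across techs in proportion to time spent,
    since more than one tech may work a ticket over multiple shifts."""
    total = sum(minutes_by_tech.values())
    if total == 0:
        return {}
    return {tech: score * mins / total for tech, mins in minutes_by_tech.items()}
```

For example, a ticket worked 30 minutes by one tech and 10 by another splits its score 3:1 between them.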
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352401</id>
	<title>customer?</title>
	<author>The Beezer</author>
	<datestamp>1245185100000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You make many excellent points, yet I have to strongly disagree with this statement:<br> <br>
<b>"everyone in the company is treated like a customer"</b> <br>
<br>
Unless you provide IT services to someone outside of your company, you're not working with customers, you're working with colleagues (simple test - how much is the person you're talking to paying for service?)  This is a very different dynamic as a customer has less responsibility than a coworker - other people in your company have to follow policies and adhere to guidelines that customers don't.  There's no real difference between someone asking for a new computer when their current system is perfectly fine and someone asking HR for an equal amount of money, yet the second request gets laughed out of the building.  The real customer is the person who can make the decision whether to outsource IT or not.<br>
<br>
Fixing this perception would get us a long way towards a better relationship between IT and non-IT parts of the business.</htmltext>
<tokentext>You make many excellent points , yet I have to strongly disagree with this statement : " everyone in the company is treated like a customer " Unless you provide IT services to someone outside of your company , you 're not working with customers , you 're working with colleagues ( simple test - how much is the person you 're talking to paying for service ?
) This is a very different dynamic as a customer has less responsibility than a coworker - other people in your company have to follow policies and adhere to guidelines that customers do n't .
There 's no real difference between someone asking for a new computer when their current system is perfectly fine and someone asking HR for an equal amount of money , yet the second request gets laughed out of the building .
The real customer is the person who can make the decision whether to outsource IT or not .
Fixing this perception would get us a long way towards a better relationship between IT and non-IT parts of the business .</tokentext>
<sentencetext>You make many excellent points, yet I have to strongly disagree with this statement: 
"everyone in the company is treated like a customer" 

Unless you provide IT services to someone outside of your company, you're not working with customers, you're working with colleagues (simple test - how much is the person you're talking to paying for service?)
This is a very different dynamic as a customer has less responsibility than a coworker - other people in your company have to follow policies and adhere to guidelines that customers don't.
There's no real difference between someone asking for a new computer when their current system is perfectly fine and someone asking HR for an equal amount of money, yet the second request gets laughed out of the building.
The real customer is the person who can make the decision whether to outsource IT or not.
Fixing this perception would get us a long way towards a better relationship between IT and non-IT parts of the business.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350573</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349857</id>
	<title>Develop an SLA w/ your company</title>
	<author>goldspider</author>
	<datestamp>1245176040000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>In my department, we have an agreement with the rest of the company outlining the level of service that must be performed within a pre-determined amount of time, based on incident priority.  With the right tools, it's fairly easy to track the percentage of incidents resolved within the terms of the SLA.</p></htmltext>
<tokentext>In my department , we have an agreement with the rest of the company outlining the level of service that must be performed within a pre-determined amount of time , based on incident priority .
With the right tools , it 's fairly easy to track the percentage of incidents resolved within the terms of the SLA .</tokentext>
<sentencetext>In my department, we have an agreement with the rest of the company outlining the level of service that must be performed within a pre-determined amount of time, based on incident priority.
With the right tools, it's fairly easy to track the percentage of incidents resolved within the terms of the SLA.</sentencetext>
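The SLA tracking this commenter describes, the percentage of incidents resolved within a priority-based time target, could be computed along these lines. This is an editor's sketch; the priority names and target hours are invented, not taken from the comment.

```python
def sla_compliance(tickets, targets):
    """Percent of incidents resolved within the SLA target for their priority.

    tickets: list of (priority, hours_to_resolve) pairs.
    targets: mapping of priority -> maximum allowed hours.
    An empty workload counts as 100% compliant.
    """
    if not tickets:
        return 100.0
    met = sum(1 for prio, hours in tickets if hours <= targets[prio])
    return 100.0 * met / len(tickets)
```

For example, with hypothetical targets of 4 hours for "high" and 24 hours for "low" priority, two of three resolved-on-time incidents give roughly 66.7% compliance.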
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28357369</id>
	<title>My company doesn't measure</title>
	<author>xael</author>
	<datestamp>1245170160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>At my company, IT Performance is not measured at all!<nobr> <wbr></nobr>:D</htmltext>
<tokentext>At my company , IT Performance is not measured at all !
: D</tokentext>
<sentencetext>At my company, IT Performance is not measured at all!
:D</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356201</id>
	<title>metrics</title>
	<author>Anonymous</author>
	<datestamp>1245160920000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>My company has multiple IT departments and all are allowed to come up with their own metrics, so I have seen a few.  The most brilliant metric I have seen is to force all customers to attach a dollar value to every request (how much do you estimate the company will benefit from doing this).  The items with the largest value are done first and then the values of completed items are added up to show how much money IT is making/saving for the company.  Of course these numbers are very loosely based on reality to begin with, and when customers figure out that larger numbers get their thing done first you can guess what happens.  They never have problems getting funded, although someone might figure it out when they start making/saving more than the gross income of the company.</p></htmltext>
<tokentext>My company has multiple IT departments and all are allowed to come up with their own metrics , so I have seen a few .
The most brilliant metric I have seen is to force all customers to attach a dollar value to every request ( how much do you estimate the company will benefit from doing this ) .
The items with the largest value are done first and then the values of completed items are added up to show how much money IT is making/saving for the company .
Of course these numbers are very loosely based on reality to begin with and then when customers figure out that larger numbers get their thing done first you can guess what happens .
They never have problems getting funded , although someone might figure it out when they start making/saving more than the gross income of the company .</tokentext>
<sentencetext>My company has multiple IT departments and all are allowed to come up with their own metrics, so I have seen a few.
The most brilliant metric I have seen is to force all customers to attach a dollar value to every request (how much do you estimate the company will benefit from doing this).
The items with the largest value are done first and then the values of completed items are added up to show how much money IT is making/saving for the company.
Of course these numbers are very loosely based on reality to begin with, and when customers figure out that larger numbers get their thing done first you can guess what happens.
They never have problems getting funded, although someone might figure it out when they start making/saving more than the gross income of the company.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351543</id>
	<title>Play the game...</title>
	<author>gravyface</author>
	<datestamp>1245181860000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>and take a page from sales: do customer satisfaction surveys... short, 5-10 questions tops, conducted via telephone by your helpdesk staff.  People rarely have the guts to put someone (or their peers) down in person.  They'd much rather do it anonymously or through email (preferably indirectly through a manager or two).<br>Have the helpdesk smile while they're on the phone (yes, you really can tell if someone is really smiling over the phone), make sure the questions are light and to your benefit, with no techno-babble, and a simple 4-choice value for each one.  Order pizza afterwards and have a pow-wow with your survey callers and get a feel for how it went.  Thank them, promise nothing, and submit your (guaranteed) high scores in a pretty PDF to your overlords.<br>???<br>Bonus.</p></htmltext>
<tokentext>and take a page from sales : do customer satisfaction surveys... short , 5-10 questions tops , conducted via telephone by your helpdesk staff .
People rarely have the guts to put someone ( or their peers ) down in person .
They 'd much rather do it anonymously or through email ( preferably indirectly through a manager or two ) .Have the helpdesk smile while they 're on the phone ( yes , you really can tell if someone is really smiling over the phone ) , make sure the questions are light and to your benefit , with no techno-babble , and a simple 4-choice value for each one .
Order pizza afterwards and have a pow-wow with your survey callers and get a feel for how it went .
Thank them , promise nothing , and submit your ( guaranteed ) high scores in a pretty PDF to your overlords. ? ?
? Bonus .</tokentext>
<sentencetext>and take a page from sales: do customer satisfaction surveys... short, 5-10 questions tops, conducted via telephone by your helpdesk staff.
People rarely have the guts to put someone (or their peers) down in person.
They'd much rather do it anonymously or through email (preferably indirectly through a manager or two). Have the helpdesk smile while they're on the phone (yes, you really can tell if someone is smiling over the phone), make sure the questions are light and to your benefit, with no techno-babble, and a simple 4-choice value for each one.
Order pizza afterwards and have a pow-wow with your survey callers and get a feel for how it went.
Thank them, promise nothing, and submit your (guaranteed) high scores in a pretty PDF to your overlords.
???
Bonus.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349931</id>
	<title>Worked at a place like this</title>
	<author>Publikwerks</author>
	<datestamp>1245176220000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext>At my former employer, customers would call the national helpdesk, who were rated by their time on a call. Let me tell you, the type of customer service you get from that environment is crap. They would have the customer reboot their machine, and if that didn't work, they would escalate the call to a state level operations center that could dispatch technicians (where I worked). They were, for the most part, useless. They made the customers angry, and really served no purpose other than a filter.</htmltext>
<tokentext>At my former employer , customers would call the national helpdesk , who were rated by their time on a call .
Let me tell you , the type of customer service you get from that environment is crap .
They would have the customer reboot their machine , and if that did n't work , they would escalate the call to a state level operations center that could dispatch technicians ( where I worked ) .
They were , for the most part , useless .
They made the customers angry , and really served no purpose other than a filter .</tokentext>
<sentencetext>At my former employer, customers would call the national helpdesk, who were rated by their time on a call.
Let me tell you, the type of customer service you get from that environment is crap.
They would have the customer reboot their machine, and if that didn't work, they would escalate the call to a state level operations center that could dispatch technicians (where I worked).
They were, for the most part, useless.
They made the customers angry, and really served no purpose other than a filter.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637</id>
	<title>No cnt++</title>
	<author>korbin\_dallas</author>
	<datestamp>1245175380000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>I thought IT got paid for the number of times they said 'No' to us during the day.</p><p>go figure.</p></htmltext>
<tokentext>I thought IT got paid for the number of times they said 'No ' to us during the day . go figure .</tokentext>
<sentencetext>I thought IT got paid for the number of times they said 'No' to us during the day. go figure.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350727</id>
	<title>You're right, mostly</title>
	<author>steve buttgereit</author>
	<datestamp>1245178680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>If they're just monitoring how quickly tickets get closed and faster is always better, then your observation of the utility of the metric is spot on.  Otherwise, productivity is a valid measure and how fast someone turns around work is important.</p><p>Corporate IT performance, and support/operations performance can <em>include</em> a timeliness measure, but the interpretation and use of the metric matter.  So, for instance, the correct use of such a measure would be to see if a technician is turning work around quickly enough <em>relative</em> to others doing the same sort of work at a given level of quality.  YOU CANNOT USE THIS MEASURE WITHOUT CORRESPONDING QUALITY MEASURE(S)... it becomes meaningless in the absence of those controls much to the OP's point.</p><p>Still, productivity measures are real and when you know, and control for, the factors that go along with that then you can make use of such a measure to effectively bring value to your organization.</p><p>Many lesser IT managers will, though, just look at 'how fast' or 'how many'.  The last company I was with just looked at help desk tickets opened and closed, for instance, to judge help desk success.  The numbers closed went up and the numbers opened dropped... therefore success right?  Nope.  This overly simplistic view, with no real quality measure, actually meant that it was so hard to get the help desk to do anything that users just stopped opening them and found 'out-of-band' ways of getting help.  Those that were closed weren't necessarily complete, but again, users didn't get any value out of challenging so just gave up.  So what our operations management was seeing as success was actually a more accurate measure of their failure!</p><p>A simple random sampling of tickets with automated 'how did we do?' surveys could have collected this information and provided some meaning to the productivity measures... 
as well as whether people were closing things too fast or too slow while maintaining quality.</p></htmltext>
<tokentext>If they 're just monitoring how quickly tickets get closed and faster is always better , then your observation of the utility of the metric is spot on .
Otherwise , productivity is a valid measure and how fast someone turns around work is important.Corporate IT performance , and support/operations performance can include a timeliness measure , but the interpretation and use of the metric matter .
So , for instance , the correct use of such a measure would be to see if a technician is turning work around quickly enough relative to others doing the same sort of work at a given level of quality .
YOU CAN NOT USE THIS MEASURE WITHOUT CORRESPONDING QUALITY MEASURE ( S ) ... it becomes meaningless in the absence of those controls much to the OP 's point.Still , productivity measures are real and when you know , and control for , the factors that go along with that then you can make use of such a measure to effectively bring value to your organization.Many lesser IT managers will , though , just look at 'how fast ' or 'how many' .
The last company I was with just looked at help desk tickets opened and closed , for instance , to judge help desk success .
The numbers closed went up and the numbers opened dropped... therefore success right ?
Nope. This overly simplistic view , with no real quality measure , actually meant that it was so hard to get the help desk to do anything that users just stopped opening them and found 'out-of-band ' ways of getting help .
Those that were closed were n't necessarily complete , but again , users did n't get any value out of challenging so just gave up .
So what our operations management was seeing as success was actually a more accurate measure of their failure ! A simple random sampling of tickets with automated 'how did we do ?
' surveys could have collected this information and provided some meaning to the productivity measures... as well as if people were closing things too fast or too slow while maintaining quality .</tokentext>
<sentencetext>If they're just monitoring how quickly tickets get closed and faster is always better, then your observation of the utility of the metric is spot on.
Otherwise, productivity is a valid measure and how fast someone turns around work is important. Corporate IT performance, and support/operations performance, can include a timeliness measure, but the interpretation and use of the metric matter.
So, for instance, the correct use of such a measure would be to see if a technician is turning work around quickly enough relative to others doing the same sort of work at a given level of quality.
YOU CANNOT USE THIS MEASURE WITHOUT CORRESPONDING QUALITY MEASURE(S)... it becomes meaningless in the absence of those controls, much to the OP's point. Still, productivity measures are real, and when you know, and control for, the factors that go along with them, you can make use of such a measure to effectively bring value to your organization. Many lesser IT managers will, though, just look at 'how fast' or 'how many'.
The last company I was with just looked at help desk tickets opened and closed, for instance, to judge help desk success.
The numbers closed went up and the numbers opened dropped... therefore success, right?
Nope.  This overly simplistic view, with no real quality measure, actually meant that it was so hard to get the help desk to do anything that users just stopped opening them and found 'out-of-band' ways of getting help.
Those that were closed weren't necessarily complete, but again, users didn't get any value out of challenging so just gave up.
So what our operations management was seeing as success was actually a more accurate measure of their failure!
A simple random sampling of tickets with automated 'how did we do?' surveys could have collected this information and provided some meaning to the productivity measures... as well as whether people were closing things too fast or too slow while maintaining quality.</sentencetext>
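The random-sampling idea this commenter proposes, surveying a random subset of closed tickets instead of all of them, could be sketched as follows. This is an editor's illustration; the function name, survey rate, and seed parameter are all hypothetical.

```python
import random


def sample_for_survey(closed_ticket_ids, rate=0.1, seed=None):
    """Pick roughly `rate` of closed tickets for an automated
    'how did we do?' survey, always surveying at least one ticket.

    A fixed `seed` makes the draw reproducible for auditing.
    """
    rng = random.Random(seed)
    k = max(1, round(len(closed_ticket_ids) * rate))
    return rng.sample(closed_ticket_ids, k)
```

Sampling rather than surveying every ticket keeps the survey burden low while still attaching a quality signal to the raw open/close counts.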
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351259</id>
	<title>Make the numbers or lose your job!</title>
	<author>Anonymous</author>
	<datestamp>1245180840000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>In the past, I worked as a contractor in the internal (for employees only) helpdesk for a very large and famous company. For much of the time I was there, the count of tickets resolved per week was the primary measure of performance and stated by management as the deciding factor in who stayed and who went in frequent RIFs.</p><p>I don't believe the emphasis on speed contributed to better service. It was fairly obvious that any ticket that didn't look like an "easy resolve" was treated like a hot potato by the vast majority of techs, which is wrong, but understandable, since we lived and died by the numbers (I was among those who "died", eventually).</p></htmltext>
<tokentext>In the past , I worked as a contractor in the internal ( for employees only ) helpdesk for a very large and famous company .
For much of the time I was there , the count of tickets resolved per week was the primary measure of performance and stated by management as the deciding factor in who stayed and who went in frequent RIFs . I do n't believe the emphasis on speed contributed to better service .
It was fairly obvious that any ticket that did n't look like an " easy resolve " was treated like a hot potato by the vast majority of techs , which is wrong , but understandable , since we lived and died by the numbers ( I was among those who " died " , eventually ) .</tokentext>
<sentencetext>In the past, I worked as a contractor in the internal (for employees only) helpdesk for a very large and famous company.
For much of the time I was there, the count of tickets resolved per week was the primary measure of performance and stated by management as the deciding factor in who stayed and who went in frequent RIFs. I don't believe the emphasis on speed contributed to better service.
It was fairly obvious that any ticket that didn't look like an "easy resolve" was treated like a hot potato by the vast majority of techs, which is wrong, but understandable, since we lived and died by the numbers (I was among those who "died", eventually).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353609</id>
	<title>Re:ITIL</title>
	<author>Anonymous</author>
	<datestamp>1245146580000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>ITIL is an overhead. You have to pay consultants or train up staff and create a new layer of management.  The new managers create new processes and metrics and seem to find that whimsical report generation is of the essence. Most of these managers do not have a secondary education in science or math, and thus cannot appreciate the fallacy or futility of metrics applied to humans.</p></htmltext>
<tokentext>ITIL is an overhead .
You have to pay consultants or train up staff and create a new layer of management .
The new managers create new processes and metrics and seem to find that whimsical report generation is of the essence .
Most of these managers do not have a secondary education in science or math , and thus can not appreciate the fallacy or futility of metrics applied to humans .</tokentext>
<sentencetext>ITIL is an overhead.
You have to pay consultants or train up staff and create a new layer of management.
The new managers create new processes and metrics and seem to find that whimsical report generation is of the essence.
Most of these managers do not have a secondary education in science or math, and thus cannot appreciate the fallacy or futility of metrics applied to humans.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350951</id>
	<title>Re:This is why the IT department is always cut fir</title>
	<author>Anonymous</author>
	<datestamp>1245179520000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Measure in happiness of all parties.  If either side, sending or receiving, is unhappy, they either leave or get fired.  It's up to management to understand and establish the balance of limits and expectations to push.</p></htmltext>
<tokentext>Measure in happiness of all parties .
If either side , sending or receiving , is unhappy , they either leave or get fired .
It 's up to management to understand and establish the balance of limits and expectations to push .</tokentext>
<sentencetext>Measure in happiness of all parties.
If either side, sending or receiving, is unhappy, they either leave or get fired.
It's up to management to understand and establish the balance of limits and expectations to push.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350077</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349829</id>
	<title>Well..</title>
	<author>hyfe</author>
	<datestamp>1245175920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Any metric is, at best, indicative. You can spend all day designing a better metric and by the end, you're still not going to get anything better than, well, indicative.</p><p>As with all data-analysis, make sure that whoever's using these numbers knows how bad they are. If we're dealing with reports and decisions, make sure that there's a short explanatory comment by somebody in the know about the degree to which you feel these numbers are representative (example: overall performance is improved, but averages are skewed by a large number of complicated bugs on New Product).</p><p>Oh, and if the people making decisions are MBAs unable to read a single short sentence, you're screwed either way. Then you just have to roll with it<nobr> <wbr></nobr>:)</p></htmltext>
<tokentext>Any metric is , at best , indicative .
You can spend all day designing a better metric and by the end , you 're still not going to get anything better than , well , indicative . As with all data-analysis , make sure that whoever 's using these numbers knows how bad they are .
If we 're dealing with reports and decisions , make sure that there 's a short explanatory comment by somebody in the know about the degree to which you feel these numbers are representative ( example : overall performance is improved , but averages are skewed by a large number of complicated bugs on New Product ) . Oh , and if the people making decisions are MBAs unable to read a single short sentence , you 're screwed either way .
Then you just have to roll with it : )</tokentext>
<sentencetext>Any metric is, at best, indicative.
You can spend all day designing a better metric and by the end, you're still not going to get anything better than, well, indicative. As with all data-analysis, make sure that whoever's using these numbers knows how bad they are.
If we're dealing with reports and decisions, make sure that there's a short explanatory comment by somebody in the know about the degree to which you feel these numbers are representative (example: overall performance is improved, but averages are skewed by a large number of complicated bugs on New Product). Oh, and if the people making decisions are MBAs unable to read a single short sentence, you're screwed either way.
Then you just have to roll with it :)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352611</id>
	<title>Our way isn't best but it's not bad</title>
	<author>wagr</author>
	<datestamp>1245142800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>Here, our performance is measured with this hierarchy:</p><p>First, if the number of calls exceeds total man-service hours times 10, call the situation "swamped" and ignore the rest.  I.e. an average call is expected to take 5 minutes with a little time for overhead.</p><p>Second is the percentage of calls that are answered live (i.e. the users didn't have to leave a message and we had to get back to them), followed by the percentages of calls that are returned within: 30 minutes, 1 hour, 2 hours, 4 hours, and longer than 4 hours.</p><p>Third is the number of tickets still open at the end of the day.</p></htmltext>
<tokentext>Here , our performance is measured with this hierarchy : First , if the number of calls exceeds total man-service hours times 10 , call the situation " swamped " and ignore the rest .
I.e. average call is expected to take 5 minutes with a little time for overhead.Second is percentage of calls that are answered live ( i.e .
the users did n't have to leave a message and we had to get back to them ) followed by the percentages of calls that are returned within : 30 minutes , 1 hour , 2 hours , 4 hours , and took longer than 4 hours.Third is number of tickets still open at the end of the day .</tokentext>
<sentencetext>Here, our performance is measured with this hierarchy: First, if the number of calls exceeds total man-service hours times 10, call the situation "swamped" and ignore the rest.
I.e. an average call is expected to take 5 minutes with a little time for overhead. Second is the percentage of calls that are answered live (i.e.
the users didn't have to leave a message and we had to get back to them), followed by the percentages of calls that are returned within: 30 minutes, 1 hour, 2 hours, 4 hours, and longer than 4 hours. Third is the number of tickets still open at the end of the day.</sentencetext>
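The three-tier hierarchy this commenter lays out (a "swamped" cutoff at ten calls per man-service hour, then live-answer and callback-time percentages, then tickets open at end of day) is concrete enough to compute. Below is an editor's sketch; the field names and return structure are invented for illustration.

```python
def daily_metrics(calls, tech_hours):
    """Evaluate the commenter's three-tier hierarchy for one day.

    calls: list of dicts with 'answered_live' (bool), 'callback_minutes'
    (minutes until the call was returned; None if answered live), and
    'open_at_eod' (bool).  tech_hours: total man-service hours available.
    """
    # Tier 1: more than 10 calls per man-service hour means "swamped";
    # the comment says to ignore the remaining measures in that case.
    if len(calls) > tech_hours * 10:
        return {"swamped": True}

    live = sum(c["answered_live"] for c in calls)

    # Tier 2: bucket returned calls by how long the callback took.
    buckets = {"<=30m": 0, "<=1h": 0, "<=2h": 0, "<=4h": 0, ">4h": 0}
    for c in calls:
        if c["answered_live"]:
            continue
        m = c["callback_minutes"]
        if m <= 30:
            buckets["<=30m"] += 1
        elif m <= 60:
            buckets["<=1h"] += 1
        elif m <= 120:
            buckets["<=2h"] += 1
        elif m <= 240:
            buckets["<=4h"] += 1
        else:
            buckets[">4h"] += 1

    n = len(calls) or 1
    return {
        "swamped": False,
        "pct_live": 100.0 * live / n,
        "callback_pct": {k: 100.0 * v / n for k, v in buckets.items()},
        # Tier 3: tickets still open at the end of the day.
        "open_at_eod": sum(c["open_at_eod"] for c in calls),
    }
```

The ordering mirrors the comment: the swamped check gates everything else, so a flooded day is reported as such rather than as a bad live-answer rate.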
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349711</id>
	<title>Ah the age old quantify IT...</title>
	<author>Anonymous</author>
	<datestamp>1245175620000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>Well here's a newsflash: you can't quantify most IT jobs.  We are an ever-changing backbone to the business in most cases.  Metrics are meaningless to us.  If you'd like to have a way of evaluating IT then set goals.  Salespeople use metrics as a way of increasing sales and ridding themselves of dead weight.  As an IT person you can be on fire one week and dead the next and it's all the same.  So to answer the OP, take a picture of your ass and give it to the person that originally put the thought in your head that we need to justify what we do or how quickly we do it.</p></htmltext>
<tokenext>Well here 's a newsflash you ca n't quantify most IT jobs .
We are an ever changing backbone to the business in most cases .
Metrics are meaningless to us .
If you 'd like to have a way of evaluating IT then set goals .
Salespeople use metrics as a way of increasing sales and ridding themselves of dead weight .
As an IT person you can be on fire one week and dead the next and it 's all the same .
So to answer the OP , take a picture of your ass and give it to the person that originally put the thought in your head that we need to justify what we do or how quickly we do it .</tokentext>
<sentencetext>Well here's a newsflash you can't quantify most IT jobs.
We are an ever changing backbone to the business in most cases.
Metrics are meaningless to us.
If you'd like to have a way of evaluating IT then set goals.
Salespeople use metrics as a way of increasing sales and ridding themselves of dead weight.
As an IT person you can be on fire one week and dead the next and it's all the same.
So to answer the OP, take a picture of your ass and give it to the person that originally put the thought in your head that we need to justify what we do or how quickly we do it.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350393</id>
	<title>Dilbert does it best</title>
	<author>Anonymous</author>
	<datestamp>1245177660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>http://shutupandreboot.net/Funny_Stuff.html</p></htmltext>
<tokenext>http : //shutupandreboot.net/Funny_Stuff.html</tokentext>
<sentencetext>http://shutupandreboot.net/Funny_Stuff.html</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350747</id>
	<title>Re:Tracking invisibility</title>
	<author>Repossessed</author>
	<datestamp>1245178800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>If it's customer-facing support, sure.  But for internal help desk, tracking customer satisfaction leads to a lot of wasted time smoothing the egos of pissed off managers, or trying to get someone to accept company password policy without pissing them off, instead of a simple 'I'm sorry, you're not allowed to do that, take it up with the VP/CTO/whoever actually set that given policy'.</p></htmltext>
<tokenext>If it 's customer-facing support , sure .
But for internal help desk , tracking customer satisfaction leads to a lot of wasted time smoothing the egos of pissed off managers , or trying to get someone to accept company password policy without pissing them off , instead of a simple 'I 'm sorry you 're not allowed to do that , take it up with the VP/CTO/whoever actually set that given policy' .</tokentext>
<sentencetext>If it's customer-facing support, sure.
But for internal help desk, tracking customer satisfaction leads to a lot of wasted time smoothing the egos of pissed off managers, or trying to get someone to accept company password policy without pissing them off, instead of a simple 'I'm sorry you're not allowed to do that, take it up with the VP/CTO/whoever actually set that given policy'.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349709</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349839</id>
	<title>now that you know...</title>
	<author>Anonymous</author>
	<datestamp>1245175980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>you need to engineer easily fixable problems... up goes your rating... whee!!!</p></htmltext>
<tokenext>you need to engineer easily fixable problems... up goes your rating.. .
whee ! ! !</tokentext>
<sentencetext>you need to engineer easily fixable problems... up goes your rating...
whee!!!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355451</id>
	<title>You need something more comprehensive</title>
	<author>Anonymous</author>
	<datestamp>1245156240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You should have a lot more to measure than just the time it takes to close a ticket.</p><p>Your metrics should be used to measure individual performance as well as pinpoint needed training--and has the added value of highlighting the needy/silly users, too.</p><p>Our company measured:<br>--time to respond (as in how busy we were; do we need to hire more people?)<br>--type of problem (as a measure of how long it should take tech to resolve)<br>--time to resolve (as in does the tech know his stuff)<br>--number of repeat issues (are we really fixing the problem, i.e., providing a permanent solution vs. masking the symptoms; do we need more training on this issue?)<br>--number of repeat calls from the same user (are we blowing him off? is user too stupid to understand the solution?)</p><p>Metrics, if used correctly, are a great tool for understanding an employee's strengths and weaknesses. Also for getting a handle on how well you are communicating with your users.</p></htmltext>
<tokenext>You should have a lot more to measure than just the time it takes to close a ticket .
Your metrics should be used to measure individual performance as well as pinpoint needed training -- and has the added value of highlighting the needy/silly users , too .
Our company measured :
--time to respond ( as in how busy we were ; do we need to hire more people ? )
--type of problem ( as a measure of how long it should take tech to resolve )
--time to resolve ( as in does the tech know his stuff )
--number of repeat issues ( are we really fixing the problem , i.e. , providing a permanent solution vs. masking the symptoms ; do we need more training on this issue ? )
--number of repeat calls from the same user ( are we blowing him off ? is user too stupid to understand the solution ? )
Metrics , if used correctly , are a great tool for understanding an employee 's strengths and weaknesses .
Also for getting a handle on how well you are communicating with your users .</tokentext>
<sentencetext>You should have a lot more to measure than just the time it takes to close a ticket.
Your metrics should be used to measure individual performance as well as pinpoint needed training--and has the added value of highlighting the needy/silly users, too.
Our company measured:
--time to respond (as in how busy we were; do we need to hire more people?)
--type of problem (as a measure of how long it should take tech to resolve)
--time to resolve (as in does the tech know his stuff)
--number of repeat issues (are we really fixing the problem, i.e., providing a permanent solution vs. masking the symptoms; do we need more training on this issue?)
--number of repeat calls from the same user (are we blowing him off? is user too stupid to understand the solution?)
Metrics, if used correctly, are a great tool for understanding an employee's strengths and weaknesses.
Also for getting a handle on how well you are communicating with your users.</sentencetext>
</comment>
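The five measures in the comment above roll up naturally from per-ticket records. A minimal sketch, assuming a hypothetical `Ticket` shape (the field and function names are invented for illustration):

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Ticket:  # hypothetical record shape, invented for illustration
    user: str
    problem_type: str
    opened: datetime
    responded: datetime
    resolved: datetime

def summarize(tickets):
    """Roll up the comment's measures for a batch of tickets."""
    respond = [(t.responded - t.opened).total_seconds() / 60 for t in tickets]
    resolve = [(t.resolved - t.responded).total_seconds() / 60 for t in tickets]
    issues = Counter(t.problem_type for t in tickets)
    callers = Counter(t.user for t in tickets)
    return {
        "avg_minutes_to_respond": sum(respond) / len(respond),
        "avg_minutes_to_resolve": sum(resolve) / len(resolve),
        # same problem type logged repeatedly: masking symptoms, or a training gap?
        "repeat_issues": {k: n for k, n in issues.items() if n > 1},
        # same user calling repeatedly: blown off, or confused by the solution?
        "repeat_callers": {k: n for k, n in callers.items() if n > 1},
    }
```

Grouping the time-to-resolve averages by `problem_type` would give the comment's "how long it should take" baseline per category.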
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351165</id>
	<title>ticket system</title>
	<author>Anonymous</author>
	<datestamp>1245180360000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><p>We have had this problem as well at our company.  We recently started using zendesk as a way to track our trouble tickets.  This allows for easy web-based tracking of what we work on and you can set up quality milestones and see how you measure against them.</p><p>We look for accuracy of solution, response in a timely manner (fixing things can take weeks sometimes), communication kept updated, etc.  The biggest thing we've noticed is that since we update our tickets and the system notifies the user, they feel more in the loop on what's being done and therefore are generally happy.  I think most IT depts have problems keeping users informed throughout a process because we are too busy putting out fires and fixing things.</p></htmltext>
<tokenext>We have had this problem as well at our company .
We recently started using zendesk as a way to track our trouble tickets .
This allows for easy web-based tracking of what we work on and you can set up quality milestones and see how you measure against them .
We look for accuracy of solution , response in a timely manner ( fixing things can take weeks sometimes ) , communication kept updated , etc .
The biggest thing we 've noticed is that since we update our tickets and the system notifies the user , they feel more in the loop on what 's being done and therefore are generally happy .
I think most IT depts have problems keeping users informed throughout a process because we are too busy putting out fires and fixing things .</tokentext>
<sentencetext>We have had this problem as well at our company.
We recently started using zendesk as a way to track our trouble tickets.
This allows for easy web-based tracking of what we work on and you can set up quality milestones and see how you measure against them.
We look for accuracy of solution, response in a timely manner (fixing things can take weeks sometimes), communication kept updated, etc.
The biggest thing we've noticed is that since we update our tickets and the system notifies the user, they feel more in the loop on what's being done and therefore are generally happy.
I think most IT depts have problems keeping users informed throughout a process because we are too busy putting out fires and fixing things.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349749</id>
	<title>Another view?</title>
	<author>ITJC68</author>
	<datestamp>1245175740000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>As someone in the support field, I can say the company I work for looks at the # of steps in the ticket to resolve the issue, the overall time in resolving the issue, and the complexity of the issue. The goal is always to reduce the number of calls and steps, but call complexity also must be measured in the overall metric.</htmltext>
<tokenext>As someone in the support field the company I work for looks at the # of steps in the ticket to resolve the issue , the overall time in resolving the issue and the complexity of the issue .
The goal is always to reduce the amount of calls and steps but call complexity also must be measured in the overall metric .</tokentext>
<sentencetext>As someone in the support field the company I work for looks at the # of steps in the ticket to resolve the issue, the overall time in resolving the issue and the complexity of the issue.
The goal is always to reduce the amount of calls and steps but call complexity also must be measured in the overall metric.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349721</id>
	<title>Not QUITE the stupidest metric I can think of....</title>
	<author>gestalt_n_pepper</author>
	<datestamp>1245175620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>But it's close. Of course, closed tickets are something a manager can measure. Needless to say, it measures nothing meaningful.

For example, I tell a customer to reboot. Close the ticket. That takes little time and closes the ticket fast. In fact, I can improve my metrics by telling that same person to do this every 4 hours for several years. OR, I can get up, go to their desk, and solve the problem permanently. It takes longer, making my metrics look bad, but in reality-land (a land far, far away from management land), that person is doing productive work longer and more efficiently because the interruption and downtime have been removed.</htmltext>
<tokenext>But it 's close .
Of course , closed tickets are something a manager can measure .
Needless to say , it measures nothing meaningful .
For example , I tell a customer to reboot .
Close the ticket .
That takes little time and closes the ticket fast .
In fact , I can improve my metrics by telling that same person to do this every 4 hours for several years .
OR , I can get up , go to their desk , and solve the problem permanently .
It takes longer , making my metrics look bad , but in reality-land ( a land far , far away from management land ) , that person is doing productive work longer and more efficiently because the interruption and downtime have been removed .</tokentext>
<sentencetext>But it's close.
Of course, closed tickets are something a manager can measure.
Needless to say, it measures nothing meaningful.
For example, I tell a customer to reboot.
Close the ticket.
That takes little time and closes the ticket fast.
In fact, I can improve my metrics by telling that same person to do this every 4 hours for several years.
OR, I can get up, go to their desk, and solve the problem permanently.
It takes longer, making my metrics look bad, but in reality-land (a land far, far away from management land), that person is doing productive work longer and more efficiently because the interruption and downtime have been removed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28360297</id>
	<title>Re:count tickets never opened</title>
	<author>Anonymous</author>
	<datestamp>1245246660000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
<htmltext><div class="quote"><p>An IT-department, IMHO, should be working on making itself obsolete.</p></div><p>Exactly!</p><p>Not just departments, but individual people and roles.  I have always made it my goal to make any role I am in redundant.  It's great to work on all the hard problems and automate, streamline or otherwise do away with all of the mundane work making the job so simple it can be handed to a less skilled, less experienced or less novelty seeking person to do my job for less money.  The company wins because they get the same job done cheaper.  I win because I get to do interesting stuff only as long as it's interesting then give it to someone else to run with.  Sometimes there are losers when people in a similar role might find they have been streamlined out of a job but frankly I think if you are standing still you are going backwards.</p><p>It doesn't always work, some companies have no appetite for improvement.  Or should I say some managers or entire management lines have no interest in either being shown up on how they haven't optimised the existing staff output or have no interest in reducing headcount because it'd shrink their empire.</p></htmltext>
<tokenext>An IT-department , IMHO , should be working on making itself obsolete .
Exactly !
Not just departments , but individual people and roles .
I have always made it my goal to make any role I am in redundant .
It 's great to work on all the hard problems and automate , streamline or otherwise do away with all of the mundane work making the job so simple it can be handed to a less skilled , less experienced or less novelty seeking person to do my job for less money .
The company wins because they get the same job done cheaper .
I win because I get to do interesting stuff only as long as it 's interesting then give it to someone else to run with .
Sometimes there are losers when people in a similar role might find they have been streamlined out of a job but frankly I think if you are standing still you are going backwards .
It does n't always work , some companies have no appetite for improvement .
Or should I say some managers or entire management lines have no interest in either being shown up on how they have n't optimised the existing staff output or have no interest in reducing headcount because it 'd shrink their empire .</tokentext>
<sentencetext>An IT-department, IMHO, should be working on making itself obsolete.
Exactly!
Not just departments, but individual people and roles.
I have always made it my goal to make any role I am in redundant.
It's great to work on all the hard problems and automate, streamline or otherwise do away with all of the mundane work making the job so simple it can be handed to a less skilled, less experienced or less novelty seeking person to do my job for less money.
The company wins because they get the same job done cheaper.
I win because I get to do interesting stuff only as long as it's interesting then give it to someone else to run with.
Sometimes there are losers when people in a similar role might find they have been streamlined out of a job but frankly I think if you are standing still you are going backwards.
It doesn't always work, some companies have no appetite for improvement.
Or should I say some managers or entire management lines have no interest in either being shown up on how they haven't optimised the existing staff output or have no interest in reducing headcount because it'd shrink their empire.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350013</id>
	<title>Metrics = Manager is getting a bonus</title>
	<author>Anonymous</author>
	<datestamp>1245176460000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Whenever I see a metric that measures quantity instead of quality, that tells me the manager gets a bonus.  Hopefully, you're getting a piece of that bonus.</p></htmltext>
<tokenext>Whenever I see a metric that measures quantity instead of quality , that tells me the manager gets a bonus .
Hopefully , you 're getting a piece of that bonus .</tokentext>
<sentencetext>Whenever I see a metric that measures quantity instead of quality, that tells me the manager gets a bonus.
Hopefully, you're getting a piece of that bonus.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353319</id>
	<title>Only two measures matter</title>
	<author>nightsweat</author>
	<datestamp>1245145500000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>$ spent/user all inclusive, and user satisfaction surveys.<br>
You should be low on the $/user and a  little above middling on satisfaction.  If you're too high on satisfaction, you're likely overstaffed and could get the first number down.<br>
<br>

That's how the business is going to look at it unless you're generating revenue.  Then it's all about ROI.</htmltext>
<tokenext>$ spent/user all inclusive , and user satisfaction surveys .
You should be low on the $ /user and a little above middling on satisfaction .
If you 're too high on satisfaction , you 're likely overstaffed and could get the first number down .
That 's how the business is going to look at it unless you 're generating revenue .
Then it 's all about ROI .</tokentext>
<sentencetext>$ spent/user all inclusive, and user satisfaction surveys.
You should be low on the $/user and a  little above middling on satisfaction.
If you're too high on satisfaction, you're likely overstaffed and could get the first number down.
That's how the business is going to look at it unless you're generating revenue.
Then it's all about ROI.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350663</id>
	<title>So, why are you here?</title>
	<author>No-Cool-Nickname</author>
	<datestamp>1245178500000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Why are you here?<br>It ain't the money.  A network engineer, yours truly -- I rake in about, what,  85 grand a year?  You can't buy a decent sports car for that.<br>It ain't sex.  Hey, being here won't get you laid.  Oh, you're a dental hygienist?  I'm a Cisco Certified Internetwork Expert.  -Hello?!<br>What about fame?  Our failures are known.  Our successes...are not.  That's the company motto.  You save the world, they send you to some windowless office, give you a little lemonade and cookies, and show you your medal.  You don't even get to take it home.<br>So it ain't money, it ain't sex, it ain't fame.<br>What is it?<br>I say we are all here in this room because we believe.  We believe in technology, and we choose technology.  We believe in right and wrong, and we choose right.  Our cause is just.  Our enemies...everywhere.  They're all around us.  Some scary stuff out there.<br>Which brings us here... to the server farm.  You have all just stepped through the looking glass.  What you see, what you hear -- nothing is what it seems.</p><p>(paraphrased from 'The Recruit')</p></htmltext>
<tokenext>Why are you here ?
It ai n't the money .
A network engineer , yours truly -- I rake in about , what , 85 grand a year ?
You ca n't buy a decent sports car for that .
It ai n't sex .
Hey , being here wo n't get you laid .
Oh , you 're a dental hygienist ?
I 'm a Cisco Certified Internetwork Expert .
-Hello ? !
What about fame ?
Our failures are known .
Our successes...are not .
That 's the company motto .
You save the world , they send you to some windowless office , give you a little lemonade and cookies , and show you your medal .
You do n't even get to take it home .
So it ai n't money , it ai n't sex , it ai n't fame .
What is it ?
I say we are all here in this room because we believe .
We believe in technology , and we choose technology .
We believe in right and wrong , and we choose right .
Our cause is just .
Our enemies...everywhere .
They 're all around us .
Some scary stuff out there .
Which brings us here... to the server farm .
You have all just stepped through the looking glass .
What you see , what you hear -- nothing is what it seems .
( paraphrased from 'The Recruit ' )</tokentext>
<sentencetext>Why are you here?
It ain't the money.
A network engineer, yours truly -- I rake in about, what,  85 grand a year?
You can't buy a decent sports car for that.
It ain't sex.
Hey, being here won't get you laid.
Oh, you're a dental hygienist?
I'm a Cisco Certified Internetwork Expert.
-Hello?!
What about fame?
Our failures are known.
Our successes...are not.
That's the company motto.
You save the world, they send you to some windowless office, give you a little lemonade and cookies, and show you your medal.
You don't even get to take it home.
So it ain't money, it ain't sex, it ain't fame.
What is it?
I say we are all here in this room because we believe.
We believe in technology, and we choose technology.
We believe in right and wrong, and we choose right.
Our cause is just.
Our enemies...everywhere.
They're all around us.
Some scary stuff out there.
Which brings us here... to the server farm.
You have all just stepped through the looking glass.
What you see, what you hear -- nothing is what it seems.
(paraphrased from 'The Recruit')</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352325</id>
	<title>Work to the measurements</title>
	<author>bwcbwc</author>
	<datestamp>1245184800000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>If teams are being measured on how quickly they close tickets as the only metric, that is what they will target in their efforts. So you will end up with a bunch of dissatisfied customers who had their tickets closed without getting their problem solved. This metric is the biggest reason call center service is so lousy for so many companies.</p><p>If your company wants to measure turnaround time, a less-direct approach is better. Something like number of tickets closed in a month with separate categories for tickets based on whether they had to be escalated or not.  In addition to this, you need to measure the number of repeat calls from customers. There's always a few cranks that call over every little thing, but if you have a large number of customers calling back within a few days of their ticket closing, problems aren't getting solved the first time. This is better than just "reducing calls". A large number of calls is more likely to indicate a problem in design or production of the product rather than a problem in the call center. But repeat calls point back at the call center.</p><p>A customer satisfaction survey is also useful. In general you'll only get a self-selected response from the customers who are extremely happy or extremely dissatisfied, but even the ratio between the number of responses in those two groups tells you a lot. And if you get comments back from the customers, that's even better.</p><p>A large percentage (too large) of people spend time trying to game any system. This can range from legal activities like taking SAT prep classes to lobbying the government for favorable laws to illegal acts like adulterating toothpaste with ethylene glycol to reduce costs or shoplifting and returning merchandise for credit.</p><p>So think of ways someone could circumvent your metrics to boost their numbers without providing the desired customer service. 
For example, the repeat calls metric could be gamed if a call center operator doesn't notify (or blocks auto-notification) the customer that their first ticket has been closed. That could delay the customer calling back for status until the metric time had passed. You'd have to check the logs on the customer record to see if their email address was erased or if there was some other activity that shows a scam. Before you reward your "outstanding" employees, you need to do some cross-checking of the metrics to make sure they're real.</p></htmltext>
<tokenext>If teams are being measured on how quickly they close tickets as the only metric , that is what they will target in their efforts .
So you will end up with a bunch of dissatisfied customers who had their tickets closed without getting their problem solved .
This metric is the biggest reason call center service is so lousy for so many companies .
If your company wants to measure turnaround time , a less-direct approach is better .
Something like number of tickets closed in a month with separate categories for tickets based on whether they had to be escalated or not .
In addition to this , you need to measure the number of repeat calls from customers .
There 's always a few cranks that call over every little thing , but if you have a large number of customers calling back within a few days of their ticket closing , problems are n't getting solved the first time .
This is better than just " reducing calls " .
A large number of calls is more likely to indicate a problem in design or production of the product rather than a problem in the call center .
But repeat calls point back at the call center .
A customer satisfaction survey is also useful .
In general you 'll only get a self-selected response from the customers who are extremely happy or extremely dissatisfied , but even the ratio between the number of responses in those two groups tells you a lot .
And if you get comments back from the customers , that 's even better .
A large percentage ( too large ) of people spend time trying to game any system .
This can range from legal activities like taking SAT prep classes to lobbying the government for favorable laws to illegal acts like adulterating toothpaste with ethylene glycol to reduce costs or shoplifting and returning merchandise for credit .
So think of ways someone could circumvent your metrics to boost their numbers without providing the desired customer service .
For example , the repeat calls metric could be gamed if a call center operator does n't notify ( or blocks auto-notification ) the customer that their first ticket has been closed .
That could delay the customer calling back for status until the metric time had passed .
You 'd have to check the logs on the customer record to see if their email address was erased or if there was some other activity that shows a scam .
Before you reward your " outstanding " employees , you need to do some cross-checking of the metrics to make sure they 're real .</tokentext>
<sentencetext>If teams are being measured on how quickly they close tickets as the only metric, that is what they will target in their efforts.
So you will end up with a bunch of dissatisfied customers  who had their tickets closed without getting their problem solved.
This metric is the biggest reason call center service is so lousy for so many companies.
If your company wants to measure turnaround time, a less-direct approach is better.
Something like number of tickets closed in a month with separate categories for tickets based on whether they had to be escalated or not.
In addition to this, you need to measure the number of repeat calls from customers.
There's always a few cranks that call over every little thing, but if you have a large number of customers calling back within a few days of their ticket closing, problems aren't getting solved the first time.
This is better than just "reducing calls".
A large number of calls is more likely to indicate a problem in design or production of the product rather than a problem in the call center.
But repeat calls point back at the call center.
A customer satisfaction survey is also useful.
In general you'll only get a self-selected response from the customers who are extremely happy or extremely dissatisfied, but even the ratio between the number of responses in those two groups tells you a lot.
And if you get comments back from the customers, that's even better.
A large percentage (too large) of people spend time trying to game any system.
This can range from legal activities like taking SAT prep classes to lobbying the government for favorable laws to illegal acts like adulterating toothpaste with ethylene glycol to reduce costs or shoplifting and returning merchandise for credit.
So think of ways someone could circumvent your metrics to boost their numbers without providing the desired customer service.
For example, the repeat calls metric could be gamed if a call center operator doesn't notify (or blocks auto-notification) the customer that their first ticket has been closed.
That could delay the customer calling back for status until the metric time had passed.
You'd have to check the logs on the customer record to see if their email address was erased or if there was some other activity that shows a scam.
Before you reward your "outstanding" employees, you need to do some cross-checking of the metrics to make sure they're real.</sentencetext>
</comment>
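The repeat-call check bwcbwc recommends is easy to prototype. A minimal sketch under assumptions: "calling back within a few days" is read as a 3-day window, and the `closed`/`new_calls` data shapes are invented for illustration:

```python
from datetime import datetime, timedelta

# Assumption: "within a few days of their ticket closing" = 3 days.
REOPEN_WINDOW = timedelta(days=3)

def repeat_calls(closed, new_calls, window=REOPEN_WINDOW):
    """closed: {user: time their last ticket was closed};
    new_calls: list of (user, call_time) pairs (hypothetical shapes).
    Counts calls landing within `window` after that user's last close."""
    return sum(
        1
        for user, when in new_calls
        if user in closed and timedelta(0) <= when - closed[user] <= window
    )
```

Note the comment's gaming caveat: if the close notification is suppressed, the customer's callback slips past the window, so the count should be cross-checked against notification logs before anyone is rewarded on it.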
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350525</id>
	<title>Mixture of Customer Service and Time Metrics</title>
	<author>Anonymous</author>
	<datestamp>1245178080000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Customer service should be primary and just use SLA/Time metrics to make sure people aren't goofing off. People are more happy to have support from someone that is slow, communicates and is positive than someone who treats them like garbage and is quick.</p></htmltext>
<tokenext>Customer service should be primary and just use SLA/Time metrics to make sure people are n't goofing off .
People are more happy to have support from someone that is slow , communicates and is positive than someone who treats them like garbage and is quick .</tokentext>
<sentencetext>Customer service should be primary and just use SLA/Time metrics to make sure people aren't goofing off.
People are more happy to have support from someone that is slow, communicates and is positive than someone who treats them like garbage and is quick.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350433</id>
	<title>Proposed Metric</title>
	<author>pak9rabid</author>
	<datestamp>1245177840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>When your only IT guy goes on vacation for a week+, measure on a scale from 1 to 10 how much he/she was missed.</htmltext>
<tokenext>When your only IT guy goes on vacation for a week + , measure on a scale from 1 to 10 how much he/she was missed .</tokentext>
<sentencetext>When your only IT guy goes on vacation for a week+, measure on a scale from 1 to 10 how much he/she was missed.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352173</id>
	<title>It's not just performance!</title>
	<author>Hallmarc</author>
	<datestamp>1245184320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>"Performance" can be viewed as creating value.  But at what cost?  In the IT world, cost and value are often measured in incommensurate units.  Once you get a handle on cost you can start to tackle value.
I recommend starting at <a href="http://www.usenix.org/event/lisa05/tech/couch.html" title="usenix.org" rel="nofollow">http://www.usenix.org/event/lisa05/tech/couch.html</a> [usenix.org]</htmltext>
<tokenext>" Performance " can be viewed as creating value .
But at what cost ?
In the IT world , cost and value are often measured in incommensurate units .
Once you get a handle on cost you can start to tackle value .
I recommend starting at http : //www.usenix.org/event/lisa05/tech/couch.html [ usenix.org ]</tokentext>
<sentencetext>"Performance" can be viewed as creating value.
But at what cost?
In the IT world, cost and value are often measured in incommensurate units.
Once you get a handle on cost you can start to tackle value.
I recommend starting at http://www.usenix.org/event/lisa05/tech/couch.html [usenix.org]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351877</id>
	<title>Re:Sounds good to me.</title>
	<author>S7urm</author>
	<datestamp>1245183000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Unfortunately, those "Heroics" are what give us job stability. Management and the common user have no way to understand that preventative measures have saved their 4ss from the proverbial fires for a long time.</p><p>As you refine your work flows, and "work to make your department obsolete," you may actually <b> succeed </b> and that is NOT a good thing when you have a family to feed. Thus we allow things to slip through to show our fellows in the company that we are in fact <b> doing something </b></p><p>Pretty sad, but you all know it's true. Just ask anyone you know in Production Maintenance whether they perform their own preventative maintenance, and see how many nanoseconds it takes for them to chuckle.</p></htmltext>
<tokenext>Unfortunately , those " Heroics " are what give us job stability .
Management and the common user have no way to understand that preventative measures have saved their 4ss from the proverbial fires for a long time .
As you refine your work flows , and " work to make your department obsolete " you may actually succeed and that is NOT a good thing when you have a family to feed .
Thus we allow things to slip through to show our fellows in the company that we are in fact doing something .
Pretty sad , but you all know it 's true , and just ask anyone you know in Production Maintenance whether they perform their own preventative maintenance , see how many nanoseconds it takes for them to chuckle</tokentext>
<sentencetext>Unfortunately, those "Heroics" are what give us job stability.
Management and the common user have no way to understand that preventative measures have saved their 4ss from the proverbial fires for a long time.
As you refine your work flows, and "work to make your department obsolete," you may actually succeed and that is NOT a good thing when you have a family to feed.
Thus we allow things to slip through to show our fellows in the company that we are in fact doing something.
Pretty sad, but you all know it's true, and just ask anyone you know in Production Maintenance whether they perform their own preventative maintenance, see how many nanoseconds it takes for them to chuckle</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350957</id>
	<title>Re:This is why the IT department is always cut fir</title>
	<author>Feyshtey</author>
	<datestamp>1245179580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Excellent points.
<br> <br>
There's the IT pro that will be able to identify the actual root complaint of an unknowledgeable customer, fix the issue, and move on to the next case. And then there's the guy in [insert off-shore labor country] who costs 1/6th the wage who will run the customer around in circles for half an hour, make them irate, tell them the issue is due to an unsupported configuration, and then disconnect them trying to transfer them to another department. While the latter case has 'closed' a case in slightly more time than the former at 1/6th the wage cost, no issue has actually been resolved and the end user's productivity is a fraction of what it could have been with more appropriate funding in the IT staff.
<br> <br>
There's also the interpretation of the metrics.
Our IT group is measured in a number of ways. One of them is the uptime of the systems we support. We manage only the servers that host the applications of the end users (code developers). The applications are beyond our scope because they are constantly in flux by the devs, and everyone knows it would be an unrealistic hope for us to manage those applications. But if the devs push out corrupt or unstable code and their application is offline for half a day, it is reported as downtime at the end of the month. The reality is that we've met our obligations (and in fact, exceeded them). The machines were always online and stable. The code the devs put there sucked, but the infrastructure was perfectly stable. But the interpretation by management is that the application was offline, so we failed.
<br> <br>
From there you could debate the cost differential between purchasing new hardware or purchasing extended warranties and service agreements on outdated equipment, and how that impacts the level of service possible for an IT department. We spend thousands of dollars per year per unit to continue warranties on 5+ year-old hardware, where we could instead spend that much and get brand new hardware that would be under warranty for free for several years to come. Accounting says we can't afford new hardware (that would be more stable, more reliable, more powerful, more manageable, more cost-effective), but we can spend even more on just the warranties purchased yearly for the old crap.
<br> <br>
In the end, far too many IT departments are managed by people who have no clue about technical issues and who work from all-inclusive statements about the best-case scenarios in IT. The metrics they require you to provide (which take an appreciable percentage of your weekly man-hours to produce) will be misinterpreted (rarely in your favor) and be largely irrelevant to the actual function of your IT department. You can try to explain how the metrics actually give credence to your beliefs on how the funding for the department should be reallocated for the sake of efficiency, but<nobr> <wbr></nobr>....
<br> <br>
Unfortunately in the mind of the management and accounting teams, the alternative would be to allow the black magic voodoo in the basement to continue without absolute (and faulty) quantification. That can't be allowed. Those IT freaks would start sacrificing chickens and making bonfires out of bundles of cash.</htmltext>
<tokenext>Excellent points .
There 's the IT pro that will be able to identify the actual root complaint of an unknowledgeable customer , fix the issue , and move on to the next case .
And then there 's the guy in [ insert off-shore labor country ] who costs 1/6th the wage who will run the customer around in circles for half an hour , make them irate , tell them the issue is due to an unsupported configuration , and then disconnect them trying to transfer them to another department .
While the latter case has 'closed ' a case in slightly more time than the former at 1/6th the wage cost , no issue has actually been resolved and the end user 's productivity is a fraction of what it could have been with more appropriate funding in the IT staff .
There 's also the interpretation of the metrics .
Our IT group is measured in a number of ways .
One of them is the uptime of the systems we support .
We manage only the servers that host the applications of the end users ( code developers ) .
The applications are beyond our scope because they are constantly in flux by the devs , and everyone knows it would be an unrealistic hope for us to manage those applications .
But if the devs push out corrupt or unstable code and their application is offline for half a day , it is reported as downtime at the end of the month .
The reality is that we 've met our obligations ( and in fact , exceeded them ) .
The machines were always online and stable .
The code the devs put there sucked , but the infrastructure was perfectly stable .
But the interpretation by management is that the application was offline so we failed .
From there you could debate the cost differential between purchasing new hardware or purchasing extended warranties and service agreements on outdated equipment , and how that impacts the level of service possible for an IT department .
We spend thousands of dollars per year per unit to continue warranties on 5year-old + hardware , where we could instead spend that much and get brand new hardware that would be under warranty for free for several years to come .
Accounting says we ca n't afford new hardware ( that would be more stable , more reliable , more powerful , more manageable , more cost-effective ) , but we can spend even more on just the warranties purchased yearly for the old crap .
In the end , far too many IT departments are managed by people who have no clue about technical issues and who work from all-inclusive statements about the best-case scenarios in IT .
The metrics they require you to provide ( which take an appreciable percentage of your weekly man-hours to produce ) will be misinterpreted ( rarely in your favor ) and be largely irrelevant to the actual function of your IT department .
You can try to explain how the metrics actually give credence to your beliefs on how the funding for the department should be reallocated for the sake of efficiency , but ... .
Unfortunately in the mind of the management and accounting teams , the alternative would be to allow the black magic voodoo in the basement to continue without absolute ( and faulty ) quantification .
That ca n't be allowed .
Those IT freaks would start sacrificing chickens and making bonfires out of bundles of cash .</tokentext>
<sentencetext>Excellent points.
There's the IT pro that will be able to identify the actual root complaint of an unknowledgeable customer, fix the issue, and move on to the next case.
And then there's the guy in [insert off-shore labor country] who costs 1/6th the wage who will run the customer around in circles for half an hour, make them irate, tell them the issue is due to an unsupported configuration, and then disconnect them trying to transfer them to another department.
While the latter case has 'closed' a case in slightly more time than the former at 1/6th the wage cost, no issue has actually been resolved and the end user's productivity is a fraction of what it could have been with more appropriate funding in the IT staff.
There's also the interpretation of the metrics.
Our IT group is measured in a number of ways.
One of them is the uptime of the systems we support.
We manage only the servers that host the applications of the end users (code developers).
The applications are beyond our scope because they are constantly in flux by the devs, and everyone knows it would be an unrealistic hope for us to manage those applications.
But if the devs push out corrupt or unstable code and their application is offline for half a day, it is reported as downtime at the end of the month.
The reality is that we've met our obligations (and in fact, exceeded them).
The machines were always online and stable.
The code the devs put there sucked, but the infrastructure was perfectly stable.
But the interpretation by management is that the application was offline so we failed.
From there you could debate the cost differential between purchasing new hardware or purchasing extended warranties and service agreements on outdated equipment, and how that impacts the level of service possible for an IT department.
We spend thousands of dollars per year per unit to continue warranties on 5year-old+ hardware, where we could instead spend that much and get brand new hardware that would be under warranty for free for several years to come.
Accounting says we can't afford new hardware (that would be more stable, more reliable, more powerful, more manageable, more cost-effective), but we can spend even more on just the warranties purchased yearly for the old crap.
In the end, far too many IT departments are managed by people who have no clue about technical issues and who work from all-inclusive statements about the best-case scenarios in IT.
The metrics they require you to provide (which take an appreciable percentage of your weekly man-hours to produce) will be misinterpreted (rarely in your favor) and be largely irrelevant to the actual function of your IT department.
You can try to explain how the metrics actually give credence to your beliefs on how the funding for the department should be reallocated for the sake of efficiency, but ....
Unfortunately in the mind of the management and accounting teams, the alternative would be to allow the black magic voodoo in the basement to continue without absolute (and faulty) quantification.
That can't be allowed.
Those IT freaks would start sacrificing chickens and making bonfires out of bundles of cash.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350077</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355333</id>
	<title>Re:Not good to count number of tickets</title>
	<author>Jaime2</author>
	<datestamp>1245155580000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I do some maintenance development.  It takes me ten times as long to fix, test, and deploy an application as it does to fix the data corruption that results from the crash that the bug causes.  However, by our metrics, one hundred data fixes per month is guaranteed to keep me under my SLA no matter how many other problems I solve slowly.  If I ever choose to fix the issue, my SLA compliance will go into the toilet both because fixing it takes longer than the SLA and because I no longer get to log easy fixes.  Some organizations fix the first problem by allowing fixes under a process with a different SLA, but no one can fix the second problem.  I simply don't want this problem fixed, ever.<br>
<br>
If I were doing a good job, the problems that come up should all be first-timers.  Any issue should be root-caused and fixed before it becomes recurring.  This implies that a healthy organization will have longer ticket close times than a broken organization.  Anybody whose problem resolution times are going down has given up on fixing the underlying cause.<br>
<br>
Actual example -- my company is now putting in password reset self-service.  This is a huge win for the whole company (except for the security guys, they lost that argument).  However, this is likely to increase the average ticket close time for level 1.</htmltext>
<tokenext>I do some maintenance development .
It takes me ten times as long to fix , test , and deploy an application as it does to fix the data corruption that results from the crash that the bug causes .
However , by our metrics , one hundred data fixes per month is guaranteed to keep me under my SLA no matter how many other problems I solve slowly .
If I ever choose to fix the issue , my SLA compliance will go into the toilet both because fixing it takes longer than the SLA and because I no longer get to log easy fixes .
Some organizations fix the first problem by allowing fixes under a process with a different SLA , but no one can fix the second problem .
I simply do n't want this problem fixed ever .
If I were doing a good job , the problems that come up should all be first-timers .
Any issue should be root-caused and fixed before it becomes recurring .
This implies that a healthy organization will have longer ticket close times than a broken organization .
Anybody whose problem resolution times are going down has given up on fixing the underlying cause .
Actual example -- my company is now putting in password reset self-service .
This is a huge win for the whole company ( except for the security guys , they lost that argument ) .
However , this is likely to increase the average ticket close time for level 1 .</tokentext>
<sentencetext>I do some maintenance development.
It takes me ten times as long to fix, test, and deploy an application as it does to fix the data corruption that results from the crash that the bug causes.
However, by our metrics, one hundred data fixes per month is guaranteed to keep me under my SLA no matter how many other problems I solve slowly.
If I ever choose to fix the issue, my SLA compliance will go into the toilet both because fixing it takes longer than the SLA and because I no longer get to log easy fixes.
Some organizations fix the first problem by allowing fixes under a process with a different SLA, but no one can fix the second problem.
I simply don't want this problem fixed ever.
If I were doing a good job, the problems that come up should all be first-timers.
Any issue should be root-caused and fixed before it becomes recurring.
This implies that a healthy organization will have longer ticket close times than a broken organization.
Anybody whose problem resolution times are going down has given up on fixing the underlying cause.
Actual example -- my company is now putting in password reset self-service.
This is a huge win for the whole company (except for the security guys, they lost that argument).
However, this is likely to increase the average ticket close time for level 1.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349861</parent>
</comment>
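The perverse incentive Jaime2 describes can be illustrated with a toy SLA calculation (the function and the numbers are invented for illustration):

```python
# Toy SLA-compliance calculation illustrating the incentive above: a pile
# of quick data fixes keeps compliance perfect, while a single long
# root-cause fix drags it down, even though it prevents future tickets.
def sla_compliance(resolution_hours, sla_hours):
    """Fraction of tickets resolved within the SLA."""
    misses = sum(1 for h in resolution_hours if h > sla_hours)
    return 1 - misses / len(resolution_hours)
```

A hundred one-hour data fixes against a four-hour SLA score a perfect 1.0; add one forty-hour root-cause fix and compliance drops, which is exactly why the metric discourages the real fix.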
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349693</id>
	<title>Sliding Average</title>
	<author>PeteLarson</author>
	<datestamp>1245175560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I think the focus should ultimately be on reducing calls.  So, perhaps, you're doing really well if the average calls per week continues a downward trend each week.</p><p>However, since many IT departments are actually split into different subdivisions, how can you measure the group that just takes calls, addresses issues, and closes tickets?  It may be their ONLY job to close tickets/issues.  They may have exceedingly little control over any underlying problems.  So, to measure their performance, perhaps number of issues closed is not entirely wrong.  But managers of this group should be evaluated over time.  Any recurring issues should be brought up as potential bugs, user training needs, or general improvements to the system, whatever that might mean.</p></htmltext>
<tokenext>I think the focus should ultimately be on reducing calls .
So , perhaps , you 're doing really well if the average calls per week continues a downward trend each week .
However , since many IT departments are actually split into different subdivisions , how can you measure the group that just takes calls , addresses issues , and closes tickets ?
It may be their ONLY job to close tickets/issues .
They may have exceedingly little control over any underlying problems .
So , to measure their performance , perhaps number of issues closed is not entirely wrong .
But , managers of this group should be evaluated over time .
Any recurring issues should be brought up as potential bugs or user training or just needing general improvement to the system , whatever that might mean .</tokentext>
<sentencetext>I think the focus should ultimately be on reducing calls.
So, perhaps, you're doing really well if the average calls per week continues a downward trend each week.
However, since many IT departments are actually split into different subdivisions, how can you measure the group that just takes calls, addresses issues, and closes tickets?
It may be their ONLY job to close tickets/issues.
They may have exceedingly little control over any underlying problems.
So, to measure their performance, perhaps number of issues closed is not entirely wrong.
But, managers of this group should be evaluated over time.
Any recurring issues should be brought up as potential bugs or user training or just needing general improvement to the system, whatever that might mean.</sentencetext>
</comment>
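The downward-trend check described above can be sketched as follows; the window size is an arbitrary choice:

```python
# Sketch: decide whether weekly call counts are trending downward using a
# sliding average over a trailing window (window size is arbitrary).
def sliding_averages(weekly_calls, window=3):
    """Moving average of each trailing window once enough history exists."""
    out = []
    for i in range(window, len(weekly_calls) + 1):
        out.append(sum(weekly_calls[i - window:i]) / window)
    return out

def trending_down(weekly_calls, window=3):
    """True when each successive sliding average is no higher than the last."""
    avgs = sliding_averages(weekly_calls, window)
    return all(a >= b for a, b in zip(avgs, avgs[1:]))
```

Averaging over a window smooths out one bad week, so a single spike does not break the trend the way a raw week-over-week comparison would.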
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350785</id>
	<title>Re:Time to close tickets is 1 factor, not the ONLY</title>
	<author>Anonymous</author>
	<datestamp>1245178980000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>You should not have closed the ticket until you confirmed the customer could print yellow.  You should have "resolved" the ticket.  Only the customer can "close" the ticket by confirming resolution was effective.</p></htmltext>
<tokenext>You should not have closed the ticket until you confirmed the customer could print yellow .
You should have " resolved " the ticket .
Only the customer can " close " the ticket by confirming resolution was effective .</tokentext>
<sentencetext>You should not have closed the ticket until you confirmed the customer could print yellow.
You should have "resolved" the ticket.
Only the customer can "close" the ticket by confirming resolution was effective.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351973</id>
	<title>Anonymous Coward</title>
	<author>Anonymous</author>
	<datestamp>1245183420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Life in IT....</p><p>When things are going well.......</p><p>Business:  "What the hell are we paying all these IT people for?  I don't see them doing a damn thing.  get rid of them.."</p><p>When things aren't going so well......</p><p>Business: "What are we paying all these IT people for?  Why didn't they prevent this????"</p></htmltext>
<tokenext>Life in IT ....
When things are going well .......
Business : " What the hell are we paying all these IT people for ?
I do n't see them doing a damn thing .
Get rid of them .. "
When things are n't going so well ......
Business : " What are we paying all these IT people for ?
Why did n't they prevent this ? ? ? ?
"</tokentext>
<sentencetext>Life in IT....
When things are going well.......
Business: "What the hell are we paying all these IT people for?
I don't see them doing a damn thing.
Get rid of them.."
When things aren't going so well......
Business: "What are we paying all these IT people for?
Why didn't they prevent this????
"</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354837</id>
	<title>Re:I think it should be measured...</title>
	<author>gmhowell</author>
	<datestamp>1245152940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ok, but can we score it like golf?</p></htmltext>
<tokenext>Ok , but can we score it like golf ?</tokentext>
<sentencetext>Ok, but can we score it like golf?</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351901</id>
	<title>Commits</title>
	<author>Anonymous</author>
	<datestamp>1245183120000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I work for a company that evaluates programmer performance based on number of lines committed and total number of commits. I was actually reprimanded for not committing enough because, unlike my fellow programmers, I didn't commit after every save. I have since learned better. Enter, enter, enter. Commit. Space, space, space. Commit...</p></htmltext>
<tokenext>I work for a company that evaluates programmer performance based on number of lines committed and total number of commits .
I was actually reprimanded for not committing enough because , unlike my fellow programmers I did n't commit after every save .
I have since learned better .
Enter , enter , enter .
Commit. Space , space , space .
Commit.. .</tokentext>
<sentencetext>I work for a company that evaluates programmer performance based on number of lines committed and total number of commits.
I was actually reprimanded for not committing enough because, unlike my fellow programmers I didn't commit after every save.
I have since learned better.
Enter, enter, enter.
Commit. Space, space, space.
Commit...</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350313</id>
	<title>From a business point of view...?</title>
	<author>BlueKitties</author>
	<datestamp>1245177420000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext>Bouncing customers is a good way to keep them from calling back -- grandma is much more likely to phone up 'lil Tim for computer advice if she knows the hotline tech is going to bounce her to ten different places; where I work, we get a good bit of troubleshooting work because the customers hate calling the hotlines provided by the manufacturer. Sadly, annoying your customers is a good way to keep them from calling back, and as long as your product is good enough people will still pay up. E.g. I'm screwed into Suddenlink where I live. After being promised $85.01 TV/Net, I got a $100.00 bill because of hidden fees. Guess what -- I'm screwed into paying, because the only alternative (Cox) was bought out by Suddenlink.</htmltext>
<tokenext>Bouncing customers is a good way to keep them from calling back -- grandma is much more likely to phone up 'lil Tim for computer advice if she knows the hotline tech is going to bounce her to ten different places ; where I work , we get a good bit of troubleshooting work because the customers hate calling the hotlines provided by the manufacturer .
Sadly , annoying your customers is a good way to keep them from calling back , and as long as your product is good enough people will still pay up .
E.g. I 'm screwed into Suddenlink where I live .
After being promised $ 85.01 TV/Net , I got a $ 100.00 bill because of hidden fees .
Guess what -- I 'm screwed into paying , because the only alternative ( Cox ) was bought out by Suddenlink .</tokentext>
<sentencetext>Bouncing customers is a good way to keep them from calling back -- grandma is much more likely to phone up 'lil Tim for computer advice if she knows the hotline tech is going to bounce her to ten different places; where I work, we get a good bit of troubleshooting work because the customers hate calling the hotlines provided by the manufacturer.
Sadly, annoying your customers is a good way to keep them from calling back, and as long as your product is good enough people will still pay up.
E.g. I'm screwed into Suddenlink where I live.
After being promised $85.01 TV/Net, I got a $100.00 bill because of hidden fees.
Guess what -- I'm screwed into paying, because the only alternative (Cox) was bought out by Suddenlink.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350723</id>
	<title>Anonymous Coward.</title>
	<author>Anonymous</author>
	<datestamp>1245178680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>help desks are generally not measured by how many cases they close.  Here are the normal metrics that help desks use for performance "this is a 30,000 foot level mind you"</p><p>1.  Costs - how many butts in seats you have<br>2.  Volume - how many incidents you receive<br>3.  Aging - how long does it take you to respond to incidents<br>4.  Customer satisfaction - how happy are your customers.</p><p>If you can keep 1, 2, and 3 down, number 4 typically takes care of itself.</p><p>A good IT department will deploy a team of problem management specialists that will work problems instead of incidents.  That is what keeps the calls from coming in, in the first place.  And goes back to numbers 1 and 2 on the list.</p></htmltext>
<tokenext>help desks are generally not measured by how many cases they close .
Here are the normal metrics that help desks use for performance " this is a 30,000 foot level mind you " .
1 . Costs - how many butts in seats you have .
2 . Volume - how many incidents you receive .
3 . Aging - how long does it take you to respond to incidents .
4 . Customer satisfaction - how happy are your customers .
If you can keep 1 , 2 , and 3 down , number 4 typically takes care of itself .
A good IT department will deploy a team of problem management specialists that will work problems instead of incidents .
That is what keeps the calls from coming in , in the first place .
And goes back to number 1 and 2 on the list .</tokentext>
<sentencetext>help desks are generally not measured by how many cases they close.
Here are the normal metrics that help desks use for performance "this is a 30,000 foot level mind you".
1. Costs - how many butts in seats you have.
2. Volume - how many incidents you receive.
3. Aging - how long does it take you to respond to incidents.
4. Customer satisfaction - how happy are your customers.
If you can keep 1, 2, and 3 down, number 4 typically takes care of itself.
A good IT department will deploy a team of problem management specialists that will work problems instead of incidents.
That is what keeps the calls from coming in, in the first place.
And goes back to number 1 and 2 on the list.</sentencetext>
</comment>
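As a rough illustration, the four help-desk metrics in the comment above could be computed from ticket records like this (field names are invented for illustration, and cost is reduced to headcount times a per-seat figure):

```python
# Sketch of the four help-desk metrics from the comment above: cost,
# volume, aging, and customer satisfaction. Ticket field names are
# invented for illustration; assumes at least one ticket.
def help_desk_metrics(tickets, staff_count, cost_per_seat):
    """Each ticket is a dict with invented fields:
    'response_hours' (float) and 'satisfaction' (1-5 survey score)."""
    volume = len(tickets)
    cost = staff_count * cost_per_seat
    aging = sum(t["response_hours"] for t in tickets) / volume
    satisfaction = sum(t["satisfaction"] for t in tickets) / volume
    return {"cost": cost, "volume": volume,
            "aging": aging, "satisfaction": satisfaction}
```

As the comment says, the first three are the levers; satisfaction mostly follows from keeping them down.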
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349631</id>
	<title>When testing a new blade server install...</title>
	<author>Anonymous</author>
	<datestamp>1245175320000</datestamp>
	<modclass>Funny</modclass>
	<modscore>4</modscore>
	<htmltext><p>We usually try to measure how many libraries of congress we can get to the new blade server in under 5 minutes.</p><p>our best is 12.</p></htmltext>
<tokenext>We usually try to measure how many libraries of congress we can get to the new blade server in under 5 minutes .
Our best is 12 .</tokentext>
<sentencetext>We usually try to measure how many libraries of congress we can get to the new blade server in under 5 minutes.
Our best is 12.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28357249</id>
	<title>WTF/MIN</title>
	<author>Anonymous</author>
	<datestamp>1245169140000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>A good metric is counting the WTFs per minute that the customer shouts on the telephone. Fewer is better.</p></htmltext>
<tokentext>A good metric is counting the WTFs per minute that the customer shouts on the telephone .
Fewer is better .</tokentext>
<sentencetext>A good metric is counting the WTFs per minute that the customer shouts on the telephone.
Fewer is better.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353699</id>
	<title>Some companies are better than others.</title>
	<author>Anonymous</author>
	<datestamp>1245146940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I am blessed to work in the IT dept of a company that "gets" it.  There are metrics thrown about, but at the end of the day the only metric that carries any weight is:  Did everything that needed to get done, get done?</p></htmltext>
<tokentext>I am blessed to work in the IT dept of a company that " gets " it .
There are metrics thrown about , but at the end of the day the only metric that carries any weight is : Did everything that needed to get done , get done ?</tokentext>
<sentencetext>I am blessed to work in the IT dept of a company that "gets" it.
There are metrics thrown about, but at the end of the day the only metric that carries any weight is:  Did everything that needed to get done, get done?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351911</id>
	<title>Re:count tickets never opened</title>
	<author>Darinbob</author>
	<datestamp>1245183120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Too many IT departments are either outsourced or treated like they're outsourced.  That is, they must have metrics to justify their existence.  A metric of the number of bugs that were never opened won't work because you can't measure that and charge based on that number.  The "perfect" IT department that never has any open problems because they anticipate everyone's needs in advance and make no mistakes has no way to bill the rest of the company.</htmltext>
<tokentext>Too many IT departments are either outsourced or treated like they 're outsourced .
That is , they must have metrics to justify their existence .
A metric of the number of bugs that were never opened wo n't work because you ca n't measure that and charge based on that number .
The " perfect " IT department that never has any open problems because they anticipate everyone 's needs in advance and make no mistakes has no way to bill the rest of the company .</tokentext>
<sentencetext>Too many IT departments are either outsourced or treated like they're outsourced.
That is, they must have metrics to justify their existence.
A metric of the number of bugs that were never opened won't work because you can't measure that and charge based on that number.
The "perfect" IT department that never has any open problems because they anticipate everyone's needs in advance and make no mistakes has no way to bill the rest of the company.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</id>
	<title>count tickets never opened</title>
	<author>molecular</author>
	<datestamp>1245175440000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext><p>I think the poster has a point.<br>A nice metric might be the count of tickets that are never opened.<br>An IT department, IMHO, should be working on making itself obsolete.</p></htmltext>
<tokentext>I think the poster has a point .
A nice metric might be the count of tickets that are never opened .
An IT department , IMHO , should be working on making itself obsolete .</tokentext>
<sentencetext>I think the poster has a point.
A nice metric might be the count of tickets that are never opened.
An IT department, IMHO, should be working on making itself obsolete.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350107</id>
	<title>Re:Sounds good to me.</title>
	<author>molecular</author>
	<datestamp>1245176820000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext><div class="quote"><p>For example, for every fax successfully sent via the fax server without IT intervention, the IT department gets one point.</p><p>For every fax that needs IT intervention to be sent, the IT department loses one point.</p></div><p>I like this idea, because it has the side effect of forcing management to define in writing exactly what services the IT department / infrastructure is actually supposed to provide, and it also forces them to define some metrics and a mechanism to measure this. This enables the IT department to respond to inappropriate requests in a nicely formal way. Also, management can prioritize based on this to help IT fend off the odd jerk who thinks their particular problem is the most important in the world and should be taken care of ASAP. Such a system would also provide transparency to management and users as to wtf these IT jerks are doing all day and why.</p>
	</htmltext>
<tokentext>For example , for every fax successfully sent via the fax server without IT intervention , the IT department gets one point .
For every fax that needs IT intervention to be sent , the IT department loses one point .
I like this idea , because it has the side effect of forcing management to define in writing exactly what services the IT department / infrastructure is actually supposed to provide , and it also forces them to define some metrics and a mechanism to measure this .
This enables the IT department to respond to inappropriate requests in a nicely formal way .
Also , management can prioritize based on this to help IT fend off the odd jerk who thinks their particular problem is the most important in the world and should be taken care of ASAP .
Such a system would also provide transparency to management and users as to wtf these IT jerks are doing all day and why .</tokentext>
<sentencetext>For example, for every fax successfully sent via the fax server without IT intervention, the IT department gets one point.
For every fax that needs IT intervention to be sent, the IT department loses one point.
I like this idea, because it has the side effect of forcing management to define in writing exactly what services the IT department / infrastructure is actually supposed to provide, and it also forces them to define some metrics and a mechanism to measure this.
This enables the IT department to respond to inappropriate requests in a nicely formal way.
Also, management can prioritize based on this to help IT fend off the odd jerk who thinks their particular problem is the most important in the world and should be taken care of ASAP.
Such a system would also provide transparency to management and users as to wtf these IT jerks are doing all day and why.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354687</id>
	<title>Re:count tickets never opened</title>
	<author>syousef</author>
	<datestamp>1245152040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>An IT department, IMHO, should be working on making itself obsolete.</p></div><p>Possibly the most unrealistic thing I've ever heard here, and that's saying something. NO GROUP of people is going to work on putting themselves out of a job!</p>
	</htmltext>
<tokentext>An IT department , IMHO , should be working on making itself obsolete .
Possibly the most unrealistic thing I 've ever heard here , and that 's saying something .
NO GROUP of people is going to work on putting themselves out of a job !</tokentext>
<sentencetext>An IT department, IMHO, should be working on making itself obsolete.
Possibly the most unrealistic thing I've ever heard here, and that's saying something.
NO GROUP of people is going to work on putting themselves out of a job!
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350679</id>
	<title>Re:Sliding Average</title>
	<author>elrous0</author>
	<datestamp>1245178560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>That's a bad metric. My IT dept. gets very few calls. But it's because people have realized that it's pointless to call them for help. Telling someone to call the help desk there is a form of sarcasm.</htmltext>
<tokentext>That 's a bad metric .
My IT dept. gets very few calls .
But it 's because people have realized that it 's pointless to call them for help .
Telling someone to call the help desk there is a form of sarcasm .</tokentext>
<sentencetext>That's a bad metric.
My IT dept. gets very few calls.
But it's because people have realized that it's pointless to call them for help.
Telling someone to call the help desk there is a form of sarcasm.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349693</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355281</id>
	<title>I conspire with my users</title>
	<author>BenEnglishAtHome</author>
	<datestamp>1245155340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>My users resent the fact that their bosses don't know how to measure their competence, just like my IT co-workers do.  Over time (it's taken me most of a decade) I've played on my customers' shared irritation at the PHBs and convinced them to conspire with me to game the system.  They call me first, not the help desk.  I fix their problem.  Then they open a ticket; I instantly assign it to myself, document, and close.  Voila - super-fast closure.</p><p>OK, this is an overstatement.  I mostly work the tickets I'm assigned.  But in nearly every case where I get a chance (those "meet in the hallway" requests for help, mostly) I'll try to run the procedures I outlined in my first paragraph.</p></htmltext>
<tokentext>My users resent the fact that their bosses do n't know how to measure their competence , just like my IT co-workers do .
Over time ( it 's taken me most of a decade ) I 've played on my customers ' shared irritation at the PHBs and convinced them to conspire with me to game the system .
They call me first , not the help desk .
I fix their problem .
Then they open a ticket ; I instantly assign it to myself , document , and close .
Voila - super-fast closure .
OK , this is an overstatement .
I mostly work the tickets I 'm assigned .
But in nearly every case where I get a chance ( those " meet in the hallway " requests for help , mostly ) I 'll try to run the procedures I outlined in my first paragraph .</tokentext>
<sentencetext>My users resent the fact that their bosses don't know how to measure their competence, just like my IT co-workers do.
Over time (it's taken me most of a decade) I've played on my customers' shared irritation at the PHBs and convinced them to conspire with me to game the system.
They call me first, not the help desk.
I fix their problem.
Then they open a ticket; I instantly assign it to myself, document, and close.
Voila - super-fast closure.
OK, this is an overstatement.
I mostly work the tickets I'm assigned.
But in nearly every case where I get a chance (those "meet in the hallway" requests for help, mostly) I'll try to run the procedures I outlined in my first paragraph.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350733</id>
	<title>Time to close is a bad single metric, but...</title>
	<author>Anonymous</author>
	<datestamp>1245178740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>I used to work at a smallish ISP -- about $8m revenue/year -- with an industry reputation for well above average customer support.</p><p>The way we measured worker productivity was a combination of a few metrics:<br>(1) Tickets Closed -- How many tickets were closed in a time period.<br>(2) Customer Satisfaction -- How satisfied the customers were with the employee's work.<br>(3) Difficulty -- How difficult the tickets were.</p><p>A manager would also randomly sample each of the employee's tickets and make sure that the "difficulty" claimed by the employee was reasonable.</p><p>These metrics together give you a rough estimate of output in a support environment.  Tickets closed will be impacted by the difficulty of the tickets you accept.  The guy who takes a long time to solve hard problems is just as valuable as the guy who solves many easy problems -- you just need to make sure you have the right mix of both.  And for actual quality of work, nobody is more likely to discover and bring to your attention poor quality work than your customer.</p></htmltext>
<tokentext>I used to work at a smallish ISP -- about $ 8m revenue/year -- with an industry reputation for well above average customer support .
The way we measured worker productivity was a combination of a few metrics :
( 1 ) Tickets Closed -- How many tickets were closed in a time period .
( 2 ) Customer Satisfaction -- How satisfied the customers were with the employee 's work .
( 3 ) Difficulty -- How difficult the tickets were .
A manager would also randomly sample each of the employee 's tickets and make sure that the " difficulty " claimed by the employee was reasonable .
These metrics together give you a rough estimate of output in a support environment .
Tickets closed will be impacted by the difficulty of the tickets you accept .
The guy who takes a long time to solve hard problems is just as valuable as the guy who solves many easy problems -- you just need to make sure you have the right mix of both .
And for actual quality of work , nobody is more likely to discover and bring to your attention poor quality work than your customer .</tokentext>
<sentencetext>I used to work at a smallish ISP -- about $8m revenue/year -- with an industry reputation for well above average customer support.
The way we measured worker productivity was a combination of a few metrics:
(1) Tickets Closed -- How many tickets were closed in a time period.
(2) Customer Satisfaction -- How satisfied the customers were with the employee's work.
(3) Difficulty -- How difficult the tickets were.
A manager would also randomly sample each of the employee's tickets and make sure that the "difficulty" claimed by the employee was reasonable.
These metrics together give you a rough estimate of output in a support environment.
Tickets closed will be impacted by the difficulty of the tickets you accept.
The guy who takes a long time to solve hard problems is just as valuable as the guy who solves many easy problems -- you just need to make sure you have the right mix of both.
And for actual quality of work, nobody is more likely to discover and bring to your attention poor quality work than your customer.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350401</id>
	<title>Re:obvious</title>
	<author>CarpetShark</author>
	<datestamp>1245177660000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Customer Satisfaction</p></div></blockquote><p>I'm not even sure that concept exists in the minds of IT service users.  Certainly not in the minds of those service users who are afraid of technology, don't understand it, and blame the admins when they press delete and something gets deleted.</p><blockquote><div><p>pro-active problem solving</p></div></blockquote><p>In some areas of IT, "pro-active" equates to breaking things that aren't broken.</p>
	</htmltext>
<tokentext>Customer Satisfaction
I 'm not even sure that concept exists in the minds of IT service users .
Certainly not in the minds of those service users who are afraid of technology , do n't understand it , and blame the admins when they press delete and something gets deleted .
pro-active problem solving
In some areas of IT , " pro-active " equates to breaking things that are n't broken .</tokentext>
<sentencetext>Customer Satisfaction
I'm not even sure that concept exists in the minds of IT service users.
Certainly not in the minds of those service users who are afraid of technology, don't understand it, and blame the admins when they press delete and something gets deleted.
pro-active problem solving
In some areas of IT, "pro-active" equates to breaking things that aren't broken.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651</id>
	<title>obvious</title>
	<author>spikedvodka</author>
	<datestamp>1245175380000</datestamp>
	<modclass>Funny</modclass>
	<modscore>2</modscore>
	<htmltext><p>Customer Satisfaction, and pro-active problem solving</p></htmltext>
<tokentext>Customer Satisfaction , and pro-active problem solving</tokentext>
<sentencetext>Customer Satisfaction, and pro-active problem solving</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28361127</id>
	<title>ACDs are good metrics</title>
	<author>drdeath1</author>
	<datestamp>1245251640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>I have worked at a tier one place for over 2 years now, and I have to say you always want to cut down on return calls etc., but by definition it is not tier 1's job to do this. It is their job to gather information, document information, and try to resolve any issues they can within a short period of time; the reason for this is that tier 1 is the primary point of contact for pretty much the whole network. There simply have to be time limits in place to ensure work flow. Although they don't deploy one at my current job, I am a big fan of ACD systems as a metric for performance, and not closed tickets. If there is a recurring issue or problem, it is tier 2's job to resolve it. When this breakdown or line in the hierarchy is crossed, the team cannot function at maximum efficiency. So my point is: if you can't fix something in the allotted time you are given, it needs to be escalated; if not (exceeding time limits consistently), you are causing a huge obstacle for not only your team but the department's work flow as well.</htmltext>
<tokentext>I have worked at a tier one place for over 2 years now , and I have to say you always want to cut down on return calls etc. , but by definition it is not tier 1 's job to do this .
It is their job to gather information , document information , and try to resolve any issues they can within a short period of time ; the reason for this is that tier 1 is the primary point of contact for pretty much the whole network .
There simply have to be time limits in place to ensure work flow .
Although they do n't deploy one at my current job , I am a big fan of ACD systems as a metric for performance , and not closed tickets .
If there is a recurring issue or problem , it is tier 2 's job to resolve it .
When this breakdown or line in the hierarchy is crossed , the team can not function at maximum efficiency .
So my point is : if you ca n't fix something in the allotted time you are given , it needs to be escalated ; if not ( exceeding time limits consistently ) , you are causing a huge obstacle for not only your team but the department 's work flow as well .</tokentext>
<sentencetext>I have worked at a tier one place for over 2 years now, and I have to say you always want to cut down on return calls etc., but by definition it is not tier 1's job to do this.
It is their job to gather information, document information, and try to resolve any issues they can within a short period of time; the reason for this is that tier 1 is the primary point of contact for pretty much the whole network.
There simply have to be time limits in place to ensure work flow.
Although they don't deploy one at my current job, I am a big fan of ACD systems as a metric for performance, and not closed tickets.
If there is a recurring issue or problem, it is tier 2's job to resolve it.
When this breakdown or line in the hierarchy is crossed, the team cannot function at maximum efficiency.
So my point is: if you can't fix something in the allotted time you are given, it needs to be escalated; if not (exceeding time limits consistently), you are causing a huge obstacle for not only your team but the department's work flow as well.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351421</id>
	<title>Re:Stupid metrics</title>
	<author>javelinco</author>
	<datestamp>1245181380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Call times are a good place to start.  They point out likely problems.  The issue here was that the manager didn't realize the purpose of the metric - and used it incorrectly.  As I'm sure you know, the flag should cause the manager to monitor your calls.  If your calls are good, and under five minutes, then a note should be added that removes you from the flagging for a while.  That way the manager can concentrate on the other issues until they have time to check you out again - and they SHOULD check you out again, if you continue to be flagged.  And that should be transparent to you, unless you are screwing up.

It's simple, but that doesn't mean you are working for managers that are well trained on their tools - no more than some of your coworkers were trained on theirs.</htmltext>
<tokentext>Call times are a good place to start .
They point out likely problems .
The issue here was that the manager did n't realize the purpose of the metric - and used it incorrectly .
As I 'm sure you know , the flag should cause the manager to monitor your calls .
If your calls are good , and under five minutes , then a note should be added that removes you from the flagging for a while .
That way the manager can concentrate on the other issues until they have time to check you out again - and they SHOULD check you out again , if you continue to be flagged .
And that should be transparent to you , unless you are screwing up .
It 's simple , but that does n't mean you are working for managers that are well trained on their tools - no more than some of your coworkers were trained on theirs .</tokentext>
<sentencetext>Call times are a good place to start.
They point out likely problems.
The issue here was that the manager didn't realize the purpose of the metric - and used it incorrectly.
As I'm sure you know, the flag should cause the manager to monitor your calls.
If your calls are good, and under five minutes, then a note should be added that removes you from the flagging for a while.
That way the manager can concentrate on the other issues until they have time to check you out again - and they SHOULD check you out again, if you continue to be flagged.
And that should be transparent to you, unless you are screwing up.
It's simple, but that doesn't mean you are working for managers that are well trained on their tools - no more than some of your coworkers were trained on theirs.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349927</id>
	<title>Depends on the position</title>
	<author>loteck</author>
	<datestamp>1245176220000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>For a level 1-2 support position, metrics should probably be focused on how efficiently tickets are resolved.

Once you get into administrative functions and management, those positions should be the ones focusing on increasing stability and reducing the number of tickets submitted in the first place. Hopefully your managers and administrators are being assessed for their success in those areas, just as the junior staff are being assessed for how efficiently they can process the issues.</htmltext>
<tokentext>For a level 1-2 support position , metrics should probably be focused on how efficiently tickets are resolved .
Once you get into administrative functions and management , those positions should be the ones focusing on increasing stability and reducing the number of tickets submitted in the first place .
Hopefully your managers and administrators are being assessed for their success in those areas , just as the junior staff are being assessed for how efficiently they can process the issues .</tokentext>
<sentencetext>For a level 1-2 support position, metrics should probably be focused on how efficiently tickets are resolved.
Once you get into administrative functions and management, those positions should be the ones focusing on increasing stability and reducing the number of tickets submitted in the first place.
Hopefully your managers and administrators are being assessed for their success in those areas, just as the junior staff are being assessed for how efficiently they can process the issues.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349835</id>
	<title>Lots of different metrics</title>
	<author>z4ce</author>
	<datestamp>1245175980000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Time to resolution is a perfectly acceptable metric for a help desk department. For services, some of the important metrics are availability (including performance!), mean-time-between-failure, number of new releases, percent of successful releases, etc. For specific processes you should have metrics as well - for example, how long the new employee on-boarding process takes, or how long it takes to bring additional capacity online.</p><p>I recommend you talk to people who are experts in IT Business Service Management. If you're in the US, one of my previous employers (www.maryville.com) could help you.</p></htmltext>
<tokentext>Time to resolution is a perfectly acceptable metric for a help desk department .
For services , some of the important metrics are availability ( including performance ! ) , mean-time-between-failure , number of new releases , percent of successful releases , etc .
For specific processes you should have metrics as well - for example , how long the new employee on-boarding process takes , or how long it takes to bring additional capacity online .
I recommend you talk to people who are experts in IT Business Service Management .
If you 're in the US , one of my previous employers ( www.maryville.com ) could help you .</tokentext>
<sentencetext>Time to resolution is a perfectly acceptable metric for a help desk department.
For services, some of the important metrics are availability (including performance!), mean-time-between-failure, number of new releases, percent of successful releases, etc.
For specific processes you should have metrics as well - for example, how long the new employee on-boarding process takes, or how long it takes to bring additional capacity online.
I recommend you talk to people who are experts in IT Business Service Management.
If you're in the US, one of my previous employers (www.maryville.com) could help you.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350411</id>
	<title>It's Quality, not Quantity</title>
	<author>DarthVain</author>
	<datestamp>1245177720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I will admit I am not sure what would make the best IT metric of service. However, I can tell you without a shadow of a doubt what does NOT make a good metric, and how many tickets you close is one of them.</p><p>I think my organization must use that metric for evaluation. When I call, I get a ticket. Then they generate a ticket that they created a ticket, and send me a ticket. Then nothing happens for a long time. Then, after I get tired of waiting, I call. Another ticket is generated about the first ticket. Eventually someone will look at it and say, oh, we are not responsible for that, would you like us to make a ticket to flag this problem? Great. In the end, months later, someone may or may not call you about the final ticket that is essentially about not being able to help you at all, and ask if you wish the ticket removed. Otherwise it is assumed that it has been taken care of after a while, thus satisfying all the rest of the tickets in some sort of orgasmic cascading ticket extravaganza. Then at year end they see they have closed 8 billion tickets, congratulate each other on a job well done, pats on the back and bonuses for everyone. Huzzah!</p><p>That has been my experience so far anyway.</p></htmltext>
<tokentext>I will admit I am not sure what would make the best IT metric of service .
However , I can tell you without a shadow of a doubt what does NOT make a good metric , and how many tickets you close is one of them .
I think my organization must use that metric for evaluation .
When I call , I get a ticket .
Then they generate a ticket that they created a ticket , and send me a ticket .
Then nothing happens for a long time .
Then , after I get tired of waiting , I call .
Another ticket is generated about the first ticket .
Eventually someone will look at it and say , oh , we are not responsible for that , would you like us to make a ticket to flag this problem ?
Great .
In the end , months later , someone may or may not call you about the final ticket that is essentially about not being able to help you at all , and ask if you wish the ticket removed .
Otherwise it is assumed that it has been taken care of after a while , thus satisfying all the rest of the tickets in some sort of orgasmic cascading ticket extravaganza .
Then at year end they see they have closed 8 billion tickets , congratulate each other on a job well done , pats on the back and bonuses for everyone .
Huzzah !
That has been my experience so far anyway .</tokentext>
<sentencetext>I will admit I am not sure what would make the best IT metric of service.
However I can tell you without a shadow of a doubt what does NOT make a good metric, and how many tickets you close is one of them.I think my organization must use that metric for evaluation.
When I call, I get a ticket.
Then they generate a ticket, that they created a ticket, and send me a ticket.
Then nothing happens for a long time.
Then after I get tired of waiting, I call.
Another ticket is generated about the first ticket.
Eventually someone will look at it, and say oh we are not responsible for that, would you like use to make a ticket to flag this problem?
Great. In the end months later someone may or may not call you about the final ticket that is essentially about not being able to help you at all, ask if you wish the ticket removed.
Otherwise it is assumed that it has been taken care of after awhile, and thus satisfying all the rest of the tickets in some sort of orgasmic cascading ticket extravaganza.
Then at year end they see they have close 8 Billion tickets, congratulate each other on a  job well done, pats on back and bonuses for everyone.
Hazzah! That has been my experience so far anyway.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353691</id>
	<title>ITIL is great, but....</title>
	<author>NeutronCowboy</author>
	<datestamp>1245146940000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Ultimately, no amount of metrics is going to save you from idiotic bosses who don't understand what the metrics actually measure, and who try to game the system. Couple of examples:<br>* Amount of time customer spends on hold: you take the hit for an understaffed department.<br>* Length of Calls: you will be forced through a script that offloads the actual work to another department.<br>* Duration of time tickets are open: you'll get hit every time a customer leaves a ticket open, which is basically always.<br>* Duration of time that you work on a ticket: you'll be forced to again offload to either a different department, or provide some lame hack that will break in about 2 days.<br>* Customer satisfaction as measured by surveys: damn near nobody replies to them. Not to mention that there's no standard for what is good and excellent. You'll get hit by the prick customer who thinks that debugging his app on the fly is par for the course.<br>* Customer satisfaction as measured by renewal of licenses: you'll be at the mercy of the account managers, and at the mercy of the overall economy.</p><p>And on and on. For every single metric that you come up with, I'll show you a real-life example of how it was abused by an incompetent/malicious front-line drone, manager or executive.</p><p>Here's the only thing that'll work: settle on a metric. Get everyone - drone, manager, executive - to agree on what the shortcomings are and how the metric can be gamed. Then, when it comes to review, make sure that the spreadsheet is accompanied by a discussion on what the data means, how it came about and what the root causes behind it are.</p><p>Yes, it's - almost - a pipe dream. But as much as I've seen perfectly valid metrics being ruined, I've seen sucky metrics be used to their full capability in turning a department around.</p><p>The takeaway is that collecting metrics is the easy part. The hard part is what to do with them.</p></htmltext>
<tokenext>Ultimately , no amount of metrics is going to save you from idiotic bosses who do n't understand what the metrics actually measure , and who try to game the system .
Couple of examples : * Amount of time customer spends on hold : you take the hit for an understaffed department .
* Length of Calls : you will be forced through a script that offloads the actual work to another department .
* Duration of time tickets are open : you 'll get hit every time a customer leaves a ticket open , which is basically always .
* Duration of time that you work on a ticket : you 'll be forced to again offload to either a different department , or provide some lame hack that will break in about 2 days .
* Customer satisfaction as measured by surveys : damn near nobody replies to them .
Not to mention that there 's no standard for what is good and excellent .
You 'll get hit by the prick customer who thinks that debugging his app on the fly is par for the course .
* Customer satisfaction as measured by renewal of licenses : you 'll be at the mercy of the account managers , and at the mercy of the overall economy . And on and on .
For every single metric that you come up with , I 'll show you a real-life example of how it was abused by an incompetent/malicious front-line drone , manager or executive . Here 's the only thing that 'll work : settle on a metric .
Get everyone - drone , manager , executive - to agree on what the shortcomings are and how the metric can be gamed .
Then , when it comes to review , make sure that the spreadsheet is accompanied by a discussion on what the data means , how it came about and what the root causes behind it are . Yes , it 's - almost - a pipe dream .
But as much as I 've seen perfectly valid metrics being ruined , I 've seen sucky metrics be used to their full capability in turning a department around . The takeaway is that collecting metrics is the easy part .
The hard part is what to do with them .</tokentext>
<sentencetext>Ultimately, no amount of metrics is going to save you from idiotic bosses who don't understand what the metrics actually measure, and who try to game the system.
Couple of examples: * Amount of time customer spends on hold: you take the hit for an understaffed department.
* Length of Calls: you will be forced through a script that offloads the actual work to another department.
* Duration of time tickets are open: you'll get hit every time a customer leaves a ticket open, which is basically always.
* Duration of time that you work on a ticket: you'll be forced to again offload to either a different department, or provide some lame hack that will break in about 2 days.
* Customer satisfaction as measured by surveys: damn near nobody replies to them.
Not to mention that there's no standard for what is good and excellent.
You'll get hit by the prick customer who thinks that debugging his app on the fly is par for the course.
* Customer satisfaction as measured by renewal of licenses: you'll be at the mercy of the account managers, and at the mercy of the overall economy. And on and on.
For every single metric that you come up with, I'll show you a real-life example of how it was abused by an incompetent/malicious front-line drone, manager or executive. Here's the only thing that'll work: settle on a metric.
Get everyone - drone, manager, executive - to agree on what the shortcomings are and how the metric can be gamed.
Then, when it comes to review, make sure that the spreadsheet is accompanied by a discussion on what the data means, how it came about and what the root causes behind it are. Yes, it's - almost - a pipe dream.
But as much as I've seen perfectly valid metrics being ruined, I've seen sucky metrics be used to their full capability in turning a department around. The takeaway is that collecting metrics is the easy part.
The hard part is what to do with them.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913</id>
	<title>ITIL</title>
	<author>prakslash</author>
	<datestamp>1245176220000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>4</modscore>
	<htmltext><i>Shouldn't we be focused on reducing calls, rather than simply closing them quickly?</i> <br>
We should be focussed on both.
<br> <br>
<i> My question is: How is your IT performance measured, and how do you think it should be measured?</i> <br>
<a href="http://en.wikipedia.org/wiki/ITIL" title="wikipedia.org">ITIL</a> [wikipedia.org] principles are a great starting point. <br>
Examples include using Key Performance Indicators (KPIs) such as those at the bottom of this <a href="http://www.itlibrary.org/index.php?page=Service_Desk" title="itlibrary.org">page</a> [itlibrary.org] and this <a href="http://www.itlibrary.org/index.php?page=Incident_Management" title="itlibrary.org">page</a> [itlibrary.org].</htmltext>
<tokenext>Should n't we be focused on reducing calls , rather than simply closing them quickly ?
We should be focussed on both .
My question is : How is your IT performance measured , and how do you think it should be measured ?
ITIL [ wikipedia.org ] principles are a great starting point .
Examples include using Key Performance Indicators ( KPIs ) such as those at the bottom of this page [ itlibrary.org ] and this page [ itlibrary.org ] .</tokentext>
<sentencetext>Shouldn't we be focused on reducing calls, rather than simply closing them quickly?
We should be focussed on both.
My question is: How is your IT performance measured, and how do you think it should be measured?
ITIL [wikipedia.org] principles are a great starting point.
Examples include using Key Performance Indicators (KPIs) such as those at the bottom of this page [itlibrary.org] and this page [itlibrary.org].</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351521</id>
	<title>Not a metric</title>
	<author>Anonymous</author>
	<datestamp>1245181740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>There is no metric to measure competent, responsible people.  Managers should stop trying to do this.</p></htmltext>
<tokenext>There is no metric to measure competent , responsible people .
Managers should stop trying to do this .</tokentext>
<sentencetext>There is no metric to measure competent, responsible people.
Managers should stop trying to do this.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349859</id>
	<title>Re:count tickets never openend</title>
	<author>starglider29a</author>
	<datestamp>1245176040000</datestamp>
	<modclass>Funny</modclass>
	<modscore>3</modscore>
	<htmltext>True! And my measure of being a good husband is how many affairs I DIDN'T have!<br> <br>
A) How do you count that?
B) Dude, even SKYNET had an IT department.<br> <br>
"Yeah, uh, hi... my directive is to nuke Redmond/PaloAlto (pick one), but... heh heh... I can't find the launch codes... could you reset my... oh, wait. Here they are. The sticky note fell off my monitor."</htmltext>
<tokenext>True !
And my measure of being a good husband is how many affairs I DID N'T have !
A ) How do you count that ?
B ) Dude , even SKYNET had an IT department .
" Yeah , uh , hi... my directive is to nuke Redmond/PaloAlto ( pick one ) , but... heh heh... I ca n't find the launch codes... could you reset my... oh , wait .
Here they are .
The sticky note fell off my monitor .
"</tokentext>
<sentencetext>True!
And my measure of being a good husband is how many affairs I DIDN'T have!
A) How do you count that?
B) Dude, even SKYNET had an IT department.
"Yeah, uh, hi... my directive is to nuke Redmond/PaloAlto (pick one), but... heh heh... I can't find the launch codes... could you reset my... oh, wait.
Here they are.
The sticky note fell off my monitor.
"</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350595</id>
	<title>Some factors to quantify quality IT</title>
	<author>jcwynholds</author>
	<datestamp>1245178320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Some metrics to consider:</p><ul>
<li>overall cost of IT dept.</li><li>Per capita trouble tickets *</li><li>time to close tickets *</li><li>cost per closed ticket</li><li>cost savings vs. contractors / hired guns</li></ul><p>If you are good, then the last one will justify your existence.</p><p>* = Might also prove cluelessness of user base</p></htmltext>
<tokenext>Some metrics to consider :
overall cost of IT dept .
Per capita trouble tickets *
time to close tickets *
cost per closed ticket
cost savings vs. contractors / hired guns
If you are good , then the last one will justify your existence .
* = Might also prove cluelessness of user base</tokentext>
<sentencetext>Some metrics to consider:
overall cost of IT dept.
Per capita trouble tickets *
time to close tickets *
cost per closed ticket
cost savings vs. contractors / hired guns
If you are good, then the last one will justify your existence.
* = Might also prove cluelessness of user base</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351969</id>
	<title>Re:I think it should be measured...</title>
	<author>Anonymous</author>
	<datestamp>1245183420000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><div class="quote"><p>...by the number of callers left alive at the end of the day.</p></div><p>Weird, we have been using that as our failure rate metric.</p>
	</htmltext>
<tokenext>...by the number of callers left alive at the end of the day . Weird , we have been using that as our failure rate metric .</tokentext>
<sentencetext>...by the number of callers left alive at the end of the day. Weird, we have been using that as our failure rate metric.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356633</id>
	<title>metrics, what's that</title>
	<author>tuomoks</author>
	<datestamp>1245163920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Performance == money OR at least it used to be that way. Seriously, there is so much wrong in "performance" in IT today that it isn't even funny! I myself miss the days when IT was for profit, a profit center with its own budget, autonomy, etc, as other business units in any company / corporation. It really changed when "kids" came to this business, all they wanted was eight to five, a paycheck, a manager's blessing for their existence, a carrot once or twice a year, you know? Some of them carried grades from economy schools, had degrees in statistics, even had courses in speaking and were able to convince the middle management that instead of positive budgets the paper metrics were the way to promotions, etc - the top management really didn't and doesn't have time to look at details so anything that looks good must be good?</p><p>Seen these "performance metrics", "performance reports", "performance evaluations" (by managers who once a year need to know what their subordinates do - very weird?), "performance statistics" (you know, statistics don't lie!), and so on over years - have seen the results, I'd give about (at most) five years to any organization / department which starts that way, then there will be reorganization, termination, whatever - seen that about 20 times in small and large corporations over 40 years in IT.</p><p>Amazingly, not so much in IT (excluding very few) but in other organizations which are still profit centers, they still are going strong?</p></htmltext>
<tokenext>Performance = = money OR at least it used to be that way .
Seriously , there is so much wrong in " performance " in IT today that it is n't even funny !
I myself miss the days when IT was for profit , a profit center with own budget , autonomy , etc as other business units in any company / corporation .
It really changed when " kids " came to this business , all they wanted was eight to five , a paycheck , a manager 's blessing for their existence , a carrot once or twice a year , you know ?
Some of them carried grades from economy schools , had degrees in statistics , even had courses in speaking and were able to convince the middle management that instead of positive budgets the paper metrics were the way to promotions , etc - the top management really did n't and does n't have time to look at details so anything that looks good must be good ? Seen these " performance metrics " , " performance reports " , " performance evaluations " ( by managers who once a year need to know what their subordinates do - very weird ?
) , " performance statistics " ( you know , statistics do n't lie !
) , and so on over years - have seen the results , I 'd give about ( at most ) five years to any organization / department which starts that way , then there will be reorganization , termination , whatever - seen that about 20 times in small and large corporations over 40 years in IT . Amazingly , not so much in IT ( excluding very few ) but in other organizations which are still profit centers , they still are going strong ?</tokentext>
<sentencetext>Performance == money OR at least it used to be that way.
Seriously, there is so much wrong in "performance" in IT today that it isn't even funny!
I myself miss the days when IT was for profit, a profit center with own budget, autonomy, etc as other business units in any company / corporation.
It really changed when "kids" came to this business, all they wanted was eight to five, a paycheck, a manager's blessing for their existence, a carrot once or twice a year, you know?
Some of them carried grades from economy schools, had degrees in statistics, even had courses in speaking and were able to convince the middle management that instead of positive budgets the paper metrics were the way to promotions, etc - the top management really didn't and doesn't have time to look at details so anything that looks good must be good? Seen these "performance metrics", "performance reports", "performance evaluations" (by managers who once a year need to know what their subordinates do - very weird?
), "performance statistics" (you know, statistics don't lie!
), and so on over years - have seen the results, I'd give about (at most) five years to any organization / department which starts that way, then there will be reorganization, termination, whatever - seen that about 20 times in small and large corporations over 40 years in IT. Amazingly, not so much in IT (excluding very few) but in other organizations which are still profit centers, they still are going strong?</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355481</id>
	<title>Tech support  song</title>
	<author>innocent_white_lamb</author>
	<datestamp>1245156360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Hello, tech support:</p><p><a href="http://www.prometheus-music.com/audio/techsupt.mp3" title="prometheus-music.com" rel="nofollow">http://www.prometheus-music.com/audio/techsupt.mp3</a> [prometheus-music.com]</p></htmltext>
<tokenext>Hello , tech support : http : //www.prometheus-music.com/audio/techsupt.mp3 [ prometheus-music.com ]</tokentext>
<sentencetext>Hello, tech support: http://www.prometheus-music.com/audio/techsupt.mp3 [prometheus-music.com]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350057</id>
	<title>By productivity gains</title>
	<author>Anonymous</author>
	<datestamp>1245176640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>in other departments when IT introduces new systems to them.</p><p>No, it's not an easy metric to obtain there, but IT is not a simple discipline to categorise.  As an example, ask someone to give a hard metric on how much useful information they know (to 2 decimal places).</p></htmltext>
<tokenext>in other departments when IT introduces new systems to them . No , it 's not an easy metric to obtain there , but IT is not a simple discipline to categorise .
As an example , ask someone to give a hard metric on how much useful information they know ( to 2 decimal places ) .</tokentext>
<sentencetext>in other departments when IT introduces new systems to them. No, it's not an easy metric to obtain there, but IT is not a simple discipline to categorise.
As an example, ask someone to give a hard metric on how much useful information they know (to 2 decimal places).</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351979</id>
	<title>Re:My two cents</title>
	<author>Propaganda13</author>
	<datestamp>1245183480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>100 calls<br>99 calls for password reset - resolved<br>1 call for the main database being corrupted - unresolved</p><p>Servers and network up 100%</p><p>Work getting done 0%</p><p>------------</p><p>Even weighting calls doesn't always work. IT could lower the number of calls for password resets by telling them to write their password on a post-it note.</p><p>I've had bosses request "Metrics". I asked them what they wanted and was told some nice charts that show we're doing our job.</p></htmltext>
<tokenext>100 calls
99 calls for password reset - resolved
1 call for the main database being corrupted - unresolved
Servers and network up 100 %
Work getting done 0 %
------------
Even weighting calls does n't always work .
IT could lower the number of calls for password resets by telling them to write their password on a post-it note . I 've had bosses request " Metrics " .
I asked them what they wanted and was told some nice charts that show we 're doing our job .</tokentext>
<sentencetext>100 calls
99 calls for password reset - resolved
1 call for the main database being corrupted - unresolved
Servers and network up 100%
Work getting done 0%
------------
Even weighting calls doesn't always work.
IT could lower the number of calls for password resets by telling them to write their password on a post-it note. I've had bosses request "Metrics".
I asked them what they wanted and was told some nice charts that show we're doing our job.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351647</id>
	<title>Basic List</title>
	<author>Tdawgless</author>
	<datestamp>1245182160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Customer Satisfaction<br>
Mean / Average Time to Response<br>
Mean / Average Time to Close<br>
Mean / Average Number of Updates to Ticket<br>
Servers per Headcount<br>
Ticket Responses per Headcount<br>
Ticket Closes per Headcount<br>
Process Failures<br>
Cost per Server<br>
<br> <br>
All of these are meant to make you look for a problem and solve it, whether it's a problem with a policy, process, procedure, or sometimes (but lastly considered) a staff member.

Keep a continuous process improvement attitude and make sure you include the front-line people on your CPI team.</htmltext>
<tokenext>Customer Satisfaction Mean / Average Time to Response Mean / Average Time to Close Mean / Average Number of Updates to Ticket Servers per Headcount Ticket Responses per Headcount Ticket Closes per Headcount Process Failures Cost per Server All of these are meant to make you look for a problem and solve it whether it 's a problem with a policy , process , procedure , or sometimes ( but lastly considered ) a staff member .
Keep a continuous process improvement attitude and make sure you include the front line people on your CPI team .</tokentext>
<sentencetext>Customer Satisfaction
Mean / Average Time to Response
Mean / Average Time to Close
Mean / Average Number of Updates to Ticket
Servers per Headcount
Ticket Responses per Headcount
Ticket Closes per Headcount
Process Failures
Cost per Server
 
All of these are meant to make you look for a problem and solve it whether it's a problem with a policy, process, procedure, or sometimes (but lastly considered) a staff member.
Keep a continuous process improvement attitude and make sure you include the front line people on your CPI team.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354583</id>
	<title>Easy</title>
	<author>jhfry</author>
	<datestamp>1245151380000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The best metric in most organizations, where IT is in an operations support role, is Quality of service.  The easiest way to measure quality is via polling.</p><p>A previous position I worked at had a policy, which was well supported by upper management, that REQUIRED the completion of a satisfaction survey at the end of every trouble call, as well as monthly satisfaction surveys.<br>- A typical help desk call was followed by a 30 second phone survey<br>- A desk side visit was followed by a ~1min web based survey<br>- A project was followed with an ~5 minute web survey<br>- A large project that involved a project manager was followed by a 15 minute interview with those people who helped define the scope of the project.</p><p>All surveys were issued and managed by a team outside of the IT structure (though they were IT management types).</p><p>The great thing about this system was that it was easy to determine who deserved accolades and who didn't do their jobs... at the help desk level, they fired about 70% of new hires within a month or so because the customer simply didn't like their personalities or they didn't communicate well, but if you stayed on you were paid well and treated exceptionally.  At the desktop support level, similar story, and finally the monthly surveys allowed overall satisfaction to be gauged.</p><p>We IT folk were also surveyed about each other, and our customers... which I felt was the most important thing.  A customer who we regularly rated as polite, patient, and somehow exceptional would often be rewarded by our management (appreciation lunches, software for their personal use, USB drives for personal use, etc.)<nobr> <wbr></nobr>... so the customers were rarely unfair.  
My boss even gave a weekend at a local trendy hotel to one customer for simply giving valuable feedback in the form of a monthly survey comment; he wrote a page detailing how our support saved the company tens of thousands of dollars because IT showed him how to link data from an SQL database into Excel where he was able to better analyze it... this was done by a desktop guy who happened to work late at someone's desk and was walking through the building turning off lights.  Needless to say, the desktop guy was handsomely rewarded for simply giving a damn.</p><p>I realize that it sounds idealistic, and in many ways it was.  Sure time was still an issue, as users would deduct for a slow response, but it was only part of the story.  Most important was that everyone was motivated to be respectful because we all knew that any negative trends would be caught and questioned.</p><p>As I understand it, it was hardest to determine if we were too large as a department... sure the customer will be happy if they have their own dedicated IT person... but it's just not cost effective.  I was not privy to how this was measured, but I imagine they had their ways.</p></htmltext>
<tokenext>The best metric in most organizations , where IT is in an operations support role , is Quality of service .
The easiest way to measure quality is via polling . A previous position I worked at had a policy , which was well supported by upper management , that REQUIRED the completion of a satisfaction survey at the end of every trouble call , as well as monthly satisfaction surveys .
- A typical help desk call was followed by a 30 second phone survey
- A desk side visit was followed by a ~ 1min web based survey
- A project was followed with an ~ 5 minute web survey
- A large project that involved a project manager was followed by a 15 minute interview with those people who helped define the scope of the project .
All surveys were issued and managed by a team outside of the IT structure ( though they were IT management types ) . The great thing about this system was that it was easy to determine who deserved accolades and who did n't do their jobs... at the help desk level , they fired about 70 % of new hires within a month or so because the customer simply did n't like their personalities or they did n't communicate well , but if you stayed on you were paid well and treated exceptionally .
At the desktop support level , similar story , and finally the monthly surveys allowed overall satisfaction to be gauged . We IT folk were also surveyed about each other , and our customers... which I felt was the most important thing .
A customer who we regularly rated as polite , patient , and somehow exceptional would often be rewarded by our management ( appreciation lunches , software for their personal use , USB drives for personal use , etc .
) ... so the customers were rarely unfair .
My boss even gave a weekend at a local trendy hotel to one customer for simply giving valuable feedback in the form of a monthly survey comment ; he wrote a page detailing how our support saved the company tens of thousands of dollars because IT showed him how to link data from an SQL database into Excel where he was able to better analyze it... this was done by a desktop guy who happened to work late at someone 's desk and was walking through the building turning off lights .
Needless to say , the desktop guy was handsomely rewarded for simply giving a damn . I realize that it sounds idealistic , and in many ways it was .
Sure time was still an issue , as users would deduct for a slow response , but it was only part of the story .
Most important was that everyone was motivated to be respectful because we all knew that any negative trends would be caught and questioned . As I understand it , it was hardest to determine if we were too large as a department... sure the customer will be happy if they have their own dedicated IT person... but it 's just not cost effective .
I was not privy to how this was measured , but I imagine they had their ways .</tokentext>
<sentencetext>The best metric in most organizations, where IT is in an operations support role, is Quality of service.
The easiest way to measure quality is via polling. A previous position I worked at had a policy, which was well supported by upper management, that REQUIRED the completion of a satisfaction survey at the end of every trouble call, as well as monthly satisfaction surveys.
- A typical help desk call was followed by a 30 second phone survey
- A desk side visit was followed by a ~1min web based survey
- A project was followed with an ~5 minute web survey
- A large project that involved a project manager was followed by a 15 minute interview with those people who helped define the scope of the project.
All surveys were issued and managed by a team outside of the IT structure (though they were IT management types). The great thing about this system was that it was easy to determine who deserved accolades and who didn't do their jobs... at the help desk level, they fired about 70% of new hires within a month or so because the customer simply didn't like their personalities or they didn't communicate well, but if you stayed on you were paid well and treated exceptionally.
At the desktop support level, similar story, and finally the monthly surveys allowed overall satisfaction to be gauged. We IT folk were also surveyed about each other, and our customers... which I felt was the most important thing.
A customer who we regularly rated as polite, patient, and somehow exceptional would often be rewarded by our management (appreciation lunches, software for their personal use, USB drives for personal use, etc.
) ... so the customers were rarely unfair.
My boss even gave a weekend at a local trendy hotel to one customer for simply giving valuable feedback in the form of a monthly survey comment; he wrote a page detailing how our support saved the company tens of thousands of dollars because IT showed him how to link data from an SQL database into Excel where he was able to better analyze it... this was done by a desktop guy who happened to work late at someone's desk and was walking through the building turning off lights.
Needless to say, the desktop guy was handsomely rewarded for simply giving a damn. I realize that it sounds idealistic, and in many ways it was.
Sure time was still an issue, as users would deduct for a slow response, but it was only part of the story.
Most important was that everyone was motivated to be respectful because we all knew that any negative trends would be caught and questioned. As I understand it, it was hardest to determine if we were too large as a department... sure the customer will be happy if they have their own dedicated IT person... but it's just not cost effective.
I was not privy to how this was measured, but I imagine they had their ways.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349707</id>
	<title>Smart users vs stupid ones</title>
	<author>Darkness404</author>
	<datestamp>1245175620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p><div class="quote"><p> I consider IT's primary goal to be as transparent to the user as possible, thus this metric was rather troubling to me. Shouldn't we be focused on reducing calls, rather than simply closing them quickly?</p> </div><p>

Not for "stupid" users, the ones you see on a day-to-day basis. Now, this all depends on who you are giving support to: competent IT professionals or the day-to-day office worker. If you are giving support to fellow IT people, it should be a goal to be transparent. For the office worker the main job is productivity; that means fix the problem as soon as possible, or tell them there is no problem and have a good day.</p></div>
	</htmltext>
<tokenext>I consider IT 's primary goal to be as transparent to the user as possible , thus this metric was rather troubling to me .
Should n't we be focused on reducing calls , rather than simply closing them quickly ?
Not for " stupid " users , the ones you see on a day-to-day basis .
Now , this all depends on who you are giving support to , competent IT professionals or the day-to-day office worker .
If you are giving them to fellow IT people , it should be a goal to be transparent .
For the office worker the main job is productivity , that means fix the problem as soon as possible or tell them there is no problem and have a good day .</tokentext>
<sentencetext> I consider IT's primary goal to be as transparent to the user as possible, thus this metric was rather troubling to me.
Shouldn't we be focused on reducing calls, rather than simply closing them quickly?
Not for "stupid" users, the ones you see on a day-to-day basis.
Now, this all depends on who you are giving support to, competent IT professionals or the day-to-day office worker.
If you are giving support to fellow IT people, it should be a goal to be transparent.
For the office worker the main job is productivity, that means fix the problem as soon as possible or tell them there is no problem and have a good day.
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350443</id>
	<title>Just count yourself lucky...</title>
	<author>mario_grgic</author>
	<datestamp>1245177840000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>that your organization has made your job measurable. It does not matter what they measure your performance by, as long as it is something tangible.</p><p>So, you get paid by how many tickets you managed to close in a month. Fine. So, you close as many as you can in a month, resulting in lower quality of each problem fix, resulting in more tickets posted and assigned to you, resulting in you having ensured that next month you have enough tickets as well.</p><p>This can go on indefinitely, or your wise superiors might decide to measure your work some other way.</p></htmltext>
<tokenext>that your organization has made your job measurable .
It does not matter what they measure your performance by , as long as it is something tangible.So , you get payed by how many tickets you managed to close in a month .
Fine. So , you close as many as you can in a month , resulting in lower quality of each problem fix , resulting in more tickets posted and assigned to you , resulting in you having ensured that next month you have enough tickets as well.This can go on indefinitely , or your wise superiors might decide to measure your work somehow else .</tokentext>
<sentencetext>that your organization has made your job measurable.
It does not matter what they measure your performance by, as long as it is something tangible. So, you get paid by how many tickets you managed to close in a month.
Fine. So, you close as many as you can in a month, resulting in lower quality of each problem fix, resulting in more tickets posted and assigned to you, resulting in you having ensured that next month you have enough tickets as well. This can go on indefinitely, or your wise superiors might decide to measure your work some other way.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157</id>
	<title>Time to close tickets is 1 factor, not the ONLY 1</title>
	<author>King_TJ</author>
	<datestamp>1245176940000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>I think the average time taken to close a trouble ticket is important, but it's not the only factor you want to look at.</p><p>The primary purpose of issuing unique trouble ticket numbers is to provide an easy "one stop" tracking mechanism for the issue.  A customer (or employee) should always be able to reference a ticket # to support staff, and in turn, they should be able to pull up a fairly comprehensive history of what's been done so far to resolve the issue.</p><p>If you push too hard for closing tickets quickly, you'll see a tendency for new tickets to get issued on things which should REALLY be continuations of an existing ticket, held open longer.</p><p>(EG.  I call in complaining that my inkjet printer won't print yellow.  A ticket is created and they tell me my color cartridge is clogged up, so put a new one in and I should be fine.  Ticket is closed.  I switch cartridges with a new one, and discover it STILL doesn't print yellow.  I call in and a new ticket is made for what's really the same issue.  I'm told how to run the printer through cleaning cycles, and instructed that I may have to do it "up to 10 times" to see results.  Ticket closed.  I get around to trying that the next day when I get time, and even after 10 or 15 attempts, no yellow is coming out.  I call back in, only to have ANOTHER new ticket opened, and the tech wastes my time asking me if I "tried a new cartridge yet?" and I have to interrupt him in the middle of re-explaining how to do a cleaning cycle.  Problem is eventually determined to require a replacement printer... but should obviously have all been filed under one ticket.)</p></htmltext>
<tokenext>I think the average time taken to close a trouble ticket is important , but it 's not the only factor you want to look at.The primary purpose of issuing unique trouble ticket numbers is to provide an easy " one stop " tracking mechanism for the issue .
A customer ( or employee ) should always be able to reference a ticket # to support staff , and in turn , they should be able to pull up a fairly comprehensive history of what 's been done so far to resolve the issue.If you push too hard for closing tickets quickly , you 'll see a tendency for new tickets to get issued on things which should REALLY be continuations of an existing ticket , held open longer. ( EG .
I call in complaining that my inkjet printer wo n't print yellow .
A ticket is created and they tell me my color cartridge is clogged up , so put a new one in and I should be fine .
Ticket is closed .
I switch cartridges with a new one , and discover it STILL does n't print yellow .
I call in and a new ticket is made for what 's really the same issue .
I 'm told how to run the printer through cleaning cycles , and instructed that I may have to do it " up to 10 times " to see results .
Ticket closed .
I get around to trying that the next day when I get time , and even after 10 or 15 attempts , no yellow is coming out .
I call back in , only to have ANOTHER new ticket opened , and the tech wastes my time asking me if I " tried a new cartridge yet ?
" and I have to interrupt him in the middle of re-explaining how to do a cleaning cycle .
Problem is eventually determined to require a replacement printer ... but should obviously have all been filed under one ticket .
)</tokentext>
<sentencetext>I think the average time taken to close a trouble ticket is important, but it's not the only factor you want to look at. The primary purpose of issuing unique trouble ticket numbers is to provide an easy "one stop" tracking mechanism for the issue.
A customer (or employee) should always be able to reference a ticket # to support staff, and in turn, they should be able to pull up a fairly comprehensive history of what's been done so far to resolve the issue. If you push too hard for closing tickets quickly, you'll see a tendency for new tickets to get issued on things which should REALLY be continuations of an existing ticket, held open longer. (EG.
I call in complaining that my inkjet printer won't print yellow.
A ticket is created and they tell me my color cartridge is clogged up, so put a new one in and I should be fine.
Ticket is closed.
I switch cartridges with a new one, and discover it STILL doesn't print yellow.
I call in and a new ticket is made for what's really the same issue.
I'm told how to run the printer through cleaning cycles, and instructed that I may have to do it "up to 10 times" to see results.
Ticket closed.
I get around to trying that the next day when I get time, and even after 10 or 15 attempts, no yellow is coming out.
I call back in, only to have ANOTHER new ticket opened, and the tech wastes my time asking me if I "tried a new cartridge yet?
" and I have to interrupt him in the middle of re-explaining how to do a cleaning cycle.
Problem is eventually determined to require a replacement printer ... but should obviously have all been filed under one ticket.
)</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353551</id>
	<title>Re:Stop asking to do stupid things</title>
	<author>Bigjeff5</author>
	<datestamp>1245146400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>A month is downright quick in some environments.</p><p>In mine it can take a week to get the hostname approved, which then allows you to apply for approval to install the OS.  Of course, that requires a change request for which a backout plan is mandatory.  What's your backout plan for if your server doesn't boot?  I don't know, turn it off and troubleshoot?  Makes no sense, but still required.  Oh yeah, and the first submission of any change request is an automatic denial, just to be sure you really want it.  This process, if you're lucky, may only take days.  But it may also take months.</p><p>Ok, so, by now, you are cleared to install the OS.  Time to order the server (no point in keeping it on hand on the off chance your request is denied outright, in spite of the fact that your department is the one paying for it, not corporate), which will only take a week or two.  Unless you are remote, like me; then it can take longer.  Fortunately, just ordering the hardware is much simpler, and you don't expect any hangups there.</p><p>Now, 1-3 months after you initiated the process to get the new server, you can set it up, which generally takes an afternoon plus a few days to a month of testing time, depending on what is going onto it.</p><p>You should try working in a monolithic corporate structure; EVERYTHING is like that.  It sucks.  But it pays well.</p></htmltext>
<tokenext>A month is downright quick in some environments.In mine it can take a week to get the hostname approved , which then allows you to apply for approval to install the OS .
Of course , that requires a change request for which a backout plan is mandatory .
What 's your backout plan for if your server does n't boot ?
I do n't know , turn it off and troubleshoot ?
Makes no sense , but still required .
Oh yeah and the first submission of any change request is an automatic denial , just to be sure you really want .
This process , if you 're lucky , may only take days .
But it may also take months.Ok , so , by now , you are cleared to install the OS .
Time to order the server ( no point in keeping it on hand on the off chance your request is denied outright , in spite of the fact that your department is the one paying for it , not corporate ) , that will only take a week or two .
Unless you are remote , like me , then it can take longer .
Fortunately , just ordering the hardware is much simpler , and you do n't expect any hangups there.Now , 1-3 months after you initiated the process to get the new server , you can set it up , which generally takes an afternoon + a few days to a month of testing time , depending on what is going on it.You should try working in a monolithic corporate structure , EVERYTHING is like that .
It sucks .
But it pays well .</tokentext>
<sentencetext>A month is downright quick in some environments. In mine it can take a week to get the hostname approved, which then allows you to apply for approval to install the OS.
Of course, that requires a change request for which a backout plan is mandatory.
What's your backout plan for if your server doesn't boot?
I don't know, turn it off and troubleshoot?
Makes no sense, but still required.
Oh yeah and the first submission of any change request is an automatic denial, just to be sure you really want.
This process, if you're lucky, may only take days.
But it may also take months. Ok, so, by now, you are cleared to install the OS.
Time to order the server (no point in keeping it on hand on the off chance your request is denied outright, in spite of the fact that your department is the one paying for it, not corporate), that will only take a week or two.
Unless you are remote, like me, then it can take longer.
Fortunately, just ordering the hardware is much simpler, and you don't expect any hangups there. Now, 1-3 months after you initiated the process to get the new server, you can set it up, which generally takes an afternoon + a few days to a month of testing time, depending on what is going on it. You should try working in a monolithic corporate structure, EVERYTHING is like that.
It sucks.
But it pays well.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351467</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28359611</id>
	<title>Re:obvious</title>
	<author>merlinokos</author>
	<datestamp>1245240480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Similarly, as an IT manager, I measure (and get measured) on how upset people get when something major goes wrong.
<br>If the network is in a state where it breaks regularly, minor problems are seen as major, and major problems are seen as disaster. Feedback to upper management is 'IT sucks.'
<br>If the network is in a working state that rarely breaks, and communication to the company is good, then minor problems are fixed before people notice, and major problems are seen as minor problems. Feedback to upper management is 'Nothing ever seems to go wrong. IT is great.' That's the response we had a month ago, from one of our most vocal critics, in spite of the fact that we had a major server outage less than a week before.</htmltext>
<tokenext>Similarly , as an IT manager , I measure ( and get measured ) on how upset people get when something major goes wrong .
If the network is in a state where it breaks regularly , minor problems are seen as major , and major problems are seen as disaster .
Feedback to upper management is 'IT sucks .
' If the network is in a working state that rarely breaks , and communication to the company is good , then minor problems are fixed before people notice , and major problems are seen as minor problems .
Feedback to upper management is 'Nothing ever seems to go wrong .
IT is great .
' That 's the response we had a month ago , from one of our most vocal critics , in spite of the fact that we had a major server outage less than a week before .</tokentext>
<sentencetext>Similarly, as an IT manager, I measure (and get measured) on how upset people get when something major goes wrong.
If the network is in a state where it breaks regularly, minor problems are seen as major, and major problems are seen as disaster.
Feedback to upper management is 'IT sucks.
'
If the network is in a working state that rarely breaks, and communication to the company is good, then minor problems are fixed before people notice, and major problems are seen as minor problems.
Feedback to upper management is 'Nothing ever seems to go wrong.
IT is great.
' That's the response we had a month ago, from one of our most vocal critics, in spite of the fact that we had a major server outage less than a week before.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350237</id>
	<title>Submitter is an insensitive clod</title>
	<author>Anonymous</author>
	<datestamp>1245177240000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Metrics?</p><p>I'm a USian, and use Imperials, you insensitive clod!</p></htmltext>
<tokenext>Metrics ? I 'm a USian , and use Imperials , you insensitive clod !</tokentext>
<sentencetext>Metrics? I'm a USian, and use Imperials, you insensitive clod!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352031</id>
	<title>Re:Not QUITE the stupidest metric I can think of..</title>
	<author>Anonymous</author>
	<datestamp>1245183720000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>I can get up, go to their desk, and solve the problem permanently.</p></div><p>I have to admit, I am more than a little curious about whether this 'permanent solution' results in bloodshed.</p></div>
	</htmltext>
<tokenext>I can get up , go to their desk , and solve the problem permanently.I have to admit , I am more than a little curious about whether this 'permanent solution ' results in bloodshed .</tokentext>
<sentencetext>I can get up, go to their desk, and solve the problem permanently. I have to admit, I am more than a little curious about whether this 'permanent solution' results in bloodshed.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349721</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28366153</id>
	<title>Wrong approach</title>
	<author>Anonymous</author>
	<datestamp>1245232680000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Rather than a numeric metric calculated from automated systems, just send internal customer satisfaction surveys to employees.  Have some numeric question ("Overall how satisfied are you...") and lots of opportunity for people to write things out ("What problems did you think IT could have addressed better?")  Even if you don't employ the sophisticated techniques available to collect meaningful, accurate data, you will certainly learn more than just looking for a time metric.</p></htmltext>
<tokenext>Rather than a numeric metric calculated from automated systems , just send internal customer satisfaction surveys to employees .
Have some numeric question ( " Overall how satisfied are you... " ) and lots of opportunity for people to write things out ( " What problems did you think IT could have addressed better ?
" ) Even if you do n't employ the sophisticated techniques available to collect meaningful , accurate data , you will certainly learn more than just looking for a time metric .</tokentext>
<sentencetext>Rather than a numeric metric calculated from automated systems, just send internal customer satisfaction surveys to employees.
Have some numeric question ("Overall how satisfied are you...") and lots of opportunity for people to write things out ("What problems did you think IT could have addressed better?
")  Even if you don't employ the sophisticated techniques available to collect meaningful, accurate data, you will certainly learn more than just looking for a time metric.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350477</id>
	<title>Re:count tickets never openend</title>
	<author>Freetardo Jones</author>
	<datestamp>1245177960000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>An IT-department, IMHO, should be working on making itself obsolete.</p></div><p>So in the absence of the IT department, are the software/hardware updates, amongst everything else they maintain, going to be just done by magic?</p></div>
	</htmltext>
<tokenext>An IT-department , IMHO , should be working on making itself obsolete.So in the absence of the IT department , are the software/hardware updates , amongst everything else they maintain , going to be just done by magic ?</tokentext>
<sentencetext>An IT-department, IMHO, should be working on making itself obsolete. So in the absence of the IT department, are the software/hardware updates, amongst everything else they maintain, going to be just done by magic?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350553</id>
	<title>Re:First metric...</title>
	<author>Anonymous</author>
	<datestamp>1245178200000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p><div class="quote"><p>...timeliness of the <b>TSP</b> reports</p></div><p>good job at failing</p></div>
	</htmltext>
<tokenext>...timeliness of the TSP reportsgood job at failing</tokentext>
<sentencetext>...timeliness of the TSP reports
good job at failing
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349677</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350249</id>
	<title>Re:Sounds good to me.</title>
	<author>Freetardo Jones</author>
	<datestamp>1245177240000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>For every fax that needs IT intervention to be sent, the IT department loses one point.</p></div><p>What if the intervention was needed due to no fault of anyone in the IT department?  Why should they be penalized because some asshat was incorrectly using a piece of equipment?</p></div>
	</htmltext>
<tokenext>For every fax that needs IT intervention to be sent , the IT department loses one point.What if the intervention was needed due to no fault of anyone in the IT department ?
Why should they be penalized cause some asshat was incorrectly using a piece of equipment ?</tokentext>
<sentencetext>For every fax that needs IT intervention to be sent, the IT department loses one point. What if the intervention was needed due to no fault of anyone in the IT department?
Why should they be penalized because some asshat was incorrectly using a piece of equipment?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349861</id>
	<title>Not good to count number of tickets</title>
	<author>Anonymous</author>
	<datestamp>1245176040000</datestamp>
	<modclass>Informative</modclass>
	<modscore>2</modscore>
	<htmltext><p>I think that when the metric is to reduce the number of calls, the natural human tendency is to ignore calls, shift calls to other people, etc., to make it look like you're doing better when you're not.</p><p>So that's why most people look at your find-versus-fix ratio: the number of bugs you find versus the number you fix, and the length of time it takes to fix them.  It's not great to have zillions of issues, but you should always try to fix the issues as quickly as possible.</p></htmltext>
<tokenext>I think that when the metric is to reduce the number of calls , the natural human tendency is to ignore calls , shift calls to other people , etc .
to make it look like you 're doing better when you 're not.So that 's why most people look at your find versus fix ratio , the number of bugs you find versus the number you fix / the length of time it takes to fix them .
It 's not great to have zillions of issues , but you should always try to fix the issues as quickly as possible .</tokentext>
<sentencetext>I think that when the metric is to reduce the number of calls, the natural human tendency is to ignore calls, shift calls to other people, etc., to make it look like you're doing better when you're not.
So that's why most people look at your find-versus-fix ratio: the number of bugs you find versus the number you fix, and the length of time it takes to fix them.
It's not great to have zillions of issues, but you should always try to fix the issues as quickly as possible.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351983</id>
	<title>Re:Stupid metrics</title>
	<author>TheQuantumShift</author>
	<datestamp>1245183480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The short call times issue is really about money. The company you actually worked for had a contract with gateway/emachines that specified a certain $ amount per minute. For call times outside a limit (most likely 7-11 minutes in your case), the $/min. rate drops dramatically and the call center actually starts losing money on the call. The client (gateway) cares about problem resolution; the call center cares about answering the most calls. The lower limit makes it harder for the call center to fudge the numbers by giving "reboot and call back in 24hrs" answers. And the upper limit ensures the most calls answered.
<br> <br>
Call centers care about answering calls, not fixing problems. It's very sad that I see even internal help desks follow the call center mantra, all the while preaching to high heaven about ITIL and the like. I definitely identify with the submitter. Of course I take it further by speculating that the real reason for the focus on nonsense metrics is to have a case for outsourcing. But I'm paranoid like that...</htmltext>
<tokenext>The short call times issue is really about money .
The company you actually worked for had a contract with gateway/emachines that specified a certain $ amount per minute .
Call times outside a limit ( most likely 7-11 minutes in your case ) the $ /min .
rate drops dramatically and the call center actually starts losing money on the call .
The client ( gateway ) cares about problem resolution , the call center cares about answering the most calls .
The lower limit makes it harder for the call center to fudge the numbers by giving " reboot and call back in 24hrs " answers .
And the upper limit ensures the most calls answered .
Call centers care about answering calls , not fixing problems .
It 's very sad that I see even internal help desks follow the call center mantra , all the while preaching to high heaven about ITIL and the like .
I definitely identify with the submitter .
Of course I take it further by speculating that the real reason for the focus on nonsense metrics is to have a case for outsourcing .
But I 'm paranoid like that.. .</tokentext>
<sentencetext>The short call times issue is really about money.
The company you actually worked for had a contract with gateway/emachines that specified a certain $ amount per minute.
For call times outside a limit (most likely 7-11 minutes in your case), the $/min. rate drops dramatically and the call center actually starts losing money on the call.
The client (gateway) cares about problem resolution, the call center cares about answering the most calls.
The lower limit makes it harder for the call center to fudge the numbers by giving "reboot and call back in 24hrs" answers.
And the upper limit ensures the most calls answered.
Call centers care about answering calls, not fixing problems.
It's very sad that I see even internal help desks follow the call center mantra, all the while preaching to high heaven about ITIL and the like.
I definitely identify with the submitter.
Of course I take it further by speculating that the real reason for the focus on nonsense metrics is to have a case for outsourcing.
But I'm paranoid like that...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350563</id>
	<title>Recount:</title>
	<author>Arivia</author>
	<datestamp>1245178260000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>.recount toggle</htmltext>
<tokenext>.recount toggle</tokentext>
<sentencetext>.recount toggle</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351651</id>
	<title>After working at a place like this - my judgment</title>
	<author>Anonymous</author>
	<datestamp>1245182160000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext>When I worked for an unnamed company (they're big and you see them in every movie) they based our metrics off...<br> <br>

-dispatch rate on calls  (a decent metric to determine how much troubleshooting you are doing vs. just sending a part to maybe fix a problem)<br>
-repeat dispatch rate  (really determines if you are a good troubleshooter - if you don't get it right the first time, what good are ya? ;)<br>
-cases closed per week  (self explanatory)<br>
-repeat call rate (when a person doesn't call you back directly even though they have your extension)<br>
-customer satisfaction score (The survey you might get asked...  there is only one question that counts--- if you give anything less than a 7 to the question "How satisfied are you with *company x*" you are hurting the tech)<br>

<br>I could go on, but those are the ones that are somewhat in a tech's control.  I get the feeling the submitter works at the same company.</htmltext>
<tokenext>When I worked for an unnamed company ( they 're big and you see them in every movie ) they based our metrics off.. . -dispatch rate on calls ( a decent metric to determine how much troubleshooting your are doing vs. just sending a part to maybe fix a problem ) -repeat dispatch rate ( really determines if you are a good troubleshooter - if you do n't get it right the first time , what good are ya ?
; ) -cases closed per week ( self explanatory ) -repeat call rate ( when a person does n't call you back directly even though they have your extension ) -customer satisfaction score ( The survey you might get asked... there is only one question that counts--- if you give anything less than a 7 to the question " How satisfied are you with * company x * you are hurting the tech ) I could go on , but there are the ones that are somewhat in a tech 's control .
I get the feeling the submitter works at the same company .</tokentext>
<sentencetext>When I worked for an unnamed company (they're big and you see them in every movie) they based our metrics off... 

-dispatch rate on calls  (a decent metric to determine how much troubleshooting you are doing vs. just sending a part to maybe fix a problem)
-repeat dispatch rate  (really determines if you are a good troubleshooter - if you don't get it right the first time, what good are ya?
;)
-cases closed per week  (self explanatory)
-repeat call rate (when a person doesn't call you back directly even though they have your extension)
-customer satisfaction score (The survey you might get asked...  there is only one question that counts--- if you give anything less than a 7 to the question "How satisfied are you with *company x* you are hurting the tech)

I could go on, but those are the ones that are somewhat in a tech's control.
I get the feeling the submitter works at the same company.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354709</id>
	<title>There are no Steadfast Metrics</title>
	<author>BuckaBooBob</author>
	<datestamp>1245152160000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Some realistic metrics...</p><p>Repeat issues.<br>Ineffective IT will see an issue that recurs and never fix the source of the issue; instead it supports and extends the issue rather than fixing it. And if you think there are issues that cannot be resolved, then you're not using the right tool or educating people correctly.</p><p>Resolutions that worked.<br>If someone comes to you with an issue and you get them to "try" 22 "things" and the problem still persists, you have either not asked for the correct information, not received the correct information, or you do not understand the issue and are trying the pin-the-tail-on-the-donkey approach to support, which makes more issues than it solves.</p><p>PEBKAC factor.<br>User training issues need to be identified. Admin wants to see IT be more efficient? Educate users, so IT staff doesn't have to explain to someone how to use the tools they need to know how to use to do their job properly. Mechanics that would call to ask which way tightens and which way loosens should never pick up a wrench to work on a car, so why do we let people that don't understand there is a right click and a left click work on computers?</p><p>
&nbsp; If you see a theme, it's because there is one: reduced call volumes. That is effective IT, and it needs to be looked at differently than most anything else because it's a dynamic environment. Increased call volumes come from change, and decreased call volumes come from people working effectively.</p></htmltext>
<tokenext>Some realistic metrics...Repeat Issues..Ineffective IT will see an issue that reoccurs and never fixes the source of the issue instead supports and extends the issue rather than fixing it.. and if you think there are issues that can not be resolved.. then you 're not using the right tool or educating people correctly..Resolutions that worked.If someone comes to you with an issue and you get them to " try " 22 " things " and the problem still persists.. You have either not asked the correct information.. Received the correct information.. or you do not understand the issue and are trying the pin the tail on the donkey approach of support which makes more issues than it solves.PEBKAC FactorUser training issues need to be identified .
Admin wants to see IT be more efficient.. educate users so IT staff does explain to someone how to use the tools they need to know how to use to do their job properly.. Mechanic 's that would call to ask which way tightens and which way loosens should never pick up a wrench to work on a car.. why do we let people that do n't understand there is a right click and a left click work on computers. .   If you see a theme.. it 's because there is.. reduced call volumes.. That is effective IT and it needs to be looked at differently than most anything else because it 's a dynamic environment .
Increased Call volumes come from change and decreased call volumes come from people working effectively. .</tokentext>
<sentencetext>Some realistic metrics...Repeat Issues..Ineffective IT will see an issue that reoccurs and never fixes the source of the issue instead supports and extends the issue rather than fixing it.. and if you think there are issues that cannot be resolved.. then you're not using the right tool or educating people correctly..Resolutions that worked.If someone comes to you with an issue and you get them to "try" 22 "things" and the problem still persists.. You have either not asked the correct information.. Received the correct information.. or you do not understand the issue and are trying the pin the tail on the donkey approach of support which makes more issues than it solves.PEBKAC FactorUser training issues need to be identified.
Admin wants to see IT be more efficient.. educate users so IT staff does explain to someone how to use the tools they need to know how to use to do their job properly.. Mechanic's that would call to ask which way tightens and which way loosens should never pick up a wrench to work on a car.. why do we let people that don't understand there is a right click and a left click work on computers..
  If you see a theme.. it's because there is.. reduced call volumes.. That is effective IT and it needs to be looked at differently than most anything else because it's a dynamic environment.
Increased Call volumes come from change and decreased call volumes come from people working effectively..</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352475</id>
	<title>Re:count tickets never openend</title>
	<author>Mr.Intel</author>
	<datestamp>1245185400000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext><p>That's a great idea, but far too ethereal for management in a corporation of any size.</p><p>I work for a large multinational company that has 500,000-plus employees.  Our IT infrastructure is huge, and the unwieldy management put in charge of it cannot get their hands or heads around what we actually do.  Their answer is to count the number of tickets we <i>open</i>, not close.  They have to have something that's measurable.  Tickets that were never opened can't be counted, so they choose the former and the rank-and-file IT workers suffer for it. It makes more sense to spend thirty seconds rebooting a printer, or clicking on an error message for a user that's too traumatized by adware to do it themselves, than to waste ten minutes filling out the fifty fields that are required for the trouble ticket.  Especially because I've got ten more people with similar problems waiting in the wings.  By the time I get a chance to enter any ticket, it's going to be for the guy that took an hour to track down the correct version of Java to run the obscure app on an ancient Intranet site that is only required once a year.  How am I supposed to remember every Tom, Dick, and Harry that can't be bothered to replace a toner cartridge?</p><p>Even though we are understaffed to the point where we can't put in all our tickets, the very fact that our "numbers" are down translates into a bunch of deaf ears when we plead for help.  "You only closed five tickets a day in March, so you can't possibly be that busy."  To the OP: Whatever method you pick, good luck.  Pray for sanity to descend upon the clueless managers whose sole job is to run reports and go to meetings.  My sanity depends on it.</p></htmltext>
<tokenext>That 's a great idea , but far too ethereal for management in a corporation of any size.I work for a large multinational company that has 500,000 plus employees .
Our IT infrastructure is huge and the unwieldy management put in charge of it can not get their hands or heads around what we actually do .
Their answer is to count the number of tickets we open , not close .
They have to have something that 's measurable .
Tickets that were never opened ca n't be counted , so they choose the former and the rank and file IT workers suffer for it .
It 's completely pointless to spend thirty seconds rebooting a printer , or clicking on an error message for a user that 's too traumatized by adware to do it themselves , than to waste ten minutes to fill out the fifty fields that are required for the trouble ticket .
Especially because I 've got ten more people with similar problems waiting in the wings .
By the time I get a chance to enter in any ticket , it 's going to be for the guy that took an hour to track down the correct version of Java to run the obscure app on an ancient Intranet site that is only required once a year .
How am I supposed to remember every Tom , Dick , and Harry that ca n't be bothered to replace a toner cartridge ? Even though we are understaffed to the point where we ca n't put in all our tickets , the very fact that our " numbers " are down translates into a bunch of deaf ears when we plead for help .
" You only closed five tickets a day in March , so you ca n't possibly be that busy .
" To the OP : Whatever method you pick , good luck .
Pray for sanity to descend upon the clueless managers whose sole job is to run reports and go to meetings .
My sanity depends on it .</tokentext>
<sentencetext>That's a great idea, but far too ethereal for management in a corporation of any size.I work for a large multinational company that has 500,000 plus employees.
Our IT infrastructure is huge and the unwieldy management put in charge of it cannot get their hands or heads around what we actually do.
Their answer is to count the number of tickets we open, not close.
They have to have something that's measurable.
Tickets that were never opened can't be counted, so they choose the former and the rank and file IT workers suffer for it.
It's completely pointless to spend thirty seconds rebooting a printer, or clicking on an error message for a user that's too traumatized by adware to do it themselves, than to waste ten minutes to fill out the fifty fields that are required for the trouble ticket.
Especially because I've got ten more people with similar problems waiting in the wings.
By the time I get a chance to enter in any ticket, it's going to be for the guy that took an hour to track down the correct version of Java to run the obscure app on an ancient Intranet site that is only required once a year.
How am I supposed to remember every Tom, Dick, and Harry that can't be bothered to replace a toner cartridge?Even though we are understaffed to the point where we can't put in all our tickets, the very fact that our "numbers" are down translates into a bunch of deaf ears when we plead for help.
"You only closed five tickets a day in March, so you can't possibly be that busy.
"  To the OP: Whatever method you pick, good luck.
Pray for sanity to descend upon the clueless managers whose sole job is to run reports and go to meetings.
My sanity depends on it.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350503</id>
	<title>You Need to Do Both</title>
	<author>SwashbucklingCowboy</author>
	<datestamp>1245178080000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>"Shouldn't we be focused on reducing calls, rather than simply closing them quickly?"</p><p>You need to do both.</p></htmltext>
<tokenext>" Should n't we be focused on reducing calls , rather than simply closing them quickly ?
" You need to do both .</tokentext>
<sentencetext>"Shouldn't we be focused on reducing calls, rather than simply closing them quickly?
"You need to do both.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351537</id>
	<title>History</title>
	<author>Anonymous</author>
	<datestamp>1245181800000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>
Use your call log history.
</p><p>
If it cost X to do the same thing before, then it should cost around X again.
</p><p>
We had a nice system at one place that recorded about 5 key fields of information.  Based upon those 5 fields you could see a trend of how long it took to find a solution.  It was all for costing.
</p><p>
Funny thing was, one field was the individual.  A certain VP always had a problem that could be solved by going to his office and turning off caps lock... even though he would always claim he never touched it and it must be a virus or something.  Almost like the time his laptop would not boot anymore and the issue was related to the 100GB+ of porn on it.  Damn viruses! He he he.
</p><p>
We found about 5\% of the users consumed 95\% of the resources.
</p></htmltext>
<tokenext>Use your call log history .
If it cost X to do the same thing before then it should cost around X again .
We had a nice system at one place that records about 5 key fields of information .
Based upon those 5 fields you could see a trend of how long it took to find a solution .
It was all costing .
Funny thing was one field was the individual .
A certain VP always had a problem that could be solved by going to his office and turning off the cap locks... even though he would always claim he never touched it and it must be a virus or something .
Almost like the time his laptop would not boot anymore and the issue was related to the 100GB + of porn on it .
Damn viruses !
he he he .
We found about 5 \ % of the users consumed 95 \ % of the resources .</tokentext>
<sentencetext>
Use your call log history.
If it cost X to do the same thing before then it should cost around X again.
We had a nice system at one place that recorded about 5 key fields of information.
Based upon those 5 fields you could see a trend of how long it took to find a solution.
It was all costing.
Funny thing was one field was the individual.
A certain VP always had a problem that could be solved by going to his office and turning off the cap locks... even though he would always claim he never touched it and it must be a virus or something.
Almost like the time his laptop would not boot anymore and the issue was related to the 100GB+ of porn on it.
Damn viruses!
he he he.
We found about 5\% of the users consumed 95\% of the resources.
</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354671</id>
	<title>Re:Sounds good to me.</title>
	<author>syousef</author>
	<datestamp>1245151920000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>For every person who becomes aware of a problem with the fax server, the IT department loses one point. No more "heroics". The goal is to be as invisible as possible to the end users.</p></div><p>That's a pretty naive thing to say, given that an invisible department is one that gets its staff cut and isn't funded. If the proper people are not aware that you're doing a good job, how do you expect them to justify paying for you???</p>
	</htmltext>
<tokenext>For every person who becomes aware of a problem with the fax server , the IT department loses one point .
No more " heroics " .
The goal is to be as invisible as possible to the end users.That 's a pretty naive thing to say , given that an invisible department is one that gets its staff cut and is n't funded .
If the proper people are not aware that you 're doing a good job , how do you expect them to justify paying for you ? ?
?</tokentext>
<sentencetext>For every person who becomes aware of a problem with the fax server, the IT department loses one point.
No more "heroics".
The goal is to be as invisible as possible to the end users.That's a pretty naive thing to say, given that an invisible department is one that gets its staff cut and isn't funded.
If the proper people are not aware that you're doing a good job, how do you expect them to justify paying for you??
?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351629</id>
	<title>Re:Stop asking to do stupid things</title>
	<author>techno-vampire</author>
	<datestamp>1245182040000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><i>- Setup accounts without passwords</i> <p>
I wouldn't mind doing that at all.  Of course, I'd also make sure all machines were set up so that you can't log in using an account that doesn't have a password.</p></htmltext>
<tokenext>- Setup accounts without passwords I would n't mind doing that at all .
Of course , I 'd also make sure all machines were set up so that you ca n't log in using an account that does n't have a password .</tokentext>
<sentencetext>- Setup accounts without passwords 
I wouldn't mind doing that at all.
Of course, I'd also make sure all machines were set up so that you can't log in using an account that doesn't have a password.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350333</id>
	<title>Response Time</title>
	<author>DeckardJK</author>
	<datestamp>1245177480000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
<htmltext>We have a help desk ticketing system into which automated issues get logged.  The on-call personnel will get pages.  Also... other individuals in the company can make requests and log issues into the system to assign them to groups or individuals.  The only metric really recorded is the time it takes to respond to the client or automated event.  The first concern is communicating early that the concern has been noticed and is/will be scheduled for work.</htmltext>
<tokenext>We have a help desk ticketing system that automated issues get logged in .
The on call personnel will get pages .
Also... other individuals in the company can make requests and log issues into the system to assign them to groups or individuals .
The only metric really recorded is response time to respond to the client or automated event .
The first concern is communicating early that the concern has been noticed and is/will be scheduled for work .</tokentext>
<sentencetext>We have a help desk ticketing system that automated issues get logged in.
The on call personnel will get pages.
Also... other individuals in the company can make requests and log issues into the system to assign them to groups or individuals.
The only metric really recorded is response time to respond to the client or automated event.
The first concern is communicating early that the concern has been noticed and is/will be scheduled for work.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351027</id>
	<title>ITIL</title>
	<author>kenp2002</author>
	<datestamp>1245179760000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>The metric is valid when looking at the model where you have INCIDENT MANAGEMENT versus PROBLEM MANAGEMENT.</p><p>That first line of call-in is about making sure the human caller gets to a human as quickly as possible. Within 15 minutes, flipping that call should be done OR it should be escalated to PROBLEM MANAGEMENT. The reasoning is that while you are talking with someone, there is another caller trying to get a hold of someone.</p><p>Turnaround time is relevant to INCIDENT MANAGEMENT versus PROBLEM MANAGEMENT. The problem is when there is not a clear difference between incident and problem management groups.</p><p>Three metrics that are needed:<br>Caller Hold Time<br>Call Turn Over Time<br>Ticket Resolve Time</p><p>Hold time is the customer's experience in getting their problem addressed. Not necessarily resolved, but addressed.</p><p>Call Turn Over Time is key in hinting at the type of problems. If 90\% of your calls are resolved in under 5 minutes, you more than likely have training issues. If 50\% are resolved in the first 5 minutes and 25\% are escalated to PROBLEM MANAGEMENT, then you may have a process failure or technical issue.</p><p>Ticket Resolve Time is, overall, the volume of trouble you have relative to the severity of the problem. Logging 1200 hours a week of SEV1 tickets tells of serious problems versus 1200 hours a week of SEV3 or 4 problems.</p><p>Mostly management uses those metrics for determining what areas need to be addressed. They are not performance metrics on their own; in fact, they are useless for measuring performance. You would need at least the \% of tickets escalated to even start determining performance.</p><p>This of course is under the assumption of a split between INCIDENT and PROBLEM management.</p></htmltext>
<tokenext>The metric is valid when looking at the model where you have INCIDENT MANAGEMENT versus PROBLEM MANAGEMENT.That first line of call-in is about making sure the human caller gets to a human as quickly as possible .
Within 15 minutes flipping that call should be done OR escalated to PROBLEM MANAGEMENT .
The reasoning is while you are talking with somone there is another caller trying to get a hold of someone.Turn Around time is relevant to INCIDENT MANAGEMENT versus PROBLEM MANAGEMENT .
The problem is when there is not a clear difference between incident and problem management groups.Three metrics that are needed : Caller Hold TimeCall Turn Over TimeTicket Resolve TimeHold time is the customer 's experience in getting thier problem addressed .
Not neccessarily resolved , but addressed.Call Turn Over Time is key on hinting at the type of problems .
If 90 \ % of your calls are resolved in under 5 minutes , you more then likely have training issues .
If 50 \ % are resolved in the first 5 minutes and 25 \ % are escalated to PROBLEM MANAGEMENT then you may have a process failure or technical issue.Ticket resolve time is over all the volume of touble you have in regards to the severity of the problem .
Logging 1200 hours a week of SEV1 tickets tells of serious problems verus 1200 hours a week of SEV3 or 4 problems.Mostly management uses those metric for determining what areas need to be addressed .
They are not performance metrics on their own , in fact useless for measuring performance .
You would need at least the \ % of tickets escalated to even start determining performance.This of couse is under the assumption of a split between INCIDENT and PROBLEM management .</tokentext>
<sentencetext>The metric is valid when looking at the model where you have INCIDENT MANAGEMENT versus PROBLEM MANAGEMENT.That first line of call-in is about making sure the human caller gets to a human as quickly as possible.
Within 15 minutes flipping that call should be done OR escalated to PROBLEM MANAGEMENT.
The reasoning is while you are talking with someone there is another caller trying to get a hold of someone.Turn Around time is relevant to INCIDENT MANAGEMENT versus PROBLEM MANAGEMENT.
The problem is when there is not a clear difference between incident and problem management groups.Three metrics that are needed:Caller Hold TimeCall Turn Over TimeTicket Resolve TimeHold time is the customer's experience in getting their problem addressed.
Not necessarily resolved, but addressed.Call Turn Over Time is key on hinting at the type of problems.
If 90\% of your calls are resolved in under 5 minutes, you more than likely have training issues.
If 50\% are resolved in the first 5 minutes and 25\% are escalated to PROBLEM MANAGEMENT then you may have a process failure or technical issue.Ticket resolve time is over all the volume of trouble you have in regards to the severity of the problem.
Logging 1200 hours a week of SEV1 tickets tells of serious problems versus 1200 hours a week of SEV3 or 4 problems.Mostly management uses those metrics for determining what areas need to be addressed.
They are not performance metrics on their own, in fact useless for measuring performance.
You would need at least the \% of tickets escalated to even start determining performance.This of course is under the assumption of a split between INCIDENT and PROBLEM management.
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355919</id>
	<title>Re:Time to close tickets is 1 factor, not the ONLY</title>
	<author>DigiShaman</author>
	<datestamp>1245159000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>... but should obviously have all been filed under one ticket.</p></div></blockquote><p>Absolutely. The problem isn't a technical issue per se. The problem here is customer service, or the lack of it.</p><p>Regardless of the technical steps required to assist you with the issue, the ticket should have been left open. Closure of a ticket should only happen once the technician has received an e-mail or voice verification that the problem has been resolved to a satisfactory level.</p><p>An internal IT department should be run like a business within a business. Sending out surveys to all employees is a good measure of how well you are seen and felt throughout the company. If there are any problems, you work them out internally. If they reveal your department in a positive light, can the CFO and the rest of the bean counters really provide a counter-argument to your relevancy?</p>
	</htmltext>
<tokenext>... but should obviously have all been filed under one ticket.Absolutely .
The problem is n't a technical issue per se .
The problem here is customer service , or the lack of it.Regardless of the technical steps required to assist you with the issue , the ticket should have been left open .
Closure of a ticket should only happen once the technician has received an e-mail or voice verification that the problem has been resolved to a satisfactory level.An internal IT department should be run like a business within a business .
Sending out surveys to all employees is a good measure of how well you are seen and felt throughout the company .
If there are any problems , you work them out internally .
If they reveal your department in a positive light , can the CFO and the rest of the bean counters really provide a counter argument to your relevancy ?</tokentext>
<sentencetext>... but should obviously have all been filed under one ticket.Absolutely.
The problem isn't a technical issue per se.
The problem here is customer service, or the lack of it.Regardless of the technical steps required to assist you with the issue, the ticket should have been left open.
Closure of a ticket should only happen once the technician has received an e-mail or voice verification that the problem has been resolved to a satisfactory level.An internal IT department should be run like a business within a business.
Sending out surveys to all employees is a good measure of how well you are seen and felt throughout the company.
If there are any problems, you work them out internally.
If they reveal your department in a positive light, can the CFO and the rest of the bean counters really provide a counter argument to your relevancy?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350449</id>
	<title>Re:obvious</title>
	<author>cbiltcliffe</author>
	<datestamp>1245177900000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Both of these are very nebulous, and virtually impossible to truly measure.</p><p>Is your customer satisfied because you did a good job?  Or because the last company they had to deal with was their communications provider who basically said "You don't owe that money?  Well, we say you do.  Pay up."</p><p>Have you not had any serious problems because IT is proactive at preventing them?  Or just because your setup negates most of the big problems that hit the news recently?</p><p>I frequently find that the idiot IT guy who gets people back up and running after a major worm infection, enabled by said IT guy's lack of security patching, gets much higher kudos than the one that did all the preventive maintenance beforehand, and didn't get their users infected in the first place.</p></htmltext>
<tokenext>Both of these are very nebulous , and virtually impossible to truly measure.Is your customer satisfied because you did a good job ?
Or because the last company they had to deal with was their communications provider who basically said " You do n't owe that money ?
Well , we say you do .
Pay up .
" Have you not had any serious problems because IT is proactive at preventing them ?
Or just because your setup negates most of the big problems that hit the news recently ? I frequently find that the idiot IT guy who gets people back up and running after a major worm infection , enabled by said IT guy 's lack of security patching , gets much higher kudos than the one that did all the preventive maintenance beforehand , and did n't get their users infected in the first place .</tokentext>
<sentencetext>Both of these are very nebulous, and virtually impossible to truly measure.Is your customer satisfied because you did a good job?
Or because the last company they had to deal with was their communications provider who basically said "You don't owe that money?
Well, we say you do.
Pay up.
"Have you not had any serious problems because IT is proactive at preventing them?
Or just because your setup negates most of the big problems that hit the news recently?I frequently find that the idiot IT guy who gets people back up and running after a major worm infection, enabled by said IT guy's lack of security patching, gets much higher kudos than the one that did all the preventive maintenance beforehand, and didn't get their users infected in the first place.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351095</id>
	<title>Re:Sliding Average</title>
	<author>lukas84</author>
	<datestamp>1245180120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Most helpdesk calls are just users being obnoxious brats.</p><p>The real approach would be to deduct 10\% of salary for each call to the helpdesk. If it turns out to be a real problem, the money will be refunded.</p><p>This will avoid useless calls to the helpdesk, and ensure that only real problems result in cases.</p></htmltext>
<tokenext>Most helpdesk calls are just users being obnoxious brats.The real approach would be to deduct 10 \ % salary for each call to the helpdesk .
If it turns out to be a real problem , the money will be refunded.This will avoid useless calls to helpdesk , and ensure that only real problems result in cases .</tokentext>
<sentencetext>Most helpdesk calls are just users being obnoxious brats.The real approach would be to deduct 10\% salary for each call to the helpdesk.
If it turns out to be a real problem, the money will be refunded.This will avoid useless calls to helpdesk, and ensure that only real problems result in cases.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350679</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350357</id>
	<title>time to resolve is sometimes misleading</title>
	<author>Col. Panic</author>
	<datestamp>1245177540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Time to resolve is misleading if the helpdesk clock doesn't start until they open the ticket. I get users all the time who say they spent an hour or more on the phone with the helpdesk. I ask them, for future reference, if the help desk technician cannot resolve their issue within 10-15 minutes, to have him open a ticket and escalate the issue.</p><p>It is inexcusable for a first-level tech who clearly doesn't know how to fix the issue to waste our client's time fumbling around and not resolving the issue. This is why we have tiers of support - log a ticket and send it to a more experienced technician who will save the client time. What I would like the first-level techs to do is track the tickets they could not resolve and later look them up and read the audit trail so they learn what the fix was.</p></htmltext>
<tokenext>if the helpdesk clock does n't start until they open the ticket .
i get users all the time who say they spent an hour or more on the phone with the helpdesk .
i ask them for future reference if the help desk technician can not resolve their issue within 10-15 minutes , to open a ticket and escalate the issue.it is inexcusable for a first level tech who clearly does n't know how to fix the issue to waste our client 's time fumbling around and not resolving the issue .
this is why we have tiers of support - log a ticket and send it to a more experienced technician who will save the client time .
what i would like the first level techs to do is track the tickets they could not resolve and later look them up and read the audit trail so they learn what the fix was .</tokentext>
<sentencetext>if the helpdesk clock doesn't start until they open the ticket.
i get users all the time who say they spent an hour or more on the phone with the helpdesk.
i ask them for future reference if the help desk technician cannot resolve their issue within 10-15 minutes, to open a ticket and escalate the issue.it is inexcusable for a first level tech who clearly doesn't know how to fix the issue to waste our client's time fumbling around and not resolving the issue.
this is why we have tiers of support - log a ticket and send it to a more experienced technician who will save the client time.
what i would like the first level techs to do is track the tickets they could not resolve and later look them up and read the audit trail so they learn what the fix was.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350581</id>
	<title>No Tickets Used??</title>
	<author>bouaketh</author>
	<datestamp>1245178320000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>We don't use trouble tickets here.  Our VPs are well enough aware that I am busy, as are the other 3 IT staffers trying to keep everybody moving forward.  Most issues are resolved on a FIFO basis and each staffer has their own AO.  Barring ISP issues, most problems get taken care of the same day.  Good infrastructure + good co-workers + VP support = happy and efficient IT people</htmltext>
<tokenext>We do n't use trouble tickets here .
Our VPs are well enough aware that I am busy , as are the other 3 IT staffers trying to keep everybody moving forward .
Most issues are resolved on a FIFO basis and each staffer has their own AO .
Barring ISP issues most problems get taken care of the same day .
Good infrastructure + good co-workers + VP support = Happy and Efficient IT people</tokentext>
<sentencetext>We don't use trouble tickets here.
Our VPs are well enough aware that I am busy, as are the other 3 IT staffers trying to keep everybody moving forward.
Most issues are resolved on a FIFO basis and each staffer has their own AO.
Barring ISP issues most problems get taken care of the same day.
Good infrastructure+ good co-workers+ VP support= Happy and Efficient IT people</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349743</id>
	<title>Now pay attention</title>
	<author>JamesP</author>
	<datestamp>1245175680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>s/metrics/bullcrap</p><p>A good metric should be:</p><p>1 - Enterprisey-looking<br>2 - Easy to game by the interested parties</p><p>Your boss wants a number, so give it to them quickly. It's all BS in the end (or 99\% of it at least. Don't agree? Do the job, then).</p><p>So good metrics could be:</p><p>- Unplanned downtime<br>- Number of users, number of bytes used, etc. (that plots a nice ascending graph, and ASCENDING IS GOOD; you can print that and put it on the wall)</p><p>If they stay on 'time to close the ticket', NEEDINFO and WORKSFORME are your friends.</p></htmltext>
<tokenext>s/metrics/bullcrapA good metric should be1 - Enterprisy looking2 - Easy to gamble by the interestedYour boss wants a number , give it to them quickly .
It 's all BS ( or 99 \ % of it at least .
Do n't agree ?
Do the job then ) in the end.So good metrics could be.- Unplanned downtime- Number of users , number of bytes used , etc ( that plots a nice ascending graph , and ASCENDING IS GOOD , you can print that and put it in the wall ) If they stay on 'time to close the ticket ' NEEDINFO and WORKSFORME is your friend .</tokentext>
<sentencetext>s/metrics/bullcrapA good metric should be1 - Enterprisy looking2 - Easy to gamble by the interestedYour boss wants a number, give it to them quickly.
It's all BS (or 99\% of it at least.
Don't agree?
Do the job then) in the end.So good metrics could be.- Unplanned downtime- Number of users, number of bytes used, etc (that plots a nice ascending graph, and ASCENDING IS GOOD, you can print that and put it in the wall)If they stay on 'time to close the ticket' NEEDINFO and WORKSFORME is your friend.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351855</id>
	<title>Availability - Fault Downtime</title>
	<author>bmsleight</author>
	<datestamp>1245182880000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Measure the availability of systems: the time from when a fault is opened until the fault is closed, so that if a fault is not really resolved it gets re-raised. It gets more complex; we use this for traffic systems (traffic lights in London, UK: <a href="http://www.barwap.com/files/2008/Oct/19/STRIVING\_FOR\_100\_percent\_AVAILABILITY\_OF\_LONDON-S\_TRAFFIC\_CONTROL\_SYSTEMS\_\_-2.pdf" title="barwap.com">Paper (PDF)</a> [barwap.com]) and CCTV across London.</htmltext>
<tokenext>Measure availability of systems .
The time a fault is opened until the fault is closed .
So that if a fault is not really resolved it gets re-raised .
It gets more complex , use this for traffic systems ( traffic Lights in London , UK .
Paper ( PDF ) [ barwap.com ] and CCTV across London .</tokentext>
<sentencetext>Measure availability of systems.
The time a fault is opened until the fault is closed.
So that if a fault is not really resolved it gets re-raised.
It gets more complex, use this for traffic systems (traffic Lights in London, UK.
Paper (PDF) [barwap.com] and CCTV across London.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356031</id>
	<title>Metrics</title>
	<author>Tired and Emotional</author>
	<datestamp>1245159720000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The devil is always in the details. If the team is simply doing support tasks, time to close is one reasonable part of a metric, but you need to at least measure customer satisfaction, since time-to-fix alone will tend to drive customer satisfaction down.
<p>
If they are responsible as well for implementing or administering the system generating the tickets, then arrival rates need to be measured too. You don't want the team creating easy-to-fix outages so that they can bias the metrics.
</p><p>
In general you need to work out what behaviors a given metric will tend to, or could, produce, and you need to combine the metric with elements that measure unwanted behaviors. You want the resulting metrics to be scale-free, so that the metric cannot be gamed simply by changing some parameter.
</p><p>
Simple example: suppose you measure a maintenance team by how many new regressions they create (per month, say) in the maintained system. The team can get zero (which in this case is good) by never fixing any existing bugs. So as a metric this is useless. Instead, of course, you should be measuring regressions per bug fixed. This is scale-free because you can measure over one month or one year and the size of the values will not change just because the period changed.</p></htmltext>
<tokenext>The devil is always in the details .
If the team is simply doing support tasks , time to close is one reasonable part of a metric , but you need to at least measure customer satisfaction since just time to fix will tend to drive customer satisfaction down .
If they are responsible as well for implementing or administrating the system generating the tickets , then arrival rates need to also be measured .
You do n't want the team creating easy to fix outages so that they can bias the metrics .
In general you need to try to work out what behaviors a given metric will tend to , or could , produce , and you need to combine the metric with elements that measure unwanted behaviors .
You want the resulting metrics to be scale free , so that the metric can not be gamed simply by changing some parameter .
Simple example , suppose you measure a maintenance team by how many new regressions they create ( per month say ) in the maintained system .
The team can get zero ( which in this case is good ) by never fixing any existing bugs ) .
So as a metric this is useless .
Instead of course , you should be measuring regressions per bug fixed .
This is scale free because you can measure over one month or one year and the size of the values will not change just because the period changed .</tokentext>
<sentencetext>The devil is always in the details.
If the team is simply doing support tasks, time to close is one reasonable part of a metric, but you need to at least measure customer satisfaction since just time to fix will tend to drive customer satisfaction down.
If they are responsible as well for implementing or administrating the system generating the tickets, then arrival rates need to also be measured.
You don't want the team creating easy to fix outages so that they can bias the metrics.
In general you need to try to work out what behaviors a given metric will tend to, or could, produce, and you need to combine the metric with elements that measure unwanted behaviors.
You want the resulting metrics to be scale free, so that the metric cannot be gamed simply by changing some parameter.
Simple example, suppose you measure a maintenance team by how many new regressions they create (per month say) in the maintained system.
The team can get zero (which in this case is good) by never fixing any existing bugs).
So as a metric this is useless.
Instead of course, you should be measuring regressions per bug fixed.
This is scale free because you can measure over one month or one year and the size of the values will not change just because the period changed.</sentencetext>
</comment>
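The scale-free regressions-per-bug-fixed metric described in the comment above can be sketched in a few lines. This is a minimal illustration, not the commenter's code; the function name and the guard for the zero-fixes case are my own choices:

```python
def regression_rate(regressions, bugs_fixed):
    """Regressions introduced per bug fixed; lower is better.

    Scale-free: doubling the measurement window roughly doubles both
    counts, so the ratio stays comparable across periods.
    """
    if bugs_fixed == 0:
        # No fixes at all should not read as a perfect score.
        return float("nan")
    return regressions / bugs_fixed

# One month vs. twelve months at the same quality level give the same rate:
assert regression_rate(2, 40) == regression_rate(24, 480)
```

Note how the raw "regressions per month" count would favor a team that fixes nothing, while the ratio cannot be gamed that way.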
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351431</id>
	<title>IT is measured by middle manager politics</title>
	<author>bodland</author>
	<datestamp>1245181440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>The more CYA, finger-pointing (FP) and douche-baggery (DBAG) done between IT managers, the better we must be doing.
<br>
<br>
All metrics are just a bastard child of inter-management dynamics, and are in themselves political tools used in executing CYA, FP and DBAG.
<br>
Most metrics are completely divorced from the real accomplishments that IT support people achieve every day. Nobody cares that the average time-to-resolution goal was exceeded by 38\%. What needs to be recognized is when IT saves the business money and increases user productivity through the individual actions of IT pros in the trenches.
<br>
But that simply is not possible when the accomplishments roll up to a management that is about reports, charts, projections and arcane metric tracking.</htmltext>
<tokenext>The more CYA , finger pointing ( FP ) and douche-baggery ( DBAG ) done between IT managers the better we must be doing .
[ br ] [ br ] All metrics are just a bastard-child of inter-management dynamics and are in themselves political tools used in executing CYA , FP and DBAG [ br ] Most metrics are completely divorced from the real accomplishments that IT support people achieve everyday .
Nobody cares that the average time to resolution goal was exceeded by 38 \ % .
What needs to be recognized is when IT saves the business money and increases user productivity due to the individual actions of IT pros in the trenches .
[ br ] But that simply is not possible when the accomplishments rollup to a managment that is about reports , charts and projections and arcane metric tracking .</tokentext>
<sentencetext>The more CYA, finger pointing (FP) and douche-baggery (DBAG) done between IT managers the better we must be doing.
[br]
[br]
All metrics are just a bastard-child of inter-management dynamics and are in themselves political tools used in executing CYA, FP and DBAG
[br]
Most metrics are completely divorced from the real accomplishments that IT support people achieve everyday.
Nobody cares that the average time to resolution goal was exceeded by 38\%.
What needs to be recognized is when IT saves the business money and increases user productivity due to the individual actions of IT pros in the trenches.
[br]
But that simply is not possible when the accomplishments roll up to a management that is about reports, charts and projections and arcane metric tracking.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349937</id>
	<title>Management gets the behavior that it rewards.</title>
	<author>xs650</author>
	<datestamp>1245176220000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>5</modscore>
	<htmltext>Management gets the behavior that it rewards, not necessarily the behavior that it pretends to ask for.</htmltext>
<tokenext>Management gets the behavior that it rewards , not necessarily the behavior that it pretends to ask for</tokentext>
<sentencetext>Management gets the behavior that it rewards, not necessarily the behavior that it pretends to ask for</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350367</id>
	<title>Re:When testing a new blade server install...</title>
	<author>Anonymous</author>
	<datestamp>1245177600000</datestamp>
	<modclass>Informative</modclass>
	<modscore>3</modscore>
	<htmltext><p>Just need to upgrade your network and disk I/O.  I get 14, easy. :-)</p><p>Seriously, though... I think the submitter is right.  You should be trying to reduce the total number of tickets, but then you've got to be wary of trying to improve your performance score by saying "That's too small an issue to be opening a ticket for.  I'll just ignore it/fix it on my lunch break/tell the user to bugger off."</p><p>I don't think any single metric is useful.  Probably something like:</p><p>average # of tickets open X 2 +<br>average hours from open to close X 5 +<br># of security breaches in past year X 100 +<br># of times with no open tickets in past week X 1 =</p><p>your IT performance score.  Obviously, lower is better.  Change the weightings to your preference, and if you'd rather a higher number be better, divide 10 by your result.</p><p>Surely somebody's got some formula like this already.  I wouldn't be surprised if there's some obscure standard somewhere that nobody uses because it'd make management look bad...</p></htmltext>
<tokenext>Just need to upgrade your network and disk I/O .
I get 14 , easy .
: - ) Seriously , though...I think the submitter is right .
You should be trying to reduce the total number of tickets , but then you 've got to be wary of trying to improve your performance score by saying " That 's too small an issue to be opening a ticket for .
I 'll just ignore it/fix it on my lunch break/tell the user to bugger off .
" I do n't think any single metric is useful .
Probably something like : average # of tickets open X 2 + average hours from open to close X 5 + # of security breaches in past year X 100 + # of times with no open tickets in past week X 1 = your IT performance score .
Obviously , lower is better .
Change the weightings to your preference , and if you 'd rather a higher number be better , divide 10 by your result.Surely somebody 's got some formula like this already .
I would n't be surprised if there 's some obscure standard somewhere that nobody uses because it 'll make management look bad..... .</tokentext>
<sentencetext>Just need to upgrade your network and disk I/O.
I get 14, easy.
:-)Seriously, though...I think the submitter is right.
You should be trying to reduce the total number of tickets, but then you've got to be wary of trying to improve your performance score by saying "That's too small an issue to be opening a ticket for.
I'll just ignore it/fix it on my lunch break/tell the user to bugger off.
"I don't think any single metric is useful.
Probably something like:average # of tickets open X 2 +average hours from open to close X 5 +# of security breaches in past year X 100 +# of times with no open tickets in past week X 1 =your IT performance score.
Obviously, lower is better.
Change the weightings to your preference, and if you'd rather a higher number be better, divide 10 by your result.Surely somebody's got some formula like this already.
I wouldn't be surprised if there's some obscure standard somewhere that nobody uses because it'll make management look bad......</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349631</parent>
</comment>
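The weighted score proposed in the comment above could be tallied as follows. This is a sketch using the commenter's own weights; the parameter names are invented for illustration, and as the comment says, the weightings are a matter of preference:

```python
def it_performance_score(avg_open_tickets, avg_hours_to_close,
                         breaches_past_year, times_queue_empty_past_week):
    """Weighted IT performance score from the comment; lower is better."""
    return (avg_open_tickets * 2
            + avg_hours_to_close * 5
            + breaches_past_year * 100
            + times_queue_empty_past_week * 1)
```

A sample reading: `it_performance_score(3, 4, 0, 1)` combines 3 average open tickets, a 4-hour average time to close, no breaches, and one empty-queue moment into a single number; per the comment, divide 10 by the result if you prefer higher-is-better.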
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351467</id>
	<title>Re:Stop asking to do stupid things</title>
	<author>Achromatic1978</author>
	<datestamp>1245181560000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><blockquote><div><p>Bring a 32-way server up this week, when your project hasn't been approved yet. These things take about a month to get delivered and another month to get installed, configured, connected to the SAN, and made ready for applications.</p></div></blockquote><p>A month to rackmount a server, install and configure the OS, and configure iSCSI or FC? Wow. No wonder your users are unhappy with their expectations of IT.</p>
	</htmltext>
<tokenext>Bring a 32-way server up this week , when your project has n't been approved yet .
These things take about a month to get delivered and another month to get installed , configured , connected to the SAN and ready for applicationsA month to rackmount a server , install and configure OS , configure iSCSI , or FC ?
Wow. No wonder your users are unhappy with their expectations of IT .</tokentext>
<sentencetext>Bring a 32-way server up this week, when your project hasn't been approved yet.
These things take about a month to get delivered and another month to get installed, configured, connected to the SAN and ready for applicationsA month to rackmount a server, install and configure OS, configure iSCSI, or FC?
Wow. No wonder your users are unhappy with their expectations of IT.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352101</id>
	<title>Re:No cnt++</title>
	<author>Anonymous</author>
	<datestamp>1245184020000</datestamp>
	<modclass>Informative</modclass>
	<modscore>1</modscore>
	<htmltext><div class="quote"><p>I thought IT got paid for the number of times they said 'No' to us during the day.</p><p>go figure.</p></div><p>Most of my no's come from protecting the user from themselves.</p>
	</htmltext>
<tokenext>I thought IT got paid for the number of times they said 'No ' to us during the day.go figure.Most of my no 's come from protecting the user from themselves .</tokentext>
<sentencetext>I thought IT got paid for the number of times they said 'No' to us during the day.go figure.Most of my no's come from protecting the user from themselves.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350683</id>
	<title>Re:When testing a new blade server install...</title>
	<author>Anonymous</author>
	<datestamp>1245178560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Or we could just be BOFHs and eat their heads off, a la <a href="http://www.youtube.com/watch?v=LxsULGo24qI" title="youtube.com" rel="nofollow">this video</a> [youtube.com] (NSFW).</p></htmltext>
<tokenext>Of we could just be BOFHs and eat their heads off , ala this video [ youtube.com ] ( NSFW ) .</tokentext>
<sentencetext>Of we could just be BOFHs and eat their heads off, ala this video [youtube.com] (NSFW).</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349631</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</id>
	<title>Sounds good to me.</title>
	<author>khasim</author>
	<datestamp>1245175980000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>5</modscore>
	<htmltext><p>For example, for every fax successfully sent via the fax server without IT intervention, the IT department gets one point.</p><p>For every fax that needs IT intervention to be sent, the IT department loses one point.</p><p>For every person who becomes aware of a problem with the fax server, the IT department loses one point. No more "heroics". The goal is to be as invisible as possible to the end users.</p><p>And similar items for every other server/service that IT supports. If nothing else, it will show exactly where the problems really are.</p></htmltext>
<tokenext>For example , for every fax successfully sent via the fax server without IT intervention , the IT department gets one point.For every fax that needs IT intervention to be sent , the IT department loses one point.For every person who becomes aware of a problem with the fax server , the IT department loses one point .
No more " heroics " .
The goal is to be as invisible as possible to the end users.And similar items for every other server/service that IT supports .
If nothing else , it will show exactly where the problems really are .</tokentext>
<sentencetext>For example, for every fax successfully sent via the fax server without IT intervention, the IT department gets one point.For every fax that needs IT intervention to be sent, the IT department loses one point.For every person who becomes aware of a problem with the fax server, the IT department loses one point.
No more "heroics".
The goal is to be as invisible as possible to the end users.And similar items for every other server/service that IT supports.
If nothing else, it will show exactly where the problems really are.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28360929</id>
	<title>common sense metrics help IT improve</title>
	<author>jfederline</author>
	<datestamp>1245250440000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>&gt;&gt;&gt;Shouldn't we be focused on reducing calls, rather than simply closing them quickly?
<p>
Yes, definitely. Furthermore, you should be focused on receiving calls about NEW issues all the time. Receiving calls on known issues over and over means the root causes aren't being fixed by IT. Service Desk managers mistakenly use "First Call Resolution Rate" as a metric - how many calls did the Service Desk resolve on the same call on which the user registered the issue. This is a false positive - if this is "nice and high" like so many misguided managers want, all it means is that you employ a bunch of robots who are answering the same question over and over and over, and IT still sucks.
</p><p>
&gt;&gt;&gt;My question is: How is your IT performance measured, and how do you think it should be measured?"
</p><p>
Malcolm Fry has written and presented extensively on the common-sense metrics for an IT service desk to track for all IT operational processes.
</p><p>
Some paraphrased excerpts from memory for INCIDENT MGMT:
</p><p>
Total number of incidents - over time, broken down by day and time-of-day. Use this to predict and manage workload and staff at baseline.
</p><p>
Mean elapsed time to achieve incident resolution or circumvention - also for managing workload and staff, because if it takes longer, you need more staff. Notice you don't pick an arbitrary time for a call to last - it lasts however long it has to last to get the customer working again.
</p><p>
First call resolution - long touted as a "great" service desk metric if it is "nice and high", this number should be low and plateau low eventually if PROBLEM MANAGEMENT is doing its job - fixing root causes. The service desk should have to solve/dispatch NEW and UNKNOWN issues more often than being a robot, since that says IT is solving old and existing problems before new ones crop up - PROACTIVENESS. Correlate this metric with the number of incidents over time to see if IT fixing things allows you to ramp down Service Desk staff.
</p><p>
There are ways to use metrics to improve the org's performance, but ignorant managers frequently use metrics for personal gain and not organizational gain. They should have their bonuses withheld automagically.
</p></htmltext>
<tokenext>&gt; &gt; &gt; Should n't we be focused on reducing calls , rather than simply closing them quickly ?
Yes , definitely .
Furthermore , you should be focused on receiving calls about NEW issues all the time .
Receiving calls on known issues over and over means the root causes are n't being fixed by IT .
Service Desk managers mistakenly use " First Call Resolution Rate " as a metric - how many calls did the Service Desk resolve on the same call that the user registered the issue with .
This is a false possitive - if this is " nice and high " like so many misguided managers want , all it means is that you employ a bunch of robots who are answering the same question over and over and over , and IT still sucks .
&gt; &gt; &gt; My question is : How is your IT performance measured , and how do you think it should be measured ?
" Malcolm Fry has written and presented extensively on the common-sense metrics for an IT service desk to track for all IT operational processes .
Some paraphrased excerpts from memory for INCIDENT MGMT : Total number of incidents - over time , broken down by day and time-of-day .
Use to predict and manage workload and staff at baseline .
Mean elapsed time to achieve incident resolution or circumvention - Also for managing workload and staff , because if it takes longer , you need more staff .
Notice you do n't pick an arbitrary time for a call to last - it lasts however long it has to last to get the customer working again .
First call resolution - long touted as a " great " service desk metric if it is " nice and high " , this number should be low and plateau low eventually if PROBLEM MANAGEMENT is doing its job - fixing root causes .
The service desk should have to solve/dispatch NEW and UNKNOWN issues more often than being a robot , since that says IT is solving old and existing problems before new ones crop up - PROACTIVENESS .
Correlate this metric with the Number of Incidents over time to see if IT fixing things allows you to ramp down Service Desk staff .
There are ways to use metrics to improve the org performance , but ignorant managers frequently use metrics for personal gain and not organizational gain .
They should have their bonuses withheld automagically .</tokentext>
<sentencetext>&gt;&gt;&gt;Shouldn't we be focused on reducing calls, rather than simply closing them quickly?
Yes, definitely.
Furthermore, you should be focused on receiving calls about NEW issues all the time.
Receiving calls on known issues over and over means the root causes aren't being fixed by IT.
Service Desk managers mistakenly use "First Call Resolution Rate" as a metric - how many calls did the Service Desk resolve on the same call that the user registered the issue with.
This is a false positive - if this is "nice and high" like so many misguided managers want, all it means is that you employ a bunch of robots who are answering the same question over and over and over, and IT still sucks.
&gt;&gt;&gt;My question is: How is your IT performance measured, and how do you think it should be measured?
"



Malcolm Fry has written and presented extensively on the common-sense metrics for an IT service desk to track for all IT operational processes.
Some paraphrased excerpts from memory for INCIDENT MGMT:



Total number of incidents - over time, broken down by day and time-of-day.
Use to predict and manage workload and staff at baseline.
Mean elapsed time to achieve incident resolution or circumvention - Also for managing workload and staff, because if it takes longer, you need more staff.
Notice you don't pick an arbitrary time for a call to last - it lasts however long it has to last to get the customer working again.
First call resolution - long touted as a "great" service desk metric if it is "nice and high", this number should be low and plateau low eventually if PROBLEM MANAGEMENT is doing its job - fixing root causes.
The service desk should have to solve/dispatch NEW and UNKNOWN issues more often than being a robot, since that says IT is solving old and existing problems before new ones crop up - PROACTIVENESS.
Correlate this metric with the Number of Incidents over time to see if IT fixing things allows you to ramp down Service Desk staff.
There are ways to use metrics to improve the org performance, but ignorant managers frequently use metrics for personal gain and not organizational gain.
They should have their bonuses withheld automagically.
</sentencetext>
</comment>
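The incident metrics paraphrased from Malcolm Fry in the comment above (incident volume, mean time to resolution, first-call resolution rate) could be tallied with a short sketch like this. The record field names are illustrative, not taken from any real ticketing tool:

```python
from statistics import mean

def incident_metrics(incidents):
    """Summarize a list of incident records.

    Each record is a dict with 'opened' and 'resolved' timestamps (hours)
    and a 'first_call' flag -- hypothetical field names for illustration.
    """
    times_to_resolve = [i["resolved"] - i["opened"] for i in incidents]
    return {
        "total": len(incidents),
        "mean_time_to_resolve": mean(times_to_resolve),
        "first_call_resolution_rate":
            sum(i["first_call"] for i in incidents) / len(incidents),
    }
```

Per the comment, the volume and mean-time figures drive staffing, while a persistently high first-call-resolution rate is a warning sign that problem management is not fixing root causes.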
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350879</id>
	<title>Too funny</title>
	<author>jimbobborg</author>
	<datestamp>1245179280000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Are we talking general IT support or just Help Desk?</p><p>Personally, I work server administration.  If my servers are offline without notification, that should be a ping.  If I can keep my servers going without any outages that users can see, I'm doing my job.</p><p>Unfortunately, I've had managers who've only worked with regular office workers or with programmers.  They don't quite "get" what I do, so they start demanding metrics.  They're uncomfortable with just leaving me/us alone and doing our jobs.</p></htmltext>
<tokenext>Are we talking general IT support or just Help Desk ? Personally , I work server administration .
If my servers are offline without notification , that should be a ping .
If I can keep my servers going without any outages that users can see , I 'm doing my job.Unfortunately , I 've had managers who 've only worked with regular office workers or with programmers .
They do n't quite " get " what I do , so they start demanding metrics .
They 're uncomfortable with just leaving me/us alone and doing our jobs .</tokentext>
<sentencetext>Are we talking general IT support or just Help Desk?Personally, I work server administration.
If my servers are offline without notification, that should be a ping.
If I can keep my servers going without any outages that users can see, I'm doing my job.Unfortunately, I've had managers who've only worked with regular office workers or with programmers.
They don't quite "get" what I do, so they start demanding metrics.
They're uncomfortable with just leaving me/us alone and doing our jobs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350067</id>
	<title>How quickly can you close calls?</title>
	<author>randomnote1</author>
	<datestamp>1245176640000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Sounds pretty normal for a call center.  At my last job, management got excited if a case was open for over two weeks, regardless of whether the issue was resolved.  That's what I call great customer service!</htmltext>
<tokenext>Sounds pretty normal for a call center .
At my last job management got excited if a case was open for over two weeks regardless if the issue was resolved or not .
That 's what I call great customer service !</tokentext>
<sentencetext>Sounds pretty normal for a call center.
At my last job management got excited if a case was open for over two weeks regardless if the issue was resolved or not.
That's what I call great customer service!</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715</id>
	<title>Stupid metrics</title>
	<author>Aladrin</author>
	<datestamp>1245178680000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>3</modscore>
	<htmltext><p>Stupid metrics are part of the problem.  When I worked for Gateway, they wanted your call average to be between 7 and 11 minutes.  If you went above for the week/month, you were too slow and bad at your job.  If you went below, you were probably just getting people off the phone without solving their problems.</p><p>That metric worked for most people, because they talk slowly and have to look up every single issue.</p><p>For me, it was killer.  I was consistently getting 5-minute averages, even with that inevitable once-a-day 1-hour phone call.  I got reprimanded twice about it before I gave up and quit.  Almost every caller was happy with how I helped them.  The others couldn't be helped, or I made a mistake.  (I told a guy he could clean his keyboard, once...  They had switched to keyboards that fall apart if you try to open them, apparently.  In my defense, I had offered to send one, but the guy thought cleaning it would be a lot faster.)</p><p>Also note that a certain percentage of calls were recorded and reviewed, and I -never- got talked to about any of my calls.  The only complaint I had was the keyboard guy.  And yet I still got yelled at for short call times.</p><p>Again, stupid metrics are stupid.  Call-time has nothing to do with customer satisfaction.</p></htmltext>
<tokenext>Stupid metrics are part of the problem .
When I worked for Gateway , they wanted your call average to be between 7 and 11 minutes .
If you went above for the week/month , you were too slow and bad at your job .
If you went below , you were probably just getting people off the phone without solving their problems.That metric worked for most people , because they talk slow and have to look up every single issue.For me , it was killer .
I was consistently getting 5 minutes averages , even with that inevitable once-a-day 1-hour phone call .
I got reprimanded twice about it before I gave up and quit .
Almost every caller was happy with how I helped them .
The others could n't be helped , or I made a mistake .
( I told a guy he could clean his keyboard , once... They had switched to keyboards that fall apart if you try to open them , apparently .
In my defense , I had offered to send one , but the guy thought cleaning it would be a lot faster .
) Also note that a certain percentage of calls were recorded and reviewed , and I -never- got talked to about any of my calls .
The only complaint I had was the keyboard guy .
And yet I still got yelled at for short call times.Again , stupid metrics are stupid .
Call-time has nothing to do with customer satisfaction .</tokentext>
<sentencetext>Stupid metrics are part of the problem.
When I worked for Gateway, they wanted your call average to be between 7 and 11 minutes.
If you went above for the week/month, you were too slow and bad at your job.
If you went below, you were probably just getting people off the phone without solving their problems.That metric worked for most people, because they talk slow and have to look up every single issue.For me, it was killer.
I was consistently getting 5 minutes averages, even with that inevitable once-a-day 1-hour phone call.
I got reprimanded twice about it before I gave up and quit.
Almost every caller was happy with how I helped them.
The others couldn't be helped, or I made a mistake.
(I told a guy he could clean his keyboard, once...  They had switched to keyboards that fall apart if you try to open them, apparently.
In my defense, I had offered to send one, but the guy thought cleaning it would be a lot faster.
)Also note that a certain percentage of calls were recorded and reviewed, and I -never- got talked to about any of my calls.
The only complaint I had was the keyboard guy.
And yet I still got yelled at for short call times.Again, stupid metrics are stupid.
Call-time has nothing to do with customer satisfaction.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28365709</id>
	<title>Re:count tickets never openend</title>
	<author>webscathe</author>
	<datestamp>1245230340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>The company I'm working for now is starting to go crazy about KPIs (Key Performance Indicators)... Total tickets closed is ridiculous; the only thing that metric shows (more tickets closed is better??) is how much stuff is screwed up and needs to be fixed. There are only two metrics I can think of that have any value to IT.</p><p>MTBF - Mean Time Between Failures<br>
How long have we gone between thing X breaking?</p><p>MTTR - Mean Time To Recovery<br>
How long does it take you to fix thing X when it breaks?</p><p>Both of these metrics have the benefit of expressing the value of proactive IT work.</p></htmltext>
<tokenext>The company I 'm working for now is starting to go crazy about KPI 's , Key Performance Indicators... Total tickets closed is ridiculous , the only metric that shows ( more tickets closed is better ? ?
) is how much stuff is screwed up and needs to be fixed .
There are only two things I can think of that have any value to IT.MTBF - Mean Time Between Failures How long have we gone between thing X breaking ? MTTR - Mean Time To Recovery How long does it take you to fix thing X when it breaks ? Both of these metrics have the benefit of expressing the value of proactive IT work .</tokentext>
<sentencetext>The company I'm working for now is starting to go crazy about KPI's, Key Performance Indicators... Total tickets closed is ridiculous, the only metric that shows (more tickets closed is better??
) is how much stuff is screwed up and needs to be fixed.
There are only two things I can think of that have any value to IT.MTBF - Mean Time Between Failures
How long have we gone between thing X breaking?MTTR - Mean Time To Recovery
How long does it take you to fix thing X when it breaks?Both of these metrics have the benefit of expressing the value of proactive IT work.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354839</id>
	<title>IT should *not* (&amp; can not!) make itself obsol</title>
	<author>jonaskoelker</author>
	<datestamp>1245153000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><div class="quote"><p>An IT-department, IMHO, should be working on making itself obsolete.</p></div><p>I disagree.</p><p>As an inspirational aside, consider <a href="http://en.wikipedia.org/wiki/Peelian_Principles" title="wikipedia.org">Peel's Principles</a> [wikipedia.org] about what an ethical police force is.</p><p><div class="quote"><p>Police, at all times, should maintain a relationship with the public that gives reality to the historic tradition that the police are the public and the public are the police; <em>the police being only members of the public who are paid to give full-time attention to duties which are incumbent upon every citizen</em> in the interests of community welfare and existence.</p></div><p>I must've skipped the history lesson where all the other kids were told that the public is the police and vice versa, so I don't have much to say about it.</p><p>What I think is relevant is the idea of paying someone to pay full-time attention to a particular task even if others could do it.  It takes time filling out order forms for more backup tapes and drives for the storage system.  It takes time repairing the tape robot (or being the tape monkey).  It takes time installing ssh-tunneled IRC daemons and mail servers and... whatever other tasks the IT department is tasked with.</p><p>Plus, I question the degree to which you can obviate the need for specialized knowledge.  Even if it's a piece of cake installing ssh-tunneled IRC daemons, knowing that you want <em>that</em> rather than Jabber daemons (or sending internal memos through MSN servers!) is not trivial.</p><p>Any department should work on (1) doing what it's there to do; and (2) doing it more effectively.  But its tasks (presumably) need to be done.  By dissolving the department, you only shift the tasks onto someone else.  
Is that really the smart(est) thing to do?</p><p><div class="quote"><p>A nice metric might be the count of tickets that are never opened.</p></div><p>I agree with this, wholeheartedly!  So does Robert Peel:</p><p><div class="quote"><p>The test of police efficiency is the absence of crime and disorder, not the visible evidence of police action in dealing with it.</p></div></div>
	</htmltext>
<tokenext>An IT-department , IMHO , should be working on making itself obsolete.I disagree.As an inspirational aside , consider Peel 's Principles [ wikipedia.org ] about what an ethical police force is.Police , at all times , should maintain a relationship with the public that gives reality to the historic tradition that the police are the public and the public are the police ; the police being only members of the public who are paid to give full-time attention to duties which are incumbent upon every citizen in the interests of community welfare and existence.I must 've skipped the history lesson where all the other kids were told that the public is the police and vice versa , so I do n't have much to say about it.What I think is relevant is the idea of paying someone to pay full-time attention to a particular task even if others could do it .
It takes time filling out order forms for more backup tapes and drives for the storage system .
It takes time repairing the tape robot ( or being the tape monkey ) .
It takes time installing ssh-tunneled IRC daemons and mail servers and... whatever other tasks the IT department is tasked with.Plus , I question the degree to which you can obviate the need for specialized knowledge .
Even if it 's a piece of cake installing ssh-tunneled IRC daemons , knowing that you want that rather than jabbber daemons ( or sending internal memos through MSN servers !
) is not trivial.Any department should work on ( 1 ) doing what it 's there to do ; and ( 2 ) do it more effectively .
But its tasks ( presumably ) need to be done .
By dissolving the department , you only shift the tasks onto someone else .
Is that really the smart ( est ) thing to do ? A nice metric might be the count of tickets that are never opened.I agree with this , wholeheartedly !
So does Robert Peel : The test of police efficiency is the absence of crime and disorder , not the visible evidence of police action in dealing with it .</tokentext>
<sentencetext>An IT-department, IMHO, should be working on making itself obsolete.I disagree.As an inspirational aside, consider Peel's Principles [wikipedia.org] about what an ethical police force is.Police, at all times, should maintain a relationship with the public that gives reality to the historic tradition that the police are the public and the public are the police; the police being only members of the public who are paid to give full-time attention to duties which are incumbent upon every citizen in the interests of community welfare and existence.I must've skipped the history lesson where all the other kids were told that the public is the police and vice versa, so I don't have much to say about it.What I think is relevant is the idea of paying someone to pay full-time attention to a particular task even if others could do it.
It takes time filling out order forms for more backup tapes and drives for the storage system.
It takes time repairing the tape robot (or being the tape monkey).
It takes time installing ssh-tunneled IRC daemons and mail servers and... whatever other tasks the IT department is tasked with.Plus, I question the degree to which you can obviate the need for specialized knowledge.
Even if it's a piece of cake installing ssh-tunneled IRC daemons, knowing that you want that rather than jabbber daemons (or sending internal memos through MSN servers!
) is not trivial.Any department should work on (1) doing what it's there to do; and (2) do it more effectively.
But its tasks (presumably) need to be done.
By dissolving the department, you only shift the tasks onto someone else.
Is that really the smart(est) thing to do?A nice metric might be the count of tickets that are never opened.I agree with this, wholeheartedly!
So does Robert Peel:The test of police efficiency is the absence of crime and disorder, not the visible evidence of police action in dealing with it.
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351519</id>
	<title>Not just a problem of metrics</title>
	<author>Anonymous</author>
	<datestamp>1245181740000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>The problem is not just one of meeting metrics; even when there is a customer focus, there is still a trade-off between how thoroughly you fix a problem vs. how long you take (and how long your customer is without their computer/internet).</p><p>For example, in college I worked at my campus' ResNet department, and generally students would come down with problems only when issues got so bad that they couldn't start their computer. A lot of the time, this could be fixed in a few hours by running chkdsk /r from the recovery console. However, upon fixing the bluescreen, it became apparent that the computer was also loaded down with gobs of malware. The trouble ticket was "solved" within a few hours, but it was clear that there were more issues - do we return the laptop and check off an issue, or actually try to improve the student's computer?</p><p>Our general MO was to get the computer entirely virus/malware clean and updated, but from time to time when we encountered a new piece of malware, it would take us over a week to fix things, and also there was a fair bit of redundant effort ('well, we think we cleaned this, but let's run another Spybot scan, just to be sure') or people who just didn't know what they were doing. This MO was leading to far too great a focus on depth of service, and emphasising metrics actually helped us better our turnaround time, which in turn improved our reputation throughout campus; only very rarely would we hold onto a computer for a week.</p><p>The trick is setting up a good balance between depth of service and turnaround time.</p></htmltext>
<tokenext>The problem is not just one of meeting metrics , even when there is a customer focus , there is still a trade-off between how thuroughly you fix a problem vs. how long you take ( and how long your customer is without their computer/internet ) .For example , in college I worked at my campus ' ResNet department , and generally students would come down with problems only when issues got so bad that they could n't start their computer .
A lot of time , this could be fixed in a few hours by running chkdsk/r from recovery console .
However , upon fixing the bluescreen , it became aparent that the computer was also loaded down with gobs of malware .
The trouble ticket was " solved " within a few hours , but it was clear that there were more issues - do we return the laptop and check off an issue , or actually try to improve the students computer ? Our general MO was to get the computer entirely virus/malware clean and updated , but from time to time when we encountered a new piece of malware , it would take usover a week to fix things , and also there was a fair bit of redundent effort ( 'well , we think we cleaned this , but lets run another spybot scan , just to be sure ' ) or people who just did n't know what they were doing .
This MO was leading to far to great a focus on depth of service , and emphasising metrics actually helped us better our turnaround time , which in turn improved our reputation throughout campus , and only very rarely would we hold onto a computer for a week.The trick is setting up a good balance between depth of service and turnaround time .</tokentext>
<sentencetext>The problem is not just one of meeting metrics, even when there is a customer focus, there is still a trade-off between how thuroughly you fix a problem vs. how long you take (and how long your customer is without their computer/internet).For example, in college I worked at my campus' ResNet department, and generally students would come down with problems only when issues got so bad that they couldn't start their computer.
A lot of time, this could be fixed in a few hours by running chkdsk/r from recovery console.
However, upon fixing the bluescreen, it became aparent that the computer was also loaded down with gobs of malware.
The trouble ticket was "solved" within a few hours, but it was clear that there were more issues - do we return the laptop and check off an issue, or actually try to improve the students computer?Our general MO was to get the computer entirely virus/malware clean and updated, but from time to time when we encountered a new piece of malware, it would take usover a week to fix things, and also there was a fair bit of redundent effort ('well, we think we cleaned this, but lets run another spybot scan, just to be sure') or people who just didn't know what they were doing.
This MO was leading to far to great a focus on depth of service, and emphasising metrics actually helped us better our turnaround time, which in turn improved our reputation throughout campus, and only very rarely would we hold onto a computer for a week.The trick is setting up a good balance between depth of service and turnaround time.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350223</id>
	<title>Social metrics</title>
	<author>davecrusoe</author>
	<datestamp>1245177180000</datestamp>
	<modclass>Offtopic</modclass>
	<modscore>0</modscore>
	<htmltext>We're doing a lot of social outreach, and are measured by metrics like how many new members join through that outreach. We're still searching for the best metric to measure our progress in this realm.

To that extent, we had to develop our own tool (!), available for free to others at <a href="http://www.sociafyq.com/" title="sociafyq.com">http://www.sociafyq.com/</a> [sociafyq.com] .

Cheers,
--Dave</htmltext>
<tokenext>We 're doing a lot of social outreach , and measured by metrics like how many new members join through our outreach .
We 're still searching for the best metric to measure our progress in this realm .
To that extent , we had to develop our own tool ( !
) , available for free to others at http : //www.sociafyq.com/ [ sociafyq.com ] .
Cheers , --Dave</tokentext>
<sentencetext>We're doing a lot of social outreach, and measured by metrics like how many new members join through our outreach.
We're still searching for the best metric to measure our progress in this realm.
To that extent, we had to develop our own tool (!
), available for free to others at http://www.sociafyq.com/ [sociafyq.com] .
Cheers,
--Dave</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349703</id>
	<title>Reducing calls...</title>
	<author>Anonymous</author>
	<datestamp>1245175560000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>...reduces jobs.</p></htmltext>
<tokenext>...reduces jobs .</tokentext>
<sentencetext>...reduces jobs.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659</id>
	<title>I think it should be measured...</title>
	<author>IntricateEnigma</author>
	<datestamp>1245175380000</datestamp>
	<modclass>Funny</modclass>
	<modscore>5</modscore>
	<htmltext><p>...by the number of callers left alive at the end of the day.</p></htmltext>
<tokenext>...by the number of callers left alive at the end of the day .</tokentext>
<sentencetext>...by the number of callers left alive at the end of the day.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351883</id>
	<title>Before you apply metrics...</title>
	<author>SloppyElvis</author>
	<datestamp>1245183060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Before you apply metrics you need to define the problem you are trying to solve.

<ol>
<li>Determine what it is that needs improving</li><li>Determine the symptoms/indications that either contribute to or result from the thing that needs improving</li><li>Determine the metrics which can quantify the symptoms/indications</li><li>Create a plan to improve the thing that needs improving</li><li>Execute the plan to improve the thing that needs improving</li><li>Measure if plan execution is having any effect on the symptoms/indications using the metrics</li><li>Evaluate if you have the proper execution</li><li>Evaluate if you have the proper plan</li><li>Evaluate if you have the proper metrics</li><li>Evaluate if you have the proper symptoms/indications</li><li>Evaluate if you have the proper thing to improve</li><li>Rinse and Repeat</li></ol><p>

It is madness to measure for the sake of measuring.</p></htmltext>
<tokenext>Before you apply metrics you need to define the problem you are trying to solve .
Determine what it is that needs improvingDetermine the symptoms/indications that either contribute to or result from the thing that needs improvingDetermine the metrics which can quantify the symptoms/indicationsCreate a plan to improve the thing that needs improvingExecute the plan to improve the thing that needs improvingMeasure if plan execution is having any affect on the symptoms/indications using the metricsEvaluate if you have the proper executionEvaluate if you have the proper planEvaluate if you have the proper metricsEvaluate if you have the proper symptoms/indicationsEvaluate if you have the proper thing to improveRinse and Repeat It is madness to measure for the sake of measuring .</tokentext>
<sentencetext>Before you apply metrics you need to define the problem you are trying to solve.
Determine what it is that needs improvingDetermine the symptoms/indications that either contribute to or result from the thing that needs improvingDetermine the metrics which can quantify the symptoms/indicationsCreate a plan to improve the thing that needs improvingExecute the plan to improve the thing that needs improvingMeasure if plan execution is having any affect on the symptoms/indications using the metricsEvaluate if you have the proper executionEvaluate if you have the proper planEvaluate if you have the proper metricsEvaluate if you have the proper symptoms/indicationsEvaluate if you have the proper thing to improveRinse and Repeat

It is madness to measure for the sake of measuring.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351721</id>
	<title>Measure first, improve second</title>
	<author>Mike_K</author>
	<datestamp>1245182340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>You cannot improve your performance if you cannot measure it. I think that the metric of "time to resolution" is a bad first try, but the direction of thought isn't bad - if you aren't trying to game the system, you generally want to resolve issues quickly.</p><p>I think that what you really want is the total length of time all tickets were open. That way, closing a ticket after resolving one minor issue only to open another one for the next minor issue does not give you an advantage.</p><p>You certainly want to divide that by the size of the group you are supporting. And you probably want to penalize issues that are affecting multiple people.</p><p>So something like: sum over all tickets (ticket_open_time * #of_people_affected^1.2) / size_of_group    [ the 1.2 is a random # I pulled out of my a** ]</p><p>m</p></htmltext>
<tokenext>You can not improve your performance if you can not measure it .
I think that the metric of " time to resolution " is a bad first try , but the direction of thought is n't bad - if you are n't trying to game the system , you generally want to resolve issues quickly.I think that what you really want is the length of time all tickets were opened .
That way closing a ticket after resolving one minor issue only to open another one for the next minor issue does not give you an advantage.You certainly want to divide that by the size of the group you are supporting .
And you probably want to penalize issues that are affecting multiple people.So something like : sum over all tickets ( ticket \ _open \ _time * # of \ _people \ _affected ^ 1.2 ) / size \ _of \ _group [ the 1.2 is a random # I pulled out of my a * * ] m</tokentext>
<sentencetext>You cannot improve your performance if you cannot measure it.
I think that the metric of "time to resolution" is a bad first try, but the direction of thought isn't bad - if you aren't trying to game the system, you generally want to resolve issues quickly.I think that what you really want is the length of time all tickets were opened.
That way closing a ticket after resolving one minor issue only to open another one for the next minor issue does not give you an advantage.You certainly want to divide that by the size of the group you are supporting.
And you probably want to penalize issues that are affecting multiple people.So something like: sum over all tickets (ticket\_open\_time * #of\_people\_affected^1.2) / size\_of\_group    [ the 1.2 is a random # I pulled out of my a** ]m</sentencetext>
</comment>
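Mike_K's formula can be written out directly. A minimal sketch; the function name and the example numbers are invented for illustration, and 1.2 is the admittedly arbitrary exponent from the comment:

```python
def weighted_open_time(tickets, group_size, exponent=1.2):
    """Sum of ticket open time weighted by people affected, per supported user.

    tickets: iterable of (open_hours, people_affected) pairs.
    An exponent > 1 penalizes issues that hit many people at once;
    1.2 is the commenter's arbitrary choice.
    """
    return sum(hours * affected ** exponent
               for hours, affected in tickets) / group_size

# One 10-hour single-user ticket and one 2-hour ticket affecting 4 people,
# for a 100-person group: the multi-user outage weighs more than its raw hours.
score = weighted_open_time([(10, 1), (2, 4)], group_size=100)
```

Lower is better, and because the score is driven by total open time rather than time-to-close, splitting one problem into several quickly-closed tickets gains nothing.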
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351079</id>
	<title>Better metrics</title>
	<author>DaveV1.0</author>
	<datestamp>1245180060000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Five good metrics:</p><p>Money saved/earned through the use of IT.<br>man-hours and/or dollars lost to IT issues<br>"   per ticket<br>"    caused by users not following policy/directions<br>"    caused by (mis)management</p></htmltext>
<tokenext>Five good metrics : Money saved/earned through the use of IT.man-hours and/or dollars lost to IT issues " per ticket " caused by users not following policy/directions " caused by ( mis ) management</tokentext>
<sentencetext>Five good metrics:Money saved/earned through the use of IT.man-hours and/or dollars lost to IT issues"   per ticket"    caused by users not following policy/directions"    caused by (mis)management</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349727</id>
	<title>My metric?</title>
	<author>courtjester801</author>
	<datestamp>1245175680000</datestamp>
	<modclass>Flamebait</modclass>
	<modscore>0</modscore>
	<htmltext>Showing up on time.

It usually doesn't happen.</htmltext>
<tokenext>Showing up on time .
It usually does n't happen .</tokentext>
<sentencetext>Showing up on time.
It usually doesn't happen.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353607</id>
	<title>Re:Sounds good to me.</title>
	<author>sumdumass</author>
	<datestamp>1245146580000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext><p>I was thinking of a system similar to that. My solution came from actually being replaced because I was doing too much preventative maintenance. In my case, my replacement did no preventative maintenance; downtimes were much longer and his workload was much greater.</p><p>In order for it to be effective, you need to consider lost profit or downtime potential too. I created some hypothetical numbers where the user is evaluated as costing the company 100 points a day and making the company 300 points, so on a normal day it would be a 200-point average. I then rated the different applications and services they used based on how long they were in them and weighted them appropriately. Something like the accounting app might be 40, because 30 percent of their normal workload is using it and they might need information from it to accomplish another 10 percent of their workload. Not sending faxes from the PC, or IE not allowing them to shop for new shoes online, was a 5. The IT ticket was 100 for anything done. And all this was divided over a normal shift of 8 hours to get an hourly rate.</p><p>Anyway, say something broke - email, for instance - and it affected 2 users who need it for 20 percent of their work. Suppose they were offline with it for 3 hours and it turned out that IT fixed it within an hour after being able to allocate resources to it (let's say someone sent a large email to these people which kept locking up their email clients, and the solution was to copy the email out of their accounts, delete it there, and allow them to open it externally with some other viewer). Let's also assume that an update would have fixed the email client's preview feature and it wouldn't have locked up if updates were installed. OK, so the cost would be 200 x 2 employees, divided by eight to get the per-hour cost, then multiplied by 3 hours: 200*2/8*3 = 150. The email was 20 percent of their work, so 150*.20 drops it down to 30 points. The IT ticket adds 100, so now it's 130. If preventative maintenance could have discovered the issue with the preview and certain types of large emails, and applied the patch before the user was interrupted, and could do so for less than 130, then it pays to do the maintenance. It's hard to justify the comparison if the user isn't affected, but after they are, it's clear.</p><p>BTW, I picked an email client update because something like the scenario I described wouldn't be included in an automatic update from MS, as it isn't a security risk. The PM would be looking through the non-security-related updates available and applying them periodically as their organization's requirements dictate.</p></htmltext>
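The point arithmetic in that scenario works out as claimed; here is the same calculation spelled out (all numbers are the commenter's hypothetical point values, and the variable names are mine):

```python
# Commenter's hypothetical scale: each user costs the company 100 points/day
# and earns it 300, a net of 200 points over an 8-hour shift.
net_points_per_day = 300 - 100   # 200
shift_hours = 8
users_affected = 2
hours_down = 3
email_weight = 0.20              # email is 20% of these users' work
it_ticket_cost = 100             # flat cost attached to any IT ticket

# Per-hour net, times users, times hours down: 200/8 * 2 * 3 = 150.
downtime_cost = net_points_per_day / shift_hours * users_affected * hours_down
email_cost = downtime_cost * email_weight       # 150 * 0.20 = 30
incident_cost = email_cost + it_ticket_cost     # 30 + 100 = 130
```

Preventative maintenance that would have caught the problem in advance pays for itself whenever it costs less than those 130 points.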
<tokenext>I was thinking of a system similar to that .
My solution came from actually being replaced because i was doing too much preventative maintenance .
In my case , my replacement did no preventative maintenance and downtime were much longer plus his work load was much greater.In order for it to be effective , you need to consider lost profit or down time potential too .
I created some hypothetical numbers where the user is evaluated by costing the company 100 points a day and makes the company 300 points .
so on a normal day , it would be 200 point average .
I then rated the different applications and services they used based around how long they were in it and weight them appropriately .
Something like the accounting app might be 40 because 30 percent of their normal workload is using it and they might need information from it to accomplish another 10 percent of their workload .
Not sending faxes from the PC or IE not allowing them to shop for new shoes online was a 5 .
The IT ticket was 100 for anything done .
And all this was divided over a normal shift of 8 hours to get an hourly rate . Anyway , say something broke , let 's say email , and it affected 2 users who need it for 20 percent of their work .
Suppose they were off line with it for 3 hours and it turned out that IT fixed it within an hour after being able to allocate resources to it ( let 's say someone sent a large email to these people which kept locking up their email clients and the solution was to copy the email out of their accounts , delete it inside it and allow them to open it externally with some other viewer ) .
Let 's also assume that an update would have fixed the email client 's preview feature and it would n't have locked up if updates were installed .
OK , so the cost would be the 200 x 2 employees divided by eight to get the per-hour cost then multiplied by 3 hours , 200 * 2 /8 * 3 for 150 . The email was 20 percent of their work so 150 * .20 drops it down to 30 points .
It took IT one hour , so add the 100-point ticket and now it 's 130 .
If preventative maintenance could have discovered the issue with the preview and certain types of large emails and applied the patch before the user was interrupted and was able to do this for less than 130 , then it pays to do the maintenance .
It 's hard to justify the comparison if the user is n't affected , but after they are , you have numbers to compare . BTW , I picked an email client update because something like the scenario I described would n't be included in an automatic update from MS , as it is n't a security risk . The PM would be looking through the non-security-related updates available and applying them periodically as their organization 's requirements dictate .</tokentext>
<sentencetext>I was thinking of a system similar to that.
My solution came from actually being replaced because I was doing too much preventative maintenance.
In my case, my replacement did no preventative maintenance, downtime was much longer, and his workload was much greater. In order for it to be effective, you need to consider lost profit or downtime potential too.
I created some hypothetical numbers where the user is evaluated by costing the company 100 points a day and makes the company 300 points.
so on a normal day, it would be 200 point average.
I then rated the different applications and services they used based around how long they were in it and weight them appropriately.
Something like the accounting app might be 40 because 30 percent of their normal workload is using it and they might need information from it to accomplish another 10 percent of their workload.
Not sending faxes from the PC or IE not allowing them to shop for new shoes online was a 5.
The IT ticket was 100 for anything done.
And all this was divided over a normal shift of 8 hours to get an hourly rate. Anyway, say something broke, let's say email, and it affected 2 users who need it for 20 percent of their work.
Suppose they were off line with it for 3 hours and it turned out that IT fixed it within an hour after being able to allocate resources to it (let's say someone sent a large email to these people which kept locking up their email clients and the solution was to copy the email out of their accounts, delete it inside it and allow them to open it externally with some other viewer).
Let's also assume that an update would have fixed the email client's preview feature and it wouldn't have locked up if updates were installed.
OK, so the cost would be the 200 x 2 employees divided by eight to get the per-hour cost then multiplied by 3 hours, 200*2/8*3 for 150. The email was 20 percent of their work, so 150*.20 drops it down to 30 points.
It took IT one hour, so add the 100-point ticket and now it's 130.
If preventative maintenance could have discovered the issue with the preview and certain types of large emails, and applied the patch before the user was interrupted, and was able to do this for less than 130, then it pays to do the maintenance.
It's hard to justify the comparison if the user isn't affected, but after they are, you have numbers to compare. BTW, I picked an email client update because something like the scenario I described wouldn't be included in an automatic update from MS, as it isn't a security risk. The PM would be looking through the non-security-related updates available and applying them periodically as their organization's requirements dictate.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352041</id>
	<title>They have smart people measuring this stuff??</title>
	<author>keenanvito</author>
	<datestamp>1245183780000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Since when did smart, computer-literate people measure this?  Last I checked, it was the computer-illiterate higher-ups that say 'do this' and expect it to be done yesterday.  But I forgot to mention that they never told you what they wanted in the first place.  Where I work, we have no budget and we are almost totally reactionary to problems because of all the nonsense work that we have to do.  We even have "self-turning wrenches", "mind-reading toilet paper", and "self-tightening lug nuts". The whole environment is 'I hate IT', 'IT doesn't know anything', 'It's IT's fault', yet everyone comes to us for answers and we deliver.  This whole metrics idea was invented by people that didn't know anything and also didn't want to pay for what they have.  Sorry about the rant...

It's just that no one but IT people realizes the value of our skill, and we are the only ones able to measure it.  But we never have the authority or buying power to make the company we work for 'great' without proving why we need to spend a dime.</htmltext>
<tokenext>Since when did smart computer literate people measure this ?
Last I checked it was from the computer illiterate higher ups that say 'do this ' and expect it to be done yesterday .
But I forgot to mention that they never told you what they wanted in the first place .
Where I work , we have no budget and we are almost totally reactionary to problems because of all the nonsense work that we have to do .
We even have " self turning wrenches " , " mind reading toilet paper " , and " self tightening lug nuts " .
The whole environment is 'I hate IT ' , 'IT does n't know anything ' , 'It 's IT 's fault ' , yet everyone comes to us for answers and we deliver .
This whole metrics idea was invented by people that did n't know anything and also did n't want to pay for what they have .
Sorry about the rant ... It 's just that no one but IT people realizes the value of our skill , and we are the only ones able to measure it .
But we never have the authority or buying power to make the company we work for 'great ' without proving why we need to spend a dime .</tokentext>
<sentencetext>Since when did smart computer literate people measure this?
Last I checked it was from the computer illiterate higher ups that say 'do this' and expect it to be done yesterday.
But I forgot to mention that they never told you what they wanted in the first place.
Where I work, we have no budget and we are almost totally reactionary to problems because of all the nonsense work that we have to do.
We even have "self turning wrenches", "mind reading toilet paper", and "self tightening lug nuts".
The whole environment is 'I hate IT', 'IT doesn't know anything', 'It's IT's fault', yet everyone comes to us for answers and we deliver.
This whole metrics idea was invented by people that didn't know anything and also didn't want to pay for what they have.
Sorry about the rant...

It's just that no one but IT people realizes the value of our skill, and we are the only ones able to measure it.
But we never have the authority or buying power to make the company we work for 'great' without proving why we need to spend a dime.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356777</id>
	<title>Re:Time to close tickets is 1 factor, not the ONLY</title>
	<author>Anonymous</author>
	<datestamp>1245164940000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Maybe ticketing systems should allow the instigator/customer to connect a new ticket to an old one, creating one long ticket.  If you give a customer the runaround, or don't bother trying to find out what they need, then all the wasted time gets added to your metric.</p></htmltext>
<tokenext>Maybe ticketing systems should allow the instigator/customer to connect a new ticket to an old one , creating one long ticket .
If you give a customer the run around , or do n't bother trying to find out what they need , then all the wasted time gets added to your metric .</tokentext>
<sentencetext>Maybe ticketing systems should allow the instigator/customer to connect a new ticket to an old one, creating one long ticket.
If you give a customer the run around, or don't bother trying to find out what they need, then all the wasted time gets added to your metric.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350151</id>
	<title>Ideal and Actual</title>
	<author>sexconker</author>
	<datestamp>1245176940000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Ideal:</p><p>I know about IT, having worked there for many years.  In fact, I'm still working there.  Keep up the good work, I know there's a lot of bullshit to put up with.</p><p>Actual:</p><p>I heard some buzzwords.  When can we implement them in order to actualize our potential?  Also, I'll need you to stay late and fix my computer.  It's got some sort of virus or something I don't know.</p><p>(As you're fixing his machine, you see a note on his desk right next to the post-it with his passwords)</p><p><i>Hire grad student from India </i>[x]<br><i>Get what's his name to train him. </i>[ ]<br><i>Fire what's his name. </i>[ ]<br><i>Synergize. </i>[ ]</p></htmltext>
<tokenext>Ideal : I know about IT , having worked there for many years .
In fact , I 'm still working there .
Keep up the good work , I know there 's a lot of bullshit to put up with . Actual : I heard some buzzwords .
When can we implement them in order to actualize our potential ?
Also , I 'll need you to stay late and fix my computer .
It 's got some sort of virus or something I do n't know .
( As you 're fixing his machine , you see a note on his desk right next to the post-it with his passwords ) Hire grad student from India [ x ] Get what 's his name to train him .
[ ] Fire what 's his name .
[ ] Synergize .
[ ]</tokentext>
<sentencetext>Ideal: I know about IT, having worked there for many years.
In fact, I'm still working there.
Keep up the good work, I know there's a lot of bullshit to put up with. Actual: I heard some buzzwords.
When can we implement them in order to actualize our potential?
Also, I'll need you to stay late and fix my computer.
It's got some sort of virus or something I don't know.
(As you're fixing his machine, you see a note on his desk right next to the post-it with his passwords) Hire grad student from India [x] Get what's his name to train him.
[ ] Fire what's his name.
[ ] Synergize.
[ ]</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350403</id>
	<title>Re:Sounds good to me.</title>
	<author>haruchai</author>
	<datestamp>1245177660000</datestamp>
	<modclass>Informativ</modclass>
	<modscore>2</modscore>
	<htmltext><p>Sad to say but the more invisible the IT department is, the harder it is for them to get funding for new projects.</p></htmltext>
<tokenext>Sad to say but the more invisible the IT department is , the harder it is for them to get funding for new projects .</tokentext>
<sentencetext>Sad to say but the more invisible the IT department is, the harder it is for them to get funding for new projects.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351005</id>
	<title>That metric is worth measuring</title>
	<author>Anonymous</author>
	<datestamp>1245179700000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Not so certain the metric is stupid; it measures something worthwhile among many other things that should be measured.  Solving things quickly is a good thing, but reducing the number of problems is also a good thing.  Uptime, reducing particular classes of calls, and probably many others are good things to measure.  But I am not sure they are even performance measurements; they are there to give the IT team information so they can reduce the occurrence of problems, not so management can hammer them or base compensation on the metrics.</p></htmltext>
<tokenext>Not so certain the metric is stupid ; it measures something worthwhile among many other things that should be measured .
Solving things quickly is a good thing , but reducing the number of problems is also a good thing .
Uptime , reducing particular classes of calls , and probably many others are good things to measure .
But I am not sure they are even performance measurements ; they are there to give the IT team information so they can reduce the occurrence of problems , not so management can hammer them or base compensation on the metrics .</tokentext>
<sentencetext>Not so certain the metric is stupid; it measures something worthwhile among many other things that should be measured.
Solving things quickly is a good thing, but reducing the number of problems is also a good thing.
Uptime, reducing particular classes of calls, and probably many others are good things to measure.
But I am not sure they are even performance measurements; they are there to give the IT team information so they can reduce the occurrence of problems, not so management can hammer them or base compensation on the metrics.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28357735</id>
	<title>Re:Not QUITE the stupidest metric I can think of..</title>
	<author>Trojan35</author>
	<datestamp>1245174420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Well, that's why you don't have just one KPI. If the second KPI is customer satisfaction, one of two things happens:</p><p>1) You get hundreds of complaints to your boss, getting you fired. Congrats, you win!<br>2) If the whole IT org does this, the CEO and ops staff get complaints from every GM, resulting in a 20% budget cut of IT, specifically tech support. When IT looks at the worst offenders by complaints, you are one of the 20%. Congrats, you win!</p></htmltext>
<tokenext>Well , that 's why you do n't have just one KPI .
If the second KPI is customer satisfaction , one of two things happens : 1 ) You get hundreds of complaints to your boss , getting you fired .
Congrats , you win ! 2 ) If the whole IT org does this , the CEO and ops staff get complaints from every GM , resulting in a 20 % budget cut of IT , specifically tech support .
When IT looks at the worst offenders by complaints , you are one of the 20 % .
Congrats , you win !</tokentext>
<sentencetext>Well, that's why you don't have just one KPI.
If the second KPI is customer satisfaction, one of two things happens: 1) You get hundreds of complaints to your boss, getting you fired.
Congrats, you win! 2) If the whole IT org does this, the CEO and ops staff get complaints from every GM, resulting in a 20% budget cut of IT, specifically tech support.
When IT looks at the worst offenders by complaints, you are one of the 20%.
Congrats, you win!</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349721</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351357</id>
	<title>At RSA Security</title>
	<author>jjohnson</author>
	<datestamp>1245181140000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>When I worked at RSA Security several years ago, the primary metric for support calls was the customer satisfaction survey.  They deliberately avoided paying much attention to time-to-close because they were very aware that measuring that leads to support techs playing games with the system and rushing customers to close a ticket rather than sending them away happy and problem resolved.</p></htmltext>
<tokenext>When I worked at RSA Security several years ago , the primary metric for support calls was the customer satisfaction survey .
They deliberately avoided paying much attention to time-to-close because they were very aware that measuring that leads to support techs playing games with the system and rushing customers to close a ticket rather than sending them away happy and problem resolved .</tokentext>
<sentencetext>When I worked at RSA Security several years ago, the primary metric for support calls was the customer satisfaction survey.
They deliberately avoided paying much attention to time-to-close because they were very aware that measuring that leads to support techs playing games with the system and rushing customers to close a ticket rather than sending them away happy and problem resolved.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350175</id>
	<title>Re:No cnt++</title>
	<author>Steauengeglase</author>
	<datestamp>1245177000000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Sadly a bell curve is used. If we could just say, "No" to everything we could get a few things accomplished. QA will have none of that.</p></htmltext>
<tokenext>Sadly a bell curve is used .
If we could just say , " No " to everything we could get a few things accomplished .
QA will have none of that .</tokentext>
<sentencetext>Sadly a bell curve is used.
If we could just say, "No" to everything we could get a few things accomplished.
QA will have none of that.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350573</id>
	<title>IT should be focused on "customer service"</title>
	<author>hellfire</author>
	<datestamp>1245178260000</datestamp>
	<modclass>Interestin</modclass>
	<modscore>2</modscore>
	<htmltext><p>Prior to my software company being bought out, my IT department was focused on "customer service."  This means that everyone in the company is treated like a customer.  I personally work in our software support department, and this made utter sense to me.</p><p>Under the new company, our new IT works for itself, and is primarily concerned with closing calls as quickly as possible, without regard for the quality of the information or assistance.  They are concerned with reducing their own call load, but they don't try very hard, and they don't offer a lot of value beyond that.  Any good customer service department is concerned with closing calls, but they want to provide good quality service where each call is resolved as quickly as possible, but also as accurately as possible, leaving a good feeling with the customer.  IT should be a resource utilized to make the company more efficient and reduce costs, not a bunch of yahoos who fix broken PCs and then disappear back under their rock when they are finished.</p><p>In customer service, quantitative metrics are used to judge the department's trends as a whole, and can be important, but even more important are qualitative measures, like surveys and feedback, example cases, and periodic reviews of every rep, team leader, and supervisor.  Did the rep do "The Right Thing" (tm), how many times did they do that, and are they approaching doing the right thing 100% of the time?  If a rep provided the user with the right answer, but all they did was email a timid accountant a 5-page document on setting up .NET properly just so the user can export his reports to an email to his boss, and then the rep closed the case without offering this less-than-technical person any real help, how service-oriented is that, really?</p><p>Sometimes that means taking fewer cases per rep and leaving them open longer, if service improves dramatically.</p></htmltext>
<tokenext>Prior to my software company being bought out , my It department was focused on " customer service .
" This means that everyone in the company is treated like a customer .
I personally work in our software support department , and this made utter sense to me . Under the new company , our new IT works for itself , and is primarily concerned with closing calls as quickly as possible , without regard for the quality of the information or assistance .
They are concerned with reducing their own call load , but they do n't try very hard , and they do n't offer a lot of value beyond that .
Any good customer service department is concerned with closing calls , but they want to provide good quality service where each call is resolved as quickly as possible , but also as accurately as possible , leaving a good feeling with the customer .
IT should be a resource utilized to make the company more efficient and reduce costs , not a bunch of yahoos who fix broken PCs and then disappear back under their rock when they are finished . In customer service , quantitative metrics are used to judge the department 's trends as a whole , and can be important , but even more important are qualitative measures , like surveys and feedback , example cases , and periodic reviews of every rep , team leader and supervisor .
Did the rep do " The Right Thing " ( tm ) , how many times did they do that , and are they approaching doing the right thing 100 % of the time ?
If a rep provided the user with the right answer , but all they did was email a timid accountant a 5-page document on setting up .NET properly just so the user can export his reports to an email to his boss , and then the rep closed the case without offering this less-than-technical person any real help , how service-oriented is that , really ? Sometimes that means taking fewer cases per rep and leaving them open longer , if service improves dramatically .</tokentext>
<sentencetext>Prior to my software company being bought out, my IT department was focused on "customer service."  This means that everyone in the company is treated like a customer.
I personally work in our software support department, and this made utter sense to me. Under the new company, our new IT works for itself, and is primarily concerned with closing calls as quickly as possible, without regard for the quality of the information or assistance.
They are concerned with reducing their own call load, but they don't try very hard, and they don't offer a lot of value beyond that.
Any good customer service department is concerned with closing calls, but they want to provide good quality service where each call is resolved as quickly as possible, but also as accurately as possible, leaving a good feeling with the customer.
IT should be a resource utilized to make the company more efficient and reduce costs, not a bunch of yahoos who fix broken PCs and then disappear back under their rock when they are finished. In customer service, quantitative metrics are used to judge the department's trends as a whole, and can be important, but even more important are qualitative measures, like surveys and feedback, example cases, and periodic reviews of every rep, team leader, and supervisor.
Did the rep do "The Right Thing" (tm), how many times did they do that, and are they approaching doing the right thing 100% of the time?
If a rep provided the user with the right answer, but all they did was email a timid accountant a 5-page document on setting up .NET properly just so the user can export his reports to an email to his boss, and then the rep closed the case without offering this less-than-technical person any real help, how service-oriented is that, really? Sometimes that means taking fewer cases per rep and leaving them open longer, if service improves dramatically.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689</id>
	<title>Stop asking to do stupid things</title>
	<author>Anonymous</author>
	<datestamp>1245178620000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>Stop asking to do stupid things like<br>- run an internet server without a firewall<br>- set up accounts without passwords<br>- use 1-off proprietary software when we've selected the best solution for everyone in the company. Too bad our selection costs 3x more than the other stuff.<br>- bring a 64-way server up without failover, test, dev, and DR instances too.<br>- bring a 32-way server up this week, when your project hasn't been approved yet. These things take about a month to get delivered and another month to get installed, configured, connected to the SAN, and ready for applications.<br>- allow an outsourced vendor unlimited access to internal networks with 10,000+ servers without a corp-2-corp VPN in place.<br>- send and accept unlimited-size emails without any virus and malware checks.<br>- demand something fast because YOU didn't schedule and budget properly - MARKETING, this is for you.<br>- run a machine that will be hacked easily and turned into a torrent, porn, music, VoIP server a few months after it gets placed onto the network.</p></htmltext>
<tokenext>Stop asking to do stupid things like - run an internet server without a firewall - set up accounts without passwords - use 1-off proprietary software when we 've selected the best solution for everyone in the company .
Too bad our selection costs 3x more than the other stuff . - bring a 64-way server up without failover , test , dev , and DR instances too . - bring a 32-way server up this week , when your project has n't been approved yet .
These things take about a month to get delivered and another month to get installed , configured , connected to the SAN , and ready for applications . - allow an outsourced vendor unlimited access to internal networks with 10,000 + servers without a corp-2-corp VPN in place . - send and accept unlimited-size emails without any virus and malware checks . - demand something fast because YOU did n't schedule and budget properly - MARKETING , this is for you . - run a machine that will be hacked easily and turned into a torrent , porn , music , VoIP server a few months after it gets placed onto the network .</tokentext>
<sentencetext>Stop asking to do stupid things like - run an internet server without a firewall - set up accounts without passwords - use 1-off proprietary software when we've selected the best solution for everyone in the company.
Too bad our selection costs 3x more than the other stuff. - bring a 64-way server up without failover, test, dev, and DR instances too. - bring a 32-way server up this week, when your project hasn't been approved yet.
These things take about a month to get delivered and another month to get installed, configured, connected to the SAN, and ready for applications. - allow an outsourced vendor unlimited access to internal networks with 10,000+ servers without a corp-2-corp VPN in place. - send and accept unlimited-size emails without any virus and malware checks. - demand something fast because YOU didn't schedule and budget properly - MARKETING, this is for you. - run a machine that will be hacked easily and turned into a torrent, porn, music, VoIP server a few months after it gets placed onto the network.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356287</id>
	<title>Re:Stop asking to do stupid things</title>
	<author>TENTH SHOW JAM</author>
	<datestamp>1245161640000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Nope.  A month sounds about right.</p><p>Do you have the space for the rack required?  No?  This becomes a building problem.  Add months.<br>Next, check whether you have enough air conditioning to keep the puppy cool.  No?  Add weeks.<br>Next, check power.  No?  Add days.<br>Next, check network infrastructure.  No?  Add weeks.<br>Next, have equipment delivered.  I have premium service with Fortune 500 suppliers.  6 working days.<br>Next, half a day to configure the chassis.<br>Half a day to configure the SAN.<br>2 hours each for each blade.<br>Preliminary UAT (however long the customer takes).<br>Install the software the user wants x 32.</p><p>Actually, a month is very conservative.</p></htmltext>
<tokenext>Nope .
A month sounds about right . Do you have the space for the rack required ?
No ? This becomes a building problem .
Add months . Next , check whether you have enough air conditioning to keep the puppy cool .
No ? Add weeks . Next , check power .
No ? Add days . Next , check network infrastructure .
No ? Add weeks . Next , have equipment delivered .
I have premium service with Fortune 500 suppliers .
6 working days . Next , half a day to configure the chassis . Half a day to configure the SAN . 2 hours each for each blade . Preliminary UAT ( however long the customer takes ) . Install the software the user wants x 32 . Actually , a month is very conservative .</tokentext>
<sentencetext>Nope.
A month sounds about right. Do you have the space for the rack required?
No?  This becomes a building problem.
Add months. Next, check whether you have enough air conditioning to keep the puppy cool.
No? Add weeks. Next, check power.
No?  Add days. Next, check network infrastructure.
No? Add weeks. Next, have equipment delivered.
I have premium service with Fortune 500 suppliers.
6 working days. Next, half a day to configure the chassis. Half a day to configure the SAN. 2 hours each for each blade. Preliminary UAT (however long the customer takes). Install the software the user wants x 32. Actually, a month is very conservative.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351467</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28364809</id>
	<title>Your Utopia does not exist</title>
	<author>Arglex1</author>
	<datestamp>1245269340000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Reduction of tickets is a nice idea...

However, users can be idiots and will always generate stupid tickets such as:

My printer is broke (printer actually says replace toner).

My computer is locked (screen saver kicked in and requires user to unlock)

The carpet needs cleaned (yes, I actually received that ticket)

The light is off in my cubicle (that one too).

Will you fix my personal computer from home? (um, NO!).

The testing stations (computers by reception for tests) are not working (computers were at login screen for windows).

I have not received any email in an hour, is it broke? (No, you are just not that important today).

on and on and on. If we want to reduce tickets, we would have to shoot all the Lusers.</htmltext>
<tokenext>Reduction of tickets is a nice idea.. . However , users can be idiots and will always generate stupid tickets such as : My printer is broke ( printer actually says replace toner ) .
My computer is locked ( screen saver kicked in and requires user to unlock ) The carpet needs cleaned ( yes , I actually received that ticket ) The light is off in my cubicle ( that one too ) .
Will you fix my personal computer from home ?
( um , NO ! ) .
The testing stations ( computers by reception for tests ) are not working ( computers were at login screen for windows ) .
I have not received any email in an hour , is it broke ?
( No , you are just not that important today ) .
on and on and on .
If we want to reduce tickets , we would have to shoot all the Lusers .</tokentext>
<sentencetext>Reduction of tickets is a nice idea...

However, users can be idiots and will always generate stupid tickets such as:

My printer is broke (printer actually says replace toner).
My computer is locked (screen saver kicked in and requires user to unlock)

The carpet needs cleaned (yes, I actually received that ticket)

The light is off in my cubicle (that one too).
Will you fix my personal computer from home?
(um, NO!).
The testing stations (computers by reception for tests) are not working (computers were at login screen for windows).
I have not received any email in an hour, is it broke?
(No, you are just not that important today).
on and on and on.
If we want to reduce tickets, we would have to shoot all the Lusers.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355985</id>
	<title>Measure development time</title>
	<author>kvillaca</author>
	<datestamp>1245159420000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>Basically, you could group the developers by level (junior, mid-level, and senior), then use some metrics to get an average time for each level on basic and advanced tasks, and using that as a base you might create a table of hours for each sort of service. However, never forget that maintaining a system someone else wrote will take extra time to absorb all the programming logic, even with documentation. Have you ever heard of agile methodology? If not, a good starting point is to study the time each task takes; SCRUM is a good place to start. All senior developers have in mind how much time each task will take, and talking with them will give you the basis for your study.</htmltext>
<tokenext>Basically you could get the developers level and separeted then , something like junior , medium and senior , after that you can use some metrics do have one avarege time for each one in basics and advanced tasks and using it as base you might create one table with hours for each sort of service .
However never forget that to give maintenance in systems that other person did , will take some extra time to got all programming logic even with documentation .
Did you ever heard about agile methodology ?
If no is a good point for you start study the times that each one takes , SCRUM is a good point to start .
Because all senior developers have in mind how time each task will take , and talking with then you will might have the basis for your study .</tokentext>
<sentencetext>Basically, you could group the developers by level (junior, mid-level, and senior), then use some metrics to get an average time for each level on basic and advanced tasks, and using that as a base you might create a table of hours for each sort of service.
However, never forget that maintaining a system someone else wrote will take extra time to absorb all the programming logic, even with documentation.
Have you ever heard of agile methodology?
If not, a good starting point is to study the time each task takes; SCRUM is a good place to start.
All senior developers have in mind how much time each task will take, and talking with them will give you the basis for your study.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351037</id>
	<title>Re:obvious</title>
	<author>Coldmoon</author>
	<datestamp>1245179820000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>While this post was modded funny, it should also be modded insightful. This may not, however, be anything more than a noble goal depending on the size of your company, the actual talent level you are willing to pay for, and the volume of support requests you get on an average day. <br>
<br>
For me, however, the real question is: "Was the issue resolved?" If it wasn't, the next question is "Why?". From these two simple questions, you should be able to arrive at actual performance metrics rather than relying on the time it took to close the original ticket. My personal philosophy of support is that it takes as long as it takes and this powers sales as you go forward. Many companies that get large, however, seem to forget their roots and look at support as a burden rather than an opportunity to either make your customer happy or discover something that needs to be fixed to make your products/services better.<br>
<br>
If you follow this closely, you will see that when implemented correctly, support and sales are really a self-reinforcing feedback loop...</htmltext>
<tokenext>While this post was modded funny , it should also be modded insightful .
This may not however be anything more than a noble goal depending on the size of your company , the actual tallent level you are willing to pay for , and the volume of support requests you get on an average day .
For me however , the real question is : " Was the issue resolved ?
" If it was n't , the next question is " Why ? " .
From these two simple questions , you should be able to arrive at actual performance metrics rather than relying on the time it took to close the original ticket .
My personal philosophy of support is that it takes as long as it takes and this powers sales as you go forward .
Many companies that get large however , seem to forget their roots and look at support as a burden rather than an opportunity to either make your customer happy or discover something that needs to be fixed to make your products/services better .
If you follow this closely , you will see that when implemented correctly , support and sales are really a self-reinforcing feedback loop.. .</tokentext>
<sentencetext>While this post was modded funny, it should also be modded insightful.
This may not, however, be anything more than a noble goal depending on the size of your company, the actual talent level you are willing to pay for, and the volume of support requests you get on an average day.
For me however, the real question is: "Was the issue resolved?
" If it wasn't, the next question is "Why?".
From these two simple questions, you should be able to arrive at actual performance metrics rather than relying on the time it took to close the original ticket.
My personal philosophy of support is that it takes as long as it takes and this powers sales as you go forward.
Many companies that get large however, seem to forget their roots and look at support as a burden rather than an opportunity to either make your customer happy or discover something that needs to be fixed to make your products/services better.
If you follow this closely, you will see that when implemented correctly, support and sales are really a self-reinforcing feedback loop...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350205</id>
	<title>Oblig. Semi-Quote</title>
	<author>hal2814</author>
	<datestamp>1245177120000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>Boss: How many IT tickets did you close today?<br>IT Drone: Oh, Boss, I don't keep score.<br>Boss: Then how do you measure yourself with other IT workers?<br>IT Drone: By height.</p></htmltext>
<tokenext>Boss : How many IT tickets did you close today ? IT Drone : Oh , Boss , I do n't keep score . Boss : Then how do you measure yourself with other IT workers ? IT Drone : By height .</tokentext>
<sentencetext>Boss: How many IT tickets did you close today?
IT Drone: Oh, Boss, I don't keep score.
Boss: Then how do you measure yourself with other IT workers?
IT Drone: By height.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351197</id>
	<title>Re:count tickets never openend</title>
	<author>Feyshtey</author>
	<datestamp>1245180540000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>No issues = No need for staff. You're ready for upper management.
<br> <br>
By your reasoning if you have a well-oiled IT staff that is proactive and keeps issues from coming up, you can fire them all. <br>

<br> <br>
Obviously if you fix everything right once, nothing will ever break. No one will ever do anything either malicious or stupid. Nor will anything ever evolve. Nor will your requirements change....
<br> <br>
Ever...</htmltext>
<tokenext>No issues = No need for staff .
You 're ready for upper management .
By your reasoning if you have a well-oiled IT staff that is proactive and keeps issues from coming up , you can fire them all .
Obviously if you fix everything right once , nothing will ever break .
No one will ever do anything either malicious or stupid .
Nor will anything ever evolve .
Nor will your requirements change... . Ever.. .</tokentext>
<sentencetext>No issues = No need for staff.
You're ready for upper management.
By your reasoning if you have a well-oiled IT staff that is proactive and keeps issues from coming up, you can fire them all.
Obviously if you fix everything right once, nothing will ever break.
No one will ever do anything either malicious or stupid.
Nor will anything ever evolve.
Nor will your requirements change....
 
Ever...</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353017</id>
	<title>Basket Ball Score Board</title>
	<author>c0d3r</author>
	<datestamp>1245144360000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p>I like the basketball scoreboard that the Cisco Systems support team uses.  They should add a buzzer to it for when it's time to take a break.  Or overtime.</p></htmltext>
<tokenext>I like the basket ball score board that the Cisco Systems support team uses .
They should add a buzzer to it when its time to take a break .
Or overtime .</tokentext>
<sentencetext>I like the basket ball score board that the Cisco Systems support team uses.
They should add a buzzer to it when its time to take a break.
Or overtime.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350275</id>
	<title>Customer Satisfaction Surveys.</title>
	<author>wonderboss</author>
	<datestamp>1245177360000</datestamp>
	<modclass>Interesting</modclass>
	<modscore>2</modscore>
	<htmltext>Our competitors measure their performance by time to close tickets.
They are consistently rated worst in support.

We use surveys. Simple questions like: Was your problem resolved?
Was it resolved promptly?
We are consistently rated best in support.</htmltext>
<tokenext>Our competitors measure their performance by time to close tickets .
They are consistently rated worst in support .
We use surveys .
Simple questions like : Was your problem resolved ?
Was it resolved promptly ?
We are consistently rated best in support .</tokentext>
<sentencetext>Our competitors measure their performance by time to close tickets.
They are consistently rated worst in support.
We use surveys.
Simple questions like: Was your problem resolved?
Was it resolved promptly?
We are consistently rated best in support.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350671</id>
	<title>SLAs</title>
	<author>travisb828</author>
	<datestamp>1245178560000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext><p>We are big on SLAs. Department directors have to sign off on an SLA before IT will support their stuff.  Actually this is how IT gets its budget.</p><p>For example, marketing comes to IT and asks for a service like sales tracking.  After figuring out what they want we give them a quote with an SLA and how much it will cost.  After buildout there is a sign-off and the service is available for use.  To the users there is no concept of hardware or servers.  They just know if their stuff is working or not.  I mean, they are marketing people.  Any problems that occur are tracked by our ticketing system, and it's just a matter of tracking resolution time, incident severity and number of incidents.  All of this is defined in the SLA.  Resolution time usually comes into play when looking at service availability, and in the incident review process for high or critical outages.</p><p>For our team, individual performance usually comes down to how well we contribute to the team.  My review is not that much different from a kindergarten report card.  "Plays well with others" is now "Maintains positive relationships with external partners"</p>
	</htmltext>
<tokenext>We are big on SLAs .
Department directors have to sign off on an SLA before IT will support their stuff .
Actually this is how IT gets it 's budget.For example , marketing comes to IT and asks for a service like sales tracking .
After figuring out what they want we give them a quote with SLA and how much it will cost .
After buildout there is a sign off and the service is available for use .
To the users there is no concept of hardware of server .
They just know if their stuff is working or not .
I mean they are marketing people .
Any problems that occur are tracked by our ticketing system , and its just a matter of tracking resolution time , incident severity and number of incidents .
All of this is defined in the SLA .
Resolution time usually comes into play when looking at service availability , and in the incident review process for high or critical outages.For our team individual performance usually comes down to how well we contribute to the team .
My review is not that much different from a kindergarden report card .
" Plays well with others " is now " Maintains positive relationships with external partners "</tokentext>
<sentencetext>We are big on SLAs.
Department directors have to sign off on an SLA before IT will support their stuff.
Actually this is how IT gets its budget.
For example, marketing comes to IT and asks for a service like sales tracking.
After figuring out what they want we give them a quote with SLA and how much it will cost.
After buildout there is a sign off and the service is available for use.
To the users there is no concept of hardware of server.
They just know if their stuff is working or not.
I mean they are marketing people.
Any problems that occur are tracked by our ticketing system, and its just a matter of tracking resolution time, incident severity and number of incidents.
All of this is defined in the SLA.
Resolution time usually comes into play when looking at service availability, and in the incident review process for high or critical outages.
For our team, individual performance usually comes down to how well we contribute to the team.
My review is not that much different from a kindergarten report card.
"Plays well with others" is now "Maintains positive relationships with external partners"
	</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349709</id>
	<title>Tracking invisibility</title>
	<author>BillCable</author>
	<datestamp>1245175620000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>It's kinda difficult to measure how often something doesn't happen, unless you just track uptime.  You'd need to do that on a per-workstation basis to get some idea how few calls come in.  I don't think the speed of closed tickets should be the only measure.  Customer satisfaction should also be tracked, both in terms of service calls and system reliability.</htmltext>
<tokenext>It 's kinda difficult to measure how often something does n't happen , unless you just track uptime .
You 'd need to do that on a per-workstation basis to get some idea how few calls come in .
I do n't think the speed of closed tickets should be the only measure .
Customer satisfaction should also be tracked , both in terms of service calls and system reliability .</tokentext>
<sentencetext>It's kinda difficult to measure how often something doesn't happen, unless you just track uptime.
You'd need to do that on a per-workstation basis to get some idea how few calls come in.
I don't think the speed of closed tickets should be the only measure.
Customer satisfaction should also be tracked, both in terms of service calls and system reliability.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350639</id>
	<title>Gunning for the metrics</title>
	<author>Anonymous</author>
	<datestamp>1245178440000</datestamp>
	<modclass>None</modclass>
	<modscore>0</modscore>
	<htmltext><p>Our IT department apparently has graduated metrics based on issue priority.  Tickets are prioritized based on the number of users affected.  When we submit a high-priority ticket for an outage that blocks our entire software lab, the first thing they do is downgrade the priority.  This happens in a matter of minutes.  Then, after a few hours, they close the ticket without contacting the submitter.</p><p>In response, we then have everyone in the lab submit a ticket.  Once enough tickets have been submitted, they send out a broadcast message saying that they are aware of the issue and working on it, and ask that we stop submitting new tickets.</p><p>Then, eventually, they fix the issue and close the tickets.</p><p>A week later, when the same unreliable service goes down again, we repeat the process.</p><p>The only thing I can figure is that they have metrics to meet for open time by ticket priority.  When they downgrade the priority, the metrics allow them more time to not fix the problem.</p></htmltext>
<tokenext>Our IT department is apparently has graduated metrics based on issue priority .
Tickets are prioritized based on the number of users affected .
When we submit a high-priority ticket for an outage that blocks our entire software lab , the first thing they do is downgrade the priority .
This happens in a matter of minutes .
Then , after a few hours , they close the ticket without contacting the submitter.In response , we then have everyone in the lab submit a ticket .
Once enough tickets have been submitted , they send out a broadcast message saying that they are aware of the issue and working on it , and ask that we stop submitting new tickets.Then , eventually , they fix the issue and close the tickets.A week later , when the same unreliable service goes down again , we repeat the process.The only thing I can figure is that they have metrics to meet for open time by ticket priority .
When they downgrade the priority , the metrics allow them more time to not fix the problem .</tokentext>
<sentencetext>Our IT department is apparently has graduated metrics based on issue priority.
Tickets are prioritized based on the number of users affected.
When we submit a high-priority ticket for an outage that blocks our entire software lab, the first thing they do is downgrade the priority.
This happens in a matter of minutes.
Then, after a few hours, they close the ticket without contacting the submitter.
In response, we then have everyone in the lab submit a ticket.
Once enough tickets have been submitted, they send out a broadcast message saying that they are aware of the issue and working on it, and ask that we stop submitting new tickets.
Then, eventually, they fix the issue and close the tickets.
A week later, when the same unreliable service goes down again, we repeat the process.
The only thing I can figure is that they have metrics to meet for open time by ticket priority.
When they downgrade the priority, the metrics allow them more time to not fix the problem.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354647</id>
	<title>Re:I think it should be measured...</title>
	<author>syousef</author>
	<datestamp>1245151680000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext><p><nobr> <wbr></nobr></p><div class="quote"><p>..by the number of callers left alive at the end of the day.</p></div><p>I don't understand this metric. Should we be aiming for a high number or a low one?</p>
	</htmltext>
<tokenext>..by the number of callers left alive at the end of the day.I do n't understand this metric .
Should we be aiming for a high number or a low one ?</tokentext>
<sentencetext>..by the number of callers left alive at the end of the day.
I don't understand this metric.
Should we be aiming for a high number or a low one?
	</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350431</id>
	<title>Re:My two cents</title>
	<author>rednip</author>
	<datestamp>1245177840000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>So, you disagree with the submitter, as 'taken' / 'fixed' is exactly the sort of performance metric he's complaining about.  Also, I hope that you just made up that formula, as it doesn't really make any sense;
<dl>
<dt>Excellent employee:</dt><dd>s (total calls) = 100</dd><dd>h (total solved) = 100</dd><dd>d (time in hours)= 2 </dd><dd>total 5</dd><dt>very bad Employee;</dt><dd>s = 100</dd><dd>h = 50</dd><dd>d = 2</dd><dd>total 6</dd></dl><p>
As I see it you would have to get below half your calls resolved for calls to affect that number at all, assuming some down time.</p></htmltext>
<tokenext>So , you disagree with the submitter , as 'taken ' / 'fixed ' is exactly the sort of performance metric he 's complaining about .
Also , I hope that you just made up that formula , as it does n't really make any sense ; Excellent employee : s ( total calls ) = 100h ( total solved ) = 100d ( time in hours ) = 2 total 5very bad Employee ; s = 100h = 50d = 2total 6 As I see it you would have to get below half your calls resolved for calls to affect that number at all , assuming some down time .</tokentext>
<sentencetext>So, you disagree with the submitter, as 'taken' / 'fixed' is exactly the sort of performance metric he's complaining about.
Also, I hope that you just made up that formula, as it doesn't really make any sense;

Excellent employee: s (total calls) = 100, h (total solved) = 100, d (time in hours) = 2; total 5.
Very bad employee: s = 100, h = 50, d = 2; total 6.
As I see it you would have to get below half your calls resolved for calls to affect that number at all, assuming some down time.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353103</id>
	<title>Re:Stupid metrics</title>
	<author>againjj</author>
	<datestamp>1245144600000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>You should have put the customer on hold for 2 minutes and 15 seconds as soon as they ask the first question, surfed slashdot in the interim, and then continued with the call as normal.  The metrics work, you get a bunch of extra break time, the customer is still helped, everyone's happy.</htmltext>
<tokenext>You should have put the customer on hold for 2 minutes and 15 seconds as soon as they ask the first question , surfed slashdot in the interim , and then continued with the call as normal .
The metrics work , you get a bunch of extra break time , the customer is still helped , everyone 's happy .</tokentext>
<sentencetext>You should have put the customer on hold for 2 minutes and 15 seconds as soon as they ask the first question, surfed slashdot in the interim, and then continued with the call as normal.
The metrics work, you get a bunch of extra break time, the customer is still helped, everyone's happy.</sentencetext>
	<parent>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715</parent>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763</id>
	<title>My two cents</title>
	<author>Anonymous</author>
	<datestamp>1245175740000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>2</modscore>
	<htmltext>Amount of service calls: s
<p>Amount of service calls resolved: h
</p><p>Server/network downtime (in hours): d

</p><p>Use formula "(s / h) + 2d"

</p><p>Use resulting number to chart IT support performance, assuming that the network + server uptime and stability is more important than user inconvenience. You could decide that anything above a certain threshold is too much, or use it to compare personnel with each other.</p></htmltext>
<tokenext>Amount of service calls : s Amount of service calls resolved : h Server/network downtime ( in hours ) : d Use formula ' ( s / h ) + 2d " Use resulting number to chart IT support performance , assuming that the network + server uptime and stability is more important than user inconvenience .
You could decide that anything above a certain threshold is too much , or use it to compare personnel with each other .</tokentext>
<sentencetext>Amount of service calls: s
Amount of service calls resolved: h
Server/network downtime (in hours): d

Use formula "(s / h) + 2d"

Use resulting number to chart IT support performance, assuming that the network + server uptime and stability is more important than user inconvenience.
You could decide that anything above a certain threshold is too much, or use it to compare personnel with each other.</sentencetext>
</comment>
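The "(s / h) + 2d" formula above can be sanity-checked with a few lines of Python. This is an illustrative sketch (the helper name and figures are hypothetical, not from the thread): with s = service calls, h = calls resolved, and d = downtime in hours, the downtime term swamps the resolution ratio.

```python
# Minimal sketch of the scoring formula proposed above.
# s = number of service calls, h = calls resolved, d = downtime in hours.
# Lower scores are meant to be better; downtime is weighted double.
def support_score(s, h, d):
    return (s / h) + 2 * d

# Resolving all 100 calls with 2 hours of downtime scores 5.0, while
# resolving only 50 of 100 scores 6.0; the downtime term dominates,
# so call resolution barely moves the number.
good = support_score(100, 100, 2)  # 5.0
bad = support_score(100, 50, 2)    # 6.0
print(good, bad)
```

The narrow 5-vs-6 gap is exactly the weakness one reply in the thread works out by hand: an employee has to leave more than half of all calls unresolved before the resolution ratio affects the score as much as a couple of hours of downtime.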
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350077</id>
	<title>This is why the IT department is always cut first</title>
	<author>joeyg1973</author>
	<datestamp>1245176700000</datestamp>
	<modclass>Insightful</modclass>
	<modscore>4</modscore>
	<htmltext>Here is the problem... you are trying to assign arbitrary numbers to something that cannot be measured.  These are numbers for accountants; they want one number to be able to show them where to cut cost.  Problem is that there is no way to quantify how much money an IT department saves a company.  Metrics have gotten out of control in this country.  We are always measuring the cost and never measuring the value.  How do you assign a number to a person who is not a number?  How do you quantify the guy who spent all weekend fixing the server?  How do you quantify the accrued knowledge of a human being?  It is impossible to do.  The accountants never ask questions like, "How would my quality of life be affected if I couldn't get effective tech support?", "How much money would the company lose if these computers and programs didn't exist?".  You need to measure the man and his work as a whole, person to person.</htmltext>
<tokenext>Here is the problem... you are trying to assign arbitrary numbers to something that can not be measured .
These are numbers for accountants , they want one number to be able to show them where to cut cost .
Problem is that there is no way to quantify how much money an IT department saves a company .
Metrics have gotten out of control in this country .
We are always measuring the cost and never measuring the value .
How do you assign a number to a person who is not a number ?
How do you quantify the guy who spent all weekend fixing the server ?
How do you quantify the accrued knowledge of a human being ?
It impossible to do .
The accountants never ask questions like , " How would my quality of life be affected if I could n't get effective tech support ?
" , " How much money would the company loose if these computers and programs did n't exist ? " .
You need to measure the man and his work as a whole , person to person .</tokentext>
<sentencetext>Here is the problem... you are trying to assign arbitrary numbers to something that cannot be measured.
These are numbers for accountants; they want one number to be able to show them where to cut costs.
The problem is that there is no way to quantify how much money an IT department saves a company.
Metrics have gotten out of control in this country.
We are always measuring the cost and never measuring the value.
How do you assign a number to a person who is not a number?
How do you quantify the guy who spent all weekend fixing the server?
How do you quantify the accrued knowledge of a human being?
It's impossible to do.
The accountants never ask questions like, "How would my quality of life be affected if I couldn't get effective tech support?
", "How much money would the company loose if these computers and programs didn't exist?".
You need to measure the man and his work as a whole, person to person.</sentencetext>
</comment>
<comment>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355947</id>
	<title>The way we do things...</title>
	<author>anotherncbeachbum</author>
	<datestamp>1245159180000</datestamp>
	<modclass>None</modclass>
	<modscore>1</modscore>
	<htmltext>We have 5 different people, all of whom do things 5 different ways.  On one end of the scale you have folks who address issues as they are brought to light; on the other end of the scale you have folks who work to resolve issues before they come to light.

What's worked for me is looking over my trouble tickets to see if there is a pattern. Users having issues with an application? Ok, let's look at that further. Is it due to a troublesome application or lack of user training/understanding? If it's a troublesome app I look at getting the problem resolved. If it's something that I can duplicate I go to the vendor with it and ask them to resolve it w/o my having to purchase an upgrade if possible. Training has always been an issue; our hiring process says that users need to have an understanding of Windows XP and Office 2003 along with basic internet/email skills. It's right there, plain and simple. Often this part is ignored - they'll ask the user if they can use Windows/Office/etc and they always say yes. I end up kicking that back to HR asking them to define use - hell, my kid was moving the mouse around and randomly typing on the keyboard when she was 2. For many of our apps I've written basic training documentation; that seems to help.

I also try to be proactive in regards to security. I check our AV logs daily and whenever a new patch is released for a product we use I throw it on the test box to see how it plays with what we were running. If it passes I'll apply it - not too hard to do. Write a patchlink script or just deploy it ASAP. Some of my workers wait until our monthly security report is due then they scramble to get caught up. I've also worked to close a lot of security holes. Email is one - let's see....no reason for users to email .bat, .exe, etc...so I block them. Mailing lists are locked down to members only, everything else has to be approved. Earlier this year a greeting card link that contained Trojan.Vundo hit the mail system. I saw the first one come through from an outside source, which it blocked. I then wrote a filter to reject the content of it. None of my 300+ users received it. The others? Many people ended up clicking on the link and ended up with downtime, a couple of places were so infected that we dropped their network connection until they cleaned up.

The folks above me have different metrics. My boss has a motto - "due diligence".  When Vundo started taking over machines we had a conference call and had to report in. When I had to share my experience I said "what trojan? I blocked that thing at 5am, it's been quiet out here". Bad move....I was scolded for not doing my due diligence. He'd rather have us step in and work all night to clean up a mess than to prevent the mess in the first place. Yeah, show me how that works. While you guys were working 18-hour days cleaning up the mess I worked an 8-hour day then went out to a nice quiet dinner.</htmltext>
<tokenext>We have 5 different people , all of which do things 5 different ways .
On one end of the scale you have folks who address issues as they are brought to light , on the other end of the scale you have folks who work to resolve issues before them come to light .
What 's worked for me is looking over my trouble tickets to see if there is a pattern .
Users having issues with an application ?
Ok , let 's look at that further .
Is it due to a troublesome application or lack of user training/understanding ?
If it 's a troublesome app I look at getting the problem resolved .
If it 's something that I can duplicate I go to the vendor with it and ask them to resolve it w/o my having to purchase an upgrade if possible .
Training has always been an issue ; our hiring process says that users need to have an understanding of Windows XP and Office 2003 along with basic internet/email skills .
It 's right there , plain and simple .
Often this part is ignored - they 'll ask the user if they can use Windows/Office/etc and they always say yes .
I end up kicking that back to HR asking them to define use - hell , my kid was moving the mouse around and randomly typing on the keyboard when she was 2 .
For many of our apps I 've written basic training documentation ; that seems to help .
I also try to be proactive in regards to security .
I check our AV logs daily and whenever a new patch is released for a product we use I throw it on the test box to see how it plays with what we were running .
If it passes I 'll apply it - not too hard to do .
Write a patchlink script or just deploy it ASAP .
Some of my workers wait until our monthly security report is due then they scramble to get caught up .
I 've also worked to close a lot of security holes .
Email is one - let 's see....no reason for users to email .bat , .exe , etc...so I block them .
Mailing lists are locked down to members only , everything else has to be approved .
Earlier this year a greeting card link that contained Trojan.Vundo hit the mail system .
I saw the first one come through from an outside source , which it blocked .
I then wrote a filter to reject the content of it .
None of my 300 + users received it .
The others ?
Many people ended up clicking on the link and ended up with downtime , a couple of places were so infected that we dropped their network connection until they cleaned up .
The folks above me have different metrics .
My boss has a motto - " due diligence " .
When Vundo started taking over machines we had a conference call and had to report in .
When I had to share my experience I said " what trojan ?
I blocked that thing at 5am , it 's been quiet out here " .
Bad move....I was scolded for not doing my due diligence .
He 'd rather have us step in and work all night to clean up a mess than to prevent the mess in the first place .
Yeah , show me how that works .
While you guys were working 18-hour days cleaning up the mess I worked an 8-hour day then went out to a nice quiet dinner .</tokentext>
<sentencetext>We have 5 different people, all of which do things 5 different ways.
On one end of the scale you have folks who address issues as they are brought to light, on the other end of the scale you have folks who work to resolve issues before them come to light.
What's worked for me is looking over my trouble tickets to see if there is a pattern.
Users having issues with an application?
Ok, let's look at that further.
Is it due to a troublesome application or lack of user training/understanding?
If it's a troublesome app I look at getting the problem resolved.
If it's something that I can duplicate I go to the vendor with it and ask them to resolve it w/o my having to purchase an upgrade if possible.
Training has always been an issue; our hiring process says that users need to have an understanding of Windows XP and Office 2003 along with basic internet/email skills.
It's right there, plain and simple.
Often this part is ignored - they'll ask the user if they can use Windows/Office/etc and they always say yes.
I end up kicking that back to HR asking them to define use - hell, my kid was moving the mouse around and randomly typing on the keyboard when she was 2.
For many of our apps I've written basic training documentation; that seems to help.
I also try to be proactive in regards to security.
I check our AV logs daily and whenever a new patch is released for a product we use I throw it on the test box to see how it plays with what we were running.
If it passes I'll apply it - not too hard to do.
Write a patchlink script or just deploy it ASAP.
Some of my workers wait until our monthly security report is due then they scramble to get caught up.
I've also worked to close a lot of security holes.
Email is one - let's see....no reason for users to email .bat, .exe, etc...so I block them.
Mailing lists are locked down to members only, everything else has to be approved.
Earlier this year a greeting card link that contained Trojan.Vundo hit the mail system.
I saw the first one come through from an outside source, which it blocked.
I then wrote a filter to reject the content of it.
None of my 300+ users received it.
The others?
Many people ended up clicking on the link and ended up with downtime, a couple of places were so infected that we dropped their network connection until they cleaned up.
The folks above me have different metrics.
My boss has a motto - "due diligence".
When Vundo started taking over machines we had a conference call and had to report in.
When I had to share my experience I said "what trojan?
I blocked that thing at 5am, it's been quiet out here".
Bad move....I was scolded for not doing my due diligence.
He'd rather have us step in and work all night to clean up a mess than to prevent the mess in the first place.
Yeah, show me how that works.
While you guys were working 18-hour days cleaning up the mess I worked an 8-hour day then went out to a nice quiet dinner.</sentencetext>
</comment>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_40</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350401
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351979
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_41</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355793
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353607
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_32</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350403
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_28</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349861
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355333
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_31</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351467
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353551
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353103
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349631
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350367
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353467
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_47</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354671
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351037
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_51</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349693
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350679
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351095
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349859
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350311
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_37</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350175
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_30</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351421
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351877
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_44</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350237
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349721
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352031
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352475
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350785
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_50</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350573
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352401
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_45</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351983
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_36</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350711
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350345
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_35</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349721
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28357735
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28359611
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351911
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_42</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28365709
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_29</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351969
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_34</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354647
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_48</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351629
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351515
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354839
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353691
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350477
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_49</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351467
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356287
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_53</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350449
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_27</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349631
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350683
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351197
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349677
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350553
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_46</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350077
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350951
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354837
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353609
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350107
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354687
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350431
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_52</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355919
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_38</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349709
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350747
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_43</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28360297
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352101
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_39</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350077
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350957
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356777
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_33</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350249
</commentlist>
</thread>
<thread>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#thread_09_06_16_1630230_1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351459
</commentlist>
</thread>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.3</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349925
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.19</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349709
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350747
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.1</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351027
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.17</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350013
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.16</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350077
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350957
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350951
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.14</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349861
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355333
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.7</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349721
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28357735
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352031
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.5</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349763
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351979
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350431
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351515
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.12</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350157
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355919
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356777
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350785
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.23</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350715
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353103
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351421
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351983
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.8</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350573
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352401
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.21</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352325
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.10</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349631
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350367
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353467
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350683
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.2</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350727
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.6</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349913
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353609
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353691
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28355793
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.0</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349693
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350679
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351095
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.24</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349677
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350553
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.15</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349931
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.22</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349857
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.13</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349711
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.4</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353319
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.25</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349669
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349859
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350311
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28365709
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28360297
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354839
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352475
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351197
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351911
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349843
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351877
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351459
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350403
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353607
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354671
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350345
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350249
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350107
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350477
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354687
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.11</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350057
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.26</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349651
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350449
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350401
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351037
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28359611
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.20</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349659
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354837
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351969
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28354647
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.18</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349637
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350175
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28352101
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350237
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350711
-http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28350689
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351629
--http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28351467
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28353551
---http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28356287
</commentlist>
</conversation>
<conversation>
	<id>http://www.semanticweb.org/ontologies/ConversationInstances.owl#conversation09_06_16_1630230.9</id>
	<commentlist>http://www.semanticweb.org/ontologies/ConversationInstances.owl#comment09_06_16_1630230.28349743
</commentlist>
</conversation>
