6.2.5 Service Desk metrics

Metrics should be established so that performance of the Service Desk can be evaluated at regular intervals. This is important to assess the health, maturity, efficiency, effectiveness and any opportunities to improve Service Desk operations.

Metrics for Service Desk performance must be realistic and carefully chosen. It is common to select those metrics that are easily available and that may seem to be a possible indication of performance; however, this can be misleading. For example, the total number of calls received by the Service Desk is not in itself an indication of either good or bad performance and may in fact be caused by events completely outside the control of the Service Desk – for example a particularly busy period for the organization, or the release of a new version of a major corporate system.

An increase in the number of calls to the Service Desk can indicate less reliable services over that period – but may also indicate increased user confidence in a maturing Service Desk, making users more likely to seek assistance rather than try to cope alone. Before drawing either conclusion from this type of metric, compare previous periods, taking into account any Service Desk improvements implemented since the last measurement baseline as well as any service reliability changes, problems, etc., so that the true cause of the increase can be isolated.

Further analysis and more detailed metrics are therefore needed and must be examined over a period of time. These will include the call-handling statistics previously mentioned under telephony, and additionally:

  • The first-line resolution rate: the percentage of calls resolved at first line, without the need for escalation to other support groups. This is the figure often quoted by organizations as the primary measure of the Service Desk's performance – and used for comparison with the performance of other desks – but care is needed when making any comparisons. For greater accuracy and more valid comparisons this can be broken down further as follows:
    • The percentage of calls resolved during the first contact with the Service Desk, i.e. while the user is still on the telephone to report the call
    • The percentage of calls resolved by the Service Desk staff themselves, without having to seek deeper support from other groups. Note: some desks will choose to co-locate or embed more technically skilled second-line staff with the Service Desk (see Incident Management for further details). In such cases it is important when making comparisons to separate out (i) the percentage resolved by the Service Desk staff alone and (ii) the percentage resolved by the first-line Service Desk staff and second-line support staff combined.
  • Average time to resolve an incident (when resolved at first line)
  • Average time to escalate an incident (where first-line resolution is not possible)
  • Average Service Desk cost of handling an incident. Two metrics should be considered here:
    • Total cost of the Service Desk divided by the number of calls. This will provide an average figure which is useful as an index and for planning purposes but does not accurately represent the relative costs of different types of calls
    • Cost per minute (total costs for the period divided by total call-duration minutes), applied to each call's duration. This gives the cost of individual calls and a more accurate figure.

Evaluating incident types together with call duration gives a more refined picture of cost per call by type, indicating which incident types tend to cost more to resolve and suggesting possible targets for improvement, as the sketch below illustrates.
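As a simple illustration of the two costing approaches described above, the sketch below contrasts the flat average cost per call with a per-minute costing that weights each call by its duration and then aggregates by incident type. The call records, cost figure and incident types are hypothetical, chosen only to make the arithmetic concrete.

```python
from collections import defaultdict

# Hypothetical call records for one period: (incident_type, duration_minutes).
calls = [
    ("password reset", 4),
    ("password reset", 5),
    ("printer fault", 12),
    ("application error", 25),
]
total_desk_cost = 2_000.0  # assumed total Service Desk cost for the period

# Approach 1: flat average -- useful as an index and for planning,
# but blind to the relative cost of different call types.
print(f"Average cost per call: {total_desk_cost / len(calls):.2f}")

# Approach 2: cost per minute, applied to each call's duration and
# aggregated by incident type to show which types cost most to resolve.
cost_per_minute = total_desk_cost / sum(d for _, d in calls)
cost_by_type = defaultdict(float)
for incident_type, duration in calls:
    cost_by_type[incident_type] += duration * cost_per_minute

for incident_type, cost in sorted(cost_by_type.items(), key=lambda kv: -kv[1]):
    print(f"{incident_type}: {cost:.2f}")
```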



  • Percentage of customer or user updates conducted within target times, as defined in the SLAs
  • Average time to review and close a resolved call
  • The number of calls broken down by time of day and day of week, combined with the average call-time metric, is critical in determining the number of staff required (a sketch computing these metrics follows this list).
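By way of illustration only, the sketch below computes two of the metrics above from a set of hypothetical call records: the first-line resolution rate (with its stricter first-contact variant) and a rough per-hour staffing estimate derived from call volume and average call time. The record layout, all figures and the 80% occupancy ceiling are assumptions; real capacity planning would normally use a queueing model such as Erlang C rather than this simple arithmetic.

```python
import math
from collections import Counter

# Hypothetical records: (hour_of_day, handle_minutes,
#                        resolved_first_line, resolved_first_contact).
calls = [
    (9, 5, True, True),
    (9, 12, True, False),
    (10, 30, False, False),   # escalated to second line
    (11, 4, True, True),
    (14, 8, True, False),
    (16, 20, False, False),   # escalated to second line
]

# First-line resolution rate, and the stricter first-contact variant.
first_line = sum(1 for _, _, fl, _ in calls if fl) / len(calls)
first_contact = sum(1 for _, _, _, fc in calls if fc) / len(calls)
print(f"First-line resolution rate:    {first_line:.0%}")
print(f"First-contact resolution rate: {first_contact:.0%}")

# Rough staffing estimate: agent-hours of work per hour of day,
# divided by an assumed 80% maximum agent occupancy.
avg_handle_minutes = sum(m for _, m, _, _ in calls) / len(calls)
calls_per_hour = Counter(hour for hour, _, _, _ in calls)
for hour, volume in sorted(calls_per_hour.items()):
    workload = volume * avg_handle_minutes / 60.0
    agents = math.ceil(workload / 0.80)
    print(f"{hour:02d}:00  {volume} call(s) -> at least {agents} agent(s)")
```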

Further general details on metrics and how they should be used to drive forward service quality are included in the Continual Service Improvement publication.

6.2.5.1 Customer/user satisfaction surveys

As well as tracking the ‘hard’ measures of the Service Desk’s performance (via the metrics described above), it is also important to assess ‘soft’ measures – such as how well customers and users feel their calls have been answered, whether the Service Desk operator was courteous and professional, and whether the operator instilled confidence.

This type of measure is best obtained from the users themselves. It can be gathered as part of a wider customer/user satisfaction survey covering all of IT, or can be targeted specifically at Service Desk issues.

One effective way of achieving the latter is through a call-back telephone survey, where an independent Service Desk Operator or Supervisor rings back a small percentage of users shortly after their incident has been resolved, to ask the specific questions needed.

Care should be taken to keep the number of questions to a minimum (five or six at most) so that users are willing to take the time to cooperate. Survey questions should also be designed so that the user or customer knows what area or subject the questions are about and which incident or service they refer to. The Service Desk must act on low satisfaction levels and on any feedback received.

To allow adequate comparisons, the same percentage of calls should be selected in each period, and the surveys should be carried out rigorously despite any other time pressures, as in the sampling sketch below.
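A minimal sketch of that sampling discipline follows, assuming nothing more than a list of resolved incident references; the 5% rate, the reference format and the helper name are illustrative, not prescribed by the publication. Fixing the random seed per period makes the selection reproducible for audit.

```python
import random

SAMPLE_RATE = 0.05  # survey the same percentage of calls in every period

def select_for_callback(resolved_calls, rate=SAMPLE_RATE, seed=None):
    """Pick a random sample of resolved calls for the call-back survey."""
    rng = random.Random(seed)  # fixed seed -> reproducible selection
    size = max(1, round(len(resolved_calls) * rate))
    return rng.sample(resolved_calls, size)

# Hypothetical incident references resolved this period.
resolved = [f"INC{n:05d}" for n in range(1, 401)]
to_survey = select_for_callback(resolved, seed=2014)
print(f"Ring back {len(to_survey)} of {len(resolved)} users, e.g. {to_survey[:3]}")
```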


Surveys are a complex and specialized area, requiring a good understanding of statistics and survey techniques. This publication will not attempt to provide an overview of all of these, but a summary of some of the more widely used techniques and tools is listed in Table 6.1.

After-call survey: callers are asked to remain on the phone after the call and then to rate the service they were provided.
  Advantages:
  • High response rate, since the caller is already on the phone
  • The caller is surveyed immediately after the call, so their experience is recent
  Disadvantages:
  • People may feel pressured into taking the survey, resulting in a negative service experience
  • The surveyor is seen as part of the Service Desk being surveyed, which may discourage open answers

Outbound telephone survey: customers and users who have previously used the Service Desk are contacted some time after their experience with the Service Desk.
  Advantages:
  • Higher response rate, since the caller is interviewed directly
  • Specific categories of user or customer can be targeted for feedback (e.g. people who requested a specific service, or people who experienced a disruption to a particular service)
  Disadvantages:
  • The method can be seen as intrusive if the call disrupts the user or customer at work
  • The survey is conducted some time after the user or customer used the Service Desk, so their perception may have changed

Personal interviews: customers and users are interviewed in person by the surveyor. This is especially effective for customers or users who use the Service Desk extensively or who have had a very negative experience.
  Advantages:
  • The interviewer is able to observe non-verbal signals as well as listen to what the user or customer is saying
  • Users and customers feel a greater degree of personal attention and a sense that their answers are being taken seriously
  Disadvantages:
  • Interviews are time-consuming for both the interviewer and the respondent
  • Users and customers could turn the interviews into complaint sessions

Group interviews: customers and users are interviewed in small groups. This is good for gathering general impressions and for determining whether certain aspects of the Service Desk need to change, e.g. service hours or location.
  Advantages:
  • A larger number of users and customers can be interviewed
  • Questions are more generic and therefore more consistent between interviews
  Disadvantages:
  • People may not express themselves freely in front of their peers or managers
  • People’s opinions can easily be swayed by others in the group during the interview

Postal/e-mail surveys: survey questionnaires are mailed to a target set of customers and users, who are asked to return their responses by post or e-mail.
  Advantages:
  • All customers or users, or specific groups, can be targeted
  • Postal surveys can be anonymous, allowing people to express themselves more freely
  • E-mail surveys are not anonymous, but can use automated forms that make replying convenient and increase the likelihood of completion
  Disadvantages:
  • Postal surveys are labour-intensive to process
  • The percentage of people responding to postal surveys tends to be small
  • Misinterpretation of a question could affect the result

Online surveys: questionnaires are posted on a website, and users and customers are encouraged to participate via e-mail or links from a popular site.
  Advantages:
  • The potential audience of these surveys is fairly large
  • Respondents can complete the questionnaire in their own time
  • Links on popular websites are good reminders without being intrusive
  Disadvantage:
  • The percentage of respondents cannot be predicted

Table 6.1 Survey techniques and tools

