Chapter 19 Audit and evaluation of a perioperative service


Ciaran Hurley


SUMMARY


What does evaluation mean?


Why should perioperative services be evaluated?


What should be evaluated?


Designing a tool for evaluation of a service


Ethical considerations


INTRODUCTION


The era of evidence-based healthcare is well and truly established, and in this context it is important that service providers can demonstrate that they offer valuable returns for the investments that they receive, whether their funds are sourced from individuals, an acute care trust, a primary care trust or from regional or national budgets. This chapter outlines some of the basic principles of evaluation with a focus on perioperative care services. To make some of the points relevant to readers’ experience, a number of examples are used throughout the chapter to illustrate the application of the principles of evaluation: adherence to the National Institute for Health and Clinical Excellence (NICE) recommendations for preoperative investigations, adherence to a target of universal preoperative assessment for surgical patients, adherence to the RCN guideline for perioperative fasting, patient satisfaction in an ambulatory surgery department and the identification of bottle-necks in the operating department’s throughput of patients. These examples should allow the reader to make connections between some of the abstract principles of evaluation and the practical aspects of choosing a focus, design and method.


What does evaluation mean?


Commonly the word ‘evaluation’ relates to making a judgement or assessment. The technical application with which this chapter is concerned relates to approaches taken to assess the effect of a service or object. This assessment can take several forms, each with a distinct purpose, and the choice between them must be informed by the purpose of the assessment being carried out. The forms of evaluation are commonly known by the three terms service evaluation, audit and research. Service evaluation is used to identify the state of a service that is being offered and may be focused on a comparison of the reality with the recommendations of an advisory body such as NICE, or with the findings of a literature review. Audits seek to establish the compliance of a service with a standard that has been previously agreed. Research is concerned with the investigation of aspects of practice about which little is currently known or understood. A range of methods can be used to collect the data required for evaluation, and the methods are not unique to any of the forms; they include surveys, questionnaires, observation schedules, case studies and experiments of various types.


In their seminal book, Lincoln and Guba1 coined the word evaluand to indicate the subject to be evaluated. Almost anything can be the subject of evaluation, and in the context of organisations that provide services, evaluands can be remarkably diverse. It is important, however, to bear in mind that the evaluation must have a purpose; evaluation for its own sake is a waste of resources, producing little benefit for the service provider, the recipients of the service or the personnel delivering it.


Why should perioperative services be evaluated?


There are a number of reasons to evaluate perioperative services. I have already alluded to one of them, and that is to provide evidence that the service being provided meets the expectations of the people or organisations that provide the funds on which the service runs. The objective of evaluations motivated by finance is fairly crude; people putting up cash want evidence that it is used efficiently and that the service provides value for money.


Another group with expectations about a service may be the clients, patients or users; evaluation can identify patients’ experiences and perceptions of the services provided and thus provide the basis for identifying targets for improvement and a framework for intervention. Possible motivations for such evaluation are complex. There is certainly an element of satisfaction in knowing that your clients enjoy the service they receive, but self-congratulation is hardly a laudable reason for undertaking evaluation. Providing patient groups, hospital management boards and those in primary care who purchase surgical services with evidence that the service provided meets the expectations of patients will make a difference in a climate where the Choose and Book system means patients exercise control and choice over the site at which they receive their treatment. Similarly, more patients might choose to have surgery in a hospital that can provide reliable evidence of the quality of their services, and under the Payment by Results scheme, more patients means more income for the hospital, something which the directors of a foundation hospital trust will want to see. Patient involvement groups can help to identify the correct focus for evaluation as well as advising on how to implement and sustain improvements to the services offered.


A third reason is to test the service provider’s performance against targets set either internally or by external agencies such as NICE, the Department of Health, the National Health Service Executive and others. In some cases, such as the Essence of Care benchmarks, the targets and the methods of measuring them are quite simple. Others may be more complex, for example reducing the number of operations cancelled for non-clinical reasons; the causes of such cancellation are generally related to system or process failures. Problems that originate from system failures can be identified by an activity known as process mapping, in which people involved in the service gather to isolate the actual and potential problems and suggest possible solutions. Having identified the problems, a process of service redesign should aim to change the aspects of the system that are problematic. For this purpose a number of NHS organisations have set up service redesign teams, groups of people with experience and skills who support the process of change to ensure that it is effective and that the changes are maintained after the initial enthusiasm fades.


Reasons to evaluate perioperative services can be broadly categorised into those that justify financial outlay, those that demonstrate patient satisfaction and those that demonstrate the achievement of non-financial targets. In an organisational context, evaluation may be driven either by an external agency or by internal motivations. While external agencies often provide the basis of an assessment tool, it may need to be adapted in order to extract valid data from your organisation; internally motivated evaluation will almost certainly have many unique elements in its design.


What should be evaluated?


A number of aspects of perioperative practice that could be the subject of some kind of evaluation were listed in the introduction to this chapter. None of these stands alone as a question of evaluation, because each lacks focus and contains a number of potential questions: Why do deviations from the NICE recommended investigations occur, and what are the financial and process costs? How many patients arrive for surgery without attending preoperative assessment services, and what impact does this have on their care? For how long do patients fast before surgery, and what is the incidence of the postoperative complications associated with fasting for too long? What elements of our service do patients like, dislike and want to see improved? For how long do theatre personnel stand idle waiting for a patient to arrive at each stage of the perioperative journey? Are there patterns to the time and place at which personnel are waiting for a patient to arrive? All of these questions need to be further refined to identify the specific and singular questions that underlie the problem being evaluated.


Identifying these inherent questions represents one of the earliest and most important stages of the evaluative process: refining the question. Simply asking ‘Are we meeting the NICE recommendations?’ will inevitably lead to the answer ‘No’, but experience and common sense suggest that there are often good reasons for deviating from recommended practices. Asking ‘Do our staff ever stand idle?’ will most probably be answered ‘Yes’, concluding the investigation. More pertinent questions may be ‘Is there a pattern to the bottle-necks in the system?’, ‘What are the causes of the bottle-necks?’ and ‘Can the bottle-necks be overcome?’
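
To illustrate how a refined question of this kind might be answered, the following minimal sketch cross-tabulates waiting time by stage and by session. The records, field names and figures are hypothetical, invented purely to show the shape of the analysis rather than any actual departmental data.

```python
from collections import defaultdict

# Hypothetical delay records: stage of the perioperative journey,
# hour of the day and minutes that staff waited for the patient.
delays = [
    {"stage": "reception", "hour": 9, "wait_min": 4},
    {"stage": "anaesthetic_room", "hour": 9, "wait_min": 12},
    {"stage": "anaesthetic_room", "hour": 13, "wait_min": 15},
    {"stage": "recovery", "hour": 17, "wait_min": 3},
]

# Cross-tabulate total waiting time by stage and by morning/afternoon
# session to reveal where and when bottle-necks occur.
totals = defaultdict(int)
for d in delays:
    session = "am" if d["hour"] < 13 else "pm"
    totals[(d["stage"], session)] += d["wait_min"]

for (stage, session), minutes in sorted(totals.items()):
    print(f"{stage:18} {session}: {minutes} min waited")
```

A table of this kind turns ‘Do our staff ever stand idle?’ into ‘Where and when do they stand idle?’, which is a question capable of directing action.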


Refining the questions in this way is important because it helps to identify the type of question to which the problem belongs. The list above contains two types of question; some are quantitative, having an answer that relies upon numerical data; others are qualitative and are answered according to the ways in which people or groups respond or interact with the evaluand.


Compliance with targets and personnel idle time are purely quantitative problems as the only way to answer the questions is to count things: the number of patients who have not attended preoperative services, the number of investigations that are not indicated by findings of an assessment and examination, the amount of time in which theatre personnel are not engaged in patient contact.
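
A minimal sketch of this kind of counting follows; the audit records and their field names are invented for illustration, and a real audit would draw them from the patient administration system or a purpose-built proforma.

```python
# Hypothetical audit records, one per patient episode.
episodes = [
    {"attended_preop": True,  "tests_not_indicated": 0, "idle_min": 5},
    {"attended_preop": False, "tests_not_indicated": 2, "idle_min": 12},
    {"attended_preop": True,  "tests_not_indicated": 1, "idle_min": 0},
]

n = len(episodes)
missed_preop = sum(1 for e in episodes if not e["attended_preop"])
extra_tests = sum(e["tests_not_indicated"] for e in episodes)
total_idle = sum(e["idle_min"] for e in episodes)

print(f"{missed_preop}/{n} patients missed preoperative assessment")
print(f"{extra_tests} investigations performed without indication")
print(f"{total_idle} minutes of personnel idle time recorded")
```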


Patient satisfaction is a qualitative issue because it asks about aspects of service that patients liked and disliked, enjoyed or suffered. Some problems, such as fasting, have both qualitative and quantitative elements. While the duration of fasting is a simple measurement of time, it is equally possible to discover the patient’s psychosocial response, which is qualitative in nature.


Already I have begun to outline an important aspect of the next section of this chapter, which examines the design of the evaluation tool. The old adage that ‘failure to plan means planning to fail’ is as true in this context as in any other, because failure to refine the question will very likely lead to choosing inappropriate methods of finding the answer. Refinement is therefore a vital part of the evaluation, demanding the investigator’s attention at a very early stage.


Designing a tool for evaluation of a service


The following section is an overview of the process by which questions of evaluation can be answered. This is not a recipe that will guarantee desired results, merely a framework and one source of advice; some aspects of it may safely be ignored, others are essential; some aspects can be done in varying sequence while others are dependent on the outcome of previous stages. The vital elements are summarised in Table 19.1.


The critical review of literature before an investigation serves a number of purposes; it can identify an answer to the question and thus eliminate the need to spend resources on data collection and analysis; it can influence the questions asked by revealing new and important understanding of the evaluand and it can influence the methods of evaluation by identifying methods that have been successfully employed by others. Performing a literature review is not a matter of simply reading and summarising the published literature, however, and the skills of critical analysis and synthesis should be learned, fostered and supported in order to ensure that the findings of review are valid. Comprehensive advice regarding literature review is available from many sources; a handful are listed with the references to this chapter.2,3,4



Table 19.1 The stages of evaluation, their purpose and requirements

Literature review
Purpose: identifying what is known about the evaluand in the literature and confirming the need to proceed with data collection.
Requirements: knowledge of and skill in the literature review method; ability in critical reading; ability to extract data from literature.

Design
Purpose: plan and organise the collection and analysis of data.
Requirements: knowledge of the evaluand, from literature review and/or practical experience; knowledge and/or experience of methods of evaluation.

Analysis of data
Purpose: extract meaning from the data.
Requirements: ability to identify the method of analysis required by the nature of the data; knowledge and skill to apply the method of analysis.

Dissemination of findings
Purpose: sharing the findings with interested parties.
Requirements: basic IT skills; access to the means of communication appropriate to the audience and the data.

The design of any investigation is critical because it establishes the relationship between the three basic elements of any evaluation: the sample, the tools and the analysis. The sample, the source of raw data, may be a person or group, one or more objects, or a set of data related to people or objects, and should be capable of delivering the data that is required for the evaluation. The method of collecting the data should be appropriate to the type of data collected and should allow for the storage and recall of the data for the purpose of analysis. Finally, the method of analysis must be suited to the type of data that has been collected. Failure to identify these relationships, and to plan methods of evaluation that make congruent links between them, can lead to invalid or unreliable findings.


The analysis of the data holds great significance for the overall validity and reliability of the findings of the evaluation. Quantitative data can be daunting for anyone without training in the use of statistics. There are written guides that can help an investigator decide which test is appropriate for the question and the type of data collected. It is advisable to seek advice as early as possible in the design process, because a statistician can advise on whether the data to be collected is amenable to statistical analysis, on the best test to extract conclusions from the data and on the sample size required for valid conclusions. Most organisations have access to the support of an audit department, where people with expertise are available to help with all stages of evaluation, or a research department if the evaluation falls into that category.
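
As one illustration of the kind of advice a statistician might give, the standard formula for the sample size needed to estimate a proportion to a given margin of error can be computed as in the sketch below. The worked example is illustrative only and is no substitute for statistical advice on a specific project.

```python
import math

def sample_size_for_proportion(p=0.5, margin=0.05, z=1.96):
    """Sample size needed to estimate a proportion p to within
    +/- margin at the confidence level implied by z (1.96 ~ 95%).
    p = 0.5 is the conservative worst case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# e.g. auditing the proportion of patients fasted for too long,
# to within 5 percentage points at 95% confidence:
print(sample_size_for_proportion())  # 385
```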


Qualitative data are analysed in a very different way to quantitative data. The data generally takes the form of words, spoken or written, and the source of the data is people. Being notoriously unpredictable, people cannot be relied upon to fit into easy or simple categories and, to a certain extent, analysis requires a willingness to go with the flow.


If consistency in the type or presentation of the response is an important part of the evaluation, attention should be paid to the manner in which the data is collected; the researcher has little control over the responses in a self-administered questionnaire, but a researcher-administered questionnaire provides an opportunity to answer queries, to ensure that the respondent has understood the question and to ensure that the response is given in a valid format. Qualitative analysis generally identifies themes in all or most of the data and compares and contrasts the responses of the individuals in relation to each theme. A popular example of an approach to qualitative data analysis is the ‘Framework’ method detailed by Ritchie et al.5,6
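
The Framework method itself is considerably richer than any code fragment, but at its heart lies a matrix that charts coded material by case and by theme. The following minimal sketch, using invented respondents and excerpts, shows that charting step only:

```python
# Hypothetical coded excerpts from patient interviews:
# (respondent, theme, excerpt)
coded = [
    ("P1", "information", "Nobody told me why I had to stop drinking."),
    ("P2", "information", "The leaflet explained the fasting times clearly."),
    ("P1", "comfort", "I was thirsty and light-headed by midday."),
    ("P3", "comfort", "The wait was fine; I did not feel hungry."),
]

# Chart the data into a case-by-theme matrix so that responses can be
# compared within a theme and contrasted across respondents.
matrix = {}
for respondent, theme, excerpt in coded:
    matrix.setdefault(theme, {}).setdefault(respondent, []).append(excerpt)

for theme, cases in matrix.items():
    print(theme)
    for respondent, excerpts in sorted(cases.items()):
        print(f"  {respondent}: {' / '.join(excerpts)}")
```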


A cautionary tale in evaluation design


A senior doctor wanted to know where and when patients were delayed in the process of being operated upon. He created a proforma, to be commenced when the department secretary telephoned the ward to request that a patient be brought to the reception area. Each time a movement within the department was completed, the time was logged in the proforma: when the patient arrived in reception, when the support worker arrived to take the patient to the anaesthetic room, on arrival in the anaesthetic room, on the arrival of the anaesthetist and so on. He thought he had planned everything down to the last detail, but on the day the investigation began a very significant problem became apparent. The many clocks in the department were not synchronised and differed, in some cases by as much as five minutes, resulting in data that was nonsensical; for instance, it appeared that some patients arrived in the anaesthetic room before they had left reception. It is clear that in this case the tools used to collect the data (the clocks) were not fit for purpose. The proforma would have yielded the data the doctor required, the analysis was a matter of simple descriptive statistics and the sample of every patient was appropriate; the clocks were the only weak link in the whole project.
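
The flaw would have surfaced immediately with a simple validity check on the collected timestamps. The following minimal sketch, with invented stage names and times recorded as minutes since midnight, flags any stage that appears to happen before the one preceding it:

```python
# One patient's proforma: stage name and clock time in minutes since
# midnight, each read from a different (unsynchronised) wall clock.
proforma = [
    ("ward telephoned", 540),
    ("arrived in reception", 548),
    ("left reception with support worker", 551),
    ("arrived in anaesthetic room", 549),  # earlier clock: impossible
    ("anaesthetist arrived", 556),
]

# Flag any stage whose timestamp precedes the previous one: a negative
# interval means the clocks, not the patient, have gone backwards.
for (prev_stage, prev_t), (stage, t) in zip(proforma, proforma[1:]):
    interval = t - prev_t
    flag = "  <-- IMPOSSIBLE: check clocks" if interval < 0 else ""
    print(f"{prev_stage} -> {stage}: {interval} min{flag}")
```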


Ethical considerations


At face value it may seem that evaluation of perioperative services is unlikely to be ethically challenging; surely anything that aims to improve the services we offer is ethically justified? This section aims to demonstrate the potential for evaluation to breach the principles of ethics that are commonly accepted in western cultures.7 The over-riding principle of ethical evaluation is the avoidance of unnecessary harm and disproportionate exploitation. If people are to be inconvenienced or put at risk as a result of the investigation a number of criteria must be met. To ensure good practice, investigations are subject to an approval and governance process and all investigations should be discussed with the administrator of the appropriate governance process. Service evaluation and audits are normally managed in-house by an internal department, research has a national governance process with local representation, and advice should be sought from the hospital’s research and development department.


Good practice aims to minimise the risk of a range of potential problems, including: inappropriate use of personal data and confidential information; conflict of interest between a patient’s need for healthcare and the investigator’s aims; creating conflict between employer and employee; and causing distress to people by asking them to recall unpleasant events from the past. Sometimes, however, the subject of an evaluation is important enough to warrant a certain level of risk, and where such risks are predictable, mechanisms are required to manage the consequences: if an audit of working practices results in criticism of an employer, the employees ought to be protected from subsequent bullying or harassment at work; if patients are being asked to recall events from the past, counselling services should be available to support those who find such recall distressing.
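
As one illustration of reducing the risk of inappropriate use of personal data, identifiers can be pseudonymised before analysis, so that records remain linkable without revealing who the patient is. The sketch below is a minimal illustration rather than a complete information governance solution; the key and hospital number are invented, and any real scheme should be agreed with the organisation’s data custodian.

```python
import hashlib
import hmac

# A secret key held by the data custodian, separately from the dataset.
SECRET_KEY = b"replace-with-a-key-held-by-the-custodian"

def pseudonymise(hospital_number: str) -> str:
    """Replace an identifier with a keyed hash so records can still be
    linked across datasets without exposing the patient's identity."""
    digest = hmac.new(SECRET_KEY, hospital_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

print(pseudonymise("RX123456"))  # the same input always yields the same code
```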


CONCLUSION


This chapter is not intended to be a ‘how to…’ guide to evaluating services; that would require a whole text of its own. Rather, it outlines the basic principles that should be applied during the design stage, with reference to some common features of perioperative practice. The subject of the evaluation should be sufficiently focused to allow for precision in the later stages of sampling, data collection and analysis. Once the focus is fixed, the investigator should identify the population that will provide the information required. This population may consist of people, objects or data; in any case it is essential to go to the source of the data, since any other source will yield data that is of little or no value to the project. The design of an evaluation should include the method by which the data is to be analysed, to ensure that there is congruence between the question asked, the data collected and the means of analysis. Consideration should also be given in the design stages to the means of disseminating the findings and the audience to which they will appeal. Finally, once the design is complete, it should be presented to a review body to ensure that the project meets the expected standards of ethics and design. Once approval to proceed has been granted, the hard work begins! Good luck.


REFERENCES


1. Y. Lincoln and E. Guba (1985). Naturalistic Inquiry. Newbury Park CA: Sage Publications.


2. D.F. Polit (2006). Essentials of Nursing Research: Methods, Application and Utilisation (6th edn). London, Philadelphia PA: Lippincott, Williams and Wilkins.


3. C. Hart (1998). Doing a Literature Review. London: Sage Publications.


4. N. Burns and S.K. Grove (2005). The Practice of Nursing Research: Conduct, Critique and Utilisation (5th edn). St Louis MO: Elsevier Saunders.


5. J. Ritchie, L. Spencer and W. O’Connor (2003). Analysis: Practices, principles and process. In Qualitative Research Practice, ed. J. Ritchie and J. Lewis. London: Sage Publications.


6. J. Ritchie, L. Spencer and W. O’Connor (2003). Analysis: Carrying out qualitative analysis. In Qualitative Research Practice, ed. J. Ritchie and J. Lewis. London: Sage Publications.


7. T.L. Beauchamp and J.F. Childress (2001). Principles of Biomedical Ethics (5th edn). Oxford: Oxford University Press.

