CHAPTER 6

Appraising Clinical Practice Guidelines


Jason T. Slyer, DNP, RN, FNP-BC, CHFN, FNAP


Primary care providers make clinical decisions on a daily basis. These decisions are often made under a veil of uncertainty. The translation of health research into usable evidence can help reduce this uncertainty in clinical practice. This, however, is dependent on a primary care provider’s ability to identify, appraise, interpret, and incorporate research evidence into practice.


The evidence-based practice (EBP) movement came to fruition in 1992 when the term evidence-based medicine was coined by Gordon Guyatt and colleagues of McMaster University. Evidence-based medicine was first defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients … [EBM integrates] clinical expertise with the best available external clinical evidence from systematic research” (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71). As the term evolved, different professions within the health care field began to adopt an evidence-based philosophy of practice. The definition of EBP further developed into a problem-solving approach to clinical care that integrates the conscientious use of evidence from “well-designed studies, the clinician’s expertise and the patient’s values and preferences” (Fineout-Overholt, Melnyk, & Schultz, 2005, p. 335). The core components of the best research evidence, patient’s preferences, clinical expertise, and the clinical context are inherent to the practice of EBP across disciplines and settings (see Figure 6.1).


Straus, Glasziou, Richardson, and Haynes (2011) outline a five-step process for employing EBP:



       1.  Ask an answerable, clinically focused question.


       2.  Identify the best evidence that answers that question.


       3.  Critically appraise the evidence for validity, strength, and clinical applicability.


       4.  Integrate evidence into clinical practice, incorporating patient values and beliefs.


       5.  Evaluate the effectiveness of applying the evidence in clinical practice and make improvements where needed.


Although these steps seem simple enough, carrying them out effectively can be a time-consuming process. With the rapidly growing knowledge base of health-related research, it is increasingly difficult for primary care providers to stay abreast of new knowledge and appraise that knowledge for use in practice. Many providers turn to systematic reviews and clinical practice guidelines (CPGs) that summarize available evidence in a more readily accessible format.


DEFINING CLINICAL PRACTICE GUIDELINES


Research evidence that has been critically appraised and synthesized is essential for busy providers to practice within an evidence-based framework. CPGs came into existence in the first half of the 20th century with the publication of the American Academy of Pediatrics’ Redbook of Infectious Diseases in 1938 (Institute of Medicine [IOM], 2011). As the amount of available research evidence and the demand to frame clinical practice within an EBP approach have increased, the need for CPGs has become more evident.


In 1990, at the request of the United States Congress, the IOM published Clinical Practice Guidelines: Directions for a New Program, which was followed in 1992 by Guidelines for Clinical Practice: From Development to Use (IOM, 1990, 1992). These reports provided direction for the newly formed Agency for Healthcare Policy and Research (now called the Agency for Healthcare Research and Quality), which was tasked with developing CPGs to appraise and synthesize the growing body of evidence (IOM, 2011).


As defined by the IOM, CPGs are “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” (IOM, 2011, p. 4). Recommendations contained within CPGs should be systematically derived from the best research evidence available by a panel of experts who have knowledge of the practice problem being evaluated. Whereas systematic reviews of the literature identify the best available evidence to answer a focused clinical question, CPGs further identify recommendations of what should and should not be done in a specific clinical context. Gaps frequently exist in the current knowledge base, and bias often plagues research findings. As a result, CPGs provide a grading of the quality of evidence used to make clinical recommendations and identify the strength of the recommendations, while taking into account benefits and harms (IOM, 2008).


The IOM has identified eight attributes of high-quality CPGs (IOM, 1990):



         Validity: CPGs are valid if, when carried out as directed, the projected outcomes are achieved.


         Reliability: CPGs are methodologically reliable if, following the same methods, other experts arrive at the same recommendations. CPGs are clinically reliable if, in a similar clinical context, different practitioners apply the guidelines in the same way.


         Clinical applicability: CPGs should explicitly identify the population to which the guidelines should be applied and should be as inclusive as the evidence allows.


         Clinical flexibility: CPGs should identify any exceptions to the recommendations provided to allow for flexibility in the interpretation of recommendations within the boundaries of the available evidence.


         Clarity: CPGs should be organized with a logical flow, defining all terms and avoiding ambiguous language, while being specific to the population and clinical context to which the recommendations should be applied.


         Interprofessional process: CPG development groups should include all key stakeholders who will be affected by the guideline recommendations.


         Scheduled review: CPGs should include a statement indicating when the guideline will be reviewed to determine if new evidence has emerged that may alter the present guidelines.


         Documentation: CPGs should be transparent in the documentation of the methods used, the evidence identified, and any assumptions made in the development of the guideline.



SEARCHING FOR CLINICAL PRACTICE GUIDELINES


After a clinical question has been posed, the next step of the EBP process is identifying the best available evidence to answer the question being asked. Databases such as the National Library of Medicine’s PubMed, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and EMBASE are common resources used to search for answers to clinical questions. These databases, however, contain millions of records extending over decades. Busy providers rarely have the time to search through and appraise the vast amount of research evidence available in order to incorporate the best evidence into practice. Systematic reviews and CPGs provide summaries of evidence to guide practice. Many groups produce CPGs, and these guidelines can be readily accessed on the web. In addition to the resources provided in Table 6.1, websites of professional organizations such as the American Cancer Society, the American College of Cardiology, or the American Diabetes Association are good resources for locating CPGs.


The number of CPGs available continues to increase, and primary care providers need to be able to evaluate the validity, reliability, and relevance of a CPG before implementing its recommendations in practice. For example, as of April 2013, a search of the National Guideline Clearinghouse (Agency for Healthcare Research and Quality, n.d.-a) listed 60 CPGs related to the management of type 2 diabetes mellitus. These CPGs were originally published between 1994 and 2012, some with subsequent revisions.


TABLE 6.1
Clinical Practice Guideline Resources

National Guideline Clearinghouse
http://guideline.gov/

Guidelines International Network
www.g-i-n.net/

National Institute for Health and Care Excellence (NICE)
www.nice.org.uk/

Canadian Medical Association Infobase: Clinical Practice Guidelines
www.cma.ca/cpgs/

Scottish Intercollegiate Guidelines Network (SIGN)
www.sign.ac.uk/

Australian National Health and Medical Research Council: Clinical Practice Guidelines Portal
www.clinicalguidelines.gov.au/


One of the challenges in identifying a CPG is filtering through the many guidelines that may exist for a specific health condition. Different interest groups may create conflicting guidelines for the same condition, reflecting the priorities of each group. Conflicting recommendations are often the result of weak evidence or gaps in the evidence, which the clinical expertise of the development group members is used to fill. Appraising guidelines for sources of bias therefore becomes a key step in choosing a CPG for use in practice.


Another consideration when choosing a CPG is the date the guideline was developed. How old is too old? CPGs require periodic updating as new evidence emerges, but there is no set timeline for when updates should occur. The updating schedule usually depends on the topic and the speed with which new evidence is being produced in that area. In fields such as cancer care, where new evidence is generated frequently, CPGs may need to be updated more often than in areas such as peptic ulcer disease, where management has remained relatively static. Evaluating the dates of the evidence cited in a CPG can give some indication of how current the recommendations are. Guideline developers can also be contacted to determine whether an update is planned or whether they consider the guideline current enough for use in practice. A database search for systematic reviews published since the release of a CPG can also help determine whether the guideline is in need of an update.
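
Such a check can also be run programmatically. The sketch below is a minimal illustration, assuming the National Library of Medicine’s public E-utilities esearch service; the search topic and release date are placeholders rather than details from any particular guideline.

    # Minimal sketch: count PubMed systematic reviews published since a CPG's
    # release, using the NCBI E-utilities esearch endpoint. The search topic and
    # the release date below are placeholders, not taken from this chapter.
    import datetime
    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    params = {
        "db": "pubmed",
        "term": "type 2 diabetes mellitus AND systematic review[pt]",
        "datetype": "pdat",                                   # filter on publication date
        "mindate": "2012/01/01",                              # e.g., the CPG's release date
        "maxdate": datetime.date.today().strftime("%Y/%m/%d"),
        "retmode": "json",
    }

    with urllib.request.urlopen(ESEARCH + "?" + urllib.parse.urlencode(params)) as resp:
        result = json.load(resp)

    print("Systematic reviews indexed since the guideline's release:",
          result["esearchresult"]["count"])

A large count does not by itself mean the guideline is outdated, but it flags topics where the evidence base may have moved on and the newer reviews deserve a closer look.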


Shekelle et al. (2001) identified six circumstances that necessitate updating of CPGs:



         Changes in the available interventions


         Changes in evidence related to the benefits or harms of interventions


         Changes in the outcomes considered important


         Changes in the evidence related to optimal practice


         Changes in the values placed on outcomes


         Changes in resource availability


In a review of 17 CPGs produced by the Agency for Healthcare Research and Quality, Shekelle et al. (2001) concluded that half of the guidelines became obsolete after 5.8 years. They went on to recommend that all CPGs be reviewed every 3 years. The IOM (2011) has not provided a time estimate but only recommends regular reviews of the literature for the emergence of new evidence that would necessitate the updating of CPGs.


APPRAISING CPGs


Determining whether a CPG is valid and relevant to a specific clinical context first requires a critical appraisal of the guideline. Despite the methodologies formulated by the IOM for the development of CPGs, guideline developers do not always adhere to these standards (Shaneyfelt, Mayo-Smith, & Rothwangl, 1999). Transparency in the guideline development process is necessary for an assessment of the rigor of guideline development. Determining how well guideline developers adhered to and reported on each of the eight attributes described by the IOM is an important step in applying evidence to practice.


Appraisal Tools


Numerous critical appraisal instruments have been developed to help users differentiate between higher- and lower-quality CPGs. A review of the literature identified two systematic reviews comparing CPG appraisal instruments. The first identified 15 different instruments developed between 1992 and 1999 (Graham, Calder, Hebert, Carter, & Tetroe, 2000). The second identified 24 instruments in a search up through 2003 (Vlayen, Aertgeerts, Hannes, Sermeus, & Ramaekers, 2005). Many of the instruments in existence are based on the eight attributes of CPGs described by the IOM. Only four of the critical appraisal instruments identified in these systematic reviews underwent testing for reliability and validity. Of these, the Cluzeau instrument (Cluzeau, Littlejohns, Grimshaw, Feder, & Moran, 1999), which was based on the instrument originally developed by the IOM, and the Appraisal of Guidelines for Research and Evaluation (AGREE) instrument (AGREE Collaboration, 2003), which is based on the Cluzeau instrument, were determined by the reviewers to address all of the dimensions deemed necessary for an appraisal instrument.


The AGREE instrument has become the most widely used critical appraisal tool for CPGs. It is easy to use and has been tested for reliability and validity on 100 guidelines from 11 countries by more than 200 different appraisers (AGREE Collaboration, 2003). The AGREE instrument has since been refined, improving its reliability and validity, and updated to better meet the needs of the user (AGREE Next Steps Consortium, 2009). The AGREE II instrument contains 23 items grouped into 6 domains, with each item scored on a 7-point Likert scale. There is no cutoff point that differentiates between a good- and a poor-quality CPG; this decision should be made by the appraiser, taking into account the clinical context in which the CPG is to be applied. The AGREE II instrument can be used to appraise CPGs on any health condition and at any stage on the continuum of care, from health promotion and screening to diagnosis and treatment (AGREE Next Steps Consortium, 2009). The original AGREE instrument and the AGREE II instrument, which is available in six languages, can be found on the AGREE Research Trust’s website (www.agreetrust.org).
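
This chapter does not detail how item ratings are combined, but the AGREE II user manual describes a scaled domain score; the short Python sketch below illustrates that general idea, with the item ratings invented for the example.

    # Illustrative sketch of aggregating 7-point AGREE II item ratings into a
    # scaled domain score. The percentage formula follows the approach described
    # in the AGREE II user manual; the ratings themselves are invented examples.
    def scaled_domain_score(ratings_by_appraiser):
        """ratings_by_appraiser: one list of 1-7 item ratings per appraiser,
        all for the same domain."""
        n_appraisers = len(ratings_by_appraiser)
        n_items = len(ratings_by_appraiser[0])
        obtained = sum(sum(ratings) for ratings in ratings_by_appraiser)
        minimum = 1 * n_items * n_appraisers   # every item rated 1 (strongly disagree)
        maximum = 7 * n_items * n_appraisers   # every item rated 7 (strongly agree)
        return 100 * (obtained - minimum) / (maximum - minimum)

    # Two appraisers rating the three scope and purpose items
    print(round(scaled_domain_score([[6, 7, 5], [5, 6, 6]])))  # about 81

Because AGREE II sets no cutoff score, a figure like this informs, rather than replaces, the appraiser’s judgment about whether a guideline is suitable for the intended clinical context.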


One important limitation of existing appraisal instruments, including the AGREE II instrument, is the lack of assessment of the clinical content of the CPG or the quality of the evidence that supports the CPG’s recommendations (Vlayen et al., 2005). All appraisals of CPGs are limited by the extent to which the guideline development process was documented. The transparent documentation of the development process, however, does not always lead to high-quality recommendations, making critical appraisal of CPGs an important step in putting their recommendations into practice.


In 2002, the Conference on Guidelines Standardization (COGS), a group representing 22 different professional organizations, developed a checklist of 18 items to support more comprehensive documentation in CPGs (Shiffman et al., 2003). The 18 items outlined in the COGS statement are recommendations of what should be contained in all CPGs to enhance validity and usability of guidelines. The National Guideline Clearinghouse has also developed a list of 55 attributes that a CPG must address in order to be published in its database, including scope, methodology, evidence supporting the recommendations, benefits and harms, contraindications, and implementation of the guideline (Agency for Healthcare Research and Quality, n.d.-b). The development of standardized attributes for CPG reporting adds to the transparency of reporting the CPG development process and supports the critical appraisal of guidelines, which is recommended prior to implementation of recommendations into practice.


Domains of Clinical Practice Guidelines Quality Assessment


When appraising CPGs, primary care providers want to determine what the recommendations are, whether the recommendations are valid, and whether the recommendations are applicable to the context in which the provider intends to apply them. The six domains evaluated by the AGREE II instrument can aid primary care providers in answering these questions. These six domains, each evaluating a different quality attribute, are scope and purpose, stakeholder involvement, rigor of development, clarity of presentation, applicability, and editorial independence (AGREE Next Steps Consortium, 2009). The questions under each domain can be found in Table 6.2.


SCOPE AND PURPOSE


The scope and purpose domain is concerned with the overall aim of the CPG. A CPG should contain a specific statement that clearly identifies the objectives of the guideline and how the recommendations contained within the guideline can impact health. The clinical questions being asked should be explicitly stated. Questions are typically defined using the PICO format, in which the population (P), intervention (I), comparator (C), and outcomes (O) of interest are described. Clear questions are necessary to guide the search of the literature in the development of recommendations for practice. The target population to which the recommendations are meant to be applied must also be described, as recommendations should not be generalized to populations outside the scope of the guideline. Characteristics of the population, such as age, gender, race, ethnicity, diagnosis, clinical characteristics, or other defining attributes, should be included.
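
As a purely illustrative sketch (the clinical details below are hypothetical and not drawn from any guideline discussed here), a PICO question can be captured as a simple data structure:

    # Hypothetical illustration of a PICO-structured clinical question.
    # The clinical details are invented for the example.
    from dataclasses import dataclass, field

    @dataclass
    class PicoQuestion:
        population: str          # P: who the recommendations apply to
        intervention: str        # I: the intervention of interest
        comparator: str          # C: the alternative it is compared against
        outcomes: list = field(default_factory=list)  # O: the outcomes that matter

    question = PicoQuestion(
        population="Adults aged 18 and older with type 2 diabetes mellitus",
        intervention="Structured diabetes self-management education",
        comparator="Usual care",
        outcomes=["HbA1c", "hypoglycemic episodes", "health-related quality of life"],
    )
    print(question)

Making each PICO element explicit in this way also makes it easier to judge whether a guideline’s stated scope matches the population and outcomes a provider cares about.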


STAKEHOLDER INVOLVEMENT


CPGs are typically developed by groups of professionals who hold a vested interest in the questions being asked by the guideline. Members of the development group, including their discipline and expertise, should be reported in the CPG. All relevant stakeholders should have representation in the development group. An interprofessional panel brings diverse experiences and philosophies to the guideline development process to ensure that the evidence is interpreted with limited bias.

