This chapter covers both quality assessment and improvement systems and patient safety issues. These are in some cases distinct and in others inextricably interrelated. They are each debated on national and international levels, and both are fundamental elements of the daily practice of pain medicine. The first section considers quality assessment and improvement programs and some practical steps to take to create a “QA/QI” program in a pain practice. The second section discusses the patient safety movement, both on a national scale and in terms of how each pain practitioner can expect to be involved.
Section 6.1
Quality Assessment and Improvement in the Pain Clinic
A significant factor in the deficits of our nation’s health is the broader social dysfunction of the nation: the inequities of education, income, employment, social support, and opportunity that still exist in this country. Nonetheless, we are cognizant that our health care system is also severely troubled. In the ensuing two sections we discuss the topics of quality and patient safety separately. These sections review the practical efforts that pain physicians can make to address these issues at the level that they can control: in the pain clinic.
What Is “Quality” in Health Care: Do We Have It?
Quality as an issue in health care is a relatively recent phenomenon. Starr’s massive and Pulitzer Prize–winning 1982 review of health care policy and its relationship to society has no entry in its index for the word “quality” (or “value” or “outcome” for that matter). Health care quality has been defined in many ways over the past several years. Since providers are not the only parties interested in defining and determining quality, many opinions are available on what constitutes health care quality.
Lohr created a definition that the National Academy of Sciences’ Institute of Medicine (IOM) has included in its discussion of quality: “Quality is the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.” This definition, with its emphasis on targeting known “desired” goals of health and its link to evidence-based medicine, is attractive to most physicians. As Donabedian wrote in his landmark 1966 paper on quality assessment in medicine: “As such, the definition of quality may be almost anything anyone wishes it to be, although it is, ordinarily, a reflection of values and goals current in the medical care system and in the larger society of which it is a part.”
A recommended addition to the “medical” definition offered by Lohr is one that recognizes the importance of process (delivery of an intervention), as well as the structures that support that care, in determining the actual outcome of the care. This three-legged stool provides opportunity for measurement (and improvement) of more than just report cards, and it leads to a more comprehensive definition of health care quality if we add what is suggested by Bowers and Kiefe: “quality being the extent to which structure and process maximize the likelihood of good outcomes.”
Again, the emphasis is on the “likelihood” of good outcomes because high-quality care and outcomes are not necessarily directly linked. As Chassin and Galvin stated, the vagaries of the human condition mean that good-quality medical care can be followed by a poor outcome and excellent outcomes can occur despite poor care.
Quality Versus Variation in Health Care: Misuse, Overuse, Underuse, or None of the Above?
Another important definition of quality in health care is the absence of misuse, underuse, or overuse of therapy. These three embodiments of poor quality, originally cited by Donabedian in the 1960s, remain a common reference in discussions of health care quality. The process of avoiding or eliminating these three problems is inherent in the standards of evidence-based medicine and in the effort to assess and eliminate unwarranted variation in care patterns. It is important to recognize that variation may be good or bad, or neither. It may indicate significant overuse or underuse of a therapy and thereby signal that either the science of the therapy is poor or it is being applied haphazardly.
However, before practice data can truly be defined as “variation,” they must be examined for any underlying reasons for diverse practices (including dissimilarities in supply and demand because of geographic economics and climate, for instance). Only if these factors are similar can any variation detected be linked to quality, rather than just being associated with confounding factors unrelated to quality. For instance, an increased number of obstetricians in an area may be associated with a higher birth rate in that area, but physician supply does not cause pregnancy (or birth).
An important example of misinterpretation of variation is found in the influential monograph “The Care of Patients with Severe Chronic Illness: A Report on the Medicare Program by the Dartmouth Atlas.” Here the authors counted the number of physician visits by Medicare beneficiaries with various chronic medical conditions—filtered by hospital and by geographic region—and determined that the geographic regions where there are fewer physician visits are the same locations where the Medicare beneficiaries have higher levels of patient satisfaction and quality of care. From this they extrapolated to the conclusion that we have a surplus of physicians. One example cited is the contrast between the Mayo Clinic, where there are fewer physician visits, and nonintegrated health care systems in New York City and Los Angeles.
In reality, variation in health care practitioner visits may be associated with a diversity of genetic, cultural, and social situations. These three geographic areas are markedly different in their genetic, social, and cultural homogeneity, as well as in the percentage of older adults who live below the poverty level. They vary with regard to availability of nonphysician providers and thus access to preventive care. These three areas are also quite different in their rates of uninsured, so it is implausible that health care utilization would be comparable between New York City or Los Angeles and Minnesota.
This study’s conclusion is a reminder of the critical importance of risk adjustment for dissimilitude in population types when analyzing variation in clinical processes or outcomes. The value in finding variation in care is to provide a signal that there may be “best practices” that can be emulated, but best practices can be determined only if patient and disease characteristics are similar. Not all variation is misuse, overuse, or underuse.
The Seven Pillars of Quality
In 1990, Donabedian enumerated seven attributes, or “pillars,” of health care quality:
- 1.
Efficacy—the ability of care to actually improve health
- 2.
Effectiveness—how well care achieves improvement in health in the circumstances of “everyday practice”
- 3.
Efficiency—the cost of any given improvement in health
- 4.
Optimality—the point at which incremental increases in care begin to diminish in their return on investment, such that health may be improved, but in a less efficient manner
- 5.
Acceptability of care to patients—accessibility, the practitioner-patient relationship, amenities of care, patient valuation of care outcomes, patient estimation of care’s economic worth
- 6.
Legitimacy—consideration of the value of care by others than the patient receiving that care, the aspect of societal valuation as mentioned above
- 7.
Equity—the balance between what individuals and what society consider appropriate distribution of care and resources
Donabedian continued, “quality cannot be judged in technical terms, by health care practitioners alone; that the preferences of individual patients and society at large have to be taken into account as well.”
This last lesson should be kept in mind when choosing any single definition of “quality” as most appropriate.
What Forces Propel the Modern Quality Movement?
The concern about quality in health care became more intense in the latter 1980s as the systems of quality research and management that grew out of the business and manufacturing world were first applied to health care. Despite the improved outcomes with increased economy achieved by the application of these methods in some health care organizations, quality science did not widely penetrate the national health care delivery system.
In 1998, the IOM published its first large report detailing the problems in the quality of U.S. health care: “The Urgent Need to Improve Health Care Quality.” This was followed in 1999 by To Err Is Human: Building a Safer Health System, a report from the Committee on Quality of Health Care in America, which focused on patient safety and claimed that between 44,000 and 98,000 patients die annually in hospitals because of defects in the quality of their care. In addition, that report stated that 5.4% of patients suffered perioperative complications, almost 50% of which were due to caregiver error.
In addition to concerns about safety, five forces are driving the resurgence of concern about quality:
- 1.
Cost—health care expenditures have grown well past the trillion-dollar mark, and it is now the perception of many in business, government, and the public that more than one third of that sum, or almost $400 billion annually, is wasted. Correct or not, this impression is a huge motivator for the modern concern about quality—and value—among those who are paying the bills for health care. This concern extends to pain medicine because it is a high-dollar item: in 1998, $26.3 billion was expended on the care of back pain alone. Furthermore, it has been shown that low back pain episodes are associated with increased expenditure for other health conditions.
- 2.
Variation in the application of care—geographic variation exists in cost and care, often without linkage between more care and better quality. This situation is a red flag for those who evaluate quality and who believe that variation represents an absence of evidence of value (see Figs. 6.1 and 6.2 ).
- 3.
The increase in for-profit health care delivery systems, specialty (“boutique”) hospitals, and office-based surgery is a concern for some health care policy makers, who view these as potential drivers of increased cost. In addition, there is a fear that such practice modes “skim” the most profitable segment of income from traditional hospitals, thereby leaving the larger nonprofit and public entities at risk for financial ruin. This same argument was used against ambulatory surgery centers when they were first introduced in the 1970s and is the impetus behind the Certificate of Need requirements that the American Hospital Association was able to lobby for in many states. Furthermore, there is a potential conflict of interest when the providers are stakeholders in these health care entities.
- 4.
The increase in medical malpractice litigation, seen by some as an indicator of poor quality, is viewed by others as a driving force of the defensive overutilization of services that leads to increased spending and exposure of the patient to unnecessary risk.
- 5.
The expanding role of government and industry in scrutinizing health care and regulating its practice is due in part to the importance of these groups as the largest purchasers/consumers of health care, coupled with the business sector’s internal history of quality innovation.
Patient Safety and the “Culture of Safety”
There is reasonable debate about whether any specific patient is actually safe and whether any specific error contributes to real morbidity. However, the public, legislators, and payers are convinced that both patient safety and human error are significant problems in today’s health care delivery system. The health care industry is now urged to create a “culture of safety” at all levels of practice, which will put in place processes that will eliminate or decrease the impact of human error.
The majority of errors committed in health care are due to system defects rather than individual mishaps, as was pointed out in the IOM’s Crossing the Quality Chasm: A New Health System for the 21st Century. This book also presented a framework for improving health care quality with six specific targets for improvement:
- 1.
Safety
- 2.
Effectiveness
- 3.
Efficiency
- 4.
Timeliness of care
- 5.
Patient-centered care
- 6.
Equitable care
This anxiety about safety has led the Centers for Medicare and Medicaid Services (CMS) to contract with Quality Improvement Organizations (QIOs) and to spend in excess of $200 million per year on them, despite evidence that QIOs do not make a difference in the quality of care. Congress appropriated more than $300 million to the Agency for Healthcare Research and Quality (AHRQ) in the 5 years that followed the publication of To Err Is Human, yet there is still much on the IOM list that has not been addressed.
Among these larger forces of business, government, and society, a lesson sometimes lost is that the individual practitioner can improve the quality of health care. If practitioners identify outcomes that are desired, use the tenets of evidence-based medicine to find “best practices,” and measure both their delivery of these therapies (process) and the results of these therapies (outcomes), it is more likely that quality will improve. Adherence to evidence-based “best practice” begins with the individual practitioner and is associated with significant improvement in patient outcomes, including mortality.
Continuous Quality Improvement or Total Quality Management Defined
Continuous quality improvement (CQI) or total quality management (TQM) is a seven-step process that consists of the identification of desired knowledge, design of appropriate measures to obtain the necessary assessments, measurement, investigation of the measurements to find trends and best practices, return of that information to those who can effect change, implementation of change in practice to increase the incidence of best practice, and then remeasurement to assess the program of change. It is an outgrowth of the “total quality control” movement that spread from the business sector to health care in the 1980s. Its origins may be traced to Walter Shewhart’s work in the 1920s, including the “plan-do-study-act” cycle that was further amplified by Deming in the 1970s. In medicine, CQI has been valuable in creating significant improvement in practice patterns, even among multiple practitioners in multiple sites across geographically large distances.
Health Care Quality and Business
The health care quality movement has attracted significant attention from the business community, the group that often pays the bill. Indeed, there is a widespread belief that business management tools can provide the solutions to the health care quality problem. Recognition of this potential industry crossover is the reason that the business community, so oddly placed as our nation’s primary provider of health care insurance, has created powerful organizations, such as the Leapfrog Group (www.leapfroggroup.org/home), the Bridges to Excellence program (www.hci.org), and, in league with insurers, the Integrated Health Care Association (www.iha.org). Each of these organizations aims to push health care providers to embrace specific goals and types of behavior that are likely to improve patient safety and outcome. These organizations have recommended financial incentives (pay for performance [P4P]), public disclosure of hospital safety rankings (“report cards”), and institutional changes in quality methodology and behavior, including the use of electronic health records (EHRs) and computerized physician order entry (CPOE). They have spawned interest in these programs within CMS and pushed along initiation of the CMS P4P programs that are growing in parallel with those being created by private payers.
Quality Assessment and Quality Improvement: Risk Adjustment and Report Cards
Success in quality improvement as a result of employer-based initiatives has been mixed. This is in part due to the choice of quality initiatives with insufficient evidence of validity, as occurred when Leapfrog pushed for high-risk surgical procedures to be limited only to hospitals whose volume had already reached a chosen minimum level. In addition, participation has been voluntary and the financial incentives have been either paltry, easily gained, or both. Some revision of the Leapfrog criteria occurred with late recognition of the importance of risk adjustment.
One challenge is that the correlation between process measures (those most easily measured) and outcome measures (more difficult to measure) remains controversial even in the most basic and oft-studied clinical circumstances (e.g., outcome of treatment of myocardial infarction by hospital). This critical aspect of the application of quality science must be settled before any real value can be attached to public reporting of hospital (or physician) performance. Moreover, evidence is growing that even when accurate, report cards alone do not induce improved performance.
Application of Continuous Quality Improvement and Total Quality Management: National Surgical Quality Improvement Program
There is a known means of improving quality of care: the simple act of measuring controllable clinical outcomes in a way in which they can be systematically analyzed and the information provided to the caregivers, with guidance on opportunities for change. Using specific measures to guide improvement initiatives is the key to success of the application of quality science. By contrast, monitoring of broadly defined outcome measures (such as mortality in hospital report cards) fails to effectively improve care because their causative factors are so diverse and are often independent of the practice of the caregivers under observation. The failures of a “shotgun” approach to measurement are exacerbated by a lack of risk adjustment.
The Department of Veterans Affairs (VA) National Surgical Quality Improvement Program (NSQIP) was created by surgeons in the VA system and in 1991 began to collect data to allow assessment of surgical outcomes and quality in the many hospitals in the system. Nurses hired specifically to work on the study collected the data, which included variables (e.g., preoperative serum albumin levels) to allow risk stratification of patients. Three important principles have been elucidated:
- 1.
It is an absolute requirement that all evaluations of health care quality be amended by stratification of patient risk factors (risk adjustment). Not doing so introduces an error rate as high as 60% when comparing the unadjusted quality of programs.
- 2.
Provider volume does not necessarily correlate with outcome. This finding disputes the Leapfrog tenet that outcomes are improved by directing surgical procedures only to hospitals with “sufficient” volumes. Instead, there is no clearly “safer volume,” but what is important is the quality of the program. This exemplifies the error of choosing “obvious” quality goals before gathering adequate data.
- 3.
Regular provision to surgeons of the results of their institutions’ outcomes allows identification of “best practice” and is an effective means of improving quality of care. This approach improved patient outcome measures such as mortality and length of stay by as much as 45%.
The VA system experience has further shown that there are aspects of the Leapfrog and 100,000 Lives Campaign initiatives that are of value: the use of EHRs, CPOE, financial incentives for compliance of providers with designated performance goals, and routine comprehensive quality measurement has led to improved quality and delivery of warranted care.
However, there are still barriers to comprehensive CQI programs: physician-specific outcome monitoring is unlikely to be accurate because any single doctor treats too few patients for accurate statistical evaluation, in most settings there is still a lack of extant evidence-based benchmarks, and technical barriers still prevent accurate collection of data. In addition, the lack of good data on the appropriate management of patients with multisystem disease means that a physician who holds back from treating one disease with guideline-specific treatments because of concern about disease-disease or disease-drug interactions would be “punished” if scrutinized under the current quality assessment and improvement paradigm. This illustrates the difficulty of attributing specific patient outcomes to “causative” individual provider actions.
Iezzoni, a pioneer in the science of risk adjustment, noted that outcomes in health care cannot be simplistically linked directly to care but are actually part of an equation that she terms an “algebra of effectiveness.” This “algebra” is determined by three factors: the patient’s condition and inherent risk, the effectiveness of the treatment provided, and an unaccountable or uncontrollable set of random variables. The NSQIP has proved that collection of risk-adjusted data regarding surgery site outcomes, rather than physician-specific outcomes measurement, is a successful strategy for improving outcomes. Notably, the facility-specific NSQIP data collection and analysis were accurate enough to prevent the planned closure of several “poorly” performing sites once risk adjustment for patient comorbidity was invoked. At this point, concentration on evaluation of facilities rather than individual physicians is the only appropriate CQI approach in view of the current limitations of technology and evidence-based medicine.
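Iezzoni’s point can be made concrete with a toy example of the kind of risk-adjusted, facility-level comparison that the NSQIP performs. The sketch below uses indirect standardization (an observed-to-expected ratio); the patients, predicted risks, and facilities are entirely hypothetical, and a real program would derive the predicted risks from a statistical model built on preoperative variables such as serum albumin:

```python
# Toy sketch of risk-adjusted facility comparison using an
# observed-to-expected (O/E) mortality ratio. All data are hypothetical.

def oe_ratio(patients):
    """patients: list of (died, predicted_risk) pairs, where
    predicted_risk would come from a risk-adjustment model built on
    preoperative variables (e.g., serum albumin, comorbidity)."""
    observed = sum(died for died, _ in patients)
    expected = sum(risk for _, risk in patients)
    return observed / expected

# Facility A treats much sicker patients (higher predicted risks).
facility_a = [(1, 0.60), (0, 0.55), (1, 0.70), (0, 0.50)]
# Facility B treats far healthier patients.
facility_b = [(1, 0.05), (0, 0.02), (1, 0.04), (0, 0.03)]

# Unadjusted mortality is identical (2 deaths in 4 patients at each
# site), yet after adjustment A performs better than expected (O/E < 1)
# and B far worse (O/E > 1).
print(round(oe_ratio(facility_a), 2))  # 0.85
print(round(oe_ratio(facility_b), 2))  # 14.29
```

The identical crude mortality rates illustrate the error rate of unadjusted comparison mentioned above: without risk adjustment, facility A would wrongly be judged no better than facility B.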
Quality Improvement in Pain Medicine
Community pain practice quality is improved if a cohesive program of measurement, identification of best practice, education, and reassessment is applied. CQI programs may improve the quality of therapy simply by inducing a more extensive review of a patient’s history. For instance, inquiry about patient risk adjustors may lead to the recognition of mental health issues that could significantly affect the outcomes of any pain treatment plan. The use of baseline measures can both direct care and provide an understanding of the probable potential for meaningful improvement in function and pain scores by identifying risk factors for long-term treatment failure. Verhoef and colleagues made the point that the best set of outcome measures for a pain practice will include some open-ended questions and will also let patients take on part of the role of setting goals for their own therapy. In this way, patients can be expected to be additionally motivated to reach these goals.
An active CQI program will also allow practitioners to evaluate the effects of the introduction of new therapies or treatment algorithms into their practices and either prove or disprove their value. CQI provides a means to achieve accreditation for facilities and may even influence which quality measurements are adopted by regulatory agencies as sources of “grading” quality of care. Finally, the use of a clinical database for monitoring patient outcomes also provides a sense of participation in the process for the staff members whose work is being assessed, a key factor in their acceptance of and enthusiasm for the benefits of CQI.
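As a minimal illustration of the clinical database monitoring just described, the following sketch collates change from baseline by diagnosis and intervention; the diagnoses, interventions, and pain scores are invented for illustration only:

```python
# Hypothetical sketch of collating longitudinal outcome data by
# diagnosis and intervention; all records are invented for illustration.
from collections import defaultdict
from statistics import mean

# Each record: (diagnosis, intervention, baseline pain score, follow-up score)
records = [
    ("CRPS", "sympathetic block", 8, 5),
    ("CRPS", "sympathetic block", 7, 6),
    ("low back pain", "epidural steroid", 6, 5),
    ("low back pain", "epidural steroid", 7, 7),
]

changes = defaultdict(list)
for dx, rx, before, after in records:
    changes[(dx, rx)].append(before - after)  # positive = improvement

# Mean improvement per diagnosis/intervention pair flags possible
# overuse (little benefit) or underuse (large benefit, rarely offered).
for key, deltas in sorted(changes.items()):
    print(key, round(mean(deltas), 1))
```

Even this simple tabulation gives the practice a way to evaluate a newly introduced therapy against its own historical results, rather than relying on impression alone.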
The most significant hurdle in CQI for pain medicine is a lack of apparent benchmarks or national “best practices” that providers can emulate. Using the literature to establish benchmarks is prone to error because of the significant positive bias in the reporting of success of techniques. Overall, the evidence is not clear which, if any, interventions improve many chronic pain states.
Analysis of practice patterns by either partners or an unassociated local group of pain physicians is potentially an excellent source of evidence in situations in which there is a lack of randomized controlled trials in the peer-reviewed literature. Assessment programs that are under the mantle of “peer review” statutes will allow participating physicians rapid and open means of sharing their own best practices with one another in an attempt to increase the quality of the community’s pain care. Such evaluation may be accomplished with very short questionnaires. Patient interest in and compliance with such assessments are usually high.
Pain medicine practitioners can find a useful template for CQI programs that can be used in their own practices in Patel and colleagues’ description of such programs in mental health:
Quality improvement programs are practice and system strategies that support efficiency, appropriate care, and acceptable outcome. Comprehensive approaches that support client and provider education; encourage consumers to take a more active role in their recovery; and make use of support structures, such as case management to coordinate care have been shown to improve quality, in terms of both processes and outcomes of care.
Constructing a Continuous Quality Improvement Program for a Pain Practice
There are several steps in creating a CQI program for pain:
- 1.
Identify the practitioners (physicians, nurses, therapists) who will be involved.
- 2.
Gather these caregivers together to align their information goals and to craft a set of measures toward which all agree the program should be directed. A good starting point is the list put forth by the IOM:
- a.
Care should be safe.
- b.
Care should be effective and based on proven evidence and science.
- c.
Care should be efficient and cost-effective with no waste.
- d.
Care should be timely, with no waiting or delays.
- e.
Care should be patient centered: respect patient preference and give the patient control.
- f.
Care should be equitable with no unequal treatment.
- 3.
The most effective CQI program will start with the participants deciding what they personally believe are the most interesting (initial) questions to be answered. For instance, four or five separate practices in a delivery area might decide to evaluate the outcomes of referral to physical and behavioral therapists in the area or the value of a particular invasive pain technique.
- 4.
Pick outcome measures that will allow accurate assessment of the problem. Weinstein and Deyo recommended separation of assessment into four domains: patient health status, cost, patient expectation, and clinical status. Within these areas, indicators are chosen that answer the questions most important to the practitioners. Indicators are of three types:
- a.
Structure measures assess the characteristics of the practice or a facility, such as staff-to-patient ratios or patterns of diagnosis. Such information may alert the practitioner to unforeseen aspects of practice that are causing quality issues. For instance, a growth in the number of return visits of patients with complex regional pain syndrome who demonstrate increased dysfunction after an initial period of improvement could alert the care teams to an unrealized issue of reimbursement denials for the prolonged physical and behavioral therapy required in the care of this syndrome.
- b.
Process measures assess how care is provided in the practice. For instance, the compliance audits regarding documentation by practitioners and therapists that support evaluation and management coding fit into this category. The disadvantage of process measures is primarily in determining a link with outcome, because these measures must be considered surrogates for outcome in the many situations in which direct measurement of the outcomes is impeded by difficulty in risk adjustment or other barriers. Another problem is that process measures are frequently used since they are relatively easy to measure, but they may have little connection to true clinical outcomes. An example is how long it takes for a patient to be roomed after arrival. Finally, quality assessment is best if it considers a continuum of care, but process measures tend to evaluate only small pieces of care.
- c.
Clinical outcome measures are those on which most practitioners focus, although an effective CQI program will monitor all three types. An example of clinical outcome measures is the use of repeated patient health status testing to determine health status longitudinally after interventions. In most practices, it would be expected that some sort of evaluation of patients in this regard would be collated and sorted by diagnosis, demographics, and intervention. This is a critical step in monitoring for overuse or underuse of medical therapies.
- 5.
The measures chosen should have the following characteristics:
- a.
Relevance. The measures should relate directly to the goals of the group, and the group’s interventions should be able to affect them.
- b.
Timeliness. The measures should be collected in a timely manner so that they can be related as closely as possible to the interventions that the practitioners wish to assess.
- c.
Reliability. The measures should be accurate and consistent no matter when or who is assessing and recording them.
- d.
Validity. A valid measure is sensitive to changes that the practitioner can effect.
- e.
Precision. The measures should be clearly defined and leave little potential for individual or erroneous interpretation.
- f.
Cost-effectiveness. A CQI program costs money, and the measures should be significant enough to your patients and your practice that they are worth the time and money expended on the process of collection and analysis.
- g.
Provider control. The variable measured must be one that the provider or organization can actually control or it is not worth measuring. For instance, measuring patient self-reported pain levels 1 year after an intervention for a chronic pain condition is unlikely to reflect a process that the provider can control because of the impact of the intervening health care and patient activity over the course of 1 year.
- h.
Clear meaning. The measure must be one that is easily understood by all concerned.
- 6.
Terminology—once the practitioners have chosen the outcome measures, the terms must be adjusted to be in congruence with standard nomenclature. The accepted standard is a somewhat moving target, but SNOMED-CT (Systematized Nomenclature of Medicine—Clinical Terms) is produced by the College of American Pathologists (CAP) and has been designated by the National Institutes of Health (NIH) and the National Library of Medicine (NLM) for use in electronic transfer of medical information in the United States. Thus all practitioners should make sure that their own terms are “cross-walked” to SNOMED terms so that the data they report will be standard for accreditation and certification bodies and for payers in the years to come. This crosswalk process may be done in one of several ways:
- a.
One can download all of SNOMED at the NLM site, which has licensed SNOMED for use by any entity within the United States. The site is www.nlm.nih.gov/research/umls, but the data sets are raw and most practitioners would need help manipulating them.
- b.
The CAP has a complete version for sale at www.ihtsdo.org/snomed-ct, although the price may vary depending on the nature of the entity requesting it.
- c.
Apelon (www.apelon.com) offers several products that handle the translation process. One is Mycroft, a free Web-based translator that allows the user to find the SNOMED crossmatch for any other term; this requires a fair amount of time and may not be practical if there are many terms to look up. Another is TermWorks, a translator that will automatically translate a body of terms supplied in spreadsheet format; it is obtained by subscription, which, as of this writing, costs $1000 per month or $10,000 per year. A third product, TermManager, allows you to upload your terms to the company’s server, where the translation is done for a subscription fee of $100 per month per user. Though expensive, for a large volume of terms these tools can substantially cut down the hunting time that would otherwise be necessary.
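For a small practice, the crosswalk itself can be as simple as a maintained lookup table. The Python sketch below illustrates the idea; the concept identifiers are placeholders, not real SNOMED codes, and a production table would be populated from the licensed NLM or CAP release or one of the translation tools described above:

```python
# Hypothetical crosswalk from local practice terms to SNOMED CT concept IDs.
# The IDs below are PLACEHOLDERS for illustration, not real SNOMED codes.
LOCAL_TO_SNOMED = {
    "lumbar radiculopathy": "SCTID-PLACEHOLDER-1",
    "cervical facet pain": "SCTID-PLACEHOLDER-2",
    "fibromyalgia": "SCTID-PLACEHOLDER-3",
}

def crosswalk(local_term):
    """Return the mapped concept ID, or None if the term still needs
    manual mapping (e.g., via a lookup tool such as Mycroft)."""
    return LOCAL_TO_SNOMED.get(local_term.strip().lower())

# Flag any clinic terms that are not yet mapped.
unmapped = [t for t in ("Fibromyalgia", "phantom limb pain")
            if crosswalk(t) is None]
print(unmapped)  # → ['phantom limb pain']
```

In practice, such a table would be reviewed whenever a new local term enters the practice’s documentation, so that reported data remain standardized.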
- 7.
Risk adjustment must be included in the design of the monitoring system for the measurements to have any utility in identifying high quality and best practice. In the past, newly minted measures of quality were introduced without attention to risk or to scientific investigation to ascertain that they were valid estimators of outcome. As the National Surgical Quality Improvement Program (NSQIP) demonstrated, without accounting for the severity of patient comorbidity, a “significant correlation” between care processes and clinical outcomes may prove spurious when such comorbidities are later considered. Risk adjustment is a young and still potentially confounding science, and many of its controversies remain unresolved.
The vagaries of risk adjustment argue for using measurement tools that are already validated. For instance, age and diabetes are significant comorbid conditions that should be considered when estimating the probable outcome after surgery, but their relative importance can vary depending on which surgical procedure is under consideration, who is rating the severity of the comorbid condition, whether administrative or clinical data are used to establish the presence and severity of disease, and which administrative coding tool is used to identify the condition. If a pain practitioner or group decides to create a completely new measure, careful attention to risk adjustment theory and its evaluation in the literature or appropriate texts is recommended. Nonetheless, it is possible for small groups of clinical experts to choose valid risk adjustment indices if the process includes bias controls and the involvement of biostatisticians and epidemiologists.
Risk adjustment in chronic pain practice poses significant challenges because of the wide diversity of adjustors operating in the heterogeneous population known as “chronic pain patients.” Patients with multiple comorbid conditions, ranging from physical disabilities to economic and social burdens to mental health disorders, are difficult to allocate to risk “cohorts.” Risk adjustment in pain treatment outcome research is also a new science, and more rigorous studies than yet exist in the literature are needed before clarity and certainty exist in this area.
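The arithmetic behind such adjustment can be illustrated with a toy example of indirect standardization, the approach underlying NSQIP-style observed-to-expected (O/E) reporting. All rates and counts below are invented for illustration:

```python
# Indirect standardization sketch: compare a clinic's observed count of poor
# outcomes with the count expected from reference rates in each risk stratum.
# All numbers are INVENTED for illustration only.
reference_rates = {"low": 0.10, "medium": 0.25, "high": 0.50}  # poor-outcome rates
clinic_counts = {"low": 40, "medium": 30, "high": 30}          # patients per stratum
observed_poor = 24                                             # poor outcomes actually seen

expected_poor = sum(reference_rates[s] * n for s, n in clinic_counts.items())
oe_ratio = observed_poor / expected_poor  # < 1 suggests better-than-expected results
print(round(expected_poor, 1), round(oe_ratio, 2))  # → 26.5 0.91
```

A crude comparison of this clinic’s unadjusted 24% poor-outcome rate against a reference average would miss the fact that the clinic treats a relatively high-risk case mix; the O/E ratio accounts for it.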
Specific Outcome Tools and Measures for Your Practice
Box 6.1 lists some measures and tools that might be considered for a pain CQI program. They have been used in scientific studies to measure some of the variables that would be valuable to a pain program. The reader is cautioned, however, that a great deal of variation exists in the versions of some of these instruments published in the literature.
Basic Patient Information
Some of this information will be gleaned from the patient chart and some will require further questioning by staff.
Demographics
- •
Gender
- •
Age
- •
Ethnicity
- •
Residential zip code
- •
Referred? Y/N
- •
Live alone? Y/N
- •
Care for self without help? Y/N
- •
Other caregivers/providers (list)
Disability and Litigation
Litigation active? Y/N
Legally disabled? Y/N
—percentage?
Working? Y/N
Mental Health
DSM-IV codes assigned
Psychiatric hospitalization DRGs
Annual psychiatric hospitalization days
Patient Expectations
Expected degree of return of function
Expected job status after treatment
Expected changes in medication after treatment
Expected decrease in pain level after treatment
Costs
Direct Health Care Costs
Physician visits
Emergency department and hospitalization costs
Physical therapist fees
Physical therapy modalities
Occupational therapy
Vocational rehabilitation therapy
Behavioral health therapy
Laboratory studies
Imaging costs
Professional home care
Stimulator or pump and implantation fees
Analgesic and behavioral medications
Alternative therapy costs
Indirect Health Care Costs
Litigation fees
Lost wages
Cost of housekeeping
Cost of other home care
Travel cost for care
Time expended by family, others in care
Tools
Be aware that multiple versions may exist in the literature for some of these tools. When available, the most definitive source is listed below.
Overall Quality-of-Life Status
SF-12 or SF-36 measures: www.qualitymetric.com
U.S. National Health Interview Survey (U.S. NHIS)
Spitzer’s Quality of Life Uniscale—a one-question set that records patient self-assessment of the past week in terms of quality of life
Spitzer’s QOL Index—five items
Pain- and Function-Specific Questionnaires
TOPS (Treatment Outcomes in Pain Survey)
PIQ-6 (Pain Impact Questionnaire): www.qualitymetric.com
PIQ-R (Revised Chronic and Acute Pain Impact): www.qualitymetric.com
ASA 9: http://old.asahq.org/Newsletters/1997/08_97/Outcomes_0897.html
Fibromyalgia Impact Questionnaire
Roland-Morris Back Pain
Patient Specific Functional Scale
Quebec Back Pain Questionnaire
Waddell Disability Index
West-Haven-Yale Multidimensional Pain Inventory (MPI)
Condition-Specific Measures
These are somewhat more specialized tools that may work in pain assessment programs.
Pain Disability Questionnaire
Patient Specific Index/Patient Specific Functional Scale
Problem Elicitation Technique
Patient Generated Index
Canadian Occupational Performance Measure
Schedule for the Evaluation of Individual Quality of Life
Measure Yourself Medical Outcome Profile
Juvenile Arthritis Quality of Life Questionnaire
Risk Adjustment
Functional Comorbidity Index
Patient Satisfaction
Picker Patient Experience Questionnaire—this is a superb assessment of how your practice measures up to the patient’s expectations. It would be worth assessing at intermittent, fixed intervals to watch for trends and administrative areas that could be improved.
Measures Used in Health Quality and Function Studies
Utility measures—patient assessment of the value of the overall health state
Euroqol (EQ-5D)—highly recommended as a short and insightful look at your patient’s attitudes
Health Utility Index
Generic measures—these measures quantify the patient’s self-assessment of overall health
Sickness Impact Profile—progenitor of the Roland-Morris questionnaire
Nottingham Health Profile
SF-12, SF-36: www.qualitymetric.com
Work and Function
Work Limitations Questionnaire
Oswestry Disability Index
Simple Shoulder Test
Neck Disability Index
Short Musculoskeletal Functional Assessment
Pain
Visual analog scale (VAS) or Pain Intensity Difference (PID)—there are many confounding factors in the measurement of pain intensity; also, there are validity concerns regarding the importance of change over time (studies indicate that 2 points may be a valid cutoff for clinically significant improvement).
Von Korff’s Pain Scale
Graded Chronic Pain Scale
Neuropathic Pain Specific tools
Neuropathic Pain Scale
Neuropathic Pain Symptom Inventory
Leeds Assessment of Neuropathic Symptoms and Signs (LANSS)
Neuropathic Pain Questionnaire (NPQ)
Neuropathic Pain Screening Tool (NPST)
Neuropathic Pain Diagnostic Questionnaire (DN4)
Palliative Care
Patient Needs Assessment Tool (PNAT)
DRG, diagnosis-related group; DSM-IV, Diagnostic and Statistical Manual of Mental Disorders , Fourth Edition ; QOL, quality of life; SF-36, Medical Outcomes Study Short-Form 36-Item Survey.
Many organizations are still attempting to codify and make uniform the “dictionary” of terms used in quality databases. An excellent source of guidance in this endeavor is a document from the American Society of Anesthesiologists (ASA) titled “Guiding Principles for Management of Performance Measures by the American Society of Anesthesiologists,” available on the ASA website, www.asahq.org. Several organizations besides the ASA are involved in the effort to codify terminology, including the Anesthesia Patient Safety Foundation (APSF) and its Data Dictionary Taskforce and international groups such as the World Health Organization (WHO). Given the number of societies, the importance of the task, and the lack of certainty in the area, any database will be mutable over the next decade. Such malleability of design will always cost money, but its benefit is that practitioners who use the database can be better assured of compliance with the requirements of accreditation agencies (e.g., The Joint Commission [TJC], the Accreditation Association for Ambulatory Health Care [AAAHC], and others), which will likewise be set and amended in the years to come. A review of the challenges and tools specific to measuring outcomes in back pain therapy has recently been published.
An excellent review by Resnik and Dobrykowski addresses outcome measurement in patients with low back pain. The entire December 15, 2000, issue of Spine is highly recommended reading because it contains in-depth reviews of several of the back pain measurement tools. Obviously, not all the measures listed will be applicable to every pain practice, and the practitioner is urged to review the references in Box 6.1 for information on how best to use the various measures.
One important issue is the significance of any change observed in the various measures over time in each patient. Some of the recommended tools were developed primarily to evaluate patient cohorts rather than individual patients. This area of research is still emerging, and readers are encouraged to monitor the literature closely to assure themselves that their use of the tools is valid.
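As a concrete illustration, a practice might report the proportion of patients whose pain-intensity score improves by at least the roughly 2-point threshold cited in Box 6.1 as a candidate cutoff for clinically significant change. The scores below are hypothetical:

```python
# Hypothetical before/after pain-intensity scores (0-10 scale) for ten patients.
before = [8, 7, 6, 9, 5, 7, 8, 6, 9, 7]
after = [5, 6, 3, 8, 5, 4, 7, 2, 6, 6]

MCID = 2  # assumed minimal clinically important difference (per the literature)
responders = sum((b - a) >= MCID for b, a in zip(before, after))
print(f"{responders}/{len(before)} patients improved by >= {MCID} points")
# → 5/10 patients improved by >= 2 points
```

Reporting responder proportions in this way sidesteps some of the pitfalls of averaging raw score changes across a cohort, although the appropriate threshold remains instrument-specific.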
The Tools
The Brief Pain Inventory is adaptable to computer use, although it has not been validated for all pain conditions. Another valuable tool is a modification of the Medical Outcomes Study Short-Form 36-Item (SF-36) survey, the Low-Back SF-36PF18. This instrument is also amenable to computer use and includes aspects of the SF-36 and the Oswestry and Quebec back pain questionnaires. A newer instrument that may lend itself to prediction of outcomes is the Pain Disability Questionnaire (PDQ), a short list of 15 questions that uses a continuum line for responses, similar to the visual analog scales for rating pain that most chronic pain patients have seen; it may also be used on a computer. TOPS (Treatment Outcomes in Pain Survey) is a valuable tool that includes the SF-36 plus additional questions specific to pain medicine. It has the added value of having been validated both in clinical practice and as a research tool.
Over the next several years, advancement in the nomenclature of clinical outcomes monitoring will occur and lead to more specific definitions of the vocabulary of pain quality databases. This will improve the comparability of data across databases and tools, further refine the science of pain management, and increase the probability that we will be able to fix national benchmarks of quality and best practices.
A Risk Adjustment Tool for Pain Practice Continuous Quality Improvement
As mentioned, it is important within a pain practice to calibrate expectations of improvement in a given patient by factoring in comorbid conditions and any evident functional disabilities, as well as other confounding circumstances such as ongoing litigation. One new index that allows for this is the Functional Comorbidity Index.
Which Tool to Use and How?
Gathering data is best managed in part by administering patient surveys before and after the provision of services, with timing appropriate to the nature of the patient’s condition and the expected duration of effect of the intervention. Several very good reviews of the relative merits of these measurement tools make strong cases for certain tools in certain patients or for certain conditions.
Brevity and speed of both information entry (whether by the patient or staff) and data analysis are salutary aspects of an outcome measurement system, and these aspects should be considered when determining which instruments will be used. The use of both a short general health assessment and a specific pain evaluation tool is probably the most basic approach. Computerized surveys (e.g., in the waiting room) improve data integrity by decreasing omitted responses and improving internal consistency; they should be used instead of written surveys whenever possible.
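Internal consistency itself can be monitored with simple statistics. The sketch below computes Cronbach’s alpha for a set of survey items; the items and responses are invented for illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of responses per survey item, aligned by respondent."""
    k = len(items)
    item_variance_sum = sum(pvariance(responses) for responses in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(totals))

# Three hypothetical 0-10 pain-interference items answered by five respondents.
responses = [
    [2, 5, 7, 3, 8],
    [3, 5, 6, 4, 8],
    [2, 4, 7, 3, 9],
]
print(round(cronbach_alpha(responses), 2))  # → 0.98
```

Tracking a statistic like this over successive survey batches makes a drop in data quality (e.g., from a poorly worded new item) visible early.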
What is the Value of Continuous Quality Improvement to a Pain Practice?
A QA/QI program can help a pain practice in several ways:
- 1.
Accreditation of a pain practice facility is discussed in the section on safety. However, accreditation bodies are very interested in the nature of any CQI programs. Thus an added benefit of such a program, in addition to improved care for patients, is the approbation of accreditors.
- 2.
Payment for quality is the newest impetus for providers and facilities to participate in quality improvement programs. “Pay for performance” (P4P) is the general term used to describe reimbursement programs that tie some portion of provider income to either process or outcome achievements. P4P will also effect changes in patient referral patterns as the information gathered on provider performance is made available to payers, employers, and patients. Therefore, investment in a CQI program will have potential benefits in terms of patient census and reimbursement rates in the coming years.
- 3.
Benchmarking providers in an objective and appropriate manner is another value of a well-crafted CQI program. By evaluating the practice patterns of providers at a single site or at many sites, benchmarks can be identified that allow all physicians and staff members to assess their own performance against that of other practitioners and thus elucidate “best practices.” Allowing practitioners to see their own process and outcome data and compare it with that of others working in the same facility has improved efficiency with no alteration in patient satisfaction.
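A benchmarking report of this kind can be quite simple. The sketch below compares hypothetical per-provider average procedure times with the group median and flags providers well above it; the names, data, and 20% cutoff are all invented for illustration:

```python
from statistics import median

# Hypothetical procedure times (minutes) per provider.
minutes = {
    "Provider A": [22, 25, 30],
    "Provider B": [40, 38, 45],
    "Provider C": [28, 27, 29],
}
averages = {p: sum(v) / len(v) for p, v in minutes.items()}
benchmark = median(averages.values())
# Flag providers more than 20% above the group benchmark (arbitrary cutoff).
outliers = [p for p, m in sorted(averages.items()) if m > 1.2 * benchmark]
print(benchmark, outliers)  # → 28.0 ['Provider B']
```

A flagged provider is a starting point for discussion, not a verdict: as the risk adjustment section emphasizes, case-mix differences must be examined before drawing conclusions.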
What Will a Quality Assurance and Improvement Program Cost?
Personnel
A busy practice will require 0.5 to 1.0 full-time equivalent (FTE) to manage data input. This staff person, if not completely occupied by this work, might also serve as the Health Insurance Portability and Accountability Act (HIPAA) compliance officer and oversee all policies regarding the quality initiatives of the practice. Though relatively expensive, a registered nurse is of great value in this role.
Technology
It would be convenient if each practice location had access to an electronic health record (EHR) to allow automated capture of accurate data as evidence of quality. We may be some years away from an EHR in every procedure room and even further from an accurate EHR in every procedure room. Sadly, investment in infrastructure changes that might bolster quality is often squelched by the nature of reimbursement in our current medical system.
In the meantime, therefore, it is appropriate to concentrate on inexpensive and already available means of capturing such data and transmitting it to practitioners. Probably the most efficacious single-site approach is to use a computer relational database (e.g., Microsoft Access) to capture information gathered by more traditional methods, such as chart assessment by clerks and patient interviews by nurses. Such databases have been valuable in determining areas of potential cost savings, improvement in efficiency, and potential error and shortfalls in quality.
If the practitioner is not a skilled programmer, it is reasonable to hire one to build a custom database with modes of risk adjustment and automated capture of demographic and laboratory data. Such a database can be built with a graphical user interface (GUI) that is intuitive for even the most non–technology-savvy staff member and that allows patients to interact directly with waiting room kiosks. This can all be done with about 60 hours of programming ( Fig. 6.3 ).
Physicians and staff will spend somewhat less time than the programmer helping with design and testing. Licensing for the software and for use of the American Medical Association Current Procedural Terminology codes will be needed, and the purchase of additional computer monitors or kiosks may also be necessary, depending on the clinic’s current technology. In all, an investment of about $15,000 would create a superb single-office product with automated reporting and measures very specific to the goals and practice patterns of the pain clinic whose practitioners created it.
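As a minimal sketch of such a single-office database, the example below uses SQLite, a free stand-in for the Access-style relational database described above. The schema, table names, and data are illustrative only, not a standard design:

```python
import sqlite3

# In-memory database standing in for the clinic's CQI data store.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, zip TEXT, diagnosis TEXT);
CREATE TABLE outcome (patient_id INTEGER REFERENCES patient(id),
                      measured_on TEXT, pain_score INTEGER);
""")
con.executemany("INSERT INTO patient VALUES (?, ?, ?)",
                [(1, "55901", "lumbar radiculopathy"),
                 (2, "55902", "lumbar radiculopathy")])
con.executemany("INSERT INTO outcome VALUES (?, ?, ?)",
                [(1, "2023-01-05", 8), (1, "2023-04-05", 4),
                 (2, "2023-01-12", 7), (2, "2023-04-12", 6)])

# Average pre-to-post pain improvement by diagnosis: the kind of automated
# report the text describes.
row = con.execute("""
    SELECT p.diagnosis, AVG(pre.pain_score - post.pain_score)
    FROM patient p
    JOIN outcome pre ON pre.patient_id = p.id
    JOIN outcome post ON post.patient_id = p.id
    WHERE pre.measured_on < post.measured_on
    GROUP BY p.diagnosis
""").fetchone()
print(row)  # → ('lumbar radiculopathy', 2.5)
```

Even a schema this small supports the sorting by diagnosis, demographics, and intervention described earlier; adding providers, interventions, and risk-adjustment fields is a matter of additional tables rather than a new design.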
The Future is Now for Quality Assessment in Pain Management
In 2011 the IOM issued a comprehensive report that underscores the need for an extensive transformation in all aspects of the treatment of pain in America. One of its findings was the lack of existing data with which to support current practice and advance our evolving field. This influential review calls for a bird’s-eye view of pain management, expanded from the practitioner-patient relationship to the care of patients at a societal level.
While we are still striving to implement CQI in individual pain management practices, as detailed earlier, the Stanford Pain Registry has taken this a step further by aggregating data in the form of a National Pain Registry. This effort exemplifies the broader view required to improve patient care and facilitate the discovery of novel solutions for pain management. The registry is currently shared by numerous pain centers and will eventually be open to clinics nationwide. Such metadata systems already exist in primitive form online as both not-for-profit and for-profit entities that collect quality assessment data on behalf of providers. The practice generally has access to its individual data and analysis, but the business essentially “owns” the collective database. This information can then be subjected to countless analyses; thus it is not only used by providers to assess and improve the quality of care in their own clinics but can also be pooled for research purposes. Comparative effectiveness studies between distant practices and longitudinal data analysis are examples of the many uses of Web-based data collection programs such as these.
Still further, the vision is coming full circle by reintroducing the patient as part of the transformation in pain management called for by the IOM. Patients are already participating in daily sampling of data, such as pain scores or accelerometry activity, via Android cell phones that remotely report to the provider and by extension to the registry. This information will allow the provider to track a patient’s progress, evaluate ongoing treatment, and gather data that can later contribute to evidence-based practice and document accountable care.
Conclusions about Quality Improvement in Pain Practice
This chapter separates patient safety and quality improvement as independent sections. However, both must be considered when reviewing strategies to improve either of them. The following are necessary to improve pain CQI programs:
- 1.
CQI programs should be used at each individual clinic to discover which processes and structures beget the best clinical outcomes.
- 2.
The data for each clinic should be linked to all the others to create national quality benchmarks that will provide a means of identifying and duplicating best practices.
- 3.
Innovations in care systems should be welcomed, but only in settings that allow their study in comparison to the best practices that already exist.
- 4.
Evidence-based approaches to pain medicine must be improved. Practitioners must be scrupulous in evaluating procedures and therapies with rigorous doubt and in eliminating those that do not prove valuable in comparison with alternatives that may be more banal and less lucrative.
- 5.
Access to high-quality multidisciplinary care, including mental health care, which may often be expensive and long term, must be a priority of our specialty so that we can advocate for it to payers, health care policy makers, and legislators, who too frequently ignore the benefits of such care.
Finally, pain medicine—and health care in general—needs more robust federal support for inexpensive methods that enable clinical measurement of outcomes, risk adjustment, and clinician feedback if quality improvement is to occur rapidly. Computerized systems should be made available to primary and ambulatory care facilities to allow the identification of national benchmarks of care. Only in this way can we rapidly discover best practices and decrease the waste inherent in the current care model, in which individual clinicians often practice in the dark with regard to evidence and the effects of their own interventions.
Section 6.2
Patient Safety
To Err Is Human and the Prevention of Harm
The 1999 publication of To Err Is Human: Building a Safer Health System, the report from the Committee on Quality of Health Care in America, was the Institute of Medicine’s (IOM) public alarm about patient safety and error in the health care delivery system. The authors estimated—based, among other sources, on the Harvard Medical Practice Study of 1991—that the iatrogenic injury rate was nearly 4% among hospitalized U.S. patients. They concluded that more than half of these injuries were due to errors in medical care and that two thirds of all iatrogenic injuries could be prevented. Similar results emerged from a systematic review of eight studies conducted in the United States, Canada, Australia, and the United Kingdom encompassing nearly 75,000 in-hospital patients: 9.2% encountered adverse events, and the authors concluded that 43.5% of such events could be classified as preventable. Some reports place the rate of iatrogenic harm as high as 25%. Clinicians’ fear of punishment for making errors, as well as the difficulty of detecting errors, makes these estimates disputable. Despite the controversies they triggered, all these reports expanded the conversation considerably and focused the attention of practitioners, payers, patients, and governments on preventable errors. Patient safety is, according to the World Health Organization (WHO) definition, “the absence of preventable harm to a patient during the process of health care.” Patient safety, with its fundamentals of collecting and analyzing events and preventing adverse events in a confidential environment, has become an indispensable discipline and element of health care quality.
A significant hurdle in the effort to improve safety is that there are insufficient data to assess health care safety or even to identify definitively uniform, valid indicators of safety in health care. Therefore, which solutions, which information, and which technology will be of value in improving patient safety remains disputable.
To improve patient safety, the IOM urged hospitals and professional societies to act and recommended the creation of centers and organizations focused on patient safety. Partly as a result of the IOM’s efforts, the federal government was among the first to support research on safety. As a further consequence, the Patient Safety and Quality Improvement Act, signed into law in 2005, encouraged the development of Patient Safety Organizations (PSOs) and defined their role. The act called for a confidential culture of patient safety and for the establishment of a Network of Patient Safety Databases (NPSD) to provide an interactive evidence-based management resource.
Based on different facets of patient safety and with varied scopes, an increasing number of agencies and organizations have emerged:
- 1.
The Agency for Healthcare Research and Quality (AHRQ) has the federal lead in patient safety; it supports research into the causes of error and the development of new patient safety strategies, as well as their integration into the health care industry. The agency coordinates the PSOs and distributes knowledge about effective practices. The AHRQ offers the Patient Safety Network (PSNet), a Web-based resource of news on patient safety, and WebM&M, an online morbidity and mortality conference.
- 2.
The National Quality Forum helps improve the quality of U.S. health care by building, endorsing, and promoting national consensus on priorities and performance measures and by educating stakeholders about them.
- 3.
The Joint Commission (TJC: Joint Commission on Accreditation of Healthcare Organizations), advised by a panel of safety experts, has published its national patient safety goals since 2002. These goals are provided as clear, actionable statements, such as “use at least two patient identifiers when providing care, treatment, or services” or “mark the procedure site,” along with the rationale behind these goals, as well as the elements of performance. The main topics are to identify patients correctly, improve staff communication, use medications safely, prevent infection, identify patient safety risk factors, and prevent mistakes in surgery.
- 4.
Another PSO is the National Patient Safety Foundation (NPSF), an independent not-for-profit organization that works on improving patient safety by providing knowledge and developing, as well as enhancing, the culture of safety. Lucian Leape, a contributor to To Err is Human , was one of the founders of the NPSF.
- 5.
Large businesses, viewing the fundamental issue as poor quality in the face of high prices, organized themselves via the Leapfrog Group and created programs for change while also pushing for new regulation and legislation to improve safety. The goal of the Leapfrog Group, a voluntary program, is to reward the safety efforts of health care providers and thereby trigger big leaps in health care safety. It aims to reduce preventable medical mistakes, improve the quality and affordability of health care, and encourage health care providers to publish their quality and outcome data, rewarding them for improvement. The group focuses on four leaps: computerized physician order entry (CPOE), evidence-based hospital referral, intensive care unit physician staffing, and a Leapfrog safe practices score that assesses a hospital’s progress. These measures vary in their evidentiary support for improving quality, and they are all potentially expensive. Nonetheless, Leapfrog and its proponents quickly declared that the most important next step was, and is, urging the public to push health care professionals to adopt these goals. The Leapfrog Group and its tenets have also imbued the discussion of patient safety in our health care system with an economic and political imperative that makes further objective evaluation of the true nature of health care’s safety more difficult, even as new solutions are recommended.
- 6.
The Institute for Healthcare Improvement (IHI), another independent not-for-profit organization, founded in 1991, initiated the 100,000 Lives Campaign (2004 to 2006), which was designed to save lives by introducing six safety interventions into health care. The institute added another six interventions and launched the 5 Million Lives Campaign in 2006 ( Box 6.2 ). How many lives were actually saved is not clear, but the effect was tremendous: more than 4000 U.S. hospitals enrolled in the project and committed to integrating these 12 goals into their clinical routines.