Major Anesthesia-Related Events in the 2000s and Beyond



Fig. 13.1
Six to thirteen new anesthesia-related drugs were introduced in each decade between 1940 and 2000, but from 2000 to 2013 only a single new medication, sugammadex, was introduced (while not available in the US, it was approved in more than 50 countries worldwide)



Released for clinical use in the late 1990s, the ultra-short-acting non-depolarizing neuromuscular blocking agent rapacuronium received an enthusiastic reception because its time to onset of action was similar to that of succinylcholine, succinylcholine-like side effects were absent, and neostigmine easily antagonized residual neuromuscular block. It was considered advantageous for use in patients at risk of aspiration and requiring a rapid sequence induction. However, Organon voluntarily withdrew rapacuronium from the market in 2001 because of unexplained fatalities and severe bronchospasm reported during post-marketing surveillance.

In the 2000s, Organon developed a radically new approach to reversing neuromuscular blockade. By “surrounding” a molecule of a neuromuscular blocking drug (particularly rocuronium), the cyclodextrin sugammadex could inactivate the relaxant. A major advantage of sugammadex was its ability to immediately antagonize “deep” neuromuscular block, including that following rapid sequence induction. Like rapacuronium, however, sugammadex was also associated with occasional hypersensitivity reactions [1,2], and while available throughout much of the world, as of 2013 it had not been approved by the FDA for clinical use. Interestingly, case reports describing the use of sugammadex to successfully treat severe rocuronium-induced bronchospasm (presumably by inactivating the molecule provoking the anaphylaxis) provided support for an otherwise unintended use of the drug in patients with anaphylaxis from rocuronium not responsive to other treatment [3].

A neuromuscular blocking drug, gantacurium, is in phase 3 trials. Like atracurium, gantacurium produces a non-depolarizing blockade, but rather than undergoing Hofmann degradation, it is inactivated by adduct formation with L-cysteine. Exogenously administered L-cysteine can accelerate inactivation, independent of blood pH, temperature, or depth of block [4].

Reasons for the dearth of new anesthesia-related medications in the 2000s included the enormous cost of drug development and the relatively small market for anesthesia-related medications. Increasing cost controls imposed by third-party payers and/or governmental decision makers may also have exerted an effect. But the primary limitation may have been the proven efficacy and safety of already available drugs: improving on the excellence of presently available anesthetics and anesthetic adjuvants proved difficult. Contrast these circumstances with those of the 1950s to 1970s, a period of considerable drug development. The increasing use of electrocautery reduced the suitability of ether and cyclopropane, and an appreciation of the importance of decreased blood solubility and lesser metabolism stimulated synthesis of new inhaled agents: first halothane, then methoxyflurane (a mistake), then enflurane, isoflurane, sevoflurane, and desflurane. While none were perfect, it is difficult to imagine development of additional volatile agents. The same might be said of propofol, the intravenous induction/maintenance drug that replaced thiopental; of the anxiolytic midazolam that replaced diazepam; and of the present array of opioids. Neuromuscular blocking drugs in the 1950s included d-tubocurarine, metocurine, decamethonium, gallamine, and succinylcholine, all relaxants with notable limitations that led to the “oniums” and “uriums” of the 1960s–1990s. While also not perfect, these drugs advanced the field so far that safety and efficacy were less problematic, and other than the search for a faster, shorter-acting non-depolarizer, further “tinkering” would seem to have a low benefit-to-cost ratio.

Thus safety and pleasantness of anesthetic delivery, rather than newness, remained the principal concerns of the anesthesiologist. Improving safety might not depend on newer and better medications. Instead of developing new drugs, the 2000s saw newer applications of older drugs. These included administration of small doses of ketamine at induction of anesthesia and at intervals during surgery to provide analgesia, especially in patients with chronic pain who otherwise required large doses of opioids [5].

Postoperative nausea and vomiting (PONV) has long concerned patient and anesthetist. In the 2000s and before, anesthetists increasingly appreciated the factors predisposing to PONV. Various remedies had been shown to be effective up to a certain dose, beyond which further doses added little to the reduction in PONV. In 2004, Apfel et al. added another dimension, showing that combining two or three such remedies increased their antiemetic effect [6]. The implication of this finding was not that more drug was more effective, but that PONV resulted from effects on multiple receptors. Thus the use of remedies influencing multiple receptors was more effective than increasing the dose of a drug affecting a single receptor.

In the 1980s, 2-chloroprocaine (CP) was nearly abandoned because of concern regarding spinal cord injury associated with inadvertent intrathecal injection of a large dose intended for epidural anesthesia [7]. CP returned, at much lower doses, for spinal anesthesia in patients undergoing relatively brief surgery [8]. However, such delivery represents an off-label application, and the difficulty of “obtaining FDA approval or exemption to study the use of spinal CP in a randomized-controlled manner, (means that) the safety of CP will likely be determined by large scale clinical practice” [9].

Awareness during anesthesia has long been acknowledged as a risk in severely injured patients who cannot tolerate the circulatory effects of amnestic concentrations of inhaled anesthetics [10]. A parallel concern has arisen from the increased use of total intravenous anesthesia (TIVA), especially for patients receiving neuromuscular blocking agents in doses that prevent movement reflecting pain or awareness. Providing an end-tidal anesthetic concentration slightly exceeding MAC-Awake is considered sufficient to prevent awareness in most patients [11]. With TIVA, a surrogate matching MAC-Awake has been sought, prompting advocacy for so-called awareness monitors [the bispectral index (BIS) or entropy monitors] to assure the presence of amnesia [12]. The value of these devices continues to be debated.



Ultrasound-Guided Procedures in Anesthesia: I Can See It Now!


Ultrasound technology has had a place in anesthesia since the 1980s, with the introduction of transesophageal echocardiography [13]. Its use as an adjunct to anesthesia-based procedures began in the 1990s [14,15], but in the 2000s the promise of ultrasound-assisted techniques blossomed. Anesthesiologists with their roots in the 1960s remember the anxiety of “walking the needle along the first rib” while performing supraclavicular brachial plexus blocks (pray, don’t let me pierce the dome of the pleura!), or drawing lines on the buttocks before sciatic-femoral blocks (where were those nerves anyway?). Remember, too, instructing an inexperienced resident in the use of surface landmarks to cannulate the internal jugular vein (especially nerve-racking in a small child, and still more problematic when the surgeon was nearby). Such terrors receded in the 2000s. More than 95% of the 1,200 papers describing ultrasound assistance for regional blocks and for jugular vein and other vessel cannulation (and all papers describing training in these techniques) appeared after 2000.

In the 2000s, the use of ultrasound guidance in regional anesthesia increased the ability to observe intravascular or intraneural injection of local anesthetic, lessened local anesthetic systemic toxicity (because a smaller volume of local anesthetic was needed, particularly advantageous in smaller children), and decreased patient discomfort relative to the previous method of nerve identification (paresthesias from nerve stimulation). Studies comparing landmark-based vs. ultrasound-assisted internal jugular vein cannulation showed that ultrasound assistance led to more rapid completion of the procedure, a greater frequency of success, fewer attempts, and less morbidity (carotid puncture or hematoma formation) [16]. Recommendations regarding its use have recently emerged from several societies, including the Society of Cardiovascular Anesthesiologists [17] and the American Society of Anesthesiologists [18].

While these advantages appeared to indicate the usefulness of ultrasound assistance, many studies included in a recent American Society of Regional Anesthesia evidence-based assessment were too small, and the complications too infrequent, to provide statistically significant evidence of superiority [19]. They relied on surrogate measures such as onset time and number of nerves anesthetized rather than more clinically relevant criteria such as the ability to complete surgery without block supplementation or general anesthesia. Furthermore, experts performed many of the randomized controlled trials included in this assessment, and their results may not reflect the success rate of anesthetists who only occasionally use regional anesthesia. As this clinical area matures, a specific period of training and experience may be expected, perhaps similar to that currently required for certification in transesophageal echocardiography. 3-D simulation trainers with tactile feedback are being developed.


Scientific Misconduct


The known frequency and magnitude of scientific misconduct increased enormously in the 2000s. Plagiarism and fraud accounted for most of these reports, the latter comprising “made-up” data and patient-related research lacking patient consent and/or Institutional Review Board (IRB) approval. Plagiarism detection probably increased because of newly available software that laid bare the use of another’s words without proper attribution. Shafer described a taxonomy of plagiarism varying in seriousness from Intellectual Theft (use of ideas, analysis, and large blocks of words without proper attribution; unacceptable) to Self Plagiarism (use of one’s own words to describe, perhaps, a technical analysis or method not easily lending itself to restatement; acceptable) [20]. As Shafer stated, “it is unrealistic to expect an author working in a field to generate a novel description every time he or she chooses to write about it (but) extremes of self plagiarism including that approaching duplicate publication are clearly unethical and may be subject to manuscript retraction” (see the cited references as an example) [21,22].

Shafer also described “Plagiarism for Scientific English,” used by authors uncomfortable expressing their thoughts in English. While understandable, this is not justifiable, and if excessive may also be grounds for retraction [23]. Given that scientific journals increasingly screen every submission for plagiarism, the frequency of retraction for plagiarism will probably approach but not reach zero, because plagiarism can escape detection by the detection software [24]. It is tempting to speculate on the incidence of plagiarism before the era of plagiarism-detection software. This remains for others to explore.
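The screening software described above typically works by comparing overlapping word n-grams ("shingles") between a submission and an indexed corpus; exact phrasing survives such comparison even when sentences are rearranged. A minimal sketch of the idea, not the algorithm of any particular commercial tool, with example sentences invented for illustration:

```python
def ngrams(text, n=5):
    """Set of overlapping n-word shingles, lower-cased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=5):
    """Jaccard similarity of the two texts' n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

source = "extremes of self plagiarism approaching duplicate publication are clearly unethical"
copied = "extremes of self plagiarism approaching duplicate publication may warrant retraction"
unrelated = "ultrasound guidance improved the safety of regional anesthesia in the 2000s"

# A lightly edited copy shares long verbatim runs; unrelated text shares none.
print(overlap(source, copied))     # noticeably above zero
print(overlap(source, unrelated))  # 0.0
```

This also suggests why detection is imperfect, as noted above: thorough paraphrasing breaks the shared shingles, driving the similarity score toward zero.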

Alas, no scientific fraud-detecting software comparable to plagiarism-detecting software exists. In the 2000s, three major scandals revealed fraudulent anesthesia-related research published over the past several decades. First, Scott Reuben, an investigator at Baystate Medical Center in Springfield, Massachusetts, published numerous papers describing the benefits of postoperative multimodal pain management. In 2009, more than 20 published papers and abstracts were retracted when individuals in his institution discovered that Reuben had fabricated the data in these reports. Importantly, the falsity of the results cast doubt on recommendations regarding several clinically important treatments [25]. Reuben was sentenced to 6 months in prison for health care fraud, fined $50,000, and ordered to repay more than $300,000 to several pharmaceutical companies.

German investigator Joachim Boldt published more than 200 papers over 25 years, many comparing the efficacy and safety of colloids, including hydroxyethyl starch and albumin, in humans. In early 2010, several readers noted suspicious data in a paper published in 2009 [26]: the variability in the measurements of several cytokines and of blood gas values was too small. This led to an investigation revealing that no study had been performed! Further investigation of all of Boldt’s research resulted in retraction of 88 papers, most for lack of evidence that patient consent had been obtained. At the time, this was the greatest number of retracted papers by a single author.

But wait! Not to be outdone, Yoshitaka Fujii, a Japanese anesthesiologist, has been accused of falsifying data in 172 (126 supposedly randomized controlled studies) of 212 papers published between 1993 and 2011. Fujii’s credibility had been questioned in a 2000 Letter to the Editor [27] commenting that the data in one of his published papers [28] were “incredibly nice”. In March 2012, John Carlisle, a British anesthesiologist, reported that the likelihood “of the data from all of Fujii’s published papers being generated experimentally, as opposed to by fraud, (was) implausibly small, in the order of 1:1 × 10^30” [29]. Using a meta-analysis, Carlisle subsequently assessed the impact of deleting Fujii’s research papers on our understanding of several antiemetics, showing far less antiemetic effect and a lack of synergy between these medications [30].
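The statistical intuition behind the Boldt and Fujii detections can be illustrated with a simplified simulation (this is not Carlisle's actual method, which analyzed the distributions of reported baseline variables across all of Fujii's trials): across many genuinely independent trials, reported sample means should scatter by roughly the population standard deviation divided by the square root of the group size; fabricated data are often "too nice," clustering far more tightly than sampling theory allows. All numbers below are invented for illustration.

```python
import random
import statistics

random.seed(1)

def simulate_trial_means(n_trials, n_per_group, mu, sd):
    """Sample means of a baseline variable across independent genuine trials."""
    return [statistics.mean(random.gauss(mu, sd) for _ in range(n_per_group))
            for _ in range(n_trials)]

# Genuine trials: trial-to-trial scatter of the mean is about sd / sqrt(n).
genuine = simulate_trial_means(n_trials=50, n_per_group=25, mu=70.0, sd=10.0)

# Fabricated trials: every reported mean lands suspiciously close to 70.
fabricated = [70.0 + random.uniform(-0.2, 0.2) for _ in range(50)]

print(statistics.stdev(genuine))     # roughly sd / sqrt(25) = 2.0
print(statistics.stdev(fabricated))  # far smaller than sampling theory predicts
```

A cluster of reported means an order of magnitude tighter than chance allows is exactly the kind of pattern that prompted readers of Boldt's 2009 paper, and Carlisle's analysis of Fujii, to suspect that no experiment had been run.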

Assuming that the concerns regarding Fujii’s reports hold up, anesthesiologists authored nearly 13% of the 2,200 papers retracted from the literature since 1970. In addition to diminishing the credibility of our specialty, such fraud could adversely affect patient care if clinicians were misled by the recommendations emanating from the fraudulent papers.


Anesthesia Safety Through the 2000s: Is it a Snake or a Tree?


Defining anesthetic safety is akin to the parable in which blind men touch an elephant and describe what they have touched. “It is like a snake,” says the man grabbing the tail. “Oh no,” says the man hugging the leg, “it is just like a tree.” Context is important in describing the shape of an elephant, and in describing the past and present history of anesthesia safety.

Anesthesia-related death rates of 1:500 to 1:2,000 were reported in the 1920s, 1930s, and 1940s (Fig. 13.2) [31–33]. We and others argue that by the 2000s, the safety of anesthesia had dramatically increased. Today’s low mortality probably represents the cumulative result of multiple factors.





Fig. 13.2
Observational data describing anesthesia-related death rates suggest a progressive decrease in mortality of two orders of magnitude during the history of modern anesthesia

In 1954, Beecher and Todd published their landmark report of anesthesia-related mortality in a surgical journal [34]. Some fellow anesthesiologists dismissed the study [35]. Nonetheless, it illustrated some of the forces leading to today’s amazing record of safety. Beecher and Todd prospectively studied 600,000 patients receiving various anesthetics. Mortality attributed to anesthesia as the primary cause was 1:2,680, and as a contributory cause, 1:1,560. The authors indicted curare as an important cause of death, finding many more deaths in patients receiving curare, with the greatest incidence of anesthesia-related death occurring in patients given both curare and ether (1:250). A plausible explanation of the greater mortality in this latter group is that patients reached the recovery room still partially anesthetized (because of the high blood solubility of ether), partially paralyzed (by curare and the synergistic effect produced by residual ether and hypercapnia secondary to hypoventilation), and breathing room air rather than air supplemented with oxygen. That is the explanation Cecil Gray supplied in an interview at Oxford in 1996:



“… when he (Beecher) published the paper it was quite obvious these patients were returned to bed still curarised, and lying there. Now, the secret was when they were still curarised, breathing ineffectively, not only was that not good for them, from the point of view of the collapse of the lung and so forth, but also they accumulated carbon dioxide. Their carbon dioxide tension went up, which potentiated any residual curare, so it became a thing called residual curarisation, which was a thing unknown to us”.

Gray also explained the difference between the results with curare in the US and England:



“Now, the Americans were frightened of prostigmine because of its effect on the heart: give enough of it and it will just stop the heart, you see. But if you give atropine, atropine blocked that action but didn’t block the action on the muscles, the reversal of the tubocurarine. And we worked out the technique in which we were giving atropine and prostigmine, feeling our way until we got the right sort of dose. Then over the months, I suppose it was’yes it was certainly months’we came to give it absolutely routinely, because there was no trouble.”

Further contributing to the elimination of the problem, monitoring of neuromuscular block began in the 1960s and became increasingly routine by the 1970s.

Subsequent observational studies (counting bodies) in the 1970s and 1980s reported anesthesia-related death rates approaching 1:10,000 [36], decreasing further in the 1990s and 2000s to between 1:100,000 and 1:200,000 [37,38]. In 1994, Lucian Leape commented: “Whereas mortality from anesthesia was one in 10,000–20,000 just a decade or so ago, it is now estimated at less than one in 200,000. Anesthesiologists have led the medical profession in recognizing system factors as causes of errors, in designing fail-safe systems, and in training to avoid errors [39].” The Institute of Medicine singled out the specialty for its success in reducing anesthesia-related mortality [40].

Not all parties (including the authors of one of the essays in this book) accepted a two-orders-of-magnitude decrease in anesthesia-related mortality [41,42], and many factors (including variations in surgical care, severity and number of co-morbidities, and use or non-use of antibiotics) make comparisons of death rates between different eras difficult. But other measures, perhaps only surrogates of quality, lend support to the conclusion that a remarkable improvement has occurred, attributable to what we do and how we do it. For example, liability insurance rates, corrected for inflation, have decreased by 40% over the last 40 years (Fig. 11.1).

Several factors may have decreased mortality over the more than six decades of outcome data portrayed in Fig. 13.2. Most of these factors would produce a gradual, progressive improvement. They include better training programs incorporating the use of simulators, recruitment of higher-quality trainees, successive introduction of inhaled anesthetics, hypnotics, and opioids with more favorable pharmacokinetic and pharmacodynamic profiles, the application of specific antagonists of neuromuscular blocking drugs, more effective preoperative and postoperative management, and the increased use of electronic medical records. They might also include the invention of monitors with proper alerts, anesthetic machines preventing delivery of hypoxic gas mixtures, and the influence of national and international patient safety organizations. Some factors, such as perioperative checklists (“time outs”), responses to malpractice liability crises (including adoption of monitoring standards), analysis of outcomes from malpractice cases (closed claims studies), and publication of evidence-based practice parameters, guidelines, and advisories, might impose more immediate, step-like decreases in mortality because they are rapidly adopted.

Consider the anesthesia-related death data from England from 1908 to 1968 (Fig. 13.3). Consistent with an increasing number of operations (data not provided), deaths increased during the first thirty years, reaching a peak in the late 1930s. The number of anesthesia-related deaths then declined by nearly an order of magnitude over the next 30 years. It seems unlikely that this decline is explained by a decrease in the number of surgeries per year; if anything, the contrary would be likely. During this latter time, departments of anesthesia arose, anesthesia attracted better students, and halothane, with a precision vaporizer for its delivery, was introduced [43]. None of these explanations provides a provable cause-and-effect relationship, but we submit that together they provide a compelling story supporting an actual improvement in patient safety, meriting the comments of Leape and the Institute of Medicine.





Fig. 13.3
Anesthesia-related deaths in England from 1908 to 1968 increased to a peak in the late 1930s and then dramatically decreased by nearly an order of magnitude. (Data from Scurr et al. [43])

In the 1930s, anesthesia might have killed 1 in 1,000 patients anesthetized. If that rate applied to an annual delivery of 50 million anesthetics, anesthesia would have killed 50,000 patients per year, a mortality demanding attention. But assuming a present mortality of 1:100,000 to 1:200,000, and again an annual delivery in the 2000s of 50 million anesthetics, only 250 to 500 people would die in the US from anesthesia-related causes. In contrast, 400,000 people die yearly from smoking, alcohol-related automobile accidents, and gunshot wounds, 800–1,600 times the number of anesthesia-related deaths. Should additional resources be expended to achieve further decreases in anesthesia-related mortality when other, more pressing health care issues have the greater need? In 2009, Daniel Sessler commented:
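The arithmetic above can be checked directly from the rates stated in the text (the 50-million-anesthetics-per-year figure is the chapter's own assumption, not a measured value):

```python
# Back-of-envelope comparison of anesthesia-related deaths across eras,
# using the mortality rates and annual caseload given in the text.
annual_anesthetics = 50_000_000

deaths_1930s = annual_anesthetics // 1_000          # 1:1,000 mortality
deaths_today_high = annual_anesthetics // 100_000   # 1:100,000 mortality
deaths_today_low = annual_anesthetics // 200_000    # 1:200,000 mortality

other_causes = 400_000  # yearly deaths from smoking, drunk driving, gunshots

print(deaths_1930s)                          # 50,000 per year at 1930s rates
print(deaths_today_low, deaths_today_high)   # 250 to 500 per year today
print(other_causes // deaths_today_high,
      other_causes // deaths_today_low)      # 800 to 1,600 times as many
```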



“An unfortunate consequence of our improvement is that some consider anesthetic safety a more-or-less solved problem. At the very least, the number of intraoperative deaths is now so small that policymakers might reasonably conclude that resources would be better invested elsewhere. This thought process may contribute to the dismally small amount of funding that the National Institutes of Health provides for anesthesiology research” [44].
