A History of Research in Anesthesia



Fig. 39.1
The chronology of the major anesthesia drug discoveries began in 1844 with nitrous oxide. Some drugs were more equal than others, and the game changers are given in larger, bold type.



The discovery/development of new drugs, and especially of increasingly better anesthetic and adjuvant drugs, has often resulted from a collaborative effort between industry and academicians. Early on, such discoveries were often made by intellectuals. In 1774, Unitarian minister and genius Joseph Priestley (1733–1804) synthesized nitrous oxide. The precocious Humphry Davy (1778–1829) studied it, noting in 1800 that it might be used for surgery, and dentist Horace Wells (1815–1848) unsuccessfully attempted a public demonstration of its use as an anesthetic in 1845. Valerius Cordus (1515–1554) synthesized ether in 1540, and three centuries later, on 16 Oct 1846 (Ether Day), Morton (decidedly not an intellectual) used ether to publicly demonstrate its anesthetic effect. Some additions arrived by trial and error (wild guesswork). Soubeiran, von Liebig, and Guthrie independently discovered chloroform in 1831, and Jean-Baptiste Dumas named and chemically characterized it in 1834. The obstetrician Sir James Simpson discovered its anesthetic properties in 1847 by experimenting first on himself and then on his patients.

Ether, nitrous oxide, and chloroform supplied most of the needs for general anesthesia (some would add ethyl chloride) for the better part of a century, and the search for a better inhaled anesthetic was delayed. Discovery went elsewhere. In 1884, Carl Koller (1857–1944) demonstrated that cocaine produced conjunctival anesthesia, and within the year cocaine was widely used for topical, infiltration, and regional anesthesia. However, the toxicity of cocaine prompted a search for a better local anesthetic, leading to the synthesis of procaine by Einhorn in 1904.

The search for a better inhaled anesthetic was revived in the 1920s. Ethylene was discovered to put carnations to sleep, and when given to animals it put them to sleep, too. In 1923, Luckhardt and Carter introduced ethylene as an inhaled anesthetic at Presbyterian Hospital in Chicago [1]. Some discoveries had the insight of genius; simple things like Prof. Chauncey Leake's (1896–1978) suggestion to produce an ether with two ethylenes, to obtain the advantageous properties of both ethylene and diethyl ether. This led to divinyl ether in the early 1930s.

August Freund discovered cyclopropane in 1881, and cleverly proposed the correct cyclic structure. George Lucas and Velyien Henderson in Toronto discovered its anesthetic properties in the late 1920s. Henderson was the first human subject anesthetized with the gas. Waters demonstrated its anesthetic properties in patients in the early 1930s, using it in a rebreathing system.

In parallel with the expanding interest in inhaled anesthetics, Weese and Scharpff synthesized the short-acting barbiturate hexobarbital in 1932 [2]. It was widely used in Europe. In the early 1930s, Ernest Volwiler (1893–1992) and Donalee Tabern (1900–1974), working for Abbott Laboratories, discovered thiopental (Pentothal®). In 1934, Waters gave thiopental to a patient, inducing anesthesia without the unpleasant, sometimes terrifying, suffocating sensation associated with an induction of anesthesia with ether. Thiopental allowed a quick and pleasant transformation from wakefulness to sleep. "Count backwards, please, from 100…." Sleep came before 90.

One problem surgeons faced in the first half of the twentieth century was that, under ordinary circumstances, muscle relaxation was not sufficient to allow certain forms of surgery. Relaxation could be produced with the anesthetics at hand, but only by imposing deeper levels of anesthesia, levels that increased the risk of anesthesia. We knew of Ecuadorian Indians who painted the tips of darts with a substance called woorari ('poison' in the Carib language of the Macusi Indians of Guyana), now called d-tubocurarine ("curare"). An animal pricked by such a dart would become paralyzed, would die, and would be served for dinner. In 1780, Abbé Felix Fontana found that curare's action was not on the nerves and heart but on the ability of voluntary muscles to respond to stimuli. In the nineteenth century, Claude Bernard demonstrated that paralysis resulted from an action on a small portion of the muscle, the so-called myoneural junction [3]. In 1811–1812, Benjamin Brodie (1783–1862) showed that curare does not kill by a direct effect, and that recovery eventually occurs if breathing is maintained artificially. In 1934, Henry Dale (1875–1968) showed that acetylcholine is the messenger mediating neuromuscular transmission, and that curare blocks such transmission.

Of course, one would not wish to permanently paralyze a patient, but what if the dose of curare were adjusted to produce just enough relaxation to facilitate surgery, a dose producing but a temporary paralysis? Stuart Cullen sought to test this possibility, injecting curare into dogs that promptly developed asthma-like symptoms. He abandoned curare. The credit for the discovery of the clinical usefulness of curare goes to the fearless Griffith and Johnson, who bypassed studies in animals, showing in 1942 that the bronchospasm found in dogs rarely applied in humans [4]. A new age had come to anesthesia, and curare was soon widely applied. Although it seemed safe, subsequent events suggested that the dangers implied by the deadly hunters of Ecuador might have some subtle application to humans (see below).

Cocaine and procaine had enabled topical, infiltration, and regional anesthesia, but neither was perfect. In the 1940s, local anesthesia was materially advanced by the introduction of the questionably safer, faster-acting amide, lidocaine. Lidocaine was synthesized by Nils Löfgren (1913–1967) and Bengt Lundqvist, and released for clinical use in 1948 [5].

It took World War II, and the advances in fluorine chemistry needed to enrich uranium isotopes, to give us modern inhaled anesthetics. The first successful modern anesthetic was halothane, synthesized by the British chemist Charles Suckling (b. 1920) for Imperial Chemical Industries. An inspired guess involved placing three fluorine atoms on one end of a two-carbon ethane and adding a bromine and a chlorine atom on the other carbon. He thought the fluorine atoms would lend stability to and decrease toxicity of the molecule, and so they did. The bromine and the chlorine were probably included to secure a greater potency. In 1956, Michael Johnstone gave the first halothane anesthetic to a human [6]. Halothane quickly became popular because, unlike ether, its lack of pungency allowed a smooth, rapid, nonirritating induction of anesthesia.

And unlike chloroform (at least in the first years of its use), halothane did not seem to injure the liver. However, in 1958, a case of death in association with halothane use was reported, and more followed, leading to the National Halothane Study (see below). Enter Ross Terrell, a genius of a fluorine chemist, who was employed by the Ohio Chemical Company to find a replacement for halothane that would not injure the liver. He did that and more, synthesizing over 700 compounds in a search for the ideal anesthetic [7,8]. First came enflurane, replacing halothane in the 1970s, particularly in the litigious US. Terrell's next anesthetic was isoflurane, which replaced enflurane in the 1980s, and then he gave us desflurane in the 1990s. In parallel with Terrell, Bernard Regan, at Travenol Laboratories, synthesized sevoflurane [9]. This progression gave us and our patients increasingly safe and controllable anesthetics, nonflammable anesthetics with lesser blood solubilities allowing increasingly rapid awakening from anesthesia, with far less nausea than that from ether.

Further new drugs used in anesthesia resulted from the efforts of diverse clinicians, academicians and industry investigators. These included the muscle relaxants that replaced curare, increasingly safer and more controllable muscle relaxants. These newer muscle relaxants were made ever safer by better techniques for determining and eliminating any residual postoperative effects of such useful but potentially deadly drugs. Most recently we have, or nearly have, the reversal agent sugammadex, a drug eliminating the effects of particular relaxants by “hiding them from the muscle”.

The new drugs included propofol, an intravenous induction agent largely replacing thiopental through greater controllability, secondary to a rate of metabolism an order of magnitude greater than that of thiopental. Propofol had some problems at its birth. Being poorly soluble in water, it had to be dissolved ("formulated") in an organic solvent. The solvent was initially Cremophor EL, and this combination was employed in 1977 in clinical trials. However, we soon learned that Cremophor might produce anaphylactoid (allergic-like) reactions that could be life threatening. Oops! Try again in 1986, using an emulsion of soy lipids to dissolve the propofol. Success!



1846–1920: Theme 2: Defining and Controlling Anesthetic Delivery


The discovery of anesthesia mandated research to address a simple question: how to conduct the patient safely through the operative procedure, and how to satisfy surgical requirements without injuring or killing the patient? We needed a clinically relevant assessment of what we came to call depth of anesthesia, we needed a controllable way to deliver anesthesia to achieve the "correct" depth, and we needed it right away. John Snow gave it to us. Golly, he was good. He invented the notion of depth of anesthesia, and he expressed that as five "degrees":



“I shall divide the effects of ether into five stages or degrees; premising, however, that the division is, in some measure, arbitrary…In the first degree of etherization I shall include the various changes of feeling that a person may experience, whilst he still retains a correct consciousness of where he is, and what is occurring around him, and a capacity to direct his voluntary movements. In…the second degree, mental functions may be exercised, and voluntary actions performed, but in a disordered manner. In the third degree, there is no evidence of any mental function being exercised, and consequently no voluntary motions occur; but muscular contractions, in addition to those concerned in respiration, may sometimes take place as the effect of the ether, or of external impressions. In the fourth degree, no movements are seen except those of respiration, and they are incapable of being influenced by external impressions. In the fifth degree…the respiratory movements are more or less paralysed (sic), and become difficult, feeble, or irregular.” (pp 1–2) [10]

Snow described how to achieve these degrees in his two books on delivery of the anesthetics of the day, diethyl ether [10] and chloroform [11]. We spent a century tinkering with the basic notions, Arthur Guedel being the greatest tinkerer. In World War I, he gave us not five degrees, but four stages, the third (surgical anesthesia; roughly equivalent to Snow’s fourth degree) containing four planes (Fig. 39.2) [12]. Along the way, in 1895, Cushing (a neurosurgeon) and Codman added measurement of blood pressure and heart rate, and the anesthetic record [13].





Fig. 39.2
The figure shows what constituted most of the characteristics of the first three of four stages (denoted at the furthest left) of anesthesia that Guedel devised. Four successively deeper planes lay within the third stage. Stages and planes go from the lightest level (topmost) to the deepest level of anesthesia. Column 1 characterizes inspiratory and expiratory movements, their depth and smoothness. Column 2 indicates the activeness of movement of the eye in the second stage and the first plane of the third stage. Columns 3, 4, and 5 display the size of the pupil (innermost circle) and the iris (distance between the innermost and outermost circles) without (column 3) and with (columns 4 and 5) application of light to the eye. (From Guedel AE: Inhalation anesthesia: A fundamental guide. New York, Macmillan Co, 1937, pp 1–172)


1950–1980: Theme 3: Phenomenology, Pharmacokinetics


What Snow, Cushing, and Guedel taught us required attention to the patient’s response to the anesthetic. What we saw in the patient, things such as the character of breathing, or the size and responsiveness of the pupils to light, or changes in blood pressure, guided administration of more or less anesthetic or dictated cessation of anesthetic administration. Our observations had an enormous subjective element, meaning that delivery of anesthesia was as much an art as a science. This changed with our increased knowledge of anesthetic phenomenology in the 1950s, and the ability to deliver a precisely known amount of anesthetic that, additionally, we could directly measure.

In the mid-1900s we knew little about what we came to call anesthetic pharmacokinetics. We were ignorant about the factors governing movement of anesthetics within the body (and thus we were hindered in our understanding of how to control and make use of such limits). Snow knew it was important, but could do little about it. In 1924, Howard Haggard explored the absorption, distribution and elimination of diethyl ether in dogs, dividing the distribution into that which went to the brain, and that which went elsewhere [14,15]. In 1950, Seymour Kety (1915–2000) provided the first kinetic models, unintelligible for most mortals, and wrong in details (failing, for example, to recognize the importance of the differential delivery of blood to different tissues) [16].

In 1954, John Severinghaus (b. 1922) started us properly down the pharmacokinetic road with his seminal study of nitrous oxide, the first investigation of the uptake of an inhaled anesthetic in humans [17]. His investigation led to a more general, but empirical, description of uptake known as the square root of time rule, a rule stating that anesthetic uptake decreases as a function of the square root of time. It applied to anesthetics other than nitrous oxide [18]. Hal Price improved on the concepts of tissue distribution of intravenous anesthetics, noting that tissue bulk and blood flow had to be accounted for in thinking about the redistribution of an anesthetic like thiopental (Pentothal®) [19]. His concepts led the way with thiopental and injected anesthetics, and prompted a broader view, one applicable to the inhaled anesthetics. In parallel with Price, Eger (b. 1930) studied inhaled anesthetic kinetics, doing so with a vengeance, measuring the solubilities and uptake of all clinically used and many experimental anesthetics [20]. In the late 1950s, he developed an iterative mathematical model explaining how inhaled anesthetics move into and throughout the body [21], a model that with a few modifications applies to all inhaled anesthetics used today. Other investigators used analog models to accomplish a similar explanation [22,23], but these could not describe the effect of the inspired anesthetic concentration on pharmacokinetics.
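
For the quantitatively inclined, the square root of time rule can be written compactly (a didactic restatement, not a formula taken from the cited studies). If \dot{Q}(1) denotes the rate of anesthetic uptake measured at 1 minute, the rule predicts that the rate at a later time t (in minutes) falls off as the inverse square root of t, so that cumulative uptake grows only as the square root of time:

\dot{Q}(t) \approx \frac{\dot{Q}(1)}{\sqrt{t}},
\qquad
Q(T) = \int_{1}^{T} \dot{Q}(t)\,dt \approx 2\,\dot{Q}(1)\bigl(\sqrt{T} - 1\bigr)

By this rule, uptake at 4 minutes is about half, and at 100 minutes about one tenth, of the uptake at 1 minute.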


1960–1980: Theme 3 Continued: Phenomenology, Pharmacodynamics


In 1963, we got a measure of anesthetic potency, a measure providing a standard that allowed comparison among anesthetic effects. The measure of potency was MAC, the minimum alveolar concentration of an inhaled anesthetic producing immobility in 50% of subjects given a stimulus such as a surgical incision [24,25]. Three decades later, we got the equivalent of MAC for injected drugs such as propofol [26]. MAC allowed the comparative measurement of the effects of inhaled anesthetics, everything from cardiorespiratory effects to what anesthetics did to brain and muscle and kidney and liver and pregnant ladies and infants and obese people and old people and… [27]. These phenomenology studies ("what does X do to Y?") poured out and dominated research through the 1970s and 1980s. Such studies continue to the present.
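
To make the 50% criterion concrete (again a didactic sketch, assuming a logistic quantal dose-response rather than reproducing the analysis in the cited papers), MAC is the alveolar concentration C at which the probability of immobility in response to the stimulus crosses one half:

P(\text{immobile} \mid C) = \frac{1}{1 + e^{-k\,(C - \mathrm{MAC})}},
\qquad
P(\text{immobile} \mid C = \mathrm{MAC}) = 0.5

Here k sets the steepness of the concentration-response curve; doses of different agents can then be expressed and compared as multiples of their respective MAC values.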


1950–Present: Theme 4: Outcome Studies


For the most part, important outcome studies appeared in the second half of the 20th century. But we must acknowledge earlier work, for example, Snow's careful dissection of the causes of death associated with chloroform anesthesia [11]. He repeatedly refers to a "sudden paralysis of the heart", possibly ventricular fibrillation or possibly severe myocardial depression. A less successful outcome study with the same theme was the work of the Hyderabad Commission [28]. In 1902, echoing Snow's observations, Edward Embley provided support for the view that chloroform might cause sudden death by cardiac depression, believing that this resulted from a direct depressant effect of the vapor or from vagal irritability [29–31].

In 1954, at about the same time as we began to understand inhaled anesthetic pharmacokinetics, we got the first of the great outcome studies, the work of Henry Beecher (1904–1976) and Donald Todd [32]. These investigators directed a consortium of 10 university hospitals in an investigation of the factors associated with death after anesthesia. Despite an absence of computers (after all, it was the early 1950s), they collected data on 599,548 patients, coming to the astonishing conclusion that if a patient undergoing major surgery received curare, that patient was approximately 6 times more likely to die. The finding raised a ruckus, especially since it was published in a surgical journal and surgeons subsequently ordered anesthetists to stop using curare. No one likes being told what to do. Of course, the main problem was that American anesthetists apparently didn't understand that curare was apt to leave a residual paralysis that required reversal of the neuromuscular blockade [33]; the anesthetist did not recognize that although the patient might be breathing and appear grossly normal, he was still at risk from a remaining weakness, at risk of inadequate ventilation or an inability to react properly to vomiting, with vomit going into the lungs. You'd think we anesthetists would learn, but half a century later, another outcome study again found that inadequate reversal of the effects of muscle relaxants grossly increased mortality [34].

Introduction of the inhaled anesthetic halothane in the 1950s was a smashing success. It quickly replaced all other potent inhaled anesthetics, including chloroform, cyclopropane, and ether (halothane didn't explode but ether and cyclopropane did, and halothane had the added advantage of being less pungent than ether). Halothane also displaced the now infrequently used chloroform because chloroform injured the liver and halothane didn't seem to do so. Then in the late 1950s and early 1960s, sporadic cases of severe liver injury appeared after anesthesia with halothane, and we anesthetists feared that we might lose this new friend. The result was the National Halothane Study [35], a study of more than 800,000 patients given various anesthetics. The results suggested a rare but believable connection between halothane and severe liver injury and death, a connection with signs and symptoms indicating that the injury might be a consequence of an allergic response of the liver to halothane (or its metabolites).

The number of studies of outcomes consequent to the activities of anesthetists has increased dramatically. Outcome studies differ from phenomenology studies in that outcome studies intend to connect an action (of an anesthetic, or of a manipulation of some sort such as body temperature [36], oxygen concentration [37], or administration of beta blockers [38]) to either an adverse or a beneficial outcome. In contrast, phenomenology studies connect an activity to a change where the change is neither inherently good nor bad. Sometimes outcome studies require data collection from large numbers of patients, as in the Beecher-Todd study or in the National Halothane Study, but more recent outcome studies might include just tens, hundreds, or a few thousand patients. Only such small numbers were needed to demonstrate, in the 1990s, that beta-adrenergic blockade might provide long-term protection against postoperative myocardial injury [38].

One dramatic outcome study in the 1950s was inadvertently conducted on patients suffering from tetanus [39]. These patients were given nitrous oxide to breathe in order to decrease the pain of the muscle spasms produced by tetanus. The pain decreased, but after a few days the number of white blood cells decreased and the patients began to die. The connection to nitrous oxide was clear, although the reason was not. The use of nitrous oxide continued in ordinary surgical patients because it appeared that the problem arose only from prolonged breathing of nitrous oxide. Anesthetists debated whether a more subtle noxiousness of this most ancient of anesthetics resulted from shorter exposures, with a consensus that the case was not proven, and the anesthetic had a long record of benefit and safety [40]. In 2007, an outcome study of 2000 patients, given or not given nitrous oxide, suggested that nitrous oxide increased morbidity [41], possibly for reasons related to the unique capacity of nitrous oxide to slowly inactivate a vital enzyme, methionine synthase [42]. However, not all recent outcome studies, even of 1000 patients, reveal an adverse effect of nitrous oxide [43]. The noxious effect of nitrous oxide continues to be debated, but the concerns over such noxiousness, including increased postoperative nausea and vomiting, have decreased the use of nitrous oxide in clinical practice.

The number of outcome studies is already large and will increase further, because of the need to supply better patient care and to provide "evidence-based medicine." Their application may not only increase patient well-being, but will also likely decrease the cost of patient care by causing us to discard ineffective remedies that often have the additional disadvantage of being expensive. This latter issue also concerns research on the comparative effectiveness of medical treatments and whether the incremental benefit of more effective treatments justifies the added costs.


1980–Present: Theme 5: Finally, Studies of Mechanisms


As indicated above, the major anesthesia research in the 2–3 decades following World War II was phenomenological, plus an increasing, somewhat later interest in outcomes. The 1970s and 1980s saw the start of a third major focus: how did anesthetics and anesthetic adjuvants work? A few such inquiries had been made earlier. In 1846, Claude Bernard (1813–1878) demonstrated that curare acted by interfering with the transmission of nervous impulses to muscle, at what is called the motor end-plate, the juncture of nerve and muscle [3]. Around 1900, Meyer [44] and Overton [45] connected anesthetic potency to an affinity for fatty substances, implying that anesthetics acted by doing something to the lipid parts of neurons, perhaps something to the lipid bilayers that encircle all cells, including the cells of the brain.
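
The Meyer-Overton correlation is often summarized today (in modern notation, not in the words of the original papers) by the observation that the product of an inhaled agent's potency measure, MAC, and its oil/gas partition coefficient is roughly constant across agents:

\mathrm{MAC} \times \lambda_{\mathrm{oil/gas}} \approx \text{constant}

That is, an agent roughly ten times more lipid soluble is roughly ten times more potent (its MAC is roughly one tenth as large), which is what suggested a lipid site of anesthetic action.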

But only sporadic discoveries were made until the 1980s, when we began to make use of the great advances in our understanding of receptors, and in genetic engineering. Nick Franks and Bill Lieb deserve credit for shifting our attention away from thinking that anesthetics, especially inhaled anesthetics, must work by some action on lipids in membrane bilayers. As noted above, Meyer and Overton had focused on the correlation of anesthetic potency with affinity to lipid. In 1984, Franks and Lieb demonstrated that an equally good correlation could be made with an effect on a protein [46]. Then the floodgates opened, with many investigators showing that anesthetics affected numerous and specific receptors (proteins) in ways that plausibly explained the actions of anesthetics and anesthetic adjuvants, such as opioids and muscle relaxants.
