Major Anesthetic Themes in the 1970s



Fig. 10.1
The cumulative number of anesthetic societies in the world continued to grow (by 18) in the 1970s. Most arose in developing countries



Anesthesia also attracted women into the specialty. The proportion of women among medical students in the US greatly increased in the 1960s and 1970s. Title IX of the 1972 Education Amendments banned discrimination in admissions and salaries in any institution receiving federal government funding. Although the number of women accepted into medical school almost tripled from 1971 to 1975, it would be some time before women fully achieved senior roles in specialist medical organisations. There were exceptions. In Australia, women assumed leading roles in the specialty as early as the 1950s. Mary Burnell became the first woman President of the Australian Society of Anaesthetists in 1955, and in 1966, she became Dean of the Faculty of Anaesthetists, just as Patricia Mackay (née Wilson) was elected President of the Society. In the US, before the 1980s, few women had assumed major roles in anesthesia since Virginia Apgar and Gertie Marx had done so some 20 or more years earlier.1


Professionalism and Consolidation


At the beginning of the 1970s, many surgeons recognised the value of an anesthesiologist to their own practice but were reluctant to acknowledge the place of another senior physician in the operating room. In many countries the operating room is referred to as the operating “theatre”, and surgeons had long relished their place in (or at least under) the spotlights as “the captain of the ship.”

In his February 1976 editorial in Anesthesiology, Nick Greene noted that the traditional approach to anesthesia (“the best anesthetic is the one the anesthetist is most familiar with”) was no longer acceptable. He argued that the goal should be to administer the anesthetic that optimized the patient’s care. The growth of professionalism accompanied the maturation of anesthesia departments and societies, with a gradual rejection of the anesthesiologist’s subservience to the surgeon. There was a significant economic imperative in this process, since in countries where private practice predominated, there were substantial fee differentials between anesthesiologists and surgeons. Although some societies delegated responsibility for training to sister organisations in the form of Faculties (notably of Colleges of Surgeons or Physicians), most societies maintained a primary educational role while developing an increased profile in political negotiation for better conditions and remuneration.

By 1980, the place of the specialty was firmly established. Younger surgeons recognised the value of a productive working relationship with the emergent specialty, and placed increasing trust in the anesthesiologist, including their involvement in preoperative assessment and postoperative care. Still, some of the “old guard’s” influence remained, and sometimes their prejudices rubbed off on their trainees.


Non-specialist Anesthesia


In many countries, the 1970s saw the gradual disappearance of non-specialists from anesthesia practice. As training programs generated more specialists, full-time anesthesia practice replaced the part-time anesthetist in general practice. In some European (especially Nordic) countries and many developing countries, as well as in the US, nurses continued to play an important role in the provision of anesthesia services. In the US, the traditional place of the nurse anesthetist came under threat when, in 1974, the US Office of Education required that a body independent of the American Association of Nurse Anesthetists (AANA) accredit training centers. The AANA had been the training and certifying organisation since 1931, and proposed an independent body composed of Certified Registered Nurse Anesthetists (CRNAs), anesthesiologists, and members of the public. The American Society of Anesthesiologists (ASA) sought to establish a competing Faculty of Nurse Anesthesia to accredit CRNAs. The ASA did not succeed in gaining control over CRNA accreditation, which was ultimately entrusted to the Council on Accreditation of Nurse Anesthesia Programs (CANAP), as proposed by the AANA.


Sub-specialization


Although most anesthesiologists in 1970 were “generalists”, as the specialty matured and the scope and complexity of anesthesia grew, some developed skills and enthusiasm for particular branches of the specialty. These included obstetric, pediatric, neurosurgical, cardiac, and regional anesthesia, and intensive care. The desire to share information, and to advance the quality of care through focussed research and education, led to the formation of sub-specialty organisations, or special interest groups. The establishment of these groups required sufficient numbers of practitioners to sustain them, and this was initially only possible in places like the US and Great Britain, where formal specialty training and certification had been in place for sufficient time (Fig. 10.2). The Society for Obstetric Anesthesia and Perinatology (SOAP) was established in 1968, and in the following year, the Obstetric Anaesthetists’ Association was established in Great Britain. By 1970, Great Britain had also established a society for anesthetic research. Then, in the 1970s, several new subspecialist societies or associations formed in both the US and Great Britain, paving the way for the establishment of similar groups in the 1980s and 1990s in other regions of the world. Although there might have been sufficient numbers of European anesthesiologists to sustain the creation of subspecialty anesthesia organisations, these did not arise until the 1980s, probably reflecting the later establishment of formal training programs. The subspecialty societies or associations formed in Great Britain during the 1970s encompassed critical care, pediatric, cardiovascular, dental, and regional anesthesia, with several parallel organizations arising in the US (Fig. 10.2).





Fig. 10.2
The cumulative increase in the number of subspecialty anesthesia societies occurred linearly and in parallel in the US and UK


Training and Examinations


Between 1970 and 1980, most countries in Europe established formal training programs in anesthesia with durations commonly of 3 to 5 years. Not all imposed a formal examination process.

Since the 1950s, training in the US had been overseen by the Anesthesiology Residency Review Committee (RRC), comprising representatives of the AMA Council on Medical Education and Hospitals and the American Board of Anesthesiology. Attempts to rationalise all postgraduate medical training in the late 1960s had failed, and in 1972, five bodies (the AMA, the Association of American Medical Colleges, the American Board of Medical Specialties, the American Hospital Association, and the Council of Medical Specialty Societies) convened to resolve matters. This resulted in two accrediting bodies, only one of which, the Liaison Committee on Graduate Medical Education, was to survive.

A further complication arose because there were two examining bodies, the American Society of Anesthesiologists’ American College of Anesthesiology (ACA), and the American Board of Anesthesiology (ABA). The ACA certified some practitioners whom the ABA considered unsuitable, or who had failed the ABA examination. This created uncertainty for hospitals and credentialing committees. Arthur Keats led the resolution of the impasse. In 1971, he became chairman of the ABA examinations committee while he was also editor-in-chief of Anesthesiology. Through his efforts, the ACA ultimately ceased awarding certificates. At the same time, Keats led a revision of the examination system, creating the Guided Question (intended to provide a more standardized exam) and a comprehensive analysis of the scope of anesthesiology practice at the time. Both the written and in-service training examinations were revised, and a computerized analysis of examination results was introduced. Further, the ABA oversaw the introduction of analysis of the written examination using the Rasch model, to ensure that it was a valid test of required knowledge and to enable feedback to both examiners and residents.

In Great Britain, the Association of Anaesthetists of Great Britain and Ireland established a formal training program in 1932, some time after Waters established his program in 1923. The first formal examination, for the Diploma in Anaesthetics, took place in 1935, awarded by the Anaesthetic Section of the Royal Society of Medicine. By the 1970s, the Faculty of Anaesthetists within the Royal College of Surgeons was well established. Training was of four years’ duration, with one year of “higher professional training” to be undertaken after successful completion of the primary and final examinations. Both examinations had written and oral components.

In Europe, the six signatories to the 1957 Treaty of Rome established the Union Européenne des Médecins Spécialistes (UEMS) in 1958. Intended as a source of discussion and consultation for European Economic Community (EEC) legislation, it eventually became the umbrella organization for national medical specialist organizations in all EEC countries, as well as Norway and Switzerland. Specialist sections were created, and the Section of Anesthesiology first met in 1962. In 1963, the Section stipulated a minimum training of 3 years, at a time when training in the six member countries varied from 2 to 7 years. The Section reviewed the directive in 1969, agreeing on a minimum training period of 4 years. In 1973, the periods of training still varied between 3 and 7 years in EEC countries, and between 2 and 6 years in non-EEC countries.

In Canada, the Royal College of Physicians and Surgeons of Canada (RCPSC) was established in 1929, initially to give recognition to doctors engaged in “special work”. The qualification was at first gained by invitation, with an examination introduced in 1931, leading to the title “Fellow”. It was primarily aimed at physicians and surgeons in an academic role; anesthesiologists were not considered in the “same league”. In 1937, a less rigorous program of “certification” was introduced; however, anesthesia was still not considered a specialty, and was not included until 1942. Until 1971, the dual system of certification and fellowship persisted, with both fellows and certificants being recognized as specialist anesthesiologists. In 1971, a single training, examination and accreditation system was instituted. This, and the four-year training program, meant that Canadian specialists were at last treated as equal to other specialists within the RCPSC.

In Australia and New Zealand, the training and examination process for certification as a Fellow of the Faculty of Anaesthetists of the Royal Australasian College of Surgeons was similar to that in Great Britain, without the requirement for a year of higher professional training.

In Japan in the 1960s, two years’ training and/or experience in administering anesthesia to 300 patients resulted in a permanent qualification that, once obtained, remained in force indefinitely. The duration of training subsequently increased to at least 4 years for the first step of certification (see Chapter 31).

Medical practice in China recovered slowly from the 1966–1976 Cultural Revolution, which had forced many doctors to perform non-medical manual work. Despite these hindrances, in 1979, the Chinese Society of Anaesthesiology was formed with 44 members.

In Mexico and in Central and South America, development of anesthesia lagged behind that in the US, Canada, and Europe. Nevertheless, by 1970, most countries in this region had established specialist societies (see Figs. 6.2, 8.2, and 9.2), and some had instituted training programs. Through the 1970s, with growth in the number of academic departments of anesthesia, more training programs were established. The 1970s also saw the disappearance of nurse anesthetists from many countries in the region.


Academic Development


In Great Britain, during the 1960s, 15 new academic departments of anesthesia were created, consolidating the advancement of academic anesthesia. Several university research departments were created, and several anesthesiologists acquired MDs or PhDs. This enhanced the status of anesthesia in Great Britain and attracted many anesthesiologists from current and former British Commonwealth countries (Australia, Canada, India, New Zealand, Pakistan, and South Africa). Many of these anesthesiologists returned to their countries of origin to assist in establishing academic departments, although in most cases it took some years before university research departments were created.

In the US, funded academic departments were well established, and the 1970s saw an expansion of anesthesia research, championed by enthusiastic and dedicated people. Theirs was the model that many other countries wished to emulate, but few could afford.

This was also a period when collaboration between academic anesthesiologists and industry reached a new peak. Several companies enthusiastically pursued the development of anesthesia equipment and drugs, including Ohio (enflurane), Cyprane (variable bypass, “Tec”-type vaporizers), ICI (propofol), Glaxo (Althesin), Burroughs Wellcome (atracurium), and Organon (pancuronium), and all had close ties with research anesthesiologists.


Journals


By 1970, many of the major anesthesia journals of the world were well established (e.g., see Fig. 6.1), and the only further one to appear during the decade was Anaesthesia and Intensive Care, first published by the Australian Society of Anaesthetists in 1972. The 1970s did however see the appearance of subspecialty journals. As might be expected, the earliest subspecialty journals to appear were consistent with the first subspecialties to create organisations, except for obstetric anesthesia. Critical Care Medicine was first published in 1973, and following the formation of the International Association for the Study of Pain (IASP) in 1974, Pain was published in 1975, with Patrick Wall as its first editor-in-chief. Pain eventually achieved the highest impact factor of all anesthesia-related journals in 2011. In 1976, Regional Anesthesia was first published, later to be renamed Regional Anesthesia and Pain Medicine. In 1978, another journal dedicated to regional anesthesia appeared as a supplement to Der Anaesthesist, named Regionale Anaesthesie.


The Beginning of the End for Halothane


At the beginning of the 1970s, anesthesia commonly relied on the use of thiopental, nitrous oxide, and halothane, with the probable addition of a muscle relaxant and an opioid. The notation “GOH” on many anesthetic records of the time signified the mainstay of practice: “Gas (nitrous oxide), Oxygen and Halothane”.

The documentation of cases of liver dysfunction following halothane had led to widespread investigation in the 1960s. The detailed results, published in 1969, pointed to a rare but real mortality rate from hepatotoxicity of some 1:120,000 [1]. This was enough to reassure some, especially pediatric anesthetists, but halothane’s eventual demise was now assured. In many cases it was not anesthesiologists who condemned it, but physicians (internists), who quickly blamed any episode of postoperative jaundice on halothane, often without justification. Outside the US, some anesthesiologists chose to use alternatives, either methoxyflurane or trichloroethylene, and some even clung to ether. All had their problems, and all were to disappear from general use by the mid-1970s. The possibility of renal failure as a result of free fluoride ion from degradation of methoxyflurane had been described previously, but was fully characterised by Mazze and Cousins in 1973 [2]. The search was on for a replacement.

Ross Terrell had identified enflurane in the mid-1960s, a fluorinated methyl ethyl ether with characteristics likely to make it safer than halothane, an alkane. He noted that the ethers had less propensity to cause arrhythmias and that only 2–3% of enflurane was metabolised, resulting in fewer potentially toxic metabolites. Introduced in the early 1970s, enflurane was the answer to many prayers, only to be tarnished by the appearance of tonic and clonic muscle movements associated with EEG epileptiform activity. This was not a major impediment, and in the US and some other developed countries, enflurane displaced halothane from the operating room. With increasing use of enflurane, it became clear that hepatotoxicity was much less common than with halothane [3]. In particular, reports of cases in the years after its release appeared at a far slower rate than after the release of halothane (Fig. 10.3).





Fig. 10.3
Few cases of hepatic injury were reported in the first 5 years following the release of halothane, but after 5 years, a dramatic increase in published reports of injury followed. This biphasic response would be expected if an allergic response underlay injury. The steep increase would require an initial “sensitization” to halothane. In contrast to halothane, no dramatic increase in published reports of injury occurred in the first 10 years after the release of enflurane for clinical use


The Trouble with Isoflurane


Ross Terrell synthesized isoflurane, an isomer of enflurane, in 1965. It was more difficult to produce and purify, delaying its development. Once a suitable method was found, and animal tests had been done, clinical trials began in 1971. Initial results were encouraging. Isoflurane met many of the criteria of the ideal inhalational agent: it was stable and non-flammable, had low blood solubility, and underwent little biodegradation (0.2%). It was also devoid of epileptogenic activity and allowed a rapid recovery. Just as it was about to be released in 1975, a study by Tom Corbett at the University of Michigan suggested that isoflurane was a carcinogen in mice [4].

Corbett phoned Eger with the news. Eger, Terrell, and Jim Vitcha from Ohio Medical (the manufacturer of isoflurane) went to the University of Michigan to discuss the experiment with Corbett, finding several key flaws in the study. In particular, there were too few negative control animals, no dose-response relationship had been determined, and Corbett had not blinded his examination of the animals for tumors [4]. More troubling still, no control animals had been given alternative anesthetics. Corbett had assumed the problem was limited to isoflurane, but other anesthetics were far more vulnerable to metabolism and the production of potentially toxic metabolites that might have produced genetic changes. Still, that magic word “cancer” made such objections moot. Clearly the test would have to be repeated with the flaws eliminated. Everyone involved knew this.

But Eger also knew that Ohio would dither while it decided whether to fight the findings or support (at great cost) the required study. Not willing to wait for what they knew would be a long-delayed decision, the impatient Eger and his colleague, Wendel Stevens, elected to initiate the study using discretionary funds Eger had gathered from previous work, believing that Ohio would be forced to pursue and support the study. They would sponge off the university for a few months.

For a time, it went as planned. Ohio capitulated nearly a year later and asked Eger and Stevens to proceed with the study, now clandestinely underway, a study of thousands of animals. But for a phone call requesting a small change in the protocol, the conspirators would have pulled it off, covering up their early start. Vitcha called Eger asking if the study animals might be kept alive three months longer than originally planned after their in utero exposure to the anesthetics, 18 rather than 15 months. Too late; the animals were already in formaldehyde! Eger and Stevens flew to Chicago to meet with their friends from Ohio Medical and confess their sins.

The conspirators then completed the study, convincing Corbett to join them, this time as a blinded examiner of the tissues of the study animals for cancer. The result was that none of the modern anesthetics studied caused cancer, although the most metabolized anesthetic, methoxyflurane, showed a trend towards an increased incidence [5].

Published in 1978, this negative result allowed the clinical release of isoflurane to proceed. Isoflurane was marketed a year or two later, becoming the dominant inhaled anesthetic of the 1980s.

In the 1970s, pediatric anesthesiologists continued using halothane because it was less pungent than enflurane and did not carry the stigma of epileptogenicity. Anesthesia was commonly induced in children by inhalation, and induction and recovery were as fast with halothane despite the modestly lower solubility of enflurane. In addition, halothane had not been incriminated as a cause of liver injury in children.


Calibrated Vaporizers


Both the Copper Kettle and the Fluotec Mark 2 (the Mark 1 was recalled shortly after release because of a problem with the control knob) had been available since the 1950s. The Copper Kettle could be used accurately with any volatile inhalation agent, and would be used for halothane, methoxyflurane, and enflurane until it was superseded by variable bypass vaporisers. The Fluotec Mark 2 was a modification of the Tritec, the first temperature compensated vaporiser produced by Cyprane in England, for use with trichloroethylene. The “tec” vaporiser was later modified to accommodate methoxyflurane (Pentec) and enflurane (Enfluratec). There was also an Ethertec designed for use in developing countries.

Despite the availability of calibrated, temperature compensated vaporisers, at the beginning of the 1970s many anesthetics were still given using the old Boyle bottles, particularly outside North America. Because the Copper Kettle and the Vernitrol had an established place in the US as universal vaporisers that could be used with either ether or halothane (albeit requiring some mathematical calculations on the part of the anesthesiologist), the initial marketing of halothane in the US was accompanied by an aggressive joint campaign by Ayerst Laboratories (who marketed halothane) and Fraser Sweatman (the agents for the Cyprane “Fluotec” vaporiser). This ensured that hospitals procuring a supply of halothane would receive free vaporisers. Boyle bottles and the Ohio #8 vaporizer quickly became obsolete, and the Copper Kettles and Vernitrols gradually disappeared.
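The “mathematical calculations” the Copper Kettle demanded were simple but unforgiving: the carrier gas leaving the kettle was fully saturated at the agent’s vapor pressure, and the resulting vapor flow then had to be diluted into the total fresh gas flow. The following minimal Python sketch illustrates that arithmetic; the figures (a halothane vapor pressure of about 243 mmHg at 20 °C and an ambient pressure of 760 mmHg) are commonly cited approximations assumed here for illustration and are not drawn from this chapter.

# Illustrative sketch of the Copper Kettle arithmetic; values are assumptions.
def kettle_vapor_ml_per_min(carrier_flow_ml_min, svp_mmhg, ambient_mmhg=760.0):
    """Vapor (mL/min) added when the carrier gas leaves the kettle fully saturated."""
    return carrier_flow_ml_min * svp_mmhg / (ambient_mmhg - svp_mmhg)

def delivered_fraction(carrier_flow_ml_min, diluent_flow_ml_min, svp_mmhg, ambient_mmhg=760.0):
    """Final anesthetic fraction after the kettle output joins the fresh gas flow."""
    vapor = kettle_vapor_ml_per_min(carrier_flow_ml_min, svp_mmhg, ambient_mmhg)
    return vapor / (carrier_flow_ml_min + diluent_flow_ml_min + vapor)

if __name__ == "__main__":
    # 100 mL/min of oxygen through the kettle, 5 L/min of diluent fresh gas flow
    frac = delivered_fraction(100, 5000, svp_mmhg=243)
    print(f"Delivered halothane: {frac * 100:.2f} vol%")  # roughly 0.9%

With 100 mL/min of oxygen through the kettle and a 5 L/min fresh gas flow, the sketch yields roughly 0.9% halothane, the sort of figure the 1970s anesthesiologist had to work out at the machine.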

In Germany, Drager had introduced the “Vapor” temperature compensated vaporiser, but elsewhere outside North America, where there was no competition for the Fluotec, Boyle bottles remained in use, as there was no incentive for hospitals to equip their operating rooms with new and expensive devices. The Boyle bottles had originally been used for ether, were uncalibrated, and could deliver high concentrations of halothane. Producing new glass bottles labelled “Fluothane” or “Trilene” served to identify them as “new” vaporisers, but they were little better than dripping the agent onto a towel: the anesthesiologist had no idea how much was being administered. The recent establishment of the principle of MAC by Eger and colleagues had little meaning for those forced to use such equipment by the lack of hospital investment in the emerging specialty. Boyle bottles were of course convenient for many anesthesiologists in private practice who had to supply their own halothane, allowing them to empty the remaining contents back into their personal container at the end of a case. This advantage carried its own danger if the anesthesiologist did not label and/or empty the “bottle” and a second person filled it with a different agent.

Cyprane introduced the “Tec 3” vaporiser in 1969. It had several advantages over the “Tec 2” models: the output was linear at all flow rates, and it eliminated the “pumping” effect of positive pressure ventilation that could silently increase output. By the end of the 1970s, all inhalation agents were delivered through calibrated, temperature compensated vaporisers, usually the Tec 3 or similar devices manufactured by Ohio or Drager. The concept of MAC had informed the 1970s anesthesiologist of the concentration of anesthetic agent required to eliminate a response to the surgeon’s knife, but until the end-tidal concentration was routinely monitored, it could only be interpreted in terms of the inhaled concentration. At least by the end of the decade, the anesthesiologist had a calibrated vaporiser to help.
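To make the relationship between the dial and MAC concrete, the short Python sketch below (an illustration, not taken from the chapter) expresses a dial setting as a nominal MAC multiple using commonly cited approximate MAC values for the agents of the period; as noted above, without end-tidal monitoring such a multiple described only the inhaled, not the alveolar, concentration.

# Minimal sketch: a dial (inspired) concentration expressed as a MAC multiple.
# MAC values are commonly cited approximations (vol%), assumed for illustration.
MAC_VOL_PERCENT = {
    "halothane": 0.75,
    "enflurane": 1.68,
    "isoflurane": 1.15,
}

def mac_multiple(agent, dial_percent):
    """Dialed concentration as a multiple of the agent's MAC."""
    return dial_percent / MAC_VOL_PERCENT[agent]

if __name__ == "__main__":
    print(f"2.0% enflurane ~ {mac_multiple('enflurane', 2.0):.2f} MAC")
    print(f"0.9% halothane ~ {mac_multiple('halothane', 0.9):.2f} MAC")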


Induction with Steroids


Thiopental retained its place as the primary induction agent at the beginning of the 1970s, but was to be challenged by alternatives. Thiopental found an additional role after Michenfelder and Theye showed in 1973 that it reduced cerebral metabolism and might protect the brain against hypoxic injury [6]. The use of the drug for cerebral protection by “induced coma” continued for many years. Such virtues notwithstanding, the search was on for a replacement for thiopental, for a drug that avoided the cardio-respiratory depression so characteristic of induction, especially in the patient with vascular disease.

Etomidate, a carboxylated imidazole derivative rather than a barbiturate, was released in the mid-1970s. Because it was hydrophobic, it was formulated in propylene glycol; however, pain and irritation on injection, and myoclonic movements after injection, limited its popularity. It did nevertheless provide greater cardiovascular stability than its rival, thiopental.

Glaxo released a new anesthetic induction agent in Europe in 1973, comprising a mixture of two steroid molecules. Althesin combined alphaxalone, the anesthetic, with alphadolone, added to increase its solubility in a Cremophor EL solution. Althesin provided a smooth, painless induction with little cardiovascular effect and rapid recovery. It gained a firm place in outpatient anesthesia until reports of allergic reactions began to appear, reactions largely related to the presence of Cremophor EL. Althesin was never released in the US.

Ketamine, a phencyclidine derivative, was developed in 1962. It was first used clinically on American soldiers in the Vietnam War in 1970, and although it was devoid of cardiovascular depressant effects, its hallucinogenic tendency, a consequence of its chemical heritage, resulted in what came to be called “dissociative anesthesia”. Such a label was of little comfort to those who experienced the less desirable manifestations, including the perceived loss of their arms or legs. Ketamine found a place in field use in civil disasters and in low-dose infusions for pain management, but that was after the 1970s.

Thus, by the end of the 1970s, thiopental had defended its place as the induction agent of choice. It was now only available as a 2.5% formulation (requiring mixing of powder and sterile water before use), because the previously available 5% solution had been shown to cause skin necrosis after extravasation, and ischemia after arterial injection. Use of the 2.5% solution largely eliminated these complications. The drug was available in multi-dose bottles, commonly 100 or 250 ml. Fear of disease transmission was essentially unknown, and anesthesiologists of the time recall prepared solutions being used over several days on successive patients. Perhaps that wasn’t quite as foolish as it might sound: the alkalinity of thiopental solutions (pH 10–11) kept them sterile. By 1970, in most countries, metal intravenous cannulae (needles) had been replaced by plastic disposable cannulae (catheters), often still with a metal hub.
