Cardiopulmonary resuscitation (CPR) and advanced life support (ALS) interventions are commonly performed in the intensive care unit (ICU) setting. The recommended CPR and ALS interventions are based on guidelines developed every 5 years by experts from international resuscitation organizations who review the science and weigh new evidence. Members of the International Liaison Committee on Resuscitation (ILCOR, http://www.ilcor.org) developed a Scientific Evidence Evaluation and Review System (SEERS) where interested readers can view the in-depth evidence reviews (https://volunteer.heart.org/apps/pico/Pages/default.aspx). Choosing a “PICO Topic” displays the Consensus on Science and Treatment Recommendations (CoSTR) statements, which are freely available. This chapter is based on the 2015 ILCOR recommendations, which designate the current strength of evidence for interventions using a range of terms: “may be considered” for consensus recommendations when evidence is very weak or lacking, “suggest” for weak recommendations based on weak evidence, and “recommend” for interventions that are supported by good evidence (1). This review includes the science and treatment recommendations for basic life support (BLS) (2,3) and ALS (4,5) for adults and children (6,7), but will not cover neonatal resuscitation (8). Based on the CoSTR evidence evaluation, each of the participating international resuscitation councils will publish resuscitation guidelines in 2016. Of note, the specific interventions may differ among these guidelines because each resuscitation council considers the economic, geographic, and system differences in practice, including the availability of equipment and medications that may vary in different parts of the world. This chapter reviews the epidemiology of in-hospital cardiac arrest (IHCA), emphasizes the importance of high-quality BLS, reviews recommended ALS interventions, discusses ways to monitor and improve the quality of resuscitation, and reviews postresuscitation management and outcome prediction.
Registry data, such as the American Heart Association’s Get-With-The-Guidelines-Resuscitation (GWTG-R; http://www.heart.org/HEARTORG/HealthcareResearch/GetWithTheGuidelines-Resuscitation/Get-With-The-Guidelines-Resuscitation_UCM_314496_SubHomePage.jsp) and international registries, provide much of our current understanding of the epidemiology and outcome following in-hospital arrests. Annually in the United States, there are an estimated 209,000 treated IHCAs (9). Analysis of the UK National Cardiac Arrest Audit from 2011 to 2013 identified an IHCA incidence of 1.6 per 1,000 hospital admissions with an overall unadjusted survival rate of 18.4% (10). Recent analysis of the GWTG-R data—almost 136,000 IHCAs—observed that 58% were men, 74% were white, and 83% were over 50 years of age. Although ventricular fibrillation (VF) or pulseless ventricular tachycardia (pVT) is common in sudden out-of-hospital CA (OHCA), only 20% of the IHCA cases had VF/pVT as the initial rhythm (11). Overall, 64% of the arrests occurred in the ICU. Of note, the reporting hospitals skewed toward academic centers: 89% were in urban settings and 59% were identified as teaching hospitals, so the GWTG-R data may not accurately represent nationwide epidemiology. There is substantial variation in the rate of survival by hospital based on analysis of GWTG-R, even after adjusting for 36 predictors (11).
Significant predictors of survival included female gender (OR 1.10; 95% CI: 1.06–1.13), black race (OR 0.93; 95% CI: 0.87–0.99), comorbidities such as dysrhythmia (OR 1.24; 95% CI: 1.20–1.28), a hematology–oncology condition (OR 0.50; 95% CI: 0.47–0.52), and intra-arrest factors including cardiac etiology (OR 1.18; 95% CI: 1.14–1.23). In both the in-hospital and out-of-hospital setting, an initial rhythm of VF/pVT is consistently associated with better outcome; for IHCA, an initial rhythm of VF/pVT was most strongly associated with higher survival (OR 3.14; 95% CI: 3.02–3.27) (11). Arrests occurring at night (OR 0.75; 95% CI: 0.72–0.76) or on the weekend (OR 0.86; 95% CI: 0.83–0.89) were associated with a worse outcome compared with arrests occurring during the day (11). Conversely, being on a monitor (OR 1.72; 95% CI: 1.64–1.81) or in an ICU (OR 1.60; 95% CI: 1.53–1.68) was associated with a better outcome (12). After adjusting for all of these factors, the analysis demonstrated substantial interhospital variation in outcome following OHCA and IHCA (11). The median adjusted survival to discharge was 11.9% (0%–14.8%, minimum to maximum) in the lowest decile academic hospitals and 22.5% (21.5%–31%) in the top decile. To put this in perspective, these data suggest that the odds of survival to hospital discharge for patients with identical covariates are 42% greater at a randomly selected high-performing hospital than at a randomly selected low-performing hospital. The take-home message from these data is that the quality of resuscitation varies across institutions. It is likely that there are other important factors, such as training and team performance in providing rapid defibrillation and high-quality chest compressions, that explain some of this variation. In addition, high-performing centers may excel in postresuscitation care, which may be very important to survival after IHCA (13,14).
This outcome variation is not restricted to US hospitals. An analysis of data from 144 acute hospitals in the United Kingdom on 22,628 IHCA patients 16 years of age and older also observed significant variation in the rates of survival to hospital discharge (10). Only 16.9% of patients had an initial shockable rhythm of VF/pVT, which again was associated with a much higher rate of hospital survival (49.0%) compared with patients with the nonshockable rhythms of asystole or pulseless electrical activity (PEA), whose mean survival was 10.5% (10). Unlike the United States, over half of the IHCAs in the United Kingdom occurred on the ward. Similar to the US data, crude hospital survival rates suggested worse outcomes for arrests at night or on the weekend compared with weekdays.
Analysis of data on over 362,000 ICU admissions from Project IMPACT (Cerner Corporation, Kansas City, MO) provides epidemiologic data on the frequency and outcome following IHCA in the ICU (15). Overall, 1.8% of ICU admissions received CPR; 15.7% survived to hospital discharge. Survival likelihood decreased with increasing age, presence of chronic conditions, diminished functional status on hospital admission, and by admission diagnosis. Patients with sepsis on admission had the lowest post-CPR survival. Conversely, patients with cardiovascular, neurologic, or vascular admitting diagnoses had higher survival. Admission to the ICU from the operating room or recovery room was associated with better survival (22.3% vs. ≤15.5%) compared with admission from other locations.
The type of unit (CTICU, MICU/CCU, MICU/SICU, or SICU/trauma) was not associated with survival to hospital discharge. Multivariate analysis identified four significant poor prognostic factors: having at least one chronic illness, a calculated admission mortality probability of 25% or more, failure of three or more organ systems during the ICU stay, and an admission diagnosis of sepsis/septic shock.
It is estimated that more than 4,000 children in the United States receive in-hospital CPR each year, most in the PICU (16,17). Data from the Virtual PICU Systems (VPS, LLC) database from 2009 to 2013, encompassing almost 330,000 children admitted to 108 PICUs, reported a 2.2% rate of PICU IHCA (18). Overall survival was 65%, which contrasts with the 43.4% survival reported from the most recent analysis of the GWTG-R data (19). Both databases collected and entered CA events based on the need for chest compressions or defibrillation, yet the reported survival is markedly different between the two populations. There was no association between center volume and mortality on analysis of the VPS data after adjusting for known factors affecting arrest outcome. Similar to the adult experience, there were unexplained wide variations in mortality (20), suggesting opportunities for improvement. A 10-year analysis (2000–2010) of GWTG-R data identified 5,870 pediatric IHCAs from 315 reporting hospitals. A high proportion of pediatric arrests occur in the ICU (93.3%) (21). Over this period, there was a significant increase in the proportion of IHCA occurring in the ICU compared with the wards. Comparison of the rate of return of spontaneous circulation (ROSC) between the 2000 to 2003 period and the 2004 to 2010 period found a concomitant increase in ROSC in association with a higher proportion of the arrests occurring in the ICU (19). Despite the higher ROSC rate, 24-hour survival was significantly higher in ward arrests (62%) compared with ICU arrests (53%), possibly reflecting that the ICU population was more ill. There was a nonsignificant trend toward better hospital survival in the ward versus the ICU population (49% vs. 39%) (21). Analysis of GWTG-R data found that the initial CA rhythm was asystole or pulseless electrical activity in 84.8% of children, with an increasing frequency of PEA over the 10 years. Despite the high rate of nonshockable rhythms, survival to discharge improved over time, and this improvement was not associated with higher rates of neurologic disability among survivors (19). The explanation for the improved ROSC rate is not clear; it could reflect greater implementation of rapid response teams (RRT) or medical emergency teams (MET) leading to earlier transfer to the PICU, where monitored patients receive a rapid response; however, as the authors note, “the limited data in the GWTG-R database and the study design preclude such attributions” (21). These data do suggest that ALS training should be concentrated on PICU staff, as most IHCAs occur there. Since few IHCAs occur on the ward, training ward staff may be best focused on recognizing physiologic deterioration and activating the RRT/MET (22).
The primary goal of CPR is to generate sufficient oxygen delivery to the coronary and cerebral circulations to maintain cellular viability while attempting to restore a perfusing cardiac rhythm by defibrillation, pharmacologic intervention, or both.
Although BLS is often assigned to less experienced health care providers, suggesting that it is easy to perform or relatively unimportant, data show that BLS performance is often suboptimal. In the out-of-hospital setting, the increased focus on high-quality BLS in the 2010 Guidelines (23) improved survival (24), particularly in patients with nonshockable rhythms (25), whereas ALS medications have not been shown to affect outcome. The improvement is even more impressive in children: from 2001 to 2013, rates of ROSC from IHCA increased from 39% to 77%, and survival to hospital discharge improved from 24% to 43% (6,19). The importance of high-quality CPR is illustrated by a recent post hoc analysis of the large OHCA study evaluating the effectiveness of the impedance threshold device (ITD) (26,27). The trial suggested no beneficial effect from the ITD (26), but when only patients who received “acceptable” CPR—defined from electronic recording of compression rate, depth, and compression fraction in 6,199 patients—were evaluated, use of an active ITD was associated with an approximately 50% improvement in the proportion of patients who survived to hospital discharge (9.6% vs. 6.4%) (27). The fundamental performance metrics of high-quality CPR include the following: ensure chest compressions of adequate rate; ensure chest compressions of adequate, but not excessive, depth; allow full chest recoil between compressions; minimize interruptions in chest compressions; and avoid excessive ventilation. The CoSTR BLS update for adults (2,3) and children (7) focused on the evidence supporting these elements, which are reviewed below.
After many years of teaching the ABCs of CPR, the emphasis on chest compression led to a recommendation to start with compressions first (compression–airway–breathing, CAB) (23), using a compression-to-ventilation ratio of 30:2 in adults and children for a single rescuer, and a 15:2 ratio in children when there are two rescuers, in recognition of the importance of ventilation in children. There are no human studies that directly evaluated the impact of this change; manikin studies of low quality reported earlier initiation of chest compressions, leading to the CoSTR suggestion to commence CPR with compressions rather than ventilations (2). It is important to recognize that IHCA, especially in the ICU, is more likely to have multiple rescuers immediately available. Thus, even though BLS algorithms illustrate a stepwise approach, integrated teams of trained rescuers should provide a choreographed simultaneous response in which compressions are started, the airway is opened and ventilation is begun, and the rhythm is assessed, with shocks provided rapidly if indicated (3).
In 1960, Kouwenhoven et al. reported successful resuscitation of dogs and, subsequently, humans with VF cardiac arrest using the combination of closed chest compression, artificial respiration, and electrical defibrillation (28). They hypothesized that the heart was physically squeezed between the sternum and vertebral column, so that the blood flow generated resembles that produced by spontaneous contraction of the heart; hence, this is called the “cardiac pump” model.
This model, however, does not explain several clinical observations that conflict with the cardiac pump theory, such as the ineffectiveness of CPR during flail chest (although theoretically it should be easier to compress the heart in this condition) or the effectiveness of closed chest CPR in patients with a hyperinflated chest due to severe emphysema. The cardiac pump model was challenged in the 1970s by a report of ROSC with cough during VF (29). In 1980, Rudikoff et al. reported that fluctuations of intrathoracic pressure were primarily responsible for blood flow during CPR (30). These findings supported a noncardiac or “thoracic pump” mechanism for blood flow during CPR. This model proposes that increased intrathoracic pressure during chest compression elevates the pressure of blood located in structures within the thorax, creating the gradient for forward blood flow from intrathoracic to lower-pressure extrathoracic arteries. During relaxation, intrathoracic pressure drops, resulting in refilling of the heart with blood; in this model, the heart acts as a passive conduit. Echocardiographic studies to elucidate the pumping mechanism during cardiac arrest failed to resolve the controversy (31,32). It appears that in adults with thin chest walls, direct cardiac compression does occur, whereas in prolonged resuscitation and in patients with thick chest walls or hyperinflated lungs, the thoracic pump mechanism becomes the predominant flow mechanism; it is also likely that both mechanisms are involved in some cases. In infants and children, direct cardiac compression is well supported by observations of CPR performed in children while the compression position was changed (33,78). Currently, closed chest compression is the standard method of producing blood flow during CPR for both adult and pediatric victims.
In 2015, the evidence for the recommendation to compress over the lower half of the sternum was reevaluated (2,3), as several small clinical studies had yielded conflicting results. One crossover trial in 17 adults with prolonged (>30 minutes) arrest found significant improvements in peak arterial pressure and end-tidal carbon dioxide (ETCO2) with compression over the lower half of the sternum compared with the middle of the chest (34). An out-of-hospital study in intubated CA patients used ETCO2 to monitor the effect of compressing at four different hand positions (35). There was no significant difference between locations, but the investigators noted significant interindividual differences, suggesting that using a physiologic feedback tool, such as ETCO2, may help optimize hand position and compression depth during CPR. The lack of new data led to the suggestion to perform chest compressions over the lower half of the sternum (2).
An optimal chest compression rate maximizes cardiac output, recognizing that excessive rates may limit time for diastolic filling, which is a passive process during CPR. Too low a rate leads to low cardiac output. Observational data from large clinical trials—representing 13,469 adults—suggest that the optimal rate is still unknown, but appears to be less than 140 compressions/min (36,37). The most recent analysis, after adjusting for compression fraction (defined as compression time ÷ total CPR time) and compression depth, found a significant association between a compression rate of 100 to 119 compressions/min and likelihood of survival (37); higher compression rates were associated with worse outcome.
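For reference, the chest compression fraction (CCF) used in these analyses is simply the proportion of total resuscitation time spent actually compressing; the worked numbers below (a single 10-second pause versus 30 seconds of accumulated pauses per 2-minute cycle) are illustrative and are not taken from the cited trials:
\[
\mathrm{CCF} \;=\; \frac{t_{\text{compressions}}}{t_{\text{total CPR}}}\,,
\qquad
\text{e.g., } \frac{120\ \mathrm{s} - 10\ \mathrm{s}}{120\ \mathrm{s}} \approx 0.92
\quad\text{versus}\quad
\frac{120\ \mathrm{s} - 30\ \mathrm{s}}{120\ \mathrm{s}} = 0.75 .
\]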
In 2010, a compression depth of at least 5 cm (2 in) was recommended for all cardiac arrest victims, with compressions performed on a firm surface such as a backboard (23). There is no evidence supporting, and some concern for harm from, attempting to place a backboard under a patient with multiple vascular lines at risk for dislodgement. Air-filled mattresses are common in the ICU and should routinely be deflated during CPR. Observational studies suggest that survival may improve with increasing compression depth (38,39). Analysis of electronically recorded data from 9,136 adults with OHCA found that optimal ROSC and survival were associated with a compression depth between 40.3 mm and 55.3 mm, with no difference between men and women. Based on data from one relatively small observational study (170 patients) that observed injuries in 63% of patients with a compression depth of more than 6 cm versus 31% when compression depth was less than 6 cm (40), the 2015 recommendation is to compress approximately 5 cm while avoiding excessive chest compression depths (i.e., ≥6 cm) (2).
Since blood return to the heart during CPR is related to the pressure difference between extrathoracic and intrathoracic venous pressure, it makes sense to limit actions that raise intrathoracic pressure, such as failure to allow full chest recoil. Animal studies show that leaning on the chest, which precludes full chest recoil, reduces the coronary perfusion pressure (CPP) and cardiac index (41,42). These animal data are supported by a clinical study in anesthetized children undergoing cardiac catheterization, which showed that applying sternal pressure comparable to leaning during CPR significantly reduced the CPP (43). These observations support the ILCOR suggestion to avoid leaning on the chest between chest compressions to allow full chest wall recoil (2,7).
One advantage of pausing to check the rhythm and pulse every 2 minutes is that it also reminds rescuers to change the compressor, since data showed that rescuer fatigue results in reduced compression depth and rate after 2 minutes (23). In the ICU, the need to pause chest compressions to provide ventilations should be infrequent since the airway is typically secured, but changing compressors every 2 minutes is still recommended. Airway management in the hospital is well covered in Chapter 39 and will not be reviewed in this chapter. When a secure airway is in place, continuous chest compressions should be provided along with a ventilation rate of around 6 to 10 breaths/min. If the patient has an arterial line, it is typically unnecessary to pause to check for pulses since ROSC is often evident from the return of a pulsatile arterial pressure wave. Similarly, as discussed in more detail below, continuous monitoring of ETCO2 during CPR may limit the need to interrupt compressions since the ETCO2 typically rises prior to clinical recognition of ROSC (44). There are no data demonstrating the value of pausing every 2 minutes for a pulse check during IHCA, so no treatment recommendation was made regarding the value of a pulse check (2,3). The emphasis on reducing interruptions of chest compressions is also reflected in the recommendation to immediately resume chest compressions after shock delivery (2), which is based on observational data from three studies enrolling 3,094 OHCA patients showing that pausing to check the rhythm after a shock was associated with worse survival to hospital discharge (45–47).
If alternative physiologic evidence suggests ROSC (e.g., arterial waveform or ETCO2), chest compressions can be paused to assess the rhythm. Although some modern defibrillators have electronic filters that attempt to remove motion artifact and potentially permit monitoring of the cardiac rhythm during chest compression, there are no human studies at this time, leading to the ILCOR suggestion against using artifact-filtering algorithms for electrocardiogram (ECG) analysis during CPR unless done as part of a research study (2).
To improve the quality of CPR (compression rate and depth, and ventilation rate), various devices such as metronomes and visual cues have been used. Visual displays on the bedside monitor may guide compression quality, but there are few data from IHCA evaluating the utility of these devices in adults. In OHCA, a sizeable (1,586 patients) cluster-randomized trial of trained EMS providers did not find any difference in survival to hospital discharge or ROSC when using audio and visual feedback on the monitor–defibrillator (48). A small (101-patient) in-hospital study using a CPR-sensing device with feedback showed small improvements in meeting the guideline-recommended compression rate and ventilation rate, although the latter was still much higher than recommended (18 ± 8 breaths/min) (49). The importance of achieving adequate compression depth and rate and permitting full relaxation was demonstrated in a relatively small study of children in the PICU or ED who were resuscitated using a feedback device applied to the chest (50). Of note, most of the children were older than 8 years, so they were closer in physiology to adults. Both ROSC (adjusted odds ratio [aOR] = 10.3; 95% CI: 2.75–38.8) and 24-hour survival (aOR = 4.21; 95% CI: 1.34–13.2) were significantly improved by providing feedback. Survival to hospital discharge with good neurologic outcome trended higher in those achieving high-quality CPR (18% vs. 5%), but the improvement was not significant because of small numbers. A recent analysis of pediatric OHCA data in the Resuscitation Outcomes Consortium (ROC) Epistry that used an electronic monitoring device noted low compliance with the American Heart Association (AHA) BLS Guidelines (51). Compliance was defined as over 60% of the 1-minute epochs achieving a compression rate of 100 to 120/min, depth of 38 mm or more, and chest compression fraction (CCF) of at least 0.8. Overall, there were 390 children in the registry; 244 were 12 years or older. Only 22% achieved compliance for compression rate and CCF, and 58% for compression depth alone, suggesting there is a need to improve CPR quality. Unfortunately, after adjusting for potential confounders, there was no difference in the rate of ROSC associated with compliance, but if compression depth met the criteria, ROSC was improved (49.4% vs. 29.7%) (51). Based on these data and data from other observational studies mainly in adults, CoSTR suggests it may be reasonable to use real-time audiovisual feedback and prompt devices during CPR in clinical practice (2,3,7). They emphasize that these devices should not be used in isolation and instead should be part of a comprehensive system that includes team training and individual training on BLS skills. CPR feedback devices that record data can be helpful during debriefing to provide feedback on performance.
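As a concrete illustration of how such recorded data might be screened during debriefing, the sketch below scores 1-minute epochs against the compliance criteria described above (rate 100 to 120/min, depth ≥38 mm, CCF ≥0.8, with more than 60% of epochs meeting all three). The data structure, function names, and example values are hypothetical and are not drawn from the cited studies.
```python
from dataclasses import dataclass
from typing import List

@dataclass
class Epoch:
    mean_rate: float       # compressions/min during the 1-minute epoch
    mean_depth_mm: float   # mean compression depth (mm)
    ccf: float             # chest compression fraction for the epoch (0-1)

def epoch_meets_targets(e: Epoch) -> bool:
    """Apply the per-epoch criteria described above."""
    return 100 <= e.mean_rate <= 120 and e.mean_depth_mm >= 38 and e.ccf >= 0.8

def event_compliant(epochs: List[Epoch], threshold: float = 0.60) -> bool:
    """An event is 'compliant' when more than `threshold` of its epochs meet all targets."""
    if not epochs:
        return False
    meeting = sum(epoch_meets_targets(e) for e in epochs)
    return meeting / len(epochs) > threshold

# Hypothetical recorded event: only 1 of 3 epochs meets every target, so not compliant.
record = [Epoch(112, 42, 0.85), Epoch(128, 35, 0.70), Epoch(96, 40, 0.78)]
print(event_compliant(record))  # False
```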
ALS encompasses a range of interventions, including mechanical devices to compress the chest, devices to enhance circulation during CPR (including extracorporeal membrane oxygenation [ECMO]), devices to secure the airway, delivery of shocks, physiologic monitoring during CPR, drugs given during CPR, and postresuscitation care.
Defibrillation is defined as delivery of electrical energy resulting in termination of VF for at least 5 seconds following the shock (4). The goal is to quickly depolarize the myocardium, terminating the malignant rhythm in the hope that a sinus rhythm will be reinitiated. In most adults, the initial postshock rhythm is asystole or an organized slow rhythm without a pulse (i.e., PEA). This observation is the basis for the recommendation to immediately begin chest compressions after shock delivery. There were no major changes regarding defibrillation between the 2010 (52) and 2015 (4,5) CoSTR conclusions based on the review of available evidence. Two types of defibrillators are available, based on the mode and waveform of the electrical current delivered—monophasic and biphasic—but for more than 10 years newly manufactured defibrillators have produced only biphasic waveforms. Although randomized clinical trials have not conclusively documented that biphasic defibrillators save more lives, animal and human data show that biphasic defibrillators have a higher first-shock success in terminating VF compared with monophasic devices and cause less postshock myocardial dysfunction (52,53). Somewhat surprisingly, there is only one study of pulsed biphasic waveforms, in 104 patients, which used a defibrillator that was not impedance adjusted (i.e., the delivered current is not adjusted for chest impedance, which is standard in current defibrillators) (54). In the absence of new data, the 2015 CoSTR recommendation is to use a biphasic waveform defibrillator for both atrial and ventricular arrhythmias in preference to a monophasic defibrillator (4,55). Furthermore, there are no data that define the optimal energy dose, so ILCOR makes a strong recommendation based on weak evidence to follow the manufacturer’s instructions for first and subsequent energy doses. Biphasic defibrillators can be further classified by the delivered waveform into (a) biphasic truncated exponential (BTE) and (b) rectilinear biphasic (RLB) waveforms. Based on limited data, the 2010 guidelines recommended a first-dose energy of 150 to 200 J for a BTE and no lower than 120 J for an RLB waveform defibrillator (52,53). More recent data found first-shock success of about 88%, and first-shock success for recurrent VF of about 84%, using 120 J with an RLB defibrillator (56). There were no new data to change the recommendation to use a single shock rather than stacked shocks, and there are no new data evaluating escalating energy doses when the first-shock dose was not successful. In the absence of new data, CoSTR suggests that if the first shock is not successful and the defibrillator is capable of delivering higher energy shocks, it is reasonable to increase the energy for subsequent shocks (4). Recurrent fibrillation after successful defibrillation occurs in the majority of patients (56,57). There are no data showing the need for a different energy dose for recurrent fibrillation; the CoSTR suggestion is to use an escalating energy dose protocol to manage this dysrhythmia. In children with IHCA, VF occurs more often during ALS than as the initial dysrhythmia (58).
The outcome following late-onset VF is much worse than when VF is the initial rhythm, suggesting that the late occurrence of VF may represent the effects of epinephrine given in the setting of an increasingly hypoxic–ischemic myocardium. It is likely that the same physiologic mechanism may occur in some adults when VF develops during ALS therapy. It is unknown whether a different energy dose or waveform may be beneficial in treating refibrillation or late-onset VF/pVT. Other important elements of safe shock delivery that were not specifically reviewed or updated include using self-adhesive pads rather than defibrillation paddles, avoiding saline-soaked pads as a conductive medium, and keeping oxygen out of the immediate vicinity during shock delivery. When placing electrode pads or paddles, the rescuer should make sure they are not overlapping and are not on top of implanted devices or transdermal medicine patches if present; if the patient is wet, the chest should be wiped dry before placement. The optimal dose for effective defibrillation in infants and children is not known, with analysis of the GWTG-R IHCA data noting lower first-shock success with 4 J/kg versus 2 J/kg (59). The upper limit for safe defibrillation is also not known, but doses more than 4 J/kg and as high as 9 J/kg have effectively defibrillated children (159,160) and pediatric animal models (161) with no significant reported adverse effects. Recommended manual defibrillation (monophasic or biphasic) doses for children are 2 J/kg for the first attempt and 4 J/kg for subsequent attempts.
This section reviews a range of devices designed to improve blood flow during CPR, including the inspiratory impedance threshold device (ITD), active compression–decompression CPR (ACD-CPR), mechanical devices to enhance chest compressions and reduce the time when compressions are not being delivered, and the use of extracorporeal support (ECMO or cardiopulmonary bypass) during CPR (ECPR). Most of the data on these devices, other than ECPR, are derived from OHCA studies, which may limit their application in the ICU; however, patients may arrive with these devices in place, and ICU physicians should be knowledgeable about their indications and use.
The ITD was developed to enhance venous return during CPR by reducing the intrathoracic pressure during chest wall recoil. The device is placed between the end of an endotracheal tube or face mask and the resuscitation bag; it then limits air entry into the lungs during chest recoil or when the patient spontaneously breathes, and in the latter circumstance the patient needs to overcome the inspiratory threshold pressure for air flow to occur. The resulting modest reduction in intrathoracic pressure enhances venous return to the heart between chest compressions and, thus, cardiac output. Despite extensive animal data showing that the ITD enhances venous return, cardiac output, and vital organ blood flow, several randomized controlled trials (RCTs) failed to show a beneficial effect on outcomes ranging from ROSC to survival to hospital discharge and 1-year neurologic survival (26,60,61). Several trials demonstrated a significant improvement in the ROSC rate, survival to hospital discharge, and survival at 1 year with the use of ACD-CPR compared with standard CPR (S-CPR) (26,62,63).
ACD-CPR uses a suction cup device placed on the chest to achieve active chest expansion, which lowers intrathoracic pressure, enhancing venous return and, thus, compression-induced cardiac output. The addition of an ITD to ACD-CPR did not improve any of the important outcome measures, but a recent analysis of the quality data from the Resuscitation Outcomes Consortium (https://roc.uwctc.org/) trial (26) found that when the analysis was repeated looking only at the more than 1,600 patients who had good-quality CPR, use of an active ITD increased survival to hospital discharge with a modified Rankin Scale score of 3 or less compared with a sham ITD (7.2% vs. 4.1%; p = 0.006) (27). Interestingly, when the quality of CPR was not “acceptable,” use of an active ITD was associated with significantly worse outcome at hospital discharge. Another meta-analysis noted that when the effect of ACD-CPR or the ITD on OHCA outcome is adjusted for whether the arrest was witnessed and for short response time, the ITD is associated with improved ROSC rates, which are further enhanced by the addition of ACD-CPR (64). These data illustrate one of the important caveats related to studies evaluating CPR devices—it is often difficult to assure that the quality of CPR is consistent. Since treatment recommendations are generally not based on post hoc analysis of clinical research data (27), CoSTR recommends against the routine use of an ITD in addition to conventional CPR (4). A consensus recommendation could not be reached for the use of the ITD in combination with ACD-CPR. Furthermore, they note that the optimal compression and ventilation rates using ACD-CPR, with or without an ITD, are unknown. The value of ITD plus ACD-CPR for IHCA also remains to be determined. There is potential utility for the ITD in spontaneously breathing patients with hypotension to help stabilize their blood pressure and perfusion by enhancing venous return (65,66). There are no clinical data on the use of ACD-CPR or an ITD during CPR in children. Pediatric animal studies show that ACD-CPR improves cardiac output and vital organ perfusion, and that the addition of an ITD does not enhance blood flow in this setting (67).
Mechanical chest compression devices are designed to assure consistent chest compression, sometimes with active decompression, and to address the variability observed in clinical trials due to rescuer fatigue and other factors that may affect the depth and rate of chest compression. In theory, these devices would be particularly useful in the OHCA setting when there are relatively few rescuers to provide chest compressions. The best studied device is the Lund University Cardiac Arrest System (LUCAS), now in version 2, which is an electrically driven piston with a suction cup designed to provide active decompression as well as compression. The device delivers compressions of 40 to 53 mm in depth, depending on patient size, at a rate of 102 compressions/min with equal time in the compression and relaxation phases. Despite its theoretic benefits, a large pragmatic cluster-randomized trial (4,471 nontraumatic OHCA patients) conducted in four UK Ambulance Services failed to observe any benefit in 30-day survival from the use of the LUCAS-2 device over manual CPR (6% with LUCAS-2 and 7% with manual CPR) (68); the ROSC rate was also not different between groups.
A meta-analysis of the UK study (68) combined with the two prior RCTs of LUCAS (69,70) failed to show any benefit in 30-day survival, survival to discharge, or neurologic function at 3 months. An automatic load-distributing band device produces mechanical chest compression without an active relaxation phase. In animal studies, the device produced significantly higher blood flow compared with manual chest compression, but the initial multicenter RCT was halted prematurely because of reduced survival to hospital discharge associated with the use of the device (71). It was thought that the higher mortality resulted from the time required to place the device, resulting in fewer chest compressions overall. A more recent RCT in three US and two European EMS systems compared high-quality manual CPR with an integrated automated load-distributing band. More than 4,000 nontraumatic OHCA patients were included and, again, there was no difference in sustained ROSC, 24-hour survival, or survival to hospital discharge (72). Based on the current evidence, CoSTR suggests against the routine use of automated chest compression devices in place of high-quality manual chest compression, but they note that these devices may be a reasonable alternative to manual chest compression where maintaining sustained high-quality chest compression may be impractical or puts the rescuer at risk (4). Examples of rescuer risk include an unrestrained provider delivering sustained chest compressions in the back of a moving ambulance or CPR during certain procedures (e.g., coronary angiography or preparing the patient for ECPR); these are situations where mechanical devices have been used successfully (73). There are no data on the use of these devices in children and, based on the ALS recommendations, they should not routinely be used.
Extracorporeal CPR (ECPR) refers to the use of ECMO or cardiopulmonary bypass to restore blood flow during CA and provide support until the underlying cause of the arrest is corrected or stabilized. ECPR was first used in children more than 30 years ago, as reported in a case series of 33 children with IHCA after open-heart surgery (74). Its use has expanded beyond children to adults for IHCA (75–78) and OHCA (79). Clearly, implementation of ECPR requires a substantial investment in both equipment and manpower, with extensive training to reduce the time between request and implementation. This limits its application to major medical centers with appropriate staffing, although the technology continues to evolve, making it easier to implement. There are no RCTs of ECPR; reported studies may reflect reporting bias, but they all note improved survival to hospital discharge, 30-day survival, and intact neurologic survival—defined as a cerebral performance category (CPC) score of 1 or 2 (75,77–79). One study used propensity-matched samples and reported significantly higher survival with favorable neurologic outcome based on an analysis of 975 patients with IHCA who underwent CPR for more than 10 minutes. Of these, 113 receiving conventional CPR were propensity matched to 59 who received ECPR. The ECPR group had significantly better outcome (RR, 4.67; 95% CI: 1.85–4.26) (78). A more recent trial evaluated ECPR as part of an organized approach to refractory CA. The CHEER trial (mechanical CPR, Hypothermia, ECMO and Early Reperfusion) was conducted at a single Australian center. It enrolled patients with refractory CA after either IHCA or OHCA.
Recognizing that there was likely a selection bias in this high-risk group, the investigators achieved an impressive 54% survival to hospital discharge with intact neurologic recovery (CPC 1–2) in 26 patients (77). Of note, 11 of these patients were OHCA and, overall, 42% of the patients underwent percutaneous coronary intervention (PCI) and one had a pulmonary embolectomy. This rate of survival with good neurologic outcome is similar to a report of 32 patients with IHCA, 10 of whom were placed on ECPR after ROSC but with persistent cardiogenic shock; overall, 47% survived with CPC 1 to 2 neurologic status (76). The benefit of ECPR after OHCA is uncertain since there are limited data. A case series of 26 patients at two urban centers who had ECPR implemented in the ED or within 1 hour of arrival reported survival to hospital discharge in four patients (15%), three of whom had CPC scores of 1 to 2 at 6 months (79). A population-based analysis of 320 OHCA patients from 2009 to 2013 who received ECPR in Korea (representing about 1% of eligible OHCA patients) found no significant difference in neurologically favorable survival to discharge after adjusting for covariates (80). Similarly, although cold-water drowning is thought to provide some neurologic protection, an 11-year cohort of ECPR provided to 20 patients with hypothermic (core temperature below 30°C) OHCA associated with drowning found that despite ECMO, only four survived more than 24 hours and two survived to hospital discharge; only one had good neurologic survival (81). It is also noteworthy that cannulation was attempted in 41 patients but achieved in only about half of them, illustrating the complexity of applying this therapy rapidly. These data suggest that the role of ECPR for asphyxial OHCA is unclear. A recent analysis of the GWTG-R data for IHCA in children (<18 years old) who received CPR for 10 minutes or longer compared outcomes of children who also received ECPR with those of a propensity-matched control group. Over 11 years (2000–2011), there were 3,756 IHCAs that met entry criteria; 591 (16%) received ECPR (82). After adjusting for covariates, patients receiving ECPR were more likely to survive to discharge (40% vs. 27% with conventional CPR) and to survive with favorable neurologic outcome (27% vs. 18%); similar findings were observed when the ECPR group was compared with a propensity-matched cohort, with significantly better survival to discharge (OR, 1.70; 95% CI: 1.33–2.18) and favorable neurologic outcome (OR, 1.78; 95% CI: 1.31–2.41). Based on the available data, CoSTR suggests ECPR is a reasonable rescue therapy for selected patients with cardiac arrest when conventional CPR fails to achieve ROSC (4,6); implementation is constrained by financial and logistical limitations. Moreover, intensivists recognize that ECPR introduces potential ethical issues, such as when a patient’s cardiovascular status is stabilized but significant neurologic injury is present. There are also many unknown factors that need to be elucidated to further improve outcome with this technology. These include defining the optimal subgroup of patients who would benefit, the optimal flow rate for ECPR, the duration of ECPR, and the desired target temperature on ECPR.
Physiologic monitoring may provide feedback to guide resuscitation efforts and improve the quality of CPR-induced blood flow and oxygen delivery.
The focused attention on delivering high-quality chest compressions, limiting excessive ventilation, and using devices to support circulation would be enhanced by monitoring the achievement of explicit hemodynamic goals during CPR. Unfortunately, hemodynamic measurement is often limited during CPR, but ICU patients often have invasive central venous and arterial pressure monitors, and once the patient is ventilated, ETCO2 is measured using bedside capnography. Additional sources of potentially valuable data to guide resuscitation efforts include measurement of cerebral oxygenation by use of near-infrared devices. As Dr. Max Harry Weil—one of the fathers of modern resuscitation—stated 20 years ago, “Performing CPR without measuring the effects is like flying an airplane without an altimeter” (83). The guideline recommendations for depth, rate, and location of compression are just that—guidelines; there is no reason to think that a single compression depth, rate, and location will be optimal for all patients. Similarly, giving epinephrine by the clock rather than based on documented hemodynamic parameters will likely be considered somewhat primitive 10 years from now.
It has long been recognized that CPP is a primary determinant of resuscitation survival since it is a major determinant of myocardial blood flow, which is the parameter we are most interested in restoring and preserving (84,85). Achieving good myocardial blood flow is the main rationale for minimizing the time without effective chest compressions and avoiding excessive ventilations that compromise venous return (86–88). In the ICU, the CPP can be estimated from the difference between the diastolic arterial blood pressure and the central venous pressure. Based on animal data, a minimal CPP of 20 mmHg was critical to ROSC (84). More recent animal data showed that using a hemodynamically driven resuscitation approach to achieve a CPP greater than 20 mmHg, rather than using one of two fixed compression depths and epinephrine boluses per AHA Guidelines, resulted in significantly greater ROSC and short-term survival in asphyxial (89) and VF arrest (90). Of note, the animals resuscitated based on hemodynamic monitoring did not receive more epinephrine, suggesting that the improved outcome did not result from more epinephrine, but instead from giving epinephrine at the right time, when needed to maintain the CPP. At present there are only limited human data, and no prospective human study, showing that targeting the CPP during resuscitation improves survival. A study in 1990 of 100 OHCA patients who had right atrial and arterial catheters placed after failure of ROSC documented that those who eventually developed ROSC (25 patients) had significantly higher CPPs (91). Only those patients whose CPP was at least 15 mmHg achieved ROSC; there are no data on the optimal CPP for children. In view of the lack of any clinical data, other than observational studies, describing changes in CPP or diastolic arterial pressure with different CPR techniques (e.g., interposed abdominal compression-CPR and ACD-CPR), CoSTR made no treatment recommendations on using this approach. In the ICU, where this monitoring is readily available, it is the author’s opinion that this is a reasonable approach to guide BLS and bolus epinephrine administration.
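A minimal sketch of what such hemodynamic-directed feedback could look like in an ICU with indwelling arterial and central venous catheters is shown below; the 20 mmHg goal reflects the animal data cited above, but the function names and the suggested responses are illustrative assumptions, not a validated protocol.
```python
def coronary_perfusion_pressure(arterial_diastolic_mmHg: float, cvp_mmHg: float) -> float:
    """Estimate CPP as the arterial (relaxation-phase) diastolic pressure minus the CVP."""
    return arterial_diastolic_mmHg - cvp_mmHg

def cpr_feedback(arterial_diastolic_mmHg: float, cvp_mmHg: float, goal_mmHg: float = 20.0) -> str:
    cpp = coronary_perfusion_pressure(arterial_diastolic_mmHg, cvp_mmHg)
    if cpp >= goal_mmHg:
        return f"CPP {cpp:.0f} mmHg at or above goal; maintain current compression strategy"
    return (f"CPP {cpp:.0f} mmHg below goal; reassess compression depth and recoil, "
            "minimize pauses, and consider timing the next vasopressor bolus to the CPP")

# Example: arterial diastolic 28 mmHg and CVP 12 mmHg give a CPP of 16 mmHg, below the 20 mmHg goal.
print(cpr_feedback(28, 12))
```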
ETCO2 measurement evaluates the partial pressure of CO2 at the end of an exhaled breath. CO2 is delivered to the lungs for elimination based on pulmonary blood flow. In low blood flow states, as long as minute ventilation is relatively fixed, the ETCO2 reflects effective pulmonary blood flow (i.e., cardiac output) (92). During CPR, ETCO2 is typically low, reflecting the low cardiac output and the relatively high minute ventilation relative to pulmonary blood flow. Multiple animal and adult studies show a strong correlation between ETCO2 concentration and interventions that increase cardiac output during CPR in cardiac arrest or shock models (35,93–96). Similarly, animal models demonstrate that changes in ETCO2 correlate directly with controlled changes in cardiac output (92,97–99). Recently, an analysis of chest compression rate, depth, and ETCO2 in 583 OHCA and IHCA patients showed a linear relationship between chest compression depth and increases in ETCO2 (100). By extrapolation, these data support using ETCO2 monitoring to assess the effect of efforts to increase cardiac output during CPR. Multiple observational studies performed in adults over more than 25 years, across different countries, in both IHCA and OHCA (101–107), report an association between higher ETCO2 and ROSC, but there is no specific ETCO2 threshold that accurately predicts ROSC. This is partly due to the lack of standardization among studies, with ETCO2 analyzed at various time points during resuscitation. Substantial animal and human data (103–105,108,109) and two pediatric prospective cohort studies (93,110) noted an association between persistently low ETCO2 and failure to achieve ROSC, usually after 20 minutes of CPR. Part of the difficulty in identifying a target value of ETCO2 to guide resuscitation or to be used for prognosis is that the value depends on the etiology and duration of the arrest and resuscitation. Interpretation of the ETCO2 concentration during resuscitation is affected by the quality of the measurement, the minute ventilation delivered during resuscitation, the presence of lung disease that increases anatomic dead space, and the presence of right-to-left shunting, as may occur in children with congenital heart disease (111–113). In addition, therapeutic interventions may affect the reading; specifically, sodium bicarbonate transiently increases the ETCO2 concentration since CO2 is generated as it buffers excess protons (114–116). The delay between sodium bicarbonate administration and the appearance of the increased ETCO2 is directly related to the cardiac output (116) and is consistently observed in ventilated patients, leading to the suggestion to use ETCO2 monitoring to confirm intraoperative central venous line insertion in children (115). Epinephrine, or other vasoconstrictive agents, improves CPP through intense vasoconstriction, which decreases global blood flow; thus, it is not surprising that a transient fall in ETCO2 is observed following epinephrine administration (117–119). The take-home message is to interpret ETCO2 cautiously within 1 to 2 minutes of resuscitation drug administration. An important limitation of using the initial ETCO2 as a prognostic indicator is that there is a significant difference between patients with asphyxial cardiac arrest and those with VF/pVT arrest; asphyxial arrest is characterized by a period of inadequate ventilation and accumulation of CO2 (105,108,120). The initial ETCO2 concentration in asphyxial arrest is often much higher than the concentration after the first few minutes of CPR.
In a relatively small IHCA study of patients with PEA (n = 50), the initial ETCO2 was associated with ROSC, but not with survival to discharge (121). In the absence of high-quality evidence, and in view of the caveats in interpretation noted, CoSTR did not suggest using capnography to guide resuscitation efforts. Because of differences in study methodology, emergency response systems, and resuscitation guidelines, the data do not support a specific threshold of ETCO2 to reliably predict failure of ROSC. It is noteworthy that clinical studies consistently observe a sudden rise of ETCO2 with the onset of ROSC, which often precedes recognition of ROSC by clinical signs (44,104). This observation suggests that continuous monitoring of ETCO2 could replace the need to check for ROSC during CPR in the ICU or other areas where capnography is available. The technology is available, but not consistently used, during CPR in the ICU, since patients are often removed from mechanical ventilation and manually ventilated without capnography being maintained. Even when available, a recent survey found that capnography was used less than half the time to monitor the effectiveness of CPR during IHCA, and it was uncommonly used to provide prognostic information (122).
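Because a sustained jump in ETCO2 often heralds ROSC, continuously recorded capnography lends itself to simple automated screening; the sketch below flags a breath that exceeds a rolling baseline by a fixed step. The 10 mmHg step and the six-breath baseline are illustrative assumptions, not validated thresholds, and any flag should prompt confirmation by pulse check or arterial waveform.
```python
from collections import deque
from statistics import mean

class Etco2RoscFlag:
    """Flag an abrupt rise in ETCO2 over a rolling baseline of recent breaths."""

    def __init__(self, rise_mmHg: float = 10.0, baseline_breaths: int = 6):
        self.rise_mmHg = rise_mmHg
        self.baseline = deque(maxlen=baseline_breaths)

    def update(self, etco2_mmHg: float) -> bool:
        """Return True when the new breath exceeds the rolling baseline mean by the step."""
        flag = bool(self.baseline) and (etco2_mmHg - mean(self.baseline)) >= self.rise_mmHg
        self.baseline.append(etco2_mmHg)
        return flag

# Simulated breath-by-breath values (mmHg): low during CPR, then an abrupt rise.
detector = Etco2RoscFlag()
for value in [14, 15, 13, 16, 15, 32]:
    if detector.update(value):
        print(f"ETCO2 rose to {value} mmHg -- assess for ROSC (pulse or arterial waveform)")
```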
Ultrasound (US) is increasingly being used for a wide range of indications as the technology becomes more widely available in the ICU (see Chapter 25) (123). The value of US in identifying a potentially reversible cause of CA, such as pericardial tamponade or pulmonary embolus, is uncertain since there are only case reports or biased case series. Echocardiography may be helpful to document the effectiveness of mechanical compression devices such as the LUCAS-2; in a recent case report, echocardiography documented ineffective cardiac compression with the device (124). One observational study suggested that using a focused echocardiographic examination to identify pseudo-PEA during resuscitation (n = 19) was helpful in improving ROSC by changing the therapeutic approach to the patients’ resuscitation, although it is notable that the comparison group was historical controls (125). An RCT evaluated 100 OHCA patients in a convenience sample randomized to standard care or care with the addition of echocardiography (126). The investigators identified pseudo-PEA in 78% of the 50 patients who underwent echocardiography during resuscitation, and all of the patients with ROSC (43%) were in this group; no patient with absent mechanical cardiac activity had ROSC. Despite the identification of pseudo-PEA, however, there was no difference in the overall rate of ROSC between the groups. A systematic review of focused echocardiography identified 12 trials encompassing 568 patients and found that the absence of cardiac activity is associated with a significantly lower, but not zero, likelihood of ROSC, leading to the CoSTR recommendation that echocardiography should not be used alone, but rather as an adjunct to clinical assessment and history, to determine whether ongoing resuscitation efforts should be discontinued (127). Based on these limited data, the CoSTR suggestion is that cardiac US may be considered as an additional diagnostic tool to identify potentially reversible causes of CA, provided that it does not interfere with standard ACLS interventions, including by increasing the need to interrupt chest compressions (4,5).
The acidosis gradient between arterial and central venous blood—largely determined by PCO2—reflects the effectiveness of blood flow during low-flow states such as CPR (128). Animal studies and case reports describe similar arteriovenous PCO2 differences at the level of an individual organ (e.g., the heart) and of the entire organism (129,130). These studies suggest that measurement and comparison of a venous blood gas (VBG) and arterial blood gas (ABG) may be predictive during CA. Furthermore, they suggest that the ABG does not accurately reflect the severity of tissue hypoxia, acidosis, or hypercarbia during CPR. Instead, an ABG documents the effectiveness of ventilation during CA, which typically reflects overventilation relative to pulmonary blood flow (i.e., hypocarbia). As expected, both animal and human data show a significant worsening of the ABG values for acidosis and oxygenation with ROSC, representing the washout of built-up tissue acids.
Antidysrhythmics, vasopressors, atropine, sodium bicarbonate, calcium, and magnesium are commonly given in some combination to patients who fail to achieve ROSC with BLS interventions. Although these agents have been used for decades, the value of, as well as the potential harm from, these agents remains a source of controversy and debate. What is clear is that these medications should be delivered as close to the heart as possible since they have to traverse the pulmonary circulation and be delivered to the systemic circulation to exert their pharmacologic effect. Thus, when a central venous line is available, it is the preferred route over a peripheral venous site. The intraosseous route is an acceptable alternative, but is uncommonly required in the ICU or in-hospital setting. If IV or IO access cannot be achieved, then several resuscitation medications can be administered via instillation through an endotracheal tube (ET) or tracheostomy. Lipid-soluble medications that can be delivered via the ET route include lidocaine, epinephrine, atropine, naloxone, and vasopressin. This route, however, results in much lower blood concentrations compared with IV administration. Optimal doses of medications delivered via the ET route are not known, but it is recommended to administer at least 2 to 2½ times the recommended IV doses. Animal data suggest that using standard IV epinephrine doses via the ET route may not achieve high enough plasma concentrations and, in fact, may not only be ineffective but may be harmful by reducing systemic vascular resistance through a predominant β2-adrenergic rather than α-adrenergic effect (131,132).
Epinephrine is the most commonly used medication during CPR. Its primary action in CA is to increase the CPP through systemic vasoconstriction, mediated by its α-adrenergic effects; the β-adrenergic effects are relatively unimportant. Indeed, even when complete β-adrenergic blockade is used in an animal cardiac arrest model, epinephrine is effective, whereas α-adrenergic blockade completely eliminates epinephrine’s effects (133). Epinephrine is used primarily during CA due to asystole or PEA. It is a second-line agent used for shock-refractory VF or pulseless VT. There are substantial animal data, and anecdotal clinical experience, showing that epinephrine elevates the CPP and, thus, myocardial blood flow, leading to its effectiveness in various models of CA.
But, as reviewed below, the clinical trials suggest that epinephrine may have adverse effects on the outcomes that matter most, that is, survival to hospital discharge with good neurologic outcome. Since epinephrine’s beneficial effect is through systemic vasoconstriction, there was some enthusiasm for using higher doses to produce a greater increase in coronary and cerebral perfusion pressure; this hypothesis was supported by animal data (134), but subsequent clinical trials of OHCA in adults (135–137) showed that high-dose epinephrine (HDE) increased ROSC and survival to hospital admission, but had no beneficial effect on survival to hospital discharge. In an RCT of HDE (0.1 mg/kg) versus standard-dose epinephrine (SDE, 0.01 mg/kg) in 68 children with IHCA who failed to respond to an initial standard dose, HDE did not result in more ROSC and was associated with a trend toward worse 24-hour survival (138). Since the last evidence review in 2010, there have been two RCTs; both were OHCA studies. One compared no drug to drug administration (139), and the other compared SDE to placebo (140). The former, conducted in a European EMS system, compared outcome in OHCA patients with IV access and any drug given versus no IV access (139); thus, the study was not specific to epinephrine, and the outcome may have been impacted by the difference between groups in the time spent obtaining vascular access rather than focusing on providing chest compressions. A subsequent post hoc analysis of just those patients who received epinephrine in this trial observed a higher likelihood of hospital admission, but a nonsignificant trend toward a lower rate of survival to hospital discharge and a lower functional outcome, as measured by the CPC score (141). The Australian RCT of epinephrine versus placebo in OHCA also observed a significantly higher likelihood of ROSC (23.5% vs. 8.4%) and a trend toward higher survival to hospital discharge, which was low in both groups (4.0% vs. 1.9%) (140). Combining the data from these two trials with data from previous publications, including a very large adjusted observational study in OHCA (142) and a smaller study (143), showed that the use of epinephrine was associated with a lower adjusted likelihood of survival to discharge, and of functional survival, when compared with no epinephrine in the OHCA population (144).
Within the hospital, there are limited data, but a recent analysis of more than 25,000 IHCA patients in the GWTG-R database provided an interesting observation that may impact outcome. The patients in this analysis had a nonshockable rhythm; patients were excluded if the arrest occurred in the emergency department, ICU, or operating room (145). The primary outcome was survival to hospital discharge. The median time to the first dose of epinephrine was 3 minutes, and there was a stepwise decrease in survival to discharge with increasing time to epinephrine administration when analyzed in 3-minute increments. There also was a significantly higher likelihood of ROSC, 24-hour survival, and neurologically intact survival—defined as a CPC score of 1 to 2 at discharge—with early epinephrine administration. An analysis of the GWTG-R database found similar outcomes in 1,558 children with a nonshockable rhythm (146).
Longer time to epinephrine administration was associated with a lower likelihood of survival to hospital discharge after adjusting for known confounders. When patients whose first dose was given more than 5 minutes after arrest were compared with those who received epinephrine within 5 minutes, survival to discharge was 21.0% versus 33.1% (p = 0.01) (146). These apparently contradictory in-hospital and out-of-hospital results need to be put into context. In-hospital providers typically arrive at the bedside very quickly, whereas OHCA is often associated with a delay of 10 minutes or more before vasoactive drugs can be given. Thus, when epinephrine can be given early in CA, as is usually possible in the ICU or elsewhere in the hospital, its administration appears to be beneficial, leading to the CoSTR recommendation to give epinephrine as soon as feasible for IHCA associated with a nonshockable rhythm in adults; no such recommendation was made for children (4,6). HDE may be considered in clinical conditions characterized by poor adrenergic responsiveness, such as severe septic shock, β-blocker overdose, neuraxial anesthesia, or systemic bupivacaine overdose. Epinephrine should be given intravascularly whenever possible since intratracheal doses are erratically absorbed, as noted above.

Vasopressin is an endogenous antidiuretic hormone that, when given at high doses, causes vasoconstriction by directly stimulating vascular smooth muscle V1 receptors (147). Vasopressin improves CPP but, unlike epinephrine, offers the theoretical advantage of cerebral vasodilation, possibly improving cerebral perfusion. Its lack of β1-adrenergic activity potentially avoids unnecessary increases in myocardial oxygen demand, which may reduce the likelihood of developing postresuscitation dysrhythmias. Additionally, unlike epinephrine, vasopressin reduces pulmonary vascular resistance, making it potentially useful in patients with pulmonary hypertension (148,149). Vasopressin has a half-life of 10 to 20 minutes, longer than the 3 to 5 minutes observed with epinephrine. Human studies in the early 2000s suggested that vasopressin achieved a rate of ROSC in CA comparable to that of epinephrine, with no additional benefit (150,151). A single RCT of low quality, because of a 37% rate of postrandomization exclusion, compared repeated doses of SDE with multiple doses of standard-dose vasopressin (40 IU) in the ED after OHCA (152). A total of 336 patients were randomized; there was no advantage of vasopressin in terms of the rate of ROSC or survival to hospital discharge. Data from animal models suggested that the combination of vasopressin and epinephrine might achieve better CPP and myocardial blood flow, leading to a higher rate of ROSC and survival. Several clinical trials, however, failed to show that ROSC, survival to hospital admission after OHCA, or survival to hospital discharge was improved by the combined administration of these two agents (151,153–155). One RCT evaluated the combination of vasopressin (20 IU) with epinephrine (1 mg) every 3 minutes of ongoing CPR in 268 patients with IHCA (156). Additionally, patients in the vasopressin group received 40 mg of methylprednisolone and, if in postresuscitation shock, stress-dose hydrocortisone for 7 days. Unlike the data on vasopressin during OHCA, this IHCA study reported significantly higher rates of ROSC (83.9% vs. 65.9%) and of survival to hospital discharge with a CPC score of 1 or 2 (13.9% vs. 5.1%) compared with those treated with epinephrine only (156).
These limited data suggest that IHCA differs from OHCA: just as early administration of epinephrine is associated with improved outcome, the addition of vasopressin and steroids may be beneficial when used for IHCA. There are few pediatric data on the use of vasopressin. An analysis of GWTG-R data for 1,293 IHCA children found that vasopressin was used infrequently (5% of cases), most often in the ICU during prolonged arrest (157). Its use was associated with a lower rate of ROSC, but no difference in the rate of ultimate survival to hospital discharge compared with children who did not receive vasopressin. Animal data also do not support the use of vasopressin in pediatric asphyxial CA, where the rate of ROSC was higher in animals that received epinephrine alone than in those given vasopressin alone or vasopressin together with epinephrine (158). In a pediatric model of VF arrest, however, vasopressin either alone or with epinephrine produced better myocardial and cerebral blood flow than epinephrine alone (159). On review of the available evidence, CoSTR suggests against the use of vasopressin alone in adults and children (4,6). They also suggest against adding vasopressin to SDE during CA. For IHCA, there are too few data to make a recommendation for or against the combined use of vasopressin, epinephrine, and corticosteroids (4). There are no data supporting the combination therapy for OHCA.

Extensive clinical data emphasize the early administration of shocks in patients with VF/pVT, which has been a major focus of efforts to increase access to AED devices in public locations. Within the hospital, the prevalence of an initial shockable rhythm in IHCA is decreasing. When patients have shock-refractory VF/pVT, antidysrhythmic drugs continue to play an important role. Amiodarone remains the main drug used in this setting, but recent pediatric data resulted in a change regarding the use of lidocaine.

Amiodarone is a complex pharmacologic agent with effects on the sodium, potassium, and calcium channels of myocardial cells, as well as α- and β-adrenergic blocking properties. Since the 2005 guidelines, amiodarone has been the preferred agent for both atrial and ventricular dysrhythmias, especially in the presence of impaired cardiac function (160). Amiodarone is recommended for narrow-complex tachycardias that originate from a reentry mechanism (reentry SVT); ectopic atrial focus; control of hemodynamically stable VT, polymorphic VT with a normal QT interval, or wide-complex tachycardia of uncertain origin; and control of rapid ventricular rate due to accessory pathway conduction in pre-excited atrial arrhythmias with AV nodal blockade in patients with preserved or impaired ventricular function (161). Limited data form the basis for the use of amiodarone over lidocaine in OHCA: in a single RCT, 504 shock-refractory patients were randomized to amiodarone (300 mg) or placebo; the rates of ROSC and survival to hospital admission were significantly improved, but there was no impact on survival to hospital discharge (162). A new RCT intending to randomize 3,000 patients is under way, but the results are not yet available (163). There are no adult clinical trials of amiodarone for IHCA. In children, an analysis of the GWTG-R data on 889 IHCAs with VF/pVT found that 19% received amiodarone, 33% received lidocaine, and 10% received both (164).
Interestingly, lidocaine was associated with a higher rate of ROSC and 24-hour survival, but not with survival to hospital discharge; amiodarone was not associated with a higher rate of ROSC or 24-hour survival. Because this was a retrospective analysis of IHCA, it cannot address potential bias related to the selection of one agent over the other. Based on the available data, amiodarone is suggested in adults with shock-refractory VF/pVT to improve the rate of ROSC (4); in children, lidocaine is now recommended as an alternative agent to amiodarone, a change from the 2010 guidelines (6). The major adverse effects of amiodarone are hypotension and bradycardia, which can be minimized by slowing the rate of drug infusion. In addition, amiodarone can prolong the QT interval, and its use should therefore be carefully considered when the patient is receiving other drugs that prolong the QT interval.

Lidocaine is a sodium channel blocker that decreases ectopic electrical myocardial activity by raising the electrical stimulation threshold of the ventricle during diastole. In ischemic myocardial tissue after infarction, it may suppress reentrant dysrhythmias such as VT or VF. There is good evidence that amiodarone is superior to lidocaine in terminating VT (165,166); hence, lidocaine is not considered a first-line agent. As noted, the 2015 Guidelines suggest that lidocaine is an acceptable alternative to amiodarone (4,5).

Procainamide suppresses both atrial and ventricular dysrhythmias through mechanisms of action similar to those of lidocaine. It may suppress an ectopic irritable focus and block reentrant dysrhythmias by slowing electrical conduction. Procainamide may be superior to lidocaine in terminating VT (167). Procainamide is used in the management of PVCs, VT, and persistent VF, but amiodarone is usually preferred. Procainamide can have profound myocardial depressant effects, especially after myocardial infarction; therefore, continuous ECG and arterial blood pressure monitoring are mandatory. End points limiting therapy include hypotension and a greater than 50% widening of the QRS complex.

Nifekalant is a class III antidysrhythmic agent in the Vaughan Williams classification that selectively blocks the rapid component of the delayed rectifier K+ current. It is not available in the United States, but has been studied and used in Japan for the treatment of dysrhythmias associated with CA. Unlike amiodarone, it has a short half-life and is easily soluble for intravenous administration (168). Unfortunately, it tends to prolong the QT interval, creating a risk of torsades de pointes. When nifekalant was compared with lidocaine in adults with shock-refractory OHCA, 24-hour survival was significantly higher with nifekalant, but the rate of favorable neurologic outcome at 30 days was similar. Within the hospital, a relatively small study (55 patients) found that nifekalant terminated shock-resistant VF/pVT more effectively than lidocaine (169). A systematic review in 2013 concluded that amiodarone, nifekalant, and lidocaine were effective during initial resuscitation as assessed by ROSC and survival to hospital admission, but there is no evidence that these agents change the rate of ultimate hospital survival (170). In view of the relatively limited data, CoSTR suggested that lidocaine or nifekalant is an alternative to amiodarone in adults with shock-refractory VF/pVT (4,5).
They also noted that the early trials of amiodarone may have been biased by the use of polysorbate solvent in the placebo group, since this agent is known to reduce blood pressure.

Metabolic and respiratory acidosis develops during CA as anaerobic metabolism generates lactic acid. Inadequate ventilation, along with reduced pulmonary blood flow during CPR, leads to inadequate pulmonary delivery of carbon dioxide for elimination. Thus, CA patients have a combined respiratory and metabolic acidosis at the tissue level. Untreated acidosis suppresses spontaneous cardiac activity, decreases the electrical threshold required for the onset of VF, decreases ventricular contractile force, and decreases cardiac responsiveness to catecholamines, such as epinephrine. An elevated PCO2 tension is probably more detrimental to myocardial function and catecholamine responsiveness than metabolic acidosis. CO2 readily diffuses across myocardial cell membranes, causing intracellular acidosis; likewise, cerebrospinal fluid acidosis may occur secondary to the diffusion of CO2 across the blood–brain barrier, producing post–arrest cerebral acidosis. As noted in the section on ETCO2 monitoring, the action of sodium bicarbonate in buffering excess protons transiently increases CO2 production and thus the partial pressure of CO2; sodium bicarbonate administration without sufficient ventilation and circulation to remove the CO2 that it produces is more detrimental than helpful in animal models of post-hypoxic–ischemic acidosis (171,172). Although acidosis is presumed to be harmful and giving sodium bicarbonate to correct it seems rational, clinical studies have failed to show any beneficial effect, even in prolonged cardiac arrest (173). Since 2005, the Guidelines have not recommended the routine use of sodium bicarbonate except in special circumstances, such as the treatment of hyperkalemia, to reverse the effects of hypercalcemia, and in the treatment of tricyclic antidepressant overdose (160,174). A recent analysis of patients with IHCA and hyperkalemia, defined as a serum potassium over 6.5 mEq/L measured during CPR, found that sodium bicarbonate administration was significantly associated with ROSC if the potassium concentration was below 7.9 mEq/L (175). Administration of calcium and sodium bicarbonate together was associated with a higher ROSC rate if the serum potassium was below 9.4 mEq/L. The prevalence of hyperkalemia in IHCA patients over the 6-year study period was 12% (109 patients) (175). In this population, survival to discharge was low (3.7%), and 92.7% of the patients had PEA or asystole. This study provides some support for bicarbonate in the setting of hyperkalemia, but an accompanying editorial notes that the low rate of survival is not supportive and that insulin–glucose solutions are more effective in lowering serum potassium than bicarbonate and calcium, which have little effect on the serum potassium concentration (176,177). In pediatric IHCA, a recent analysis of the GWTG-R database over 10 years (2000–2010) examined the association between sodium bicarbonate use and patient outcome in 3,719 events (178). Despite the Guideline recommendation against its routine use, this agent was given in 68% of the IHCA events, although the rate of use decreased slightly from the first 5 years to the last 5 years.
After adjusting for known confounding factors, sodium bicarbonate use was associated with significantly decreased 24-hour survival and decreased survival to discharge. When the analysis was limited to children with metabolic/electrolyte abnormalities, hyperkalemia, or toxicologic diagnoses, sodium bicarbonate use was not associated with worse outcomes. CoSTR did not update its 2015 evidence review or recommendations on the use of sodium bicarbonate.

Atropine was used during CPR for its vagolytic actions, but data showed no benefit, leading to its elimination as a routine agent in adult or pediatric resuscitation. Atropine is used to manage hemodynamically significant bradycardia and has been used to reduce the risk of vagal-induced bradycardia during pediatric intubation. The only new data regarding atropine debunked a long-taught recommendation to use a minimum dose in pediatric patients, because small doses in infants were thought to produce paradoxical bradycardia. A study in 60 unpremedicated healthy hospitalized infants used an atropine dose of 0.005 mg/kg while continuously monitoring the ECG; no bradycardia was observed (179). A prospective, propensity-adjusted observational study in 264 neonates and children undergoing emergency intubation noted significantly lower ICU mortality in those children who received atropine (180). Another observational study of atropine use in 327 children undergoing emergency intubation reported a lower rate of dysrhythmias, particularly bradycardia (181). The pediatric CoSTR review could not make a recommendation regarding atropine's use because the available studies are biased, but it did recommend weight-based atropine dosing in infants with no lower dose limit (7). The adult CoSTR review in 2015 did not evaluate the use of atropine.

Calcium plays a critical role in myocardial contractility and action potential generation, but studies have shown no benefit of calcium administration in CA (182,183); therefore, calcium is not recommended for OHCA or IHCA (160). As reviewed above, it may be considered along with sodium bicarbonate during CA associated with acute hyperkalemia (175). It is also indicated when hypocalcemia or calcium channel blocker toxicity or overdose is suspected.

Magnesium is recommended for the treatment of torsades de pointes VT with or without cardiac arrest. Although magnesium is a calcium channel blocker and theoretically may protect ischemic cells from calcium overload, there are no data supporting its routine use in cardiac arrest. The data are limited by the small numbers in the available clinical trials, as reviewed in the CoSTR statement (4,184). Based on the low-quality evidence and lack of documented benefit, CoSTR recommended against the routine use of magnesium in adults (4,5). Rapid administration may result in hypotension and bradycardia. Magnesium should also be used cautiously in patients with renal failure.

Tracheal intubation is indicated when the rescuer is unable to adequately ventilate or oxygenate the arrested or unconscious patient with bag-mask ventilation, or if prolonged ventilation is required and airway protective reflexes are absent in the patient with a perfusing rhythm. A properly placed endotracheal tube remains the gold standard method for securing the airway, although supraglottic airways, such as an LMA or Combitube, can be effective.
Since there are no unique issues with in-hospital airway management in CA, the reader is referred to Chapter 39 for more details on this topic. It is important to use an ETCO2 detector, in addition to careful auscultation and observation of symmetric chest rise, following intubation. ILCOR recommends using capnography to confirm tube placement and continuous monitoring during CPR to ensure that the tracheal tube remains appropriately positioned (4,5).

Successful ROSC is only the first step toward the goal of complete recovery from cardiac arrest. The complex pathophysiologic processes that follow the whole-body ischemia and subsequent reperfusion that characterize ROSC are collectively called the post–cardiac arrest syndrome (14). Depending upon the cause of the arrest and the severity of the post–cardiac arrest syndrome, many patients require complex multiple organ support (185). The treatment received during this postresuscitation period can significantly influence the overall outcome, particularly the quality of neurologic recovery. The effectiveness of this care in the ICU likely explains some of the observed variation in survival to hospital discharge between different centers (11,20,186,187). The postresuscitation phase starts at the location where ROSC is achieved; once stabilized, the patient should be transferred to the most appropriate high-care area for continued diagnosis, monitoring, and treatment. Transport should include continuous monitoring of pulse oximetry and, in the intubated patient, ETCO2. Significant improvements in outcome have occurred over the last 20 years, so that as many as 40% to 50% of comatose patients admitted to ICUs after cardiac arrest survive to hospital discharge, depending on the cause of arrest, the EMS and hospital system, and the quality of care. Of the patients who survive to hospital discharge, most are reported to have good neurologic outcome (CPC 1 or 2), although many have subtle cognitive impairment (188–190). The therapeutic approach to the postarrest patient includes assuring a secure airway, providing appropriate ventilation and oxygenation, rapidly identifying and treating the cause of the arrest, supporting the circulation, protecting the brain from further injury, and monitoring and supporting the function of other organs. These topics are detailed below, but the reader should recognize that this is a rapidly evolving field with little high-quality evidence for many of the postarrest therapies, such as whom to cool and, if cooled, to what core temperature and for how long.

Following arrest-induced global hypoxia–ischemia with ROSC, there is often variable reperfusion of all organs, depending on the cause of arrest and the patient's underlying cardiac function. This reperfusion often stimulates a host of inflammatory mediators, resulting in a clinical picture that mimics the systemic inflammatory response syndrome (SIRS) (191,192). In addition, the arrest period produces myocardial injury of varying severity, depending on the duration of arrest and the effectiveness of CPR in maintaining coronary perfusion (193–195). Drugs given during the arrest, such as epinephrine, may add to the myocardial injury. The major components of the post–cardiac arrest syndrome are post–cardiac arrest brain injury, post–cardiac arrest myocardial dysfunction, the systemic ischemia–reperfusion response, and the persistent pathology that precipitated the arrest. The severity of this syndrome varies with the duration and cause of the CA; indeed, it may not occur at all if the CA is brief.
Post–cardiac arrest brain injury may manifest as coma, seizures, myoclonus, varying degrees of neurocognitive dysfunction, and/or brain death. Among patients surviving to ICU admission after OHCA but subsequently dying during hospitalization, brain injury is the cause of death in approximately two-thirds; the corresponding proportion after IHCA is approximately 25% (196,197). Cardiovascular failure accounts for most deaths in the first 3 days in the ICU after OHCA, while brain injury accounts for most of the later deaths (197). Withdrawal of life-sustaining therapy is the most frequent cause of death—seen in approximately 50%—in patients with a poor prognosis (198), emphasizing the importance of having reliable prognostic indicators (see Prognosis, below). Post–cardiac arrest brain injury may be exacerbated by microcirculatory failure, impaired autoregulation, hypotension, hypercarbia, hypoxemia, hyperoxemia, pyrexia, hypoglycemia, hyperglycemia, and seizures. Significant myocardial dysfunction is common after cardiac arrest, but typically begins to recover 2 to 3 days post-ROSC, although full recovery may take significantly longer (195). The whole-body ischemia–reperfusion of cardiac arrest activates immune and coagulation pathways, contributing to microcirculatory changes that lead to multiple organ failure and an increased risk of infection. Thus, the post–cardiac arrest syndrome has many features in common with sepsis, including increased vascular permeability leading to intravascular volume depletion, which may be exacerbated by inappropriate vasodilation and maldistribution of blood flow, endothelial injury, and abnormalities of the microcirculation (192,195,199,200).

Patients with a brief cardiac arrest who respond immediately to appropriate treatment may achieve an immediate return of normal cerebral function; they do not require tracheal intubation and ventilation, but should be given oxygen via a facemask if their SpO2 is less than 94%. In patients with diminished consciousness or coma, there is little controversy about the importance of securing the airway and assuring appropriate oxygenation and ventilation. However, maximizing PaO2 following an ischemic insult is not helpful and appears to be harmful. The harm is thought to result from cellular exposure to high PaO2 following an ischemic injury, resulting in greater oxygen radical–mediated organ injury (201). Several animal studies indicate that hyperoxemia early after ROSC causes oxidative stress and harms postischemic neurons (202). A metaanalysis of 14 observational studies noted significant heterogeneity across studies, with some studies showing that hyperoxemia was associated with a worse neurologic outcome and others failing to show this association (203). A recent retrospective analysis from an ICU database of 184 patients who survived more than 24 hours following ROSC found a significant association between severe hyperoxemia (PaO2 >300 mmHg) and increased mortality (204). Limited pediatric data did not find an association between hyperoxemia or hypoxemia and outcome (205,206). Note that an FiO2 of 1.0 is still recommended during CPR in adults and children (4–6). Provided that the hemoglobin concentration is adequate, once stable ROSC is achieved, the FiO2 should be titrated to achieve an oxygen saturation of 94% to 99%, which provides adequate arterial oxygen content and tissue oxygen delivery while assuring that excessive PaO2 is avoided.
If the patient's oxygen saturation is 100%, the provider does not know whether the PaO2 is 110 or 500 mmHg. Since either hypoxemia or hyperoxemia increases the likelihood of a further cardiac arrest and may contribute to secondary brain injury, both should be avoided, guided by continuous oxygen saturation monitoring.

Hypocarbia causes cerebral vasoconstriction leading to decreased cerebral blood flow (207). After cardiac arrest, hypocapnia induced by hyperventilation causes cerebral ischemia (208). The mechanism may be related to hypocarbia-induced cerebral vasoconstriction, but it is also likely that the increased positive pressure ventilation used to achieve hypocarbia elevates intrathoracic pressure, thus reducing venous return and CO. Observational studies using large cardiac arrest registries document an association between in-ICU hypocapnia and poor neurologic outcome (209,210). Hypercapnia or hypocapnia was also associated with worse outcome based on analysis of a pediatric IHCA registry (206). Conversely, two observational studies documented an association between mild hypercapnia and better neurologic outcome among post–cardiac arrest patients in the ICU (210,211). Until prospective data are available, it is reasonable to adjust ventilation to achieve normocarbia and to monitor this using ETCO2 and ABG analysis. Although there are no specific studies of protective lung ventilation strategies in post–cardiac arrest patients, given that these patients often develop a marked inflammatory response, it is rational to apply protective lung ventilation using tidal volumes of 6 to 8 mL/kg ideal body weight and a positive end-expiratory pressure of 4 to 8 cm H2O (212); a worked example appears below. To facilitate ventilation, a gastric tube is often needed to decompress the stomach, which may be distended from mouth-to-mouth or bag-mask ventilation.

Sedation with benzodiazepines or propofol plus opioids is often used to facilitate ventilation and to manage shivering if therapeutic temperature management is used, but the effects of different agents on brain recovery are not known. It seems reasonable to use agents that also have anticonvulsant effects because of the high risk of seizures. Adequate doses of sedatives may also be beneficial by reducing oxygen demand. Intermittent doses of a neuromuscular blocking agent (NMBA) may be required, particularly if targeted temperature management (TTM) is used (see below). Although there is concern about the potential adverse effects of continuous NMBA infusions, limited evidence suggests that short-term infusion (≤48 hours) of short-acting NMBAs, given to reduce patient-ventilator dyssynchrony and the risk of barotrauma in patients with acute respiratory distress syndrome (ARDS), is not associated with an increased risk of ICU-acquired weakness and may improve outcome in such patients (213). Furthermore, there are some data suggesting that continuous neuromuscular blockade is associated with decreased mortality in post–cardiac arrest patients (214); however, infusions of NMBAs may mask seizures, and they eliminate the ability to perform neurologic checks. Since status epilepticus (SE), including nonconvulsive status epilepticus (NCSE), may occur in postarrest patients (215–218), continuous electroencephalography (EEG) is recommended to detect seizures in these patients, especially when neuromuscular blockade is used (219–221).
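As a purely illustrative aid to the ventilation targets described above, the following sketch computes a 6 to 8 mL/kg tidal volume range from ideal body weight. The Devine formula used to estimate ideal body weight is our assumption; the text does not specify how ideal body weight should be calculated, and the numbers are not a substitute for clinical judgment.

```python
# Minimal sketch of the protective lung ventilation targets described above
# (tidal volume 6-8 mL/kg of ideal body weight, PEEP 4-8 cm H2O).
# The Devine formula for ideal body weight is an assumption; the text does
# not specify how ideal body weight should be estimated. Illustration only.

def ideal_body_weight_kg(height_cm: float, male: bool) -> float:
    """Devine estimate: 50 kg (men) or 45.5 kg (women) + 2.3 kg per inch over 5 ft."""
    inches_over_5ft = max(height_cm / 2.54 - 60.0, 0.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def tidal_volume_range_ml(height_cm: float, male: bool) -> tuple[float, float]:
    """Return the 6-8 mL/kg ideal-body-weight tidal volume range."""
    ibw = ideal_body_weight_kg(height_cm, male)
    return 6.0 * ibw, 8.0 * ibw

if __name__ == "__main__":
    low, high = tidal_volume_range_ml(height_cm=175, male=True)  # IBW ~70 kg
    print(f"Target tidal volume ~{low:.0f}-{high:.0f} mL, with PEEP 4-8 cm H2O")
```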
Because acute coronary disease is a significant cause of OHCA, once ROSC is established and the patient is stabilized, early identification of coronary obstruction is critical to reverse ongoing myocardial ischemia. Acute coronary syndrome (ACS) is a frequent cause of OHCA as documented in a recent metaanalysis noting the prevalence of an acute coronary artery lesion ranged from 59% to 71% in OHCA patients without an obvious noncardiac etiology (222) (see Chapter 94 for details on ACS and myocardial infarction). Many observational studies showed that emergent cardiac catheterization, including early PCI, is feasible in patients with ROSC after cardiac arrest (223,224). Invasive management—that is, early coronary angiography followed by immediate PCI if deemed necessary—of these patients, particularly those with prolonged resuscitation and nonspecific ECG changes, is controversial because of the lack of high-quality evidence and significant demands on hospital and EMS system resources, including transfer of patients to PCI centers. In patients with ST segment elevation (STE) or left bundle branch block on the post-ROSC ECG, more than 80% will have an acute coronary lesion (225). Although there are no RCTs, many observational studies reported increased survival and neurologically favorable outcome with early invasive coronary artery management in STE patients (226). Immediate angiography and PCI, when indicated, should be performed in resuscitated OHCA patients whose initial ECG shows ST elevation, even if they remain comatose and ventilated (227,228). Observational studies also indicate that optimal outcomes after OHCA are achieved with a combination of TTM and PCI, which ideally are included in a standardized post–cardiac arrest protocol as part of an overall strategy to improve neurologically intact survival (187,229). In contrast to the usual presentation of ACS in non–cardiac arrest patients, the standard tools to assess coronary ischemia in cardiac arrest patients are less accurate. Several large observational case series showed that the absence of STE may also be associated with ACS in patients with ROSC following OHCA (230,231). There are conflicting data from observational studies in these non-STE patients regarding the potential benefit of evaluation by emergent cardiac catheterization (230,232,233). CoSTR suggests it is reasonable to discuss and consider emergent cardiac catheterization after ROSC in patients with the highest risk of a coronary cause for their cardiac arrest. Factors such as patient age, duration of CPR, hemodynamic instability, presenting cardiac rhythm, neurologic status upon hospital arrival, and perceived likelihood of a cardiac etiology for the CA can be weighed in making the decision to undertake the intervention in the acute phase or to delay it until later in the hospital stay. Postresuscitation myocardial dysfunction causes hemodynamic instability, which may manifest as low cardiac index, hypotension, and dysrhythmias (195,234,235). To detect the degree of myocardial dysfunction and adequacy of preload, early echocardiography is recommended. Postresuscitation myocardial dysfunction often requires inotropic support, at least transiently. In addition, the systemic inflammatory response that occurs frequently in post-cardiac arrest patients may cause vasoplegia and severe vasodilation, exacerbating hypotension caused by reduced cardiac function (195,235). 
The optimal agents to use in this setting should balance the adverse effects of increased myocardial oxygen demand and raised afterload in a poorly contracting ventricle against the need to maintain adequate coronary and cerebral perfusion pressure. Currently, norepinephrine, with or without dobutamine, and fluid are usually the most effective treatments for poor cardiac function with or without hypotension. When inappropriate vasodilation is documented or suspected, the controlled infusion of relatively large volumes of fluid is generally well tolerated by patients with post–cardiac arrest syndrome. Treatment may be guided by target systolic and mean arterial blood pressures, heart rate, urine output, the rate of plasma lactate clearance, and central venous oxygen saturation. Traditionally, the targeted mean arterial pressure (MAP) is 65 mmHg with a minimum central venous oxygen saturation of about 70%, but recent data suggest that a higher MAP may produce better outcomes. A recent analysis of 920 patients managed with TTM found an inverse relationship between MAP and mortality (234). A prospective observational study that combined simultaneous measurement of cerebral oxygen saturation (CSO2) by near-infrared spectroscopy with hemodynamic data observed a strong linear relationship between MAP and CSO2 (236); similarly, there was a strong linear relationship between central venous O2 saturation and CSO2 (236). The optimal MAP target is unknown, but excessive vasoconstriction induced by vasopressors is likely to impair cardiac output and cerebral blood flow, especially in the setting of poor ventricular contractile function. This is supported by the observation that a MAP greater than 100 mmHg was associated with a fall in CSO2 (236). Maintaining a central venous O2 saturation of 67% to 72% was associated with maximal survival in post-CA patients, suggesting that this can be used as a surrogate marker of adequate tissue oxygen delivery (236). In addition, although a restrictive transfusion policy is often used in the ICU, hemoglobin concentrations less than 10 g/dL were associated with lower CSO2 and central venous O2 saturation in postarrest patients, suggesting that a one-size-fits-all approach to transfusion may not be appropriate in all patients (237,238). Serial echocardiography may also be helpful to assess cardiac function, especially in hemodynamically unstable patients. In the ICU, an arterial line for continuous blood pressure monitoring is essential; cardiac output monitoring may help guide treatment in hemodynamically unstable patients, but there is no evidence that its use affects outcome. The intra-aortic balloon pump (IABP) was a popular means of supporting the circulation in patients with cardiogenic shock, but the IABP-SHOCK II trial failed to show that use of the IABP reduced 30-day mortality in patients with myocardial infarction and cardiogenic shock (239). In the absence of definitive data to guide specific hemodynamic targets, it is reasonable to target a MAP that results in an adequate urine output (1 mL/kg/hr) and normal or decreasing plasma lactate values, with a central or mixed venous oxygen saturation around 70%. Since pre-existing hypertension may shift the cerebral autoregulatory curve to the right, a higher MAP target may be appropriate in this population (240).
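To make the link between hemoglobin, saturation, and tissue oxygen delivery explicit, the short sketch below applies the standard arterial oxygen content and oxygen delivery formulas. These formulas are general physiology rather than something taken from this chapter, and the input values are arbitrary illustrative numbers.

```python
# Worked example of the standard oxygen content and delivery formulas
# (CaO2 = 1.34 * Hb * SaO2 + 0.003 * PaO2; DO2 = CO * CaO2 * 10).
# These are textbook physiology, not values or formulas taken from this
# chapter; the inputs below are arbitrary illustrative numbers.

def arterial_o2_content(hb_g_dl: float, sao2_fraction: float, pao2_mmhg: float) -> float:
    """CaO2 in mL of O2 per dL of blood."""
    return 1.34 * hb_g_dl * sao2_fraction + 0.003 * pao2_mmhg

def o2_delivery(cardiac_output_l_min: float, cao2_ml_dl: float) -> float:
    """DO2 in mL of O2 per minute (the factor of 10 converts dL to L)."""
    return cardiac_output_l_min * cao2_ml_dl * 10.0

if __name__ == "__main__":
    for hb in (8.0, 10.0, 12.0):  # g/dL
        cao2 = arterial_o2_content(hb, sao2_fraction=0.96, pao2_mmhg=90.0)
        print(f"Hb {hb} g/dL: CaO2 {cao2:.1f} mL/dL, "
              f"DO2 {o2_delivery(5.0, cao2):.0f} mL/min at a cardiac output of 5 L/min")
```

The example simply illustrates why a falling hemoglobin or saturation reduces oxygen delivery even when blood pressure targets are met.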
During mild induced hypothermia, the normal physiologic response is bradycardia. Recent retrospective studies observed that bradycardia during TTM is associated with a good outcome (241,242). As long as blood pressure, lactate, and urine output are adequate, a bradycardia of 40 beats/min or less may be left untreated; the bradycardia is likely well tolerated during TTM because oxygen delivery requirements are reduced. Immediately after a CA there is often a period of hyperkalemia. Subsequent endogenous catecholamine release and correction of metabolic and respiratory acidosis promote intracellular movement of potassium, often causing hypokalemia. Since hypokalemia may predispose to ventricular dysrhythmias, potassium should be given to maintain the serum potassium concentration between 4.0 and 4.5 mmol/L (219).

Cardiac causes of OHCA have been extensively studied in the last few decades; conversely, little is known about noncardiac causes. The UK Guidelines suggest that early identification of a respiratory or neurologic cause can be achieved by performing a brain and chest CT scan at hospital admission, before or after coronary angiography (219). If there are no prearrest signs or symptoms suggesting a neurologic or respiratory cause (e.g., headache, seizures, or neurologic deficits for neurologic causes, and shortness of breath or documented hypoxemia in patients with a known and worsening respiratory disease), or if there is clinical or ECG evidence of myocardial ischemia, undertake coronary angiography first, followed by a CT scan if no causative coronary artery lesion is found (219,243,244).

Animal studies show that immediately after ROSC there is a period of hyperemia followed by multifocal cerebral hypoperfusion (245). The pattern of blood flow abnormalities is also determined by the type of arrest, with significant differences seen in models of VF versus asphyxial arrest (246). The initial hyperemic phase is followed by up to 72 hours of cerebral hypoperfusion while the cerebral metabolic rate of oxygen gradually recovers (247,248). After asphyxial cardiac arrest, brain edema may occur transiently after ROSC, but few studies have documented whether this is associated with clinically relevant increases in intracranial pressure (ICP) (249). Increased ICP was documented after near drowning in children and, when present, was associated with high mortality (250). It is likely that increased ICP following asphyxial arrest is a marker of more severe cellular injury; there are no data showing that aggressive ICP therapy improves outcome in this setting. As noted above, autoregulation of cerebral blood flow is often impaired (absent or right-shifted) for some time after cardiac arrest, which means that cerebral blood flow may vary with cerebral perfusion pressure instead of being linked to neuronal activity (i.e., metabolic activity) (240). In one study, autoregulation was disturbed in 35% of post–cardiac arrest patients, and the majority of these had been hypertensive before their cardiac arrest (251).

Although it was common practice to sedate and ventilate patients for at least 24 hours after ROSC, there are no high-level data to support a specified period of ventilation and sedation with or without neuromuscular blockade after cardiac arrest. Patients need to be sedated adequately during treatment with TTM; therefore, the duration of sedation and ventilation is influenced by the duration of this treatment. A combination of opioids and sedative-hypnotics is usually used.
Short-acting drugs—propofol, alfentanil, remifentanil—enable more reliable and earlier neurologic assessment and prognostication (252). Adequate sedation reduces oxygen consumption, which may be beneficial, but sedation may also increase the need for vasoactive drug infusions to overcome hypotension resulting from suppression of the patient's endogenous catecholamine stress response (252).

Seizures are common after CA, occurring in approximately one-third of patients who remain comatose after ROSC. Myoclonus is most common, occurring in 18% to 25%; the remainder have focal or generalized tonic-clonic seizures or a combination of seizure types (253,254). Clinical seizure activity, including myoclonus, may or may not be of epileptic origin. Other motor manifestations can be mistaken for seizures; additionally, there are several types of myoclonus, the majority being nonepileptic (255). In comatose cardiac arrest patients, EEG commonly detects epileptiform activity: post-anoxic SE was detected in 23% to 31% of patients, and epileptic activity was seen in nearly 50% of patients using continuous EEG monitoring (217,256,257). Seizures increase the cerebral metabolic rate (258) and have the potential to exacerbate brain injury caused by cardiac arrest. Despite this, there are no data showing that prophylactic anticonvulsive therapy improves outcome; furthermore, it is unclear whether systematic detection and treatment of EEG epileptic activity improves patient outcome. Routine seizure prophylaxis in post–cardiac arrest patients is not recommended because of the risk of adverse effects and the poor response to antiepileptic drugs among patients with clinical and electrographic seizures (4,5,219). If seizures are documented or suspected, treatment may include sodium valproate, levetiracetam, phenytoin, benzodiazepines, propofol, or a barbiturate, with no documented advantage of one agent over another; phenytoin therapy is complicated by variable binding to albumin, leading to the need to monitor free drug levels. Myoclonus can be particularly difficult to treat. Phenytoin is often ineffective, whereas propofol is often effective in suppressing post-anoxic myoclonus (259). Clonazepam, sodium valproate, and levetiracetam are also antimyoclonic drugs that may be effective in post-anoxic myoclonus (260). Myoclonus and EEG-documented seizure activity, including SE, are associated with a poor prognosis, but individual patients may survive with good outcome, limiting their utility as single outcome predictors (217,218,254,256). Prolonged observation with repeated examination may be necessary after treatment of seizures with sedatives, since sedatives decrease the reliability of the clinical examination (261). Patients with EEG-documented SE may or may not have clinically detectable seizure manifestations (i.e., NCSE), and seizures may be masked by sedation. Because of this, one should consider continuous EEG monitoring, especially if NMBAs are required or the patient is diagnosed with SE, to monitor the effectiveness of therapy.

There is a strong association between high blood glucose concentrations after resuscitation from cardiac arrest and poor neurologic outcome (262–264). The exact mechanism is not known, but may reflect a combination of decreased utilization by the brain and other injured organs, and increased epinephrine and endogenous glucocorticoids stimulating gluconeogenesis after the severe stress of a CA.
Whether the hyperglycemia is a marker of organ injury or contributes directly to organ injury continues to be debated. Animal and human data had suggested that tight glucose control was beneficial in ICU patients; however, a large RCT in general ICU patients comparing intensive glucose control (4.5–6.0 mmol/L) with conventional glucose control (10 mmol/L or less) reported increased 90-day mortality with intensive control (265,266). Severe hypoglycemia was more common in the tight glucose control group and is associated with increased mortality in critically ill patients (266); comatose patients are at particular risk from unrecognized hypoglycemia. Irrespective of the target range, variability in glucose values is associated with increased mortality (267). Compared with normothermia, mild induced hypothermia is associated with higher blood glucose values, increased blood glucose variability, and greater insulin requirements (268). Increased blood glucose variability is also associated with increased mortality and unfavorable neurologic outcome after cardiac arrest (268). Since efforts to achieve tight glucose control may increase the risk of further brain injury from hypoglycemia, the current ILCOR recommendation is to maintain the blood glucose concentration at 10 mmol/L (180 mg/dL) or less and to avoid hypoglycemia (219).

A period of hyperthermia (hyperpyrexia) is common in the first 48 hours after cardiac arrest in adults (269) and children (270). Several studies document an association between post–cardiac arrest fever and poor outcomes (271). The development of hyperthermia after a period of mild induced hypothermia (rebound hyperthermia) is associated with increased mortality and worse neurologic outcome (272,273). There are no RCTs comparing treatment of pyrexia (defined as ≥37.6°C) with no temperature control in patients after CA. An elevated post-ROSC temperature may represent a greater SIRS response as well as dysregulation of temperature control due to greater brain injury. In recent RCTs of hypothermia in adults (274) and children (275), the control group was maintained at or just below normal temperature and hyperthermia was avoided; the lack of benefit from hypothermia in these trials may therefore reflect the beneficial effects of preventing hyperthermia. Accordingly, it is reasonable to treat hyperthermia occurring after cardiac arrest aggressively with antipyretics and to consider active cooling in unconscious patients (219).

The term targeted temperature management (TTM), or temperature control, is now preferred over the previous term therapeutic hypothermia. There was a great deal of enthusiasm for the potential benefits of induced hypothermia after global hypoxia–ischemia based on animal and human data demonstrating that mild induced hypothermia is neuroprotective and improves outcome. Cooling suppresses many of the pathways leading to delayed cell death, including apoptosis (276). Hypothermia also decreases the cerebral metabolic rate for oxygen (CMRO2) by about 6% for each 1°C reduction in core temperature, which may reduce the release of excitatory amino acids and free radicals as well as achieve a better match of oxygen delivery to oxygen demand. Hypothermia blocks the intracellular consequences of excitotoxin exposure—for example, high calcium and glutamate concentrations—and reduces the inflammatory response associated with the post–cardiac arrest syndrome.
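The roughly 6% fall in CMRO2 per 1°C of cooling quoted above can be illustrated with simple arithmetic. Whether the reduction should be treated as additive or compounded per degree is not specified in the text, so the sketch below shows both estimates; it is illustrative only.

```python
# Illustration of the ~6% reduction in CMRO2 per 1 degree C of cooling quoted
# above. Whether the reduction is additive or compounded per degree is not
# specified in the text, so both estimates are shown; illustration only.

BASELINE_C = 37.0
PER_DEGREE = 0.06  # ~6% per degree C

def cmro2_fraction_additive(target_c: float) -> float:
    """Remaining fraction of baseline CMRO2, subtracting 6% per degree of cooling."""
    return 1.0 - PER_DEGREE * (BASELINE_C - target_c)

def cmro2_fraction_compounded(target_c: float) -> float:
    """Remaining fraction of baseline CMRO2, compounding a 6% drop per degree."""
    return (1.0 - PER_DEGREE) ** (BASELINE_C - target_c)

if __name__ == "__main__":
    for target in (36.0, 33.0):  # the two targets used in the TTM trial
        print(f"Target {target} C: ~{cmro2_fraction_additive(target):.0%} (additive) "
              f"or ~{cmro2_fraction_compounded(target):.0%} (compounded) of baseline CMRO2")
```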
To date, all studies of induced hypothermia after cardiac arrest have included only patients who remain comatose following ROSC. One randomized trial and one pseudorandomized trial demonstrated improved neurologic outcome at hospital discharge or at 6 months in comatose patients after OHCA due to VF (277,278). Cooling was initiated within minutes to hours after ROSC, and a temperature range of 32° to 34°C was maintained for 12 to 24 hours. In the TTM trial, 950 OHCA patients with any rhythm were randomized to 36 hours of temperature control at either 33°C or 36°C (comprising 28 hours at the target temperature followed by slow rewarming) (274). There was no difference in mortality or neurologic outcome at 6 months (189,279). With respect to avoidance of hyperthermia, patients in both arms of this trial had their temperature well controlled so that fever was prevented in both groups. Similar results were recently reported in a large, multicenter RCT of OHCA in children (non-neonates) (275). Within 6 hours of ROSC, comatose children (2 days to 18 years) were randomized to hypothermia (target temperature 33°C) or normothermia (target 36.8°C). The therapeutic target was maintained for 48 hours, and then normothermia was actively maintained for a total duration of 120 hours. Survival to hospital discharge and neurologic status at 1-year follow-up did not differ between the hypothermia and controlled normothermia groups (275). A simultaneous RCT in children experiencing IHCA has completed enrollment, and the results of the 1-year follow-up should be available toward the end of 2016 (www.THAPCA.org). The optimal duration of mild induced hypothermia and TTM is unknown, although it is currently most commonly used for 24 hours in adults and 48 hours in children. Previous trials treated patients with 12 to 28 hours of TTM (274,277,278). The TTM trial maintained strict normothermia (<37.5°C) after hypothermia until 72 hours after ROSC (274). Based on a detailed review of the evidence as of early 2015, the various ILCOR ALS committees made treatment recommendations regarding TTM (4–6,219,280,281), including selecting and maintaining a constant target temperature between 32°C and 36°C for at least 24 hours in adults who remain unresponsive after ROSC; the recommendation is strongest for OHCA with an initial shockable rhythm and weaker for nonshockable rhythms and for IHCA.

One of the ongoing management controversies regarding TTM is how best to control the patient's temperature. The practical application of TTM is divided into three phases: induction, maintenance, and rewarming (282). External and/or internal cooling techniques can be used to initiate and maintain TTM. Animal data indicate that earlier cooling after ROSC produces better outcome, but this has yet to be demonstrated in humans (283,284). If a lower target temperature (e.g., 33°C) is chosen, an infusion of 30 mL/kg of 4°C saline or Hartmann solution decreases core temperature by approximately 1.0 to 1.5°C and is probably safe in a well-monitored environment (285,286). Prehospital cooling using this technique is not recommended because of reports of an increased risk of pulmonary edema and rearrest during transport to hospital (287). Other methods of inducing and/or maintaining TTM include simple surface cooling with ice packs or cooling blankets, surface devices with gel pads or circulating water or air, and intravascular cooling catheters; the latter devices can incorporate continuous temperature feedback (see below). In most cases, it is easy to cool patients initially after ROSC because the temperature normally decreases within the first hour (269,291). Admission temperature after OHCA is usually between 35°C and 36°C in both adults (274) and children (275).
If a target temperature of 36°C is chosen, it is reasonable to allow slow passive rewarming to 36°C; if a target temperature of 33°C is chosen, initial cooling is facilitated by neuromuscular blockade and sedation, which prevent shivering (292). In the maintenance phase, a cooling method with effective temperature monitoring that avoids temperature fluctuations is preferred. This is best achieved with external or internal cooling devices that include continuous temperature feedback to achieve a set target temperature (293). The temperature is typically monitored with a thermistor placed in the bladder and/or esophagus. Even though intravascular devices appear to maintain the target temperature more precisely (293,294), there are as yet no data indicating that any specific cooling technique increases survival compared with any other. Plasma electrolyte concentrations, effective intravascular volume, and metabolic rate can change rapidly during rewarming, as they do during cooling. Rebound hyperthermia is associated with worse neurologic outcome (272,273); thus, rewarming should be achieved slowly. Although the optimal rate is not known, the current consensus is about 0.25° to 0.5°C of rewarming per hour (295); choosing a strategy of TTM at 36°C reduces this risk (274). The well-recognized physiologic effects of hypothermia, which include shivering, bradycardia, cold-induced diuresis with hypovolemia and electrolyte loss, hyperglycemia from insulin resistance, mildly impaired coagulation, and an increased risk of infection, need to be managed carefully (282). Generally recognized contraindications to TTM at 33°C, although not applied universally, include severe systemic infection and preexisting medical coagulopathy; fibrinolytic therapy is not a contraindication to mild induced hypothermia.

Evaluating prognosis is often divided into two decision phases: when to stop resuscitation during CA, and when supportive technology should be limited or discontinued in patients with ROSC who remain comatose. With respect to the former, patient-specific IHCA predictors of survival include age (<60 years) and an initial rhythm of VF/VT; with these predictors, survival to discharge is 32%, compared with only 7.2% in patients with an initial rhythm of asystole and 4.8% in patients with PEA (312,313). Comorbid risk factors for a poor likelihood of survival include hepatic failure, renal insufficiency, sepsis, and malignancy (312). Although these are statistically significant associations, no single factor is considered definitive. The decision to stop CPR requires clinical judgment and knowledge of the patient's wishes; the decision should be openly discussed with the team. If the patient has a potentially correctable lesion and recurrent VF/VT, there is no specific duration that can be used to decide that ongoing resuscitation efforts are futile. Indeed, more recent data from IHCA registries find that survival with good outcome is achieved even with prolonged CPR. An analysis of more than 64,000 IHCAs from the GWTG-R database found that the median time to achieve ROSC was 12 minutes of CPR. However, when the data were analyzed by center based on the median duration of CPR, centers in the top quartile of CPR duration (median 25 minutes) had a significantly higher rate of ROSC and survival to discharge, suggesting that decisions to stop efforts earlier may predetermine the outcome (314). If monitoring of CPR effectiveness, such as capnography, documents good CPR-induced cardiac output, prolonged CPR (>60 minutes) has achieved good outcome (315).
Furthermore, if ECPR is available, prolonged resuscitation may lead to a good outcome as long as the patient's underlying condition is correctable. Use of ECMO in IHCA achieved a nearly two-fold to six-fold improvement in survival and good neurologic outcome based on observational studies and propensity analyses (75,78). In children, analysis of the GWTG-R data noted that the rate of survival to hospital discharge fell linearly over the first 15 minutes of CPR, with 15.9% of children receiving more than 35 minutes of CPR surviving compared with 44.1% of those who had ROSC with less than 15 minutes of CPR (316). Neurologic outcome was favorable in 70% of those who required less than 15 minutes of CPR, but was still favorable in 60% of those survivors who received more than 35 minutes of CPR. These data show that the previous recommendation to stop CPR for futility if there was no ROSC after 20 minutes is no longer valid. Moreover, a separate analysis of the GWTG-R data found that when ECMO was provided to children even after prolonged CPR, the outcome was significantly better than in children who did not receive ECMO, especially if the patient had a surgical cardiac condition (82).

There is an expanding database of evidence regarding prognostic indicators in patients who remain comatose after ROSC. The increasing use of TTM adds new challenges to applying these prognostic indicators. A comprehensive review of neurologic prognostication in comatose survivors of CA was published in 2014 by the European Resuscitation Council and the European Society of Intensive Care Medicine (317). Below we delineate highlights of these data as incorporated into the UK Guidelines (219), adding several new references that were not included in that review. Families expect that advice to limit or withdraw support will be based on objective and reliable outcome predictors, but few predictors have 100% specificity (i.e., a 0% false-positive rate; FPR). Often, a higher degree of prognostic confidence is achieved by combining the results of several tests and clinical signs. In most cases, prognostication is not considered reliable until at least 72 hours following ROSC or following the completion of TTM (219,317). Bilateral absence of the pupillary light reflex at 72 hours from ROSC predicts poor outcome with close to 0% FPR; unfortunately, this test has poor sensitivity: of those who eventually have a bad outcome, only about 20% have fixed pupils at 72 hours. Similar prognostic performance was documented for bilaterally absent corneal reflexes (318,319). An absent or extensor motor response to pain at 72 hours from ROSC has a high (about 75%) sensitivity for predicting a poor outcome, but the FPR is also high (about 27%). This sign's high sensitivity suggests it can be used to identify the population with poor neurologic status needing further prognostic testing. Since the corneal reflex and motor response can be suppressed by residual sedation or neuromuscular blockade (261), it is appropriate to prolong the period of observation of these clinical signs beyond 72 hours from ROSC, or beyond normalization of temperature, to minimize the risk of obtaining false-positive results (219,317).
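Because the remainder of this section quotes false-positive rates and sensitivities for individual predictors, a brief reminder of how those figures are derived from a 2 × 2 table may be helpful. The counts in the sketch below are hypothetical, chosen only to mimic a sign with an FPR near 0% and a sensitivity near 20% (similar to bilaterally absent pupillary reflexes at 72 hours); they are not data from any study.

```python
# Reminder of how the false-positive rate (FPR) and sensitivity figures quoted
# in this section are derived from a 2x2 table. The counts are hypothetical,
# chosen only to mimic a sign with ~0% FPR and ~20% sensitivity (similar to
# bilaterally absent pupillary reflexes at 72 hours); they are not study data.

def false_positive_rate(false_pos: int, true_neg: int) -> float:
    """FPR = FP / (FP + TN) = 1 - specificity."""
    return false_pos / (false_pos + true_neg)

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Sensitivity = TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

if __name__ == "__main__":
    # "Positive" means the prognostic sign is present; the outcome being
    # predicted is a poor neurologic outcome.
    tp, fn = 20, 80   # poor-outcome patients with and without the sign
    fp, tn = 0, 100   # good-outcome patients with and without the sign
    print(f"FPR {false_positive_rate(fp, tn):.0%}, sensitivity {sensitivity(tp, fn):.0%}")
```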
A prolonged period of continuous and generalized myoclonic jerks is commonly described as status myoclonus. Although there is no definitive consensus on the duration or frequency of myoclonic jerks required to qualify as status myoclonus, the minimum reported duration in prognostication studies of comatose survivors of cardiac arrest is 30 minutes. While the presence of myoclonic jerks in comatose survivors of cardiac arrest is not consistently associated with poor outcome (FPR 9%), status myoclonus beginning within 48 hours of ROSC is consistently associated with a poor outcome (FPR 0% [95% CI: 0%–5%]; sensitivity 8%–16%). However, there are several case reports of good neurologic recovery despite early-onset, prolonged, and generalized myoclonus. Patients with postarrest status myoclonus should be evaluated off sedation whenever possible; in these patients, EEG recording can be useful to identify EEG signs of awareness and reactivity to light or sound stimuli, and to show whether there is coexistent epileptiform activity. There are limited data on the prognostic value of clinical signs in children after CA. Reactive pupils at 24 hours post-ROSC were associated with improved outcome (320). Other cohort studies note a higher likelihood of survival with reactive pupils at 24 hours, but the FPR was high, making this an unreliable single sign (16,321,322). While predictors of poor outcome based on the clinical examination are inexpensive and easy to use, their results cannot be concealed and may potentially influence clinical management and cause a self-fulfilling prophecy.

In postarrest comatose patients, bilateral absence of the N20 somatosensory evoked potential (SSEP) wave predicts death or vegetative state (CPC 4–5) with high reliability (FPR 0%–2% with an upper 95% CI of about 4%). The few false-positive results observed in large patient cohorts were due mainly to artifacts. SSEP recording is technically demanding and requires appropriate skills and experience; the utmost care should be taken to avoid electrical interference from muscle artifacts or from the ICU environment. In most prognostication studies, bilateral absence of the N20 SSEP wave was used as a criterion for deciding on withdrawal of life-sustaining treatment, with a consequent risk of becoming a self-fulfilling prophecy.

EEG background reactivity means that there is a change in the EEG in response to a loud noise or a noxious stimulus such as tracheal suction. Absence of EEG background reactivity predicts poor outcome with an FPR of 0% to 2% (upper 95% CI of about 7%). Limitations of EEG reactivity include the lack of a standardized stimulus and only modest interrater agreement. Data from two small pediatric observational studies showed that a continuous and reactive EEG performed in the first 7 days after ROSC was associated with a significantly higher likelihood of good neurologic outcome at hospital discharge, whereas a discontinuous or isoelectric tracing was associated with a poor neurologic outcome at discharge (323,324). There are no long-term follow-up studies in children evaluating EEG prediction of outcome after hospital discharge. In TTM-treated patients, the presence of SE is almost invariably, but not always, accompanied by poor outcome (FPR 0%–6%), especially in the presence of an unreactive or discontinuous EEG background. Burst suppression was recently defined as more than 50% of the EEG record consisting of periods of EEG voltage less than 10 µV, with alternating bursts. However, most prognostication studies do not comply with this definition. In comatose survivors of cardiac arrest, burst suppression is usually a transient finding.
Neuron-specific enolase (NSE) and S-100B are protein biomarkers that are released following injury to neurons and glial cells, respectively. Their blood concentrations after cardiac arrest are likely to correlate with the extent of hypoxic-ischemic brain injury and, therefore, with the severity of neurologic injury. Advantages of biomarkers over both EEG and the clinical examination include quantitative results and likely independence from the effects of sedatives. Their main limitation as prognosticators is the difficulty of identifying, with a high degree of certainty, a threshold that reliably identifies patients destined to have a poor outcome. Because serum concentrations are continuous variables that depend not only on the degree of neuronal injury, but also on the degree of disruption of the blood–brain barrier, the maintenance of blood flow to all brain regions, and the systemic clearance of the biomarker, it is not surprising that they do not function well as markers for a dichotomous outcome, especially when a threshold with 0% FPR is desired. There are limited data on the value of biomarkers in children. One prospective observational cohort of 43 children following OHCA or IHCA had repeated measurement of NSE, S-100B, and myelin basic protein over 7 days (320). The investigators found good discrimination, but noted that the concentrations changed over time, suggesting that a single sample may not be adequate. Similar variation of concentrations over time was observed in another single-center study enrolling 35 children (325).

The main CT finding following a global hypoxic-ischemic cerebral insult is cerebral edema, which appears as a reduction in the depth of the cerebral sulci (sulcal effacement) and an attenuation of the gray matter/white matter (GM/WM) interface due to reduced GM density; the latter is quantitatively measured as the gray:white ratio (GWR) between the GM and WM densities. The GWR threshold for prediction of poor outcome with 0% FPR in prognostication studies ranged between 1.10 and 1.22; however, the methods for GWR calculation were inconsistent among studies, and quantitative measurements are rarely made in clinical practice. Brain MRI is more sensitive than CT for detecting global hypoxic-ischemic brain injury caused by cardiac arrest; however, because it is a time-consuming study, its use can be problematic in the most clinically unstable patients. MRI can reveal extensive changes when results of other predictors, such as SSEP, are normal. All studies on prognostication after cardiac arrest using imaging have small sample sizes with consequent low precision, and very low evidence quality (326). Most studies are retrospective and did not systematically include all at-risk patients; instead, brain CT or MRI typically was requested at the discretion of the treating physician, which may cause a selection bias and overestimate these tests’ performance.
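To make the GWR measurement described above concrete, the brief sketch below (an illustration only; the regions of interest, Hounsfield unit values, and the 1.10 cutoff are assumptions drawn from the ranges quoted in the text, not a validated protocol) averages the ratio of mean gray matter to mean white matter attenuation over paired regions of interest.

```python
def gray_white_ratio(gm_hu, wm_hu):
    """Average gray:white attenuation ratio from paired region-of-interest means (HU)."""
    if len(gm_hu) != len(wm_hu):
        raise ValueError("each gray matter ROI needs a paired white matter ROI")
    ratios = [g / w for g, w in zip(gm_hu, wm_hu)]
    return sum(ratios) / len(ratios)

# Hypothetical mean attenuation values (HU) after severe global hypoxic-ischemic injury:
# gray matter density has fallen toward that of the adjacent white matter.
gm_rois = [31.0, 30.5]   # e.g., caudate and putamen
wm_rois = [29.0, 28.5]   # adjacent deep white matter
gwr = gray_white_ratio(gm_rois, wm_rois)

# Reported 0%-FPR thresholds ranged from about 1.10 to 1.22 across studies.
print(f"GWR = {gwr:.2f}; below a 1.10 cutoff: {gwr < 1.10}")
```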
A careful clinical neurologic examination remains the foundation for prognostication of the comatose patient after cardiac arrest. The clinical examination should be completed daily to detect signs of neurologic recovery, such as purposeful movements, or to identify a clinical picture suggesting that brain death has occurred. It is thought that brain recovery following a global postanoxic injury is completed, in most patients, within 72 hours of arrest. However, in patients who received sedatives 12 hours or less before the 72-hour post-ROSC neurologic assessment, the reliability of the clinical examination may be reduced (261). Before a decisive assessment is performed, major confounders must be excluded (327,328); apart from sedation and neuromuscular blockade, these include hypothermia, severe hypotension, hypoglycemia, and metabolic and respiratory derangements. Sedatives and neuromuscular blocking drugs should be discontinued long enough to avoid interference with the clinical examination, and short-acting drugs are preferred whenever possible. When residual sedation and/or paralysis is suspected, consider using antidotes to reverse the effects of these drugs.

Neurologic prognosis is typically assessed in all patients who remain comatose with an absent or extensor motor response to pain at 72 hours or more from ROSC; results of earlier prognostic tests are also considered at this time point. The most robust predictors should be assessed first. These predictors have the highest specificity and precision (FPR <5% with an upper 95% CI <5% in patients treated with controlled temperature) and were documented in more than five studies from at least three different groups of investigators. They include bilaterally absent pupillary reflexes at 72 hours or more from ROSC and a bilaterally absent SSEP N20 wave after rewarming; the latter can be evaluated at 24 hours or more from ROSC in patients who were not treated with controlled temperature. Based on expert opinion, the Guidelines suggest combining the absence of pupillary reflexes with absent corneal reflexes for predicting poor outcome at this time point (219,317). Ocular reflexes and SSEPs maintain their predictive value irrespective of target temperature (329,330).

If none of the preceding signs is present to predict a poor outcome, a group of less accurate predictors can be evaluated, but the degree of confidence in their prediction is lower. These have an FPR below 5% but wider 95% CIs than the previous predictors, and/or their definition or threshold is inconsistent across prognostication studies. They include the presence of early status myoclonus (within 48 hours from ROSC), high values of serum NSE at 48 to 72 hours after ROSC, an unreactive malignant EEG pattern (e.g., burst suppression or SE) after rewarming, a marked reduction of the GWR or sulcal effacement on brain CT within 24 hours after ROSC, or the presence of diffuse ischemic changes on brain MRI at 2 to 5 days after ROSC. Based on expert opinion, the Guidelines suggest waiting at least 24 hours after the first prognostication assessment and confirming unconsciousness with a Glasgow motor score of 1 or 2 before using this second set of predictors; the Guidelines also suggest combining at least two of these predictors for prognostication (219). No specific NSE threshold for prediction of poor outcome with 0% FPR is recommended at present. Ideally, every hospital laboratory assessing NSE should establish its own normal values and cutoff concentration based on the test kit used. Sampling at multiple time points is recommended to detect trends in NSE concentrations and to reduce the risk of false-positive results (331); hemolysis should be avoided when sampling NSE.
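The stepwise strategy just described can be summarized as a simple decision sketch (a didactic outline of the sequence, with invented field names; it is not clinical decision software and does not replace multimodal assessment by an experienced clinician).

```python
from dataclasses import dataclass

@dataclass
class PrognosticFindings:
    hours_since_rosc: float
    confounders_excluded: bool       # sedation/paralysis cleared, normothermia, no shock,
                                     # no major metabolic or respiratory derangement
    glasgow_motor_score: int
    pupillary_corneal_absent: bool   # bilaterally absent at >=72 h from ROSC
    ssep_n20_absent: bool            # bilaterally absent N20 after rewarming
    second_tier_predictors: int      # count of: early status myoclonus, high NSE,
                                     # unreactive malignant EEG, severe edema on CT,
                                     # diffuse ischemic changes on MRI
    hours_since_first_assessment: float = 0.0

def suggested_prognosis(f: PrognosticFindings) -> str:
    """Didactic sketch of the two-tier sequence described in the Guidelines."""
    if f.hours_since_rosc < 72 or not f.confounders_excluded or f.glasgow_motor_score > 2:
        return "prognostication not yet reliable -- continue care and reassess"
    # First tier: most robust predictors (FPR <5% with narrow confidence intervals)
    if f.pupillary_corneal_absent or f.ssep_n20_absent:
        return "poor outcome very likely (interpret within a multimodal assessment)"
    # Second tier: wait at least 24 more hours and require at least two predictors
    if f.hours_since_first_assessment >= 24 and f.second_tier_predictors >= 2:
        return "poor outcome likely (lower confidence; continue multimodal assessment)"
    return "indeterminate -- observe, repeat the examination, and reassess later"

print(suggested_prognosis(PrognosticFindings(
    hours_since_rosc=78, confounders_excluded=True, glasgow_motor_score=1,
    pupillary_corneal_absent=False, ssep_n20_absent=False,
    second_tier_predictors=2, hours_since_first_assessment=26)))
```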
Although the most robust predictors showed no false positives in most studies, none of them alone predicts poor outcome with absolute certainty when the comprehensive body of evidence is considered. Therefore, the Guidelines recommend that prognostication should be multimodal whenever possible, even in the presence of one of these predictors. Apart from increasing safety, limited evidence also suggests that multimodal prognostication increases sensitivity (332–334). When prolonged sedation and/or paralysis is necessary, for example, because of the need to treat severe respiratory insufficiency, the Guidelines recommend postponing prognostication until a reliable clinical examination can be performed (219). Biomarkers, SSEP, and imaging studies may play a role in this context, since they are insensitive to drug interference.

In view of the limited quantity and low quality of data in children, the CoSTR recommendation is to consider multiple factors when predicting outcome in children after ROSC (6); no specific prognostic variable can be recommended. When dealing with an uncertain outcome, clinicians should consider prolonged observation; with time, the absence of clinical improvement suggests a worse outcome. If brain death and/or organ donation is being considered, the reader is referred to Chapter 119.

A detailed list of controversies was included in each section of the 2015 CoSTR statements. The following are the main controversies related to CPR and postresuscitation management of importance to the ICU clinician: