Chapter 71
A historical view of quality concepts and methods


Dia Gainor and Robert Swor


Introduction


Anyone on a journey of learning about quality will find the roadside littered with acronyms and mysterious terms: ISO, SPC, and Six Sigma are just a few. This chapter will provide the reader with an understanding of the evolution and types of quality initiatives that appeared during the 20th century. In addition to literacy about common quality improvement (QI) systems in contemporary use, EMS system leaders and others involved in quality should have a fundamental understanding of the setting, concepts, and progression of quality initiatives in recent history. This chapter will highlight the origins and approach of quality systems found in the United States through the hospital, manufacturing, and government influences that shaped them into what they are today.


Key differences between the approaches undertaken in the industrial and manufacturing settings and their resulting systems illustrate the opportunities from which EMS and other components in the health care system may benefit. This chapter will describe the genesis and “systems” of quality assessment and improvement that have been or can be adopted by organizations interested in enhancing their performance. To the extent that a person, agency, or tool was instrumental in the discovery or development of the system, they will be addressed in context.


The evolution of quality concepts and methods


Prior to the 20th century, activities most closely related to quality assurance emerged in medicine and manufacturing in very similar ways: societies or “guilds” of like practitioners or craftsmen formed on a community or jurisdictional basis [1,2]. These societies set standards and reviewed the performance of individual members, acting against or expelling members for unacceptable performance or behavior. Early in their legislative formation (some as early as the late 1700s), many states ceded physician credentialing authority to medical societies [2]. Although this was largely an effort to protect the profession, it likely assured some level of quality through the development of community standards of care.


The landscape of medicine and manufacturing began to change in different ways in the early 20th century. By 1900, in the United States it was common for states to establish boards and effect physician licensure as a function of the state. This migration marked the beginning of one approach to assuring quality in health care: regulation. The regulatory approach was marked by the states’ decisions to mandate licensure of hospitals by their state health departments, beginning early in the 1900s [2]. Licensure activities typically include some form of application, inspection to assure conformity with minimum standards, correction of conditions that fail to meet the minimum standard, issuance of a credential for a time-limited duration, and a cyclical repetition of these steps in order to perpetuate the license.


During the same time frame, medicine in the United States was revolutionized by the work of Sir William Osler, credited with the “learning science” approach to assuring quality in health care [3]. While Osler’s work did not label him as a quality pioneer per se, the nature of his work at the University of Pennsylvania and then as the first professor of medicine at Johns Hopkins University led to his recognition as an expert diagnostician who viewed consideration of the patient’s state of mind and the underlying disease as equally important. His lasting effect came through changes in learning and curricula for physicians: increased patient contact while in medical school, use of laboratory findings, and authorship of his novel principles in a text that was considered a cornerstone of physician education through the 1920s [4]. Dr Osler’s work channeled the focus of medical institutions and physicians toward education as a means of improving quality, as evidenced by the morbidity and mortality reviews, grand rounds, and clinicopathological conferences that abounded in the health care industry as a result [3].


The field of medicine witnessed other advances in the early 1900s indicative of a learning science predilection. In 1910 the Carnegie Foundation published the Flexner Report, which accused the industry of educational malpractice through “enormous over-production of uneducated and ill trained medical practitioners” [5]. Although this report has been questioned in more modern times in terms of both methods and comprehensiveness, it is acknowledged as creating a significant focus on improving the quality of medical education and causing fundamental changes in the structure of medical education and practice [6].


Another landmark in health care quality history was a 1910 proposal by a physician named Ernest Codman. Dr Codman’s concept, called the “End Result System of Hospital Standardization” [7], involved the attending physician tracking every patient’s outcome and investigating the causes of poor outcomes. This was viewed as an antagonistic evaluation of surgeons’ competencies; Harvard University withdrew Codman’s medical staff privileges at Massachusetts General, whose leadership refused to implement the system. Other publications describe Codman resigning in disgust and establishing a private hospital where the end-result system was aggressively implemented and published [8]. Either way, assessments of the effect of Codman’s concepts agree on one fact: they became a founding objective of the American College of Surgeons (ACS) [7–9].


The founder of the ACS, Dr Franklin Martin, was a colleague of Codman’s who embraced his proposal; the concept of minimum standards for hospitals became part of the ACS’ objectives at the outset. Within 5 years, the “Minimum Standard for Hospitals” was published and the ACS began inspecting hospitals; only 13% of the nearly 700 initially inspected met the five-point criteria [7,9]. Presumably because the focus shifted from the individual physician to the facility as a whole, hospitals were willing to undergo this form of peer review, and the ACS process met less resistance than Codman’s system had. Since hospitals were expected to modify their practices based on their experiences with the minimum standards, this process is characterized as another example of a “learning science” tradition within the health care environment [3].


In the meantime, a completely different approach to assurance of quality was evolving in the early 1900s in the manufacturing sector of the United States: treating management as a “science.” As the Industrial Revolution entered its second wave of impact in the United States, formally trained engineers and other scientists were common in the workplace. Attention shifted from exclusive focus on mechanical issues such as conveyor belt function and scrap management to more elusive issues such as worker productivity and human motivation. Frederick Taylor, an American industrial engineer, made a compelling statement in his 1911 treatise: “We can see and feel the waste of material things. Awkward, inefficient, or ill-directed movements of men, however, leave nothing visible or tangible behind them” [10]. Taylor’s work is considered a foundation of the field now referred to as scientific management [11].


Manufacturing and other industries found value in the work of Taylor and his contemporaries, laying the cornerstone of scientific management deep within businesses. Their approach placed greater value on the scientific assessment of operations using quantitative approaches, including the use of mathematical models, rules of motion, and standardization of tools and implements. While there was recognition that cooperation had to exist between employees and supervisors, scientific management introduced change by observing work processes and redesigning the steps, tools, and human actions associated with the task [12].


By 1920, the Flexner Report had been credited for medical schools having more tailored entrance requirements, more diverse medical student bodies, and refined curricula; more striking was the fact that 60 of the nation’s 155 medical schools closed during this period [6]. Fundamentally, the contemporaneous efforts of Mr Flexner and Dr Osler brought dramatic changes to the thinking and processes associated with learning, and both fostered the belief that the quality of medical care had improved strictly as a result of the changes in education that took place.


Another milestone in the history of US health care that ultimately became a mechanism for assuring quality was the licensure of hospitals, evolving during this time frame as a Department of Health function in individual states. The 1920s was the only decade of exclusive regulation by the states (primarily in the form of police powers to protect the health and safety of patients) before the federal government began preempting states’ laws related to the governance of hospitals [2].


Meanwhile, scientific management rapidly grew as the favored approach in the post-World War I industrial environment that was experiencing increasing demand for goods and services and growing organizations [12]. Fortunately, this predominantly engineering focus was complemented by the birth of the behavioral school of management thought when what is now known as the Hawthorne Studies took place at Western Electric’s Hawthorne plant in suburban Chicago [13]. The industrial sector of the United States was laying the groundwork for quantitative workload management and performance considerations tempered by an understanding of human relations and workplace psychology.


In the 1920s, Western Electric was also incubating several other processes and pioneers that ultimately made significant contributions to the evolution of quality science. Primarily a manufacturer of electrical and telephone system components, Western Electric was one of the largest corporations in the United States and one of the few with an international presence [14]. Walter Shewhart, an engineer, carefully developed and tested methods that forced leadership to rethink inspection of finished products as the sole means of assuring quality. He devised a statistical method of monitoring and analyzing processes, allowing conditions to be corrected before a defective product was made. His original, elegantly simple proposal of a control chart was presented to Western Electric management on a single page of paper in 1924 [15].
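

The core of Shewhart’s idea can be shown in a brief sketch. The Python fragment below (using hypothetical measurements, not Western Electric data) establishes a center line and three-sigma control limits from baseline observations, then flags later points that fall outside those limits: the signal to investigate a process before defective product is made.

    import statistics

    # Hypothetical baseline measurements from a process believed to be stable
    baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.1, 10.0]
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    ucl = center + 3 * sigma  # upper control limit
    lcl = center - 3 * sigma  # lower control limit

    # Monitor new measurements as the process runs
    for i, x in enumerate([10.0, 10.2, 9.9, 11.4, 10.1], start=1):
        status = "OUT OF CONTROL" if not (lcl <= x <= ucl) else "in control"
        print(f"point {i}: {x:.1f} ({status})")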


Within 2 years, Western Electric had established a “quality” department and appointed Joseph Juran, a young engineer, to lead the unit. Other Shewhart contemporaries were perfecting sampling techniques, and by 1931 Shewhart had published Economic Control of Quality of Manufactured Product [16]. Regarded today as a foundational text for the study of quality engineering through statistical process control, the work had relatively little impact in its first 10 years outside Western Electric and its research branch, Bell Telephone Laboratories. Another scientist exposed to these principles while working at Western Electric ultimately carried the first banner on the value of quality management to the outside world: W. Edwards Deming [14].


After his experience at Western Electric, Deming invited Shewhart to lecture with him in the late 1930s at the US Department of Agriculture Graduate School [17]. Shewhart had continued his statistically centered focus on quality (which also led to his definition of the Plan-Do-Check-Act cycle) and published Statistical Method from the Viewpoint of Quality Control [15]. Deming transferred to the US Census Bureau to assist with sampling techniques, and in 1940 he achieved the first use of statistical process control outside manufacturing while managing the Bureau’s clerical operations during that year’s census [17].


Meanwhile, Juran continued his work in quality management in the Bell system, training employees and publishing handbooks for them [14]. In the 1940s Juran published his conceptualization of the “Pareto Principle,” hypothesizing that management challenges could be classified and prioritized following an 80%/20% pattern [1]. The principle had actually been published more than 40 years earlier in a series of texts on economics by the politically controversial economist Vilfredo Pareto, who postulated that a universal logarithmic formula governed the distribution of wealth [18]. Professor Pareto’s formula asserts that 20% of the people in a jurisdiction (any jurisdiction, anywhere in the world) hold 80% of the wealth. While empirical studies conducted since have reinforced this, Juran adopted the concept to distinguish between the issues (80%) over which supervisors had control and the remainder (20%) over which the workforce had control. Juran’s and others’ adoption of Pareto’s assertion for other management beliefs drew criticism [19], but Juran maintained that studies performed in the 1950s and 1960s also supported his application, which further evolved into the root cause analysis references to the “vital few” and “trivial many” [1].
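

In practice, Juran’s application of the Pareto Principle reduces to ranking problem categories and isolating the “vital few.” A minimal Python sketch, using invented defect counts, shows the arithmetic:

    from collections import Counter

    # Invented defect tallies for a week of production
    defects = Counter({
        "mislabeled": 210,
        "cracked housing": 95,
        "loose wiring": 40,
        "scratched finish": 25,
        "wrong color": 10,
    })

    total = sum(defects.values())
    cumulative = 0
    for category, count in defects.most_common():
        # A category is among the "vital few" if it begins below the 80% line
        vital = cumulative < 0.8 * total
        cumulative += count
        tag = " <- vital few" if vital else ""
        print(f"{category:18s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%{tag}")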


These three already productive parents of quality management – Shewhart, Deming, and Juran – might have fathered a very different beginning for total quality management had their lives and the work of the US government not been detoured by the onset of World War II. The federal “War Department” supporting the deployed armed services had two needs that required reaching into the private sector: materiel and expertise. Subject matter experts, Joseph Juran among them, were solicited from large businesses to assist with federal administrative functions [20]. Deming’s transfer from census work to the War Department resulted in sampling and control charting becoming requirements for materiel suppliers [17]. These standards, essential for quality assurance and conformity of goods and services provided throughout the country, evolved into formal specifications commonly referred to as “MIL STD” (military standards) or MIL specs. These MIL procedures dictated practices for sampling, machine calibration, schematics, and quality control [21] in the interest of avoiding defects, or errors, in products or processes. The counterpart sentiment of employee value and involvement was evidenced by the introduction of “quality circles” on factory floors by the late 1940s [22].
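

The flavor of these wartime sampling requirements can be conveyed with a simple calculation. The Python sketch below models a hypothetical single-sampling plan (illustrative only, not drawn from any actual MIL-STD table): inspect n items from a lot and accept the lot if at most c are defective; the binomial distribution then gives the probability of accepting lots of varying underlying quality.

    from math import comb

    def accept_probability(defect_rate, n=50, c=2):
        """P(lot accepted): at most c defectives among n sampled items (binomial)."""
        return sum(
            comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
            for k in range(c + 1)
        )

    # The better the lot, the more likely the plan accepts it
    for rate in (0.01, 0.02, 0.05, 0.10):
        print(f"defect rate {rate:.0%}: P(accept) = {accept_probability(rate):.3f}")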


Medicine learned and benefited “on the job” at war: physics and genetics (courtesy of the atom bomb), antibiotics, and unprecedented organized care behind the lines were key improvements, but they were individual discoveries or accomplishments, not the result of an overall strategy to improve quality per se. Military publications emphasize that the specialization of providers in subsets of skills, the acceleration of training, and the preparation of providers for disease and trauma care not routine in typical practice settings became the focus as the military became the single largest producer or preparer of medical providers. In 1939 alone, the prewar mobilization of the “Medical Department,” a subordinate entity within the War Department, required an explosive increase of enlisted medical personnel and officers from fewer than 11,000 to 140,000 [23].


After the war, the American public’s definition of quality health care was abundant health care [24]. The ACS continued to assess hospitals’ conformity to basic minimum standards, having assessed over 3,000 hospitals by the early 1950s [25]. In a transition not unlike the shift of physician credentialing from societies to state regulatory bodies, the ACS joined forces with the American Hospital Association, the American Medical Association, and others to form the Joint Commission on Accreditation of Hospitals (JCAH) in 1952 [7]. The perspective was retrospective and geared toward confirming a standard of care through conformity of practices [26], not error prevention. It would be nearly 20 years before the JCAH revised its standards to target “optimal achievable” rather than “minimum essential” levels of quality [7].


A more commonly known aspect of quality management history in the late 1940s was Deming’s impact on the Japanese manufacturing sector. Less commonly known is that Deming went there as part of an entourage sent by the War Department to help rebuild Japan’s postwar infrastructure – by studying agricultural challenges [17]! Another source cites the reason as assisting with Japanese population estimation for the US government [27], but it is nonetheless often overlooked that Deming did not initially go to Japan as a quality guru. It was during these visits that he convinced a rising Japanese statistician of the value of using statistics in the industrial sector. Deming returned to Japan five times to teach and consult with the blessing of the Supreme Commander for the Allied Powers; Shewhart had been the preferred instructor, but he was unavailable.


Deming’s presentations, ultimately referred to as the “Deming Method,” urged a statistical approach to managing quality. Juran followed, delivering numerous presentations for Japanese executives as well [1]. His training centered on his professional conclusion at that point: systems of quality management were an absolute necessity for a successful organization. “Made in Japan” had historically meant low-cost, shabby products. By the 1960s, the influence of Juran and Deming was clear as the Japanese achieved market leadership in the automobile and electronics sectors [28]. Engineering schools in the US were incorporating statistical quality control classes into their curricula, but the US remained behind the quality curve.


While other new thinking, such as “good manufacturing practices” and “management by objectives,” evolved in the regulatory and industrial sectors of the United States, an international organization that would have a lasting effect on quality in the US was being born: ISO. The International Organization for Standardization published its first technical standards (numbering in the hundreds) in the 1950s. The value and purpose of the ISO, presently composed of representatives of the national standards institutes of 148 countries, are best stated in its own “What if Standards Didn’t Exist?” literature:
