Ultrasound Simulator Training
Paul E. Bigeleisen
Eric Stavnitsky
Karl Reiling
Developing competence in ultrasonography and applying the results to clinical care is a complex process. It requires psychomotor skill and the ability to acquire an optimal image window. Once an optimal image window is acquired and correctly interpreted, the information must be correctly applied to patient care. The opportunity cost of training health care providers in ultrasonography is extremely high. Optimal training requires (1) a qualified instructor; (2) trainees; (3) an ultrasound machine; and (4) patients with a variety of anatomic norms, variations, and abnormalities. All of these elements must come together in one physical space and time, and the process must be repeated with new patients presenting these normal and variant conditions over an extended period. Because some clinical presentations are rarely encountered, it may take months to years before a care provider can scan enough patients with a given condition to develop competence. The inability to train on sufficient numbers of variant cases is a recognized impediment to ultrasound competency.
Many of the currently used training methods have significant limitations. Traditionally, these consisted of clinical bedside teaching and attending hands-on training courses. Simple phantom models were developed soon afterward, along with didactic videos available on the Internet and from commercial vendors. These were followed by interactive virtual reality simulators and then by high-fidelity ultrasound simulator workstations. Each of these training devices has a different cost-benefit ratio. Examples of simple phantoms, virtual reality simulators, and high-fidelity simulators are described in the following sections.
High-fidelity ultrasound simulators
High-fidelity simulators require relatively expensive, bulky training platforms that may require the user to visit a simulation center. Most devices with good haptic feedback cost between $30,000 and $100,000. These ultrasound-training solutions employ dedicated computer hardware, software, and mannequins that do not deploy over the Internet. An example of such a simulator is described in the following text.
Method
Volumetric Description of Virtual Patient
The method starts with a volumetric description of the virtual patient. Popular sources include computed tomography (CT), magnetic resonance imaging (MRI), and even computer-aided design. Our data come from the Visible Human Data Set.
In November 1994, the Center for Human Simulation (CHS) at the University of Colorado Medical School delivered the Visible Human Male (VHM) to the National Library of Medicine (NLM) as part of a contract under the Visible Human Project.1 In November 1995, it delivered the Visible Human Female. The Visible Humans represent complete
submillimeter visual anatomic descriptions of a male and a female. The voxel (volume element) resolution of the male is 1/3 mm in x and y (orthogonal coordinates in the axial plane) and 1 mm in the axial direction. The female has the same x and y resolution with three times the axial resolution, giving her 1/3-mm voxels. The full-color voxel anatomy data are supplemented with full-body CT and MRI images. The CHS has since cut multiple specimens with submillimeter resolution in all three directions. Figure 9.1 is a zoomed-in image taken from a foot and ankle that were cryosectioned at submillimeter resolution.
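In practice, the anisotropic voxel pitch of the male must be accounted for whenever a position in millimeters is mapped into the dataset. A minimal sketch of that conversion, assuming a hypothetical volume indexed as (x, y, z) (the array layout is an assumption, not the NLM distribution format):

```python
import numpy as np

# Voxel pitch of the Visible Human Male: 1/3 mm in x and y, 1 mm axially.
VHM_SPACING_MM = (1.0 / 3.0, 1.0 / 3.0, 1.0)  # (x, y, z)

def mm_to_voxel(position_mm, spacing_mm=VHM_SPACING_MM):
    """Convert a position in millimeters to integer voxel indices."""
    return tuple(int(round(p / s)) for p, s in zip(position_mm, spacing_mm))

# A point 10 mm lateral, 5 mm anterior, and 30 mm caudal of the volume
# origin lands at voxel (30, 15, 30) under the male's anisotropic spacing.
print(mm_to_voxel((10.0, 5.0, 30.0)))  # -> (30, 15, 30)
```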
Segmentation and Classification
Touch of Life Technologies (ToLTech), working in conjunction with the CHS, has been developing fundamental tools that are necessary for both visual and haptic interaction with the Visible Human and similar datasets. These fundamental tools provide the foundation for robust anatomically accurate simulators. A major part of this effort deals with segmenting and classifying the data.
Together, ToLTech and the CHS have developed multiple strategies for assigning to each voxel a number that can be related to a specific structure. We refer to this three-dimensional augmentation as the alpha channel. These methods fundamentally produce borders of structures, either through hand drawing, automatic edge detection, or surface splines. The tissue within the border is then assigned an identifier. Figure 9.2 shows a surface spline being used to outline a kidney in a single image of the VHM.
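The border-then-fill step can be sketched in a few lines. The following is a minimal illustration on a single slice, assuming the outline has already been sampled into pixel coordinates; the structure ID used here is hypothetical, not ToLTech's actual numbering:

```python
import numpy as np
from matplotlib.path import Path

def fill_outline(label_slice, outline_xy, structure_id):
    """Assign structure_id to every pixel inside a closed outline.

    outline_xy is an (N, 2) array of border points in pixel coordinates,
    e.g., sampled from a surface spline. The numeric ID is hypothetical;
    the real alpha channel uses its own catalogue of identifiers.
    """
    h, w = label_slice.shape
    ys, xs = np.mgrid[0:h, 0:w]
    points = np.column_stack([xs.ravel(), ys.ravel()])
    inside = Path(outline_xy).contains_points(points).reshape(h, w)
    label_slice[inside] = structure_id
    return label_slice

# Label an elliptical "kidney" outline with a hypothetical ID of 42.
theta = np.linspace(0.0, 2.0 * np.pi, 200)
outline = np.column_stack([50 + 20 * np.cos(theta), 50 + 12 * np.sin(theta)])
labels = fill_outline(np.zeros((100, 100), dtype=np.uint16), outline, 42)
```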
ToLTech and the CHS have worked together to segment and classify the entire VHM. Although the effort to refine the data is ongoing, most, if not all, structures that have a three-dimensional extent of over a millimeter are now in the database. The alpha channel has become the foundation for ToLTech’s three-dimensional display, both haptic and graphic, of individual anatomic structures.
Simplified Physics
In the early part of 2000, ToLTech developed algorithms to create simulated ultrasound from the VHM alpha channel. The basic idea was to send rays out from the simulated probe through virtual tissue and determine the expected energy return. The method assumes that the information contained in a clinical ultrasound can be closely approximated by superposition of the following (a sketch of this computation follows the list):
Impedance mismatch between structures
Attenuation of the signal based on the angle between the interface normal and the position of the transmitter/receiver
Energy loss proportional to material attenuation
Statistically described angle-independent texture associated with anatomic structures.
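The four terms above can be combined in a single ray march. The following is a minimal one-dimensional sketch of that superposition, not ToLTech's implementation; the scalar angle factor, the attenuation model, and the Rayleigh speckle term are all simplifying assumptions:

```python
import numpy as np

def simulate_scanline(labels, impedance, atten_db_per_mm,
                      cos_incidence=1.0, step_mm=1.0 / 3.0, rng=None):
    """March one ray through a labeled tissue column; return echo intensities.

    labels:          1-D array of tissue IDs along the ray
    impedance:       dict, tissue ID -> acoustic impedance
    atten_db_per_mm: dict, tissue ID -> attenuation at the probe frequency
    cos_incidence:   cosine of the angle between interface normal and beam
                     (1.0 = face-on; a deliberately simplified angle model)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    echoes = np.zeros(len(labels))
    energy = 1.0
    for i in range(1, len(labels)):
        # (3) energy lost to material attenuation in the voxel just crossed
        energy *= 10.0 ** (-atten_db_per_mm[labels[i - 1]] * step_mm / 10.0)
        if labels[i] != labels[i - 1]:
            # (1) impedance mismatch: normal-incidence reflection coefficient
            z1, z2 = impedance[labels[i - 1]], impedance[labels[i]]
            r = ((z2 - z1) / (z2 + z1)) ** 2
            # (2) obliquely struck interfaces return less energy to the probe
            echoes[i] = energy * r * cos_incidence
            energy *= 1.0 - r
        # (4) angle-independent speckle texture for the current tissue
        echoes[i] += energy * rng.rayleigh(0.01)
    return echoes
```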
Christian Lee was instrumental in all aspects of this development, from distilling the primary acoustic properties through writing the associated software.
ToLTech currently associates impedance, sound velocity, and attenuation with the following tissue types: blood, muscle, air, connective tissue, lung, water, fat, bone, brain, and nerve.
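A lookup table in the spirit of this list might look like the sketch below. The numbers are approximate values from the ultrasound physics literature, not ToLTech's calibrated parameters, and the helper function is purely illustrative:

```python
# (impedance in MRayl, speed of sound in m/s, attenuation in dB/cm/MHz);
# approximate literature values, with placeholders where properties vary.
TISSUE_ACOUSTICS = {
    "water":  (1.48,   1480, 0.002),
    "blood":  (1.61,   1570, 0.18),
    "fat":    (1.38,   1450, 0.6),
    "muscle": (1.70,   1580, 1.0),
    "brain":  (1.58,   1540, 0.6),
    "bone":   (7.80,   4080, 20.0),
    "air":    (0.0004, 343,  40.0),  # large placeholder: air is opaque to ultrasound
}

def reflection_fraction(tissue_a, tissue_b):
    """Fraction of energy reflected at a flat interface at normal incidence."""
    za = TISSUE_ACOUSTICS[tissue_a][0]
    zb = TISSUE_ACOUSTICS[tissue_b][0]
    return ((zb - za) / (zb + za)) ** 2

print(f"{reflection_fraction('muscle', 'bone'):.2f}")  # ~0.41: bone appears bright
print(f"{reflection_fraction('muscle', 'fat'):.4f}")   # ~0.011: a subtle boundary
```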
This abbreviated list has served well for ToLTech’s initial prototypes and can be continually refined. Creating virtual ultrasound from the VHM data brings the following attributes:
Currently, the entire VHM is available with high enough resolution to simulate today’s ultrasound. This allows the user to seamlessly interrogate all areas of interest.
Having a segmented and classified foundation for the simulation allows us to provide interactive interrogation of the tissues displayed in the simulated ultrasound (the user can place a cursor on the simulated ultrasound and have the structure identify itself; see the sketch after this list). This is important for feedback-based mentoring and testing.
Deformations of the data due to the probe or anatomic motion can be immediately rendered in ultrasound.
The simulated ultrasound can be combined with the virtual anatomy, allowing for the practice of ultrasound-guided needle insertion.
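The cursor-identification feature mentioned above reduces to an alpha-channel lookup. A minimal sketch, assuming a hypothetical label volume, a hypothetical ID-to-name table, and a probe_to_voxel mapping supplied by the renderer:

```python
import numpy as np

# Hypothetical ID-to-name table; the real alpha channel has its own catalogue.
STRUCTURE_NAMES = {0: "background", 42: "kidney", 57: "femoral nerve"}

def identify(alpha_volume, probe_to_voxel, cursor_xy):
    """Name the structure under a cursor on the simulated ultrasound image.

    probe_to_voxel maps a 2-D image point back to (x, y, z) voxel indices
    using the same geometry that rendered the scan plane.
    """
    x, y, z = probe_to_voxel(cursor_xy)
    structure_id = int(alpha_volume[x, y, z])
    return STRUCTURE_NAMES.get(structure_id, f"unclassified (id {structure_id})")
```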
Modifying Posture
ToLTech and the CHS have collaborated to develop off-line techniques for altering the posture of the virtual patient. These techniques utilize finite element modeling (FEM) to deform tissues. With this technique, the tissue volume is partitioned into tetrahedra, and displacing their vertices carries the enclosed anatomy into the new posture.
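To illustrate the role the tetrahedra play, the sketch below maps a point through a single deformed tetrahedron using barycentric coordinates. This is only the resampling step one might build on top of an FEM solution, not the solver itself:

```python
import numpy as np

def barycentric(p, v0, v1, v2, v3):
    """Barycentric coordinates of point p within the tetrahedron (v0..v3)."""
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    b1, b2, b3 = np.linalg.solve(T, p - v0)
    return np.array([1.0 - b1 - b2 - b3, b1, b2, b3])

def map_through_tet(p, rest_verts, deformed_verts):
    """Carry p from a tetrahedron's rest pose to its deformed pose."""
    weights = barycentric(p, *rest_verts)
    return weights @ np.asarray(deformed_verts)

# The rest centroid maps to the centroid of the deformed tetrahedron.
rest = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
bent = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.3, 0.3, 1)]]
print(map_through_tet(np.mean(rest, axis=0), rest, bent))  # [0.325 0.325 0.25]
```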