NESPS - Northeastern Society of Plastic Surgeons
NESPS 27th Annual Meeting Abstracts



Visual Data Integration in Computer Graphics-Based Planning for Facial Transplantation
Darren M. Smith, MD, Vijay S. Gorantla, MD, PhD, Gerald Brandacher, MD, Stefan Schneeberger, MD, W. P. A. Lee, MD, Joseph E. Losee, MD.
University of Pittsburgh, Pittsburgh, PA, USA.

BACKGROUND:
Facial transplantation is being performed with increasing frequency, and progressively complex composite craniofacial defects are being addressed. Functional and aesthetic outcomes are difficult to optimize in these three-dimensionally complex procedures. Imaging modalities ranging from MRI to 3D CT to tractography offer powerful visualization of skeletal, soft tissue, and neurovascular structures. However, these data are relegated to disparate platforms and are often not compatible with real-time user interaction and modification. Here, we present a method for the integration of data from multiple imaging sources into a single 3D representation of donor or recipient anatomy that supports real-time user interaction and modification.
METHODS:
The craniofacial skeleton is treated as the framework for the anatomical model. A raw polygonal model of the craniofacial skeleton is generated by thresholding and "stacking" DICOM images. Decimation algorithms are then employed to reduce the number of polygons in the model. Soft tissue structures such as facial muscles can be modeled in a manner analogous to that used for skeletal structures: muscles can be extracted from the same DICOM dataset as the bone data if the threshold is appropriately adjusted. By generating soft tissue and bone models from the same scan dataset, concerns about registering multiple datasets are obviated: all structures derived from the same dataset are necessarily positioned properly in 3D space with respect to one another. If muscle data are not captured in sufficient detail on CT, MRI datasets may be utilized. If insufficient data quality prevents thresholding, key slices can be imported as planes into a 3D package of choice and the structures of interest outlined; the outlines are then lofted between planes to build 3D mesh representations. Similarly, blood vessels relevant to the model can be extracted by applying an analogous procedure to DICOM data from CT angiography if data quality permits. Again, if the dataset is not of sufficient quality for this approach, key slices can be imported into a 3D package and models generated as above. Finally, nerves are modeled as polygons based on tractography data.
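As an illustration of the skeletal-model step, the minimal sketch below uses VTK's Python bindings (an assumption; the abstract does not name specific software) to read a DICOM series, extract a bone isosurface by thresholding, decimate the resulting polygon mesh, and export it for interactive use. The directory name, Hounsfield threshold, and reduction factor are placeholders that would be tuned per dataset.

```python
import vtk

# Read a CT DICOM series into a single 3D volume
# ("ct_dicom_series/" is a placeholder path).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_dicom_series/")
reader.Update()

# Threshold-based isosurface extraction: marching cubes at a bone-like
# intensity (~300 HU here; the level would be adjusted to capture bone
# vs. muscle, as described above).
surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.ComputeNormalsOn()
surface.SetValue(0, 300)

# Decimation: reduce the polygon count so the model supports
# real-time interaction in a 3D package.
decimate = vtk.vtkDecimatePro()
decimate.SetInputConnection(surface.GetOutputPort())
decimate.SetTargetReduction(0.80)   # discard roughly 80% of triangles
decimate.PreserveTopologyOn()

# Light smoothing to clean up stair-step artifacts from the voxel grid.
smoother = vtk.vtkSmoothPolyDataFilter()
smoother.SetInputConnection(decimate.GetOutputPort())
smoother.SetNumberOfIterations(20)

# Export the mesh for import into the interactive modeling environment.
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(smoother.GetOutputPort())
writer.SetFileName("craniofacial_skeleton.stl")
writer.Write()
```

Because muscle and vessel meshes can be produced by re-running the isosurface step at a different threshold on the same volume (or on a CTA or MRI volume processed the same way), all resulting models share the scan's coordinate frame, which is what removes the registration concern noted above.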
RESULTS:
Data from once disparate and unwieldy CT, CTA, and surface scans have been integrated into detailed 3D polygonal anatomical models compatible with real-time end-user manipulation and modification. Integration of MRI and tractography data into these models is underway.
CONCLUSIONS:
Facial transplantation is visually complex in three dimensions. Powerful imaging techniques offer critical insight into relevant patient anatomy. In this study, we advance a workflow designed to integrate traditionally disparate and inaccessible data into a single interactive 3D representation of donor or recipient anatomy. Such data integration may enhance procedural planning by allowing preoperative virtual interaction with patient skeletal, soft tissue, and neurovascular anatomy.

