WO2016126934A1 - Methods and systems for navigating surgical pathway


Info

Publication number
WO2016126934A1
Authority
WO
WIPO (PCT)
Prior art keywords
representation
interest
objects
implemented method
computer
Application number
PCT/US2016/016555
Other languages
French (fr)
Inventor
Kristen S. Moe
Blake Hannaford
Randall BLY
Nava AGHDASI
Yangming Li
Angelique M. BERENS
Original Assignee
University Of Washington
Application filed by University of Washington
Publication of WO2016126934A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107: Visualisation of planned trajectories or target regions

Definitions

  • Previous segmentation methods involve either manually drawing contours on a set of 2D slices, which as noted above is time consuming and subjective, or multi-atlas based segmentation, which requires the creation of atlases and parameter tuning.
  • the automated segmentation discussed above saves time and increases precision, and thus safety, for a subject.
  • the automated segmentation is executable via a non-transitory computer readable medium such as that described with reference to Figure 1, and the segmentations may be displayed on a display such as display 200 of Figure 2.
  • the computing device may receive an input to detect various critical structures, and responsively execute instructions to carry out any or all of the processes discussed with reference to Figures 3a - 6d to detect such structures.
  • segmenting an image of one or more critical structures on a representation is performed to aid in navigating a surgical pathway.
  • One or more simulated surgical pathways may be generated based on the positions of the defined objects of interest. Segmenting allows visualization and demonstration of critical structures on imaging.
  • an indication of a selected pathway may be received.
  • multiple pathways may be generated as options to reach a target in a surgical procedure.
  • the computer readable medium may execute instructions to display, via a display such as display 200, only the selected pathway on the representation, removing any non-selected pathways.
  • the pathway may be selected, in one example embodiment, to perform a virtual endoscopic surgery, or a rehearsal surgery, to allow the surgeon to confirm that access is appropriate through the selected pathway.
  • Such virtual surgeries also allow a surgeon to visualize a target from different perspectives and different portals until an optimal portal has been identified.
  • a surgeon may set a "lock" on the chosen pathway to limit instrument deviation from the pathway by use of alerts or shut down of the system should deviation occur.
  • the virtual surgery may also be helpful in teaching surgical trainees, allowing more in-depth interaction with surgical anatomy atlases as well as virtual surgical rehearsal.
  • Image-based navigation may then be selected to assist during a surgical procedure.
  • registration is performed using a registration device (e.g., an LED-based device) together with the obtained imaging (CT, MRI, US) of the subject.
  • a navigation-enabled surgical instrument is also registered, and is placed through the nose or mouth of the subject and advanced into the appropriate position in the body.
  • in one embodiment, an endotracheal tube uses sensor technology to pick up data and provide signals indicating the position of the surgical instrument in space.
  • the position of the surgical instrument or a component thereof may then be displayed on the imaging data used for navigation, such as shown in the display 200 of Figure 2.
  • should the surgical instrument deviate from the selected pathway, a notification or alert may be displayed or otherwise transmitted to the surgeon, or the power to the surgical instrument may automatically shut off.
  • the sensors on the surgical instrument may transmit signals indicating data such as 3D position coordinates (x,y,z), pitch, yaw, and roll for the surgical instrument.
  • the signals may be received and may be stored in data storage.
  • the signals may be transmitted by sensors on the surgical instrument, and received by the system, on a continuous and/or periodic basis. In one example embodiment, the signals are received eight times per second.
  • the data may be transmitted continuously to a display to provide real-time visualization of a surgical pathway as the pathway builds.
  • an average linear velocity of the surgical instrument for a predetermined time period may be calculated.
  • an average angular velocity of the surgical instrument for a predetermined time period may be transmitted to a display.
  • an average jerk may be calculated over the last several seconds and may be transmitted to a display.
  • the Holoborodko smooth noise-robust central difference method may be used to approximate velocity and acceleration from a position data set.
  • velocity, acceleration, and jerk may be computed from the position data by digital filtering or regression methods.
  • the frequency content of the received data may be calculated by algorithms such as time-windowing with Hamming or Hann windows and the discrete Fourier transform (DFT); a sketch of these kinematic and spectral computations appears after this list.
  • the resulting frequency content information may be displayed as a spectrum or a feature such as the energy content within a specified band of frequencies, calculated and displayed as a number, a bar graph, or a time-history graph, for example.
  • the types of data discussed above may be used to generate an image of the trajectory of the surgical instrument during a surgical procedure.
  • the image of the trajectory may be set as an overlay onto a medical image, such as a CT image, of the body portion of the subject.
  • Figure 8 depicts a display 800 comprising a series of views of a representation of a skull 810, in accordance with at least one embodiment. Two trajectories are overlaid onto the representation.
  • the first trajectory 820 represents the trajectory of a novice surgeon.
  • the second trajectory 830 represents the trajectory of an experienced surgeon. Generally, a greater volume and surface area are used by a novice surgeon to accomplish the same surgical task as an expert surgeon.
  • a surgical volume may be calculated from the data.
  • surgical volume is defined as the volume taken up by the surgical instrument in the body during a surgical procedure.
  • the surgical volume for a given surgical procedure may be approximated by the volume of the convex hull of the set P of recorded instrument coordinates; a sketch of this computation appears after this list.
  • the convex hull is the smallest polyhedron such that all elements of P are on or inside it.
  • the triangulation of the convex hull is calculated, and the areas of all the facet triangles are summed to determine the surface area.
  • for each recorded coordinate, the Euclidean distance of the coordinate to predefined critical structures may be determined and compared against a safe distance threshold value (e.g., 2 mm).
  • a binary representation may be applied such that, for coordinate data at time t, the value is set to one if the distance to a critical structure is less than the threshold and to zero if the distance is greater than the threshold.
  • the amount of time a surgical instrument spends near (i.e., at a distance less than the threshold) any critical structure may also be determined; a sketch of this computation appears after this list.
  • objective measurements of technical skill in endoscopic sinus and skull base surgeries may thereby be obtained, such as the closest distance to critical structures, the surgical volume, and the number of instrument passes through a portal (e.g., the nose, the mouth).
  • pathways may be compared based on volume and cross-sectional areas at specific anatomic landmarks (e.g., nasal aperture, maxillary ostium, basal lamella, and sphenoid ostium).
  • Example surgeries include endoscopic maxillary antrostomy, ethmoidectomy, and sphenoidotomy, among others.
  • Such surgeries may be used to treat any of the following: chronic sinusitis, nasal obstructions, lesions, resectable tumors (such as tumors in the head and neck, skull base, and intracranial compartment), cerebrospinal fluid leaks, vascular tumors, intracranial aneurysms, and infectious masses.
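As referenced in the bullets above, the kinematic and spectral quantities can be sketched in Python as follows. Plain central differences (np.gradient) stand in for the Holoborodko noise-robust differentiator named above, the 8 Hz sampling rate follows the example embodiment, and the function names are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def average_kinematics(positions, fs=8.0):
    """Average speed (mm/s) and jerk magnitude (mm/s^3) from sampled 3D
    tool positions; `positions` is an (N, 3) array in mm, `fs` in Hz."""
    positions = np.asarray(positions, float)
    dt = 1.0 / fs
    v = np.gradient(positions, dt, axis=0)   # velocity
    a = np.gradient(v, dt, axis=0)           # acceleration
    j = np.gradient(a, dt, axis=0)           # jerk
    return (np.linalg.norm(v, axis=1).mean(),
            np.linalg.norm(j, axis=1).mean())

def motion_spectrum(signal, fs=8.0):
    """Hamming-windowed DFT magnitude spectrum of a 1D motion signal."""
    signal = np.asarray(signal, float)
    windowed = signal * np.hamming(signal.size)
    return (np.fft.rfftfreq(signal.size, d=1.0 / fs),
            np.abs(np.fft.rfft(windowed)))
```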
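The surgical-volume approximation above is a convex hull computation; SciPy's Qhull wrapper triangulates the hull and exposes both its enclosed volume and the summed facet (surface) area described in the bullets. A minimal sketch:

```python
import numpy as np
from scipy.spatial import ConvexHull

def surgical_volume(positions):
    """Approximate surgical volume as the convex hull of the recorded
    tool coordinates P; `positions` is an (N, 3) array in mm."""
    hull = ConvexHull(np.asarray(positions, float))
    return hull.volume, hull.area   # enclosed volume, facet surface area
```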
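The proximity measure above is a nearest-neighbor query against points on the segmented critical structures, thresholded into the binary indicator described in the bullets. A sketch assuming the 2 mm threshold and 8 Hz sampling quoted above:

```python
import numpy as np
from scipy.spatial import cKDTree

def time_near_structures(positions, structure_points, threshold=2.0, fs=8.0):
    """Seconds the tool spends within `threshold` mm of any critical
    structure; `structure_points` is an (M, 3) array of surface points
    sampled from the segmented structures."""
    tree = cKDTree(np.asarray(structure_points, float))
    dist, _ = tree.query(np.asarray(positions, float))  # nearest structure
    near = dist < threshold                             # binary indicator
    return near.sum() / fs                              # samples -> seconds
```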

Abstract

Systems and methods for navigating a surgical pathway in a body of a subject are provided. A 3D representation of the body is provided, and positions of objects of interest relative to locations of a plurality of landmarks are defined on the representation. The objects of interest comprise at least one target and at least one entry portal. Another possible object of interest is a critical structure, the avoidance of which is desired when navigating a surgical pathway. The defined objects of interest are marked on the representation, and one or more simulated surgical pathways are generated on the representation for navigation of a tool. The surgical pathways are generated based on the positions of the defined objects of interest.

Description

Methods and Systems for Navigating a Surgical Pathway
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application Serial No. 62/112,005 filed on February 4, 2015, and to U.S. Provisional Patent Application Serial No. 62/213,738 filed on September 3, 2015, both of which are hereby incorporated by reference in their entirety.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
This invention was made with U.S. government support under Grant No. 5 R21 EB016122-02, awarded by the National Institutes of Health. The U.S. government has certain rights in the invention.
BACKGROUND
Endoscopic surgical procedures are increasingly performed for skull base surgeries as a less invasive alternative to open surgical procedures. However, complex sinus and skull base anatomy, the density of blood vessels and nerves in the skull, and a narrow surgical field of vision increase the difficulty of an endoscopic surgical procedure. Inadvertent contact with various critical structures in the skull can cause blindness, cerebrospinal fluid leaks, and even death.
To mitigate such risks, both pre-operative planning and image guided surgery (IGS) have been developed. Pre-operative planning involves determining outlines of organs and structures based on a large number of images, and manually drawing the individual contours on a set of two-dimensional (2D) slices of the images. This process is time consuming, labor intensive, and prone to errors as it is a subjective process. During a surgical procedure, IGS uses sensors to provide a display that shows the location of a surgical instrument relative to the images obtained before surgery.
After a surgical procedure has occurred, documentation of the surgical procedure generally takes the form of providing an "operative report" that the surgeon dictates within 48 hours of the operation. The operative report is prone to recall bias and is often the only record to communicate the steps of the surgical procedure.
SUMMARY
In accordance with the present invention, a system and a method are defined for navigating a surgical pathway in a body of a subject. In one embodiment, the computer-implemented method may comprise defining positions of objects of interest relative to locations of a plurality of landmarks on a representation of the body. The objects of interest comprise at least one target and at least one entry portal. The method further comprises marking the defined positions of the objects of interest on the representation, and generating one or more simulated surgical pathways on the representation for navigation of a tool based on the positions of the defined objects of interest.
Defining the positions of the objects of interest may comprise estimating a first search region based on spatial relationships between the plurality of landmarks and at least one critical structure, and generating a 2D slice from the representation that comprises the first search region. The at least one critical structure may comprise one or more of an orbit and nerve, in some example embodiments.
Defining the positions of the objects of interest may further comprise determining locations of a second plurality of landmarks on the 2D slice, estimating a second search region based on spatial relationships between the second plurality of landmarks, generating a pixelated image of the second search region, defining an outline of an object of interest within the pixelated image, generating a three-dimensional (3D) shape from the outline, and marking the 3D shape on the representation as one of the objects of interest. In another example embodiment, defining the positions of the objects of interest may further comprise determining intensity values for pixels within the pixelated image and classifying a region within the 2D slice comprising pixels having an intensity above a threshold value as being a critical structure. The critical structure may then be segmented on the representation.
In some example embodiments, the representation is a 3D anatomically accurate digital model created from a plurality of computerized tomography (CT) scans, a plurality of magnetic resonance imaging (MRI) scans, and a plurality of ultrasound (US) scans.
In some example embodiments, the method may further comprise receiving an indication of a selected pathway from the one or more pathways and responsively displaying the selected pathway on the representation.
In some example embodiments, the method may further comprise receiving signals indicating position and movement data for the tool with respect to the representation and overlaying an image representing the position and movement data onto the representation to show space occupied by the tool over time. The received signals may include data such as 3D coordinates, pitch, yaw, and/or roll data for the tool at periodic intervals. From the received signals, an average linear velocity of the tool for a predetermined time period, an average angular velocity of the tool for a predetermined time period, and/or an average jerk of the tool for a predetermined time period may be calculated. The calculated average linear and/or angular velocities, the calculated jerk, and the overlaid image may be displayed on the representation. The display may be updated in a continuous manner.
In some example embodiments, the data may be compared for a plurality of procedures. Common pathways traveled may be determined, as well as differences in the data between procedures. In another embodiment, a non-transitory computer readable medium is provided. The computer readable medium has stored therein instructions executable to cause a computing device to perform functions comprising detecting locations of a plurality of landmarks on a representation of a body of a subject, defining objects of interest based on the locations of the plurality of landmarks, the objects of interest comprising at least one critical structure, at least one target, and at least one entry portal, generating one or more pathways on the representation for a tool to travel based on the defined objects of interest, receiving an indication of a selection of a pathway from the one or more pathways, receiving signals indicating position and movement data for the tool with respect to the representation, and overlaying an image representing the position and movement data onto the representation.
In yet another embodiment, a computer-implemented method is provided. The method comprises detecting locations of a plurality of landmarks on a representation of a body of a subject, defining objects of interest based on the locations of the plurality of landmarks, the objects of interest comprising at least one critical structure, at least one target, and at least one entry portal, generating one or more pathways on the representation for a tool to travel based on the defined objects of interest, receiving an indication of approval of a pathway of the one or more pathways, receiving signals indicating position and movement data for the tool with respect to the representation, and overlaying an image representing the indicated position and movement data onto the representation.
These, as well as other aspects and advantages of the synergy achieved by combining the various aspects of this technology, will become apparent to those of ordinary skill in the art by reading the following detailed description, with reference, where appropriate, to the accompanying drawings.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 depicts a simplified flow diagram of an example method that may be carried out to navigate a surgical pathway in a body of a subject, in accordance with at least one embodiment;
Figure 2 depicts an example display of a system that navigates a surgical pathway, in accordance with at least one embodiment;
Figure 3a depicts a 2D axial view image of a skull, in accordance with at least one embodiment;
Figure 3b depicts a 2D sagittal view image of a skull, in accordance with at least one embodiment;
Figure 4a depicts a 3D representation of a skull of a subject, in accordance with at least one embodiment;
Figure 4b depicts a 3D representation of a portion of the skull of Figure 4a, in accordance with at least one embodiment;
Figure 4c depicts a representation illustrating a smaller region of interest overlaid on the slices used for the representation of Figure 4b, in accordance with at least one embodiment;
Figure 4d depicts a 2D view of the overlaid region of interest of Figure 4c, in accordance with at least one embodiment;
Figure 5a depicts a pixelated image of the region of interest of Figure 4d, in accordance with at least one embodiment;
Figure 5b depicts an outlined image generated from the pixelated image of Figure 5a, in accordance with at least one embodiment;
Figure 5c depicts a 3D graph of a sphere generated within a contoured region of Figure 5b, in accordance with at least one embodiment;
Figure 5d depicts an image showing the sphere of Figure 5c overlaid on a representation of the skull, in accordance with at least one embodiment;
Figure 6a depicts a region of interest used to detect an optic nerve, in accordance with at least one embodiment;
Figure 6b depicts a graph plotting pixel intensity over distance, in accordance with at least one embodiment;
Figure 6c depicts a detected orbit, optic nerve and muscles overlaid on the 3D representation of Figure 4c, in accordance with at least one embodiment;
Figure 6d depicts a 2D image displaying the detected orbit, optic nerve, and muscle, in accordance with at least one embodiment;
Figure 7a depicts a table illustrating accuracy results using Equation 1 for the nine subjects, in accordance with at least one embodiment;
Figure 7b depicts a table illustrating sensitivity results using Equation 2 for the nine subjects, in accordance with at least one embodiment;
Figure 7c depicts a table illustrating specificity results using Equation 3 for the nine subjects, in accordance with at least one embodiment; and
Figure 8 depicts a display comprising a series of views of a representation of a skull, in accordance with at least one embodiment.
DETAILED DESCRIPTION
In the following detailed description, reference is made to the accompanying figures, which form a part thereof. In the figures, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, figures, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein.
I. Overview
While planning for an endoscopic surgery, surgeons typically aim to minimize the disturbance of critical structures without sacrificing access to a target. The systems and methods discussed herein provide for automated generation of a simulated surgical pathway on a representation of a body to aid in pre-operative surgical planning. To generate a simulated surgical pathway, positions of objects of interest are defined and marked relative to locations of a plurality of landmarks on the representation.
The objects of interest include a target, which is a structure or feature designated to be accessed during the surgery. The objects of interest also include an entry portal providing access into the body of the subject, e.g., a nostril or a mouth. The objects of interest may also include a critical structure. A critical structure, as used herein, comprises any structure or feature designated to be avoided during the surgery. In one example embodiment, a critical structure may be a structure which, if inadvertently contacted, could result in damage to the structure that would have a deleterious effect on the patient. In some example embodiments, a critical structure may be an orbit, an optic nerve, a carotid artery, or a lesion. If damaged, critical structures may cause complications such as excessive bleeding, loss of vision, or blindness. Thus, segmentation of critical structures facilitates generating a successful surgical pathway. The methods and systems described herein provide an automatic segmentation process that is reliable and efficient for the segmentation of critical structures.
Once generated, a simulated surgical pathway may be overlaid on a representation of a body of a subject and then can be used by a surgeon or a surgical trainee to practice navigating a surgical procedure via the simulated surgical pathway. The simulated surgical pathway may thereafter be implemented into a navigation system to help guide a surgeon or surgical trainee during a surgical procedure. The navigation system may additionally receive and record position and movement information of a surgical instrument as the surgeon or surgical trainee manipulates the surgical instrument through the body. Thus, a detailed record of the surgery can be provided. Further, analysis of the motion of the surgical instrument may be performed and evaluations of surgical kinematics can be produced. Such data may be used to link surgical kinematics to patient outcomes.
II. Navigating a Surgical Pathway
Figure 1 depicts a simplified flow diagram of an example method 100 that may be carried out to navigate a surgical pathway in a body of a subject, in accordance with at least one embodiment. As referenced herein, a subject may be a human subject, and may be an adult human subject, an adolescent human subject, an infant human subject, or a newborn human subject.
For the method 100 and other processes and methods disclosed herein, the flowchart shows functionality and operation of one possible implementation of the present embodiments. In this regard, each block may represent a module, a segment, or a portion of program code, which includes one or more instructions executable by a processor for implementing specific logical functions or steps in the process. The program code may be stored on any type of computer readable medium, for example, data storage including one or more computer-readable storage media that may be read or accessed by the processor, and may be a fixed or removable hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The computer readable medium may include a physical and/or non-transitory computer readable medium, for example, such as computer-readable media that stores data for short periods of time like register memory, processor cache, and Random Access Memory (RAM). The computer readable medium may also include non-transitory media, such as secondary or persistent long term storage, like read only memory (ROM), optical or magnetic disks, or compact-disc read only memory (CD-ROM), for example. The computer readable medium may also be any other volatile or non-volatile storage system. The computer readable medium may be considered a computer readable storage medium, a tangible storage device, or other article of manufacture, for example. Alternatively, program code, instructions, and/or data structures may be transmitted via a communications network via a propagated signal on a propagation medium (e.g., electromagnetic wave(s), sound wave(s), etc.).

The method 100 allows for navigating a surgical pathway in a body of a subject. The method 100 may be used to generate a simulated surgical pathway for pre-operative visualization and preparation, guidance during an operation, and post-operative comparison and evaluation.
Initially, the method 100 includes defining positions of objects of interest relative to locations of a plurality of landmarks on a representation of a body of a subject, at block 110. The objects of interest comprise at least one target and at least one entry portal. The objects of interest may also comprise critical structures wherein contact is to be avoided.
The representation may be a 3D representation of a body of a subject. The 3D representation may be made from images obtained from a plurality of CT scans, a plurality of MRI scans, or a plurality of US scans, for example. The images may be obtained from other imaging modalities as well.
The plurality of landmarks include various features whose location is determined on the representation. For example, a landmark may be the frontal bone of a skull. As another example, a landmark may be a nasal bone of a skull. Other example landmarks are possible as well.
The method 100 then includes marking the defined positions of the objects of interest on the representation, at block 120. The defined positions may be overlaid on the 3D representation, in one example embodiment.
The method 100 then includes generating one or more simulated surgical pathways on the representation for navigation of a tool based on the positions of the defined objects of interest, at block 130. The simulated surgical pathways may be overlaid on the 3D representation, in one example embodiment. Thus, a simulated surgical pathway may be automatically and quickly generated by a computing device and used for pre-operative review and analysis, during an operation to guide a surgeon or surgical trainee to a target, and after an operation for analysis and comparison with an actual path taken by the surgeon or surgical trainee.
As noted above, a surgeon or trainee may want to locate and segment critical structures to prevent contact with the structures during a surgical procedure. An automated segmentation process may be carried out as part of the computer-implemented method for navigating a surgical pathway.
Figure 2 depicts an example display 200 of a system that navigates a surgical pathway, in accordance with at least one embodiment. The display 200 may be a flat panel display, a 3D display, or a holographic diffraction-based display, for example. Other displays may also be envisioned. The system may include a computer readable medium to execute instructions to carry out the method 100, as described with reference to Figure 1, for example.
As shown in display 200, there is a mode setting 210, wherein a user can choose to operate the system in "plan" mode, "virtual surgery" mode, "intra-op" mode, "post-op" mode, or "3D print" mode. The display 200 in Figure 2 shows the virtual surgery mode. The representation of the skull in display 200 may be manipulated via received inputs from a user. Thereby, the representation may be rotated and visualized, with surgical pathways and various parts of the skull added to or removed from the representation. 2D views 220 of the representation may be shown on the display as well.
As noted above, in some example embodiments, segmenting an image of one or more critical structures on a representation is performed to aid in navigating a surgical pathway, by demarcating structures to avoid during a surgical procedure. Segmentation first involves locating a critical structure. To locate critical structures in the body, the computing device limits the search space to image slices that are determined, with high probability, to contain the critical structures. To limit the search space, the computing device marks landmarks on a representation of the body. Then, from the representation, a new, smaller region of interest is determined to further pinpoint the location of the orbits. To detect the orbits and optic nerves, a pixelated image of the region of interest is isolated and then processed.
III. Example Segmentation Methods
In a recent study, an automated segmentation method was evaluated.
Initially, the orbits were detected using landmarks on the skull and shape detection, and then the orbits were used to detect the optic nerves. The segmentations were performed on nine subjects using MATLAB® software executing from a computer readable medium. Thereafter, the results were analyzed for accuracy, sensitivity, and specificity.
Figure 3a depicts a 2D axial view image 300 of a skull, in accordance with at least one embodiment. The image 300 is used to detect the nasal bone. The nasal bone was marked as the highest point 310 on the skull in the image 300.
Figure 3b depicts a 2D sagittal view image 350 of a skull, in accordance with at least one embodiment. The frontal bone is marked as the first maximal point 360 from the nasal bone 310 to the top of the skull 370. These two example landmarks aid in limiting the search area to detect the orbits and optic nerves. In some example embodiments, additional landmarks may be used to further limit the search area for critical structures. Additional landmarks may include, for example, the centroid and the left and right zygoma bone structures.
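The two landmark searches described for Figures 3a-b reduce to extremal-point queries on a thresholded bone mask. The following Python sketch (the study itself used MATLAB, which is not reproduced here) assumes a binary bone mask of a midline sagittal slice, with rows increasing inferiorly and columns increasing anteriorly; the function name and axis conventions are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def detect_landmarks(bone_mask):
    """Estimate the nasal-bone and frontal-bone landmarks on a binary
    sagittal bone mask (rows: superior -> inferior, cols: posterior ->
    anterior). Axis conventions are assumptions for illustration."""
    rows, cols = np.nonzero(bone_mask)

    # Nasal bone: the most anterior bone pixel (cf. point 310).
    i = np.argmax(cols)
    nasal = (rows[i], cols[i])

    # Anterior skull contour from the nasal bone toward the skull top:
    # for each row, the most anterior bone column in that row.
    contour = []
    for r in range(nasal[0], -1, -1):
        c = np.nonzero(bone_mask[r])[0]
        if c.size:
            contour.append((r, c.max()))

    # Frontal bone: the first local maximum of anterior extent along
    # that path (cf. point 360).
    for k in range(1, len(contour) - 1):
        if contour[k][1] >= contour[k - 1][1] and contour[k][1] > contour[k + 1][1]:
            return nasal, contour[k]
    return nasal, None
```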
Figure 4a depicts a 3D representation 400 of a skull of a subject, in accordance with at least one embodiment. The 3D representation in the present example was made from a plurality of CT scans. In other examples, however, the 3D representation may be made from a plurality of CT, MRI, or US images, or a combination thereof. Landmarks, such as the frontal bone and the nasal bone of the skull, are determined on the representation in the manner described with reference to Figures 3a-b.
Figure 4b depicts a 3D representation 410 of a portion of the skull of Figure 4a, in accordance with at least one embodiment. In the present embodiment, slices of the representation 400 are selected from the nose tip 412 to the frontal bone 414 to narrow the search to a region that, based on the landmark detection, contains with high probability the orbits, which are deemed to be critical structures for the present example.
From the representation 410 in Figure 4b, a new, smaller region of interest is determined to further pinpoint the location of the orbits. Figure 4c depicts a representation 420 illustrating a smaller region of interest 422 overlaid on the slices used for the representation 410 of Figure 4b, in accordance with at least one embodiment. As shown in Figure 4c, the region of interest 422 contains the right orbit. The same process can be used to obtain a region of interest for the left orbit.
Figure 4d depicts a 2D view 430 of the overlaid region of interest 422 of Figure 4c, in accordance with at least one embodiment.
To detect the orbits and optic nerves, a pixelated image of the region of interest is isolated and then processed. Figure 5a depicts a pixelated image 500 of the region of interest 422 from Figure 4d that has been isolated from the selected slices of Figure 4c, in accordance with at least one embodiment. Orbits are detected from the pixelated image.
Figure 5b depicts an outlined image 510 generated from the pixelated image 500 of Figure 5a, in accordance with at least one embodiment. Canny edge detection was applied to the image 500 to specify an edge of the orbit, and a Hough circle transform was used to detect the circular object representing the orbit. The result of the Canny edge detection and Hough circle transform is a contoured line or circle 512 in image 510 depicting the estimated circumference of an orbit. This process was repeated for a number of slices.
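For illustration, this edge-and-circle step can be sketched with scikit-image's Canny detector and circular Hough transform; the radius range and sigma below are assumed values, not parameters from the study.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_orbit_circle(roi_slice, radii=np.arange(8, 25)):
    """Estimate the circular outline of the orbit on one 2D slice of the
    region of interest; `radii` (in pixels) is an assumed search range."""
    edges = canny(roi_slice, sigma=2.0)              # orbit edge map
    accum = hough_circle(edges, radii)               # one vote plane per radius
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]                        # center col, center row, radius
```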
To improve the detected circles, the centers of the circles in all the slices were compared and a sphere was fitted, thereby ignoring semi-circles or circles whose center was far from the mean. Figure 5c depicts a 3D graph 520 of a sphere 522 generated from a contoured region of Figure 5b, in accordance with at least one embodiment. The sphere 522 is generated to fit within the circles, such as circle 512, from various slices of the representation 400 depicted in Figure 4a. The sphere 522 is created to demonstrate the volume of a critical structure to be worked around during a surgical procedure.
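One way to realize the sphere fit is an algebraic least-squares solve over points sampled on the retained circles, after dropping circles whose centers stray from the mean. The sketch below is an assumed formulation (the disclosure does not specify the fitting algorithm); the outlier cutoff and function names are illustrative.

```python
import numpy as np

def filter_circles(circles, max_dev=5.0):
    """Drop circles (cx, cy, z, r) whose center lies far from the mean
    center, per the outlier rejection described above (cutoff assumed)."""
    centers = np.array([c[:3] for c in circles])
    keep = np.linalg.norm(centers - centers.mean(axis=0), axis=1) < max_dev
    return [c for c, k in zip(circles, keep) if k]

def fit_sphere(points: np.ndarray):
    """Algebraic sphere fit: solve 2*c.p + k = |p|^2, with k = r^2 - |c|^2,
    for an (N, 3) array of points sampled on the kept circles."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    return center, float(np.sqrt(k + center @ center))
```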
In some example embodiments, the methods discussed for Figures 3a-5c may be used to detect a target or other structure in the skull, instead of a critical structure.
Figure 5d depicts an image 530 showing the sphere 522 of Figure 5c overlaid on a representation of the skull 524, in accordance with at least one embodiment. To detect another critical structure, in the present example an optic nerve, the center of the detected orbit was used as an additional landmark to redefine a region of interest. Figure 6a depicts a region of interest 600 used to detect an optic nerve, in accordance with at least one embodiment. A line 610 is drawn along the length of the region of interest 600, and the intensities of pixels are determined and plotted based on distance from the line 610. Additional lines 612 indicate the distance from the landmark.
Figure 6b depicts a graph 620 plotting pixel intensity over distance along the estimated location of the landmark (here, the optic nerve), in accordance with at least one embodiment. Specific intensities are known to have a high probability of accurately identifying certain structures. In the present example, pixels having intensities within the range of 65 to 70 along the length of the estimated landmark are selected in order to more precisely define the course of the optic nerve.
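A minimal sketch of this intensity-profile step follows, assuming a 2D grayscale slice and line endpoints in (row, col) pixel coordinates; the function names are hypothetical, and only the 65-70 intensity band comes from the example above.

```python
import numpy as np

def line_profile(img: np.ndarray, p0, p1, n=200):
    """Sample n pixel intensities along the straight line from p0 to p1."""
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return img[rows, cols]

def in_nerve_band(profile: np.ndarray, lo=65, hi=70):
    """Boolean mask of samples within the 65-70 intensity band used in
    this example to trace the course of the optic nerve."""
    return (profile >= lo) & (profile <= hi)
```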
Figure 6c depicts the resulting detected orbit 652, optic nerve 654, and muscles 656, all overlaid on a 3D representation 650 of the skull, in accordance with at least one embodiment.
Figure 6d depicts a 2D image 660 of the skull displaying the detected orbit 652, optic nerve 654, and muscles 656, in accordance with at least one embodiment. The 2D image is a slice of the 3D representation 650.
As noted above, to test the accuracy of the segmentation process, the example study used nine clinically acquired CT images of cadavers and patients, in anonymized form, with differing voxel sizes and slice thicknesses.
Accuracy of a segmentation technique refers to the degree to which the segmentation results agree with the true segmentation, which, in the present study, was a manual segmentation. Sensitivity is a measure of how well the algorithm identifies orbit and optic nerve pixels (true positives). Specificity is a measure of how well pixels are classified as truly not orbits and optic nerves (true negatives).
Accuracy combines the sensitivity and specificity metrics within one measurement and quantifies the percentage of correctly classified pixels. The equations used to calculate accuracy, sensitivity, and specificity are as follows:

Accuracy = (TP + TN) / (TP + FP + FN + TN) * 100   [1]

Sensitivity = TP / (TP + FN) * 100   [2]

Specificity = TN / (TN + FP) * 100   [3]

where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
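Expressed as code, Equations 1-3 can be computed directly from boolean masks of the automated and manual segmentations. The study's implementation was in MATLAB®; the Python sketch below is illustrative only, and the function name is hypothetical.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Accuracy, sensitivity, and specificity in percent (Equations 1-3)
    from boolean pixel masks: pred = automated result, truth = manual."""
    tp = int(np.sum(pred & truth))    # true positives
    tn = int(np.sum(~pred & ~truth))  # true negatives
    fp = int(np.sum(pred & ~truth))   # false positives
    fn = int(np.sum(~pred & truth))   # false negatives
    accuracy = (tp + tn) / (tp + fp + fn + tn) * 100
    sensitivity = tp / (tp + fn) * 100
    specificity = tn / (tn + fp) * 100
    return accuracy, sensitivity, specificity
```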
Figure 7a depicts a table 700 illustrating accuracy results using Equation 1 for the nine subjects. Figure 7b depicts a table 710 illustrating sensitivity results using Equation 2 for the nine subjects. Figure 7c depicts a table 720 illustrating specificity results using Equation 3 for the nine subjects.
While manual segmentation generally takes about 20-25 minutes to complete, the above-described automated segmentation process of Figures 3a-7c took only a few minutes and does not involve user interaction. As shown in table 700 of Figure 7a, the accuracy of detection using the above-described automated segmentation process was above 99% for each of the nine subjects.
Previous segmentation methods involve either manually drawing contours on a set of 2D slices, which as noted above is time consuming and subjective, or multi-atlas based segmentation, which requires creation of atlases and parameter tuning.
The automated segmentation discussed above saves time and increases precision, and thus safety, for the subject. The automated segmentation is executable via a non-transitory computer readable medium such as that described with reference to Figure 1, and the segmentations may be displayed on a display such as display 200 of Figure 2. In operation, the computer readable medium may receive an input to detect various critical structures, and responsively execute instructions to carry out any or all of the processes discussed with reference to Figures 3a-6d to detect such structures.
As discussed above, in some example embodiments, segmenting an image of one or more critical structures on a representation is performed to aid in navigating a surgical pathway. One or more simulated surgical pathways may be generated based on the positions of the defined objects of interest. Segmenting allows visualization and demonstration of critical structures on imaging.
IV. Navigation-Assisted Surgical Procedures and Post-Procedure Analysis
Once one or more simulated surgical pathways are generated, an indication of a selected pathway may be received. In one example embodiment, multiple pathways may be generated as options to reach a target in a surgical procedure. Responsive to receipt of an indication of a selected pathway, the computer readable medium may execute instructions to display, via a display such as display 200, only the selected pathway on the representation, removing any non-selected pathways. The pathway may be selected, in one example embodiment, to perform a virtual endoscopic surgery, or a rehearsal surgery, to allow the surgeon to confirm that access is appropriate through the selected pathway. Such virtual surgeries also allow a surgeon to visualize a target from different perspectives and different portals until an optimal portal has been identified. If a surgeon determines that a selected pathway is the pathway desired for an actual surgery, the surgeon may set a "lock" on the chosen pathway to limit instrument deviation from the pathway by use of alerts or a shutdown of the system should deviation occur. The virtual surgery may also be helpful in teaching surgical trainees, allowing for more in-depth interaction with surgical anatomy atlases as well as virtual surgical rehearsal.
Image-based navigation may then be selected to assist during a surgical procedure. For the image-based navigation, a registration device (e.g., an LED) is placed on the subject. Obtained imaging (CT, MRI, US) of the subject is used and registration is performed. A navigation-enabled surgical instrument is also registered, and is placed through the nose or mouth of the subject and advanced into the appropriate position in the body. The instrument uses sensor technology to pick up data and provide signals indicating the position of the surgical instrument in space. The position of the surgical instrument or a component thereof (e.g., the tip of the instrument) may then be displayed on the imaging data used for navigation, such as shown in the display 200 of Figure 2.
During the surgical procedure, if a surgical instrument deviates outside of the selected pathway by a pre-determined number of indications, a notification or alert may be displayed or otherwise transmitted to the surgeon, or the power to the surgical instrument may automatically shut off. Similarly, if the surgical instrument passes within a pre-determined distance from a critical structure, a notification or alert may be displayed or otherwise transmitted to the surgeon, or the power to the surgical instrument may automatically shut off.
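The deviation and proximity rules described above might be implemented as a small monitor object that consumes tracked tip positions. The sketch below is an assumed realization; the distance callables, thresholds, and class name are all hypothetical, and an actual system would also wire the alerts to the display or instrument power control.

```python
class SafetyMonitor:
    """Counts consecutive out-of-pathway readings and checks proximity
    to critical structures; returns alert strings for the display."""

    def __init__(self, pathway_dist, critical_dist,
                 max_deviations=3, safe_mm=2.0):
        self.pathway_dist = pathway_dist    # mm from tip to locked pathway
        self.critical_dist = critical_dist  # mm to nearest critical structure
        self.max_deviations = max_deviations
        self.safe_mm = safe_mm
        self.deviations = 0

    def check(self, tip_position):
        alerts = []
        if self.pathway_dist(tip_position) > 0.0:  # outside the pathway
            self.deviations += 1
        else:
            self.deviations = 0
        if self.deviations >= self.max_deviations:
            alerts.append("instrument deviated from selected pathway")
        if self.critical_dist(tip_position) < self.safe_mm:
            alerts.append("instrument near critical structure")
        return alerts
```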
During a surgical procedure, the sensors on the surgical instrument may transmit signals indicating data such as 3D position coordinates (x,y,z), pitch, yaw, and roll for the surgical instrument. The signals may be received and may be stored in data storage. In some example embodiments, the signals may be transmitted by sensors on the surgical instrument, and received by the system, on a continuous and/or periodic basis. In one example embodiment, the signals are received eight times per second. In some example embodiments, the data may be transmitted continuously to a display to provide real-time visualization of a surgical pathway as the pathway builds.
From the received signals, an average linear velocity of the surgical instrument for a predetermined time period, an average angular velocity of the surgical instrument for a predetermined time period, and/or an average jerk of the surgical instrument for a predetermined time period may be calculated. In one example embodiment, an average velocity may be calculated over the last 10 seconds, and may be transmitted to a display. In yet another example embodiment, an average jerk may be calculated over a last number of seconds and may be transmitted to a display.
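A simple finite-difference version of these trailing-window averages is sketched below, assuming positions arrive at the eight-samples-per-second rate mentioned above; the function name is hypothetical, and more noise-robust differentiators are discussed next.

```python
import numpy as np

FS = 8.0  # samples per second, per the example above

def trailing_averages(xyz: np.ndarray, window_s: float = 10.0):
    """Average linear speed and average jerk magnitude over the trailing
    window, from an (N, 3) array of tip positions sampled at FS Hz."""
    p = xyz[-int(window_s * FS):]
    v = np.gradient(p, 1.0 / FS, axis=0)  # velocity
    a = np.gradient(v, 1.0 / FS, axis=0)  # acceleration
    j = np.gradient(a, 1.0 / FS, axis=0)  # jerk
    return (np.linalg.norm(v, axis=1).mean(),
            np.linalg.norm(j, axis=1).mean())
```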
In one example embodiment, the Holoborodko smooth noise-robust central difference method may be used to approximate velocity and acceleration from a position data set. In another example embodiment, velocity, acceleration, and jerk may be computed from the position data by digital filtering or regression methods. Additionally, the frequency content of the received data may be calculated by algorithms such as time-windowing with Hann and Hamming windows and the discrete Fourier transform (DFT) algorithm. The resulting frequency content information may be displayed as a spectrum, or a feature such as the energy content within a specified band of frequencies may be calculated and displayed as a number, a bar graph, or a time-history graph, for example.

The types of data discussed above may be used to generate an image of the trajectory of the surgical instrument during a surgical procedure. The image of the trajectory may be set as an overlay onto a medical image, such as a CT image, of the body portion of the subject. Figure 8 depicts a display 800 comprising a series of views of a representation of a skull 810, in accordance with at least one embodiment. Two trajectories are overlaid onto the representation. The first trajectory 820 represents the trajectory of a novice surgeon. The second trajectory 830 represents the trajectory of an experienced surgeon. Generally, a greater volume and surface area are used by a novice surgeon to accomplish the same surgical task as an expert surgeon.
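Returning to the frequency-content feature mentioned above, the following sketch computes the energy within a chosen band from a Hann-windowed DFT; the window choice, band limits, and function name are assumptions for illustration.

```python
import numpy as np

def band_energy(signal: np.ndarray, fs: float, f_lo: float, f_hi: float):
    """Energy of `signal` within [f_lo, f_hi] Hz using a Hann-windowed
    DFT, e.g. to summarize tremor content of an instrument-speed trace."""
    win = np.hanning(len(signal))
    power = np.abs(np.fft.rfft(signal * win)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(power[band].sum())
```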
A surgical volume may be calculated from the data. As used herein, "surgical volume" is defined as the volume taken up by the surgical instrument in the body during a surgical procedure. The surgical volume for a given surgical procedure may be approximated by the volume of the convex hull of the recorded position data. Given a 3D data set, P, the convex hull is the smallest polyhedron such that all elements of P are on or inside the polyhedron. To calculate the surface area, a triangulation of the convex hull is computed and the areas of all the facet triangles are summed.
For each coordinate data point, the Euclidean distance of the coordinate to predefined critical structures may be determined. A safe distance threshold value (e.g., 2 mm) to the critical structure is defined. A binary representation may be applied such that, for coordinate data at time t, if the distance of the data point from a critical structure is less than the threshold, the value is set to one, and if the distance is greater than the threshold, the value is set to zero. The amount of time a surgical instrument spends near (i.e., at a distance less than the threshold) any critical structure may also be determined. Thus, discrete pathways taken by surgeons, the number of instrument passes through a portal (e.g., the nose, the mouth), the closest distance to critical structures, and surgical volume for endoscopic sinus and skull based surgeries may be recorded, displayed, analyzed, and compared. Objective measurements of technical skill in endoscopic sinus and skull based surgeries may thereby be obtained. For example, pathways may be compared based on volume and cross sectional areas at specific anatomic landmarks (e.g., nasal aperture, maxillary ostium, basal lamella, and sphenoid ostium).
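The binary proximity coding and near-structure time might be computed as in the sketch below, assuming the critical structure is represented by a point cloud of surface samples and positions arrive at a known sampling rate; the names and the brute-force distance computation are illustrative.

```python
import numpy as np

def time_near_structure(track: np.ndarray, structure: np.ndarray,
                        threshold_mm: float = 2.0, fs: float = 8.0):
    """Seconds the tip spends within threshold_mm of the structure.
    track: (N, 3) tip positions; structure: (M, 3) surface points."""
    d = np.linalg.norm(track[:, None, :] - structure[None, :, :], axis=2)
    near = d.min(axis=1) < threshold_mm  # binary coding: 1 near, 0 far
    return float(near.sum()) / fs
```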
Analyzing such data can be helpful to improve surgical skill, which correlates with decreased postoperative complications such as emergency room visits, revision surgery, and patient injury or even death.
The systems and methods described above may be used to prepare for endoscopic and other minimally invasive surgeries. Example surgeries include endoscopic maxillary antrostomy, ethmoidectomy, and sphenoidotomy, among others. Such surgeries may be used to treat any of the following: chronic sinusitis, nasal obstructions, lesions, cerebrospinal fluid leaks, infectious masses, and resectable tumors, such as tumors of the head and neck, skull base, and intracranial compartment, vascular tumors, and intracranial aneurysms.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims, along with the full scope of equivalents to which such claims are entitled. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.

CLAIMS

What is claimed is:
1. A computer-implemented method for navigating a surgical pathway in a body of a subject comprising:
defining positions of objects of interest relative to locations of a plurality of landmarks on a representation of a body of a subject, the objects of interest comprising at least one target and at least one entry portal;
marking the defined positions of the objects of interest on the representation;
and
generating one or more simulated surgical pathways on the representation for navigation of a tool based on the positions of the defined objects of interest.
2. The computer-implemented method of claim 1, wherein the objects of interest further comprise at least one critical structure and wherein defining the positions of the objects of interest comprises:
estimating a first search region based on spatial relationships between the plurality of landmarks and the at least one critical structure; and
generating a two-dimensional (2D) slice from the representation that comprises the first search region;
wherein the at least one critical structure comprises one or more of an orbit and nerve.
3. The computer-implemented method of claim 2, wherein defining the positions of the objects of interest further comprises: determining locations of a second plurality of landmarks on the 2D slice; estimating a second search region based on spatial relationships between the second plurality of landmarks;
generating a pixelated image of the second search region;
defining an outline of an object of interest within the pixelated image; generating a three-dimensional (3D) shape from the outline; and marking the 3D shape on the representation as one of the objects of interest.
4. The computer-implemented method of claim 2, wherein defining the positions of the objects of interest further comprises:
determining locations of a second plurality of landmarks on the 2D slice; estimating a second search region based on spatial relationships between the second plurality of landmarks and the at least one critical structure;
generating a pixelated image of the second search region;
determining intensity values for pixels within the pixelated image; and classifying a region within the 2D slice comprising pixels having an intensity above a threshold value as being a critical structure.
5. The computer-implemented method of claim 4, further comprising:
segmenting an image of the critical structure on the representation; and displaying the segment on the representation.
6. The computer-implemented method of any of claims 1-5, wherein the plurality of landmarks comprises a nasal bone and a frontal bone.
7. The computer-implemented method of claim 6, wherein the nasal bone is defined as the highest point in an axial view of the representation and the frontal bone is defined as the first maximal point from a nasal point to the top of the skull in a sagittal view of the representation.
8. The computer-implemented method of any of claims 1-7, wherein the representation comprises a 3D representation of one or more of the following: a plurality of computed tomography (CT) scans, a plurality of magnetic resonance imaging (MRI) scans, and a plurality of ultrasound (US) scans.
9. The computer-implemented method of any of claims 1-8, further comprising:
receiving an indication of a selected pathway from the one or more pathways; and
responsively displaying the selected pathway on the representation.
10. The computer-implemented method of any of claims 1-9, further comprising:
receiving signals indicating position and movement data for the tool with respect to the representation; and
overlaying an image representing the position and movement data onto the representation to show space occupied by the tool over time.
11. The computer-implemented method of claim 10, further comprising: calculating an average linear velocity of the tool for a predetermined time period;
calculating an average angular velocity of the tool for a predetermined time period; and/or
calculating an average jerk of the tool for a predetermined time period; and displaying the calculated average linear and/or angular velocities, the calculated jerk, and the overlaid image on the representation.
12. The computer-implemented method of claim 11, further comprising:
updating the display in a continuous manner.
13. The computer-implemented method of any of claims 10-12, wherein receiving signals indicating position and movement data for the tool with respect to the representation comprises:
receiving 3D coordinates, pitch, yaw, and/or roll data for the tool at periodic intervals; and
recording the 3D coordinates, pitch, yaw, and/or roll data, velocity, acceleration, and tremor.
14. The computer-implemented method of any of claims 10-13, further comprising:
calculating one or more of the following from the received data: frequency content, number of passes of the tool through an entry portal, surgical volume, surgical surface area, and closest distance of the tool to the at least one critical structure.
15. The computer-implemented method of any of claims 10-13, further comprising:
displaying the recorded data for a selected portion of a procedure, or for an entire procedure.
16. The computer-implemented method of any of claims 10-15, further comprising:
comparing the data for a plurality of procedures;
determining common pathways traveled for the plurality of procedures; and
identifying differences in the data for the plurality of procedures.
17. The computer-implemented method of any of claims 15-16, wherein the procedure comprises one of the following: a maxillary antrostomy, an ethmoidectomy, a sphenoidotomy, or a transsphenoidal approach to the body.
18. The computer-implemented method of any of claims 15-16, wherein the procedure is a simulated procedure performed on a simulated body.
19. The computer-implemented method of claim 2, further comprising:
issuing an alert if the tool is within a pre-determined distance from a critical structure.
20. A non-transitory computer readable medium having stored therein instructions executable to cause a computing device to perform functions comprising:
detecting locations of a plurality of landmarks on a representation of a body of a subject;
defining objects of interest based on the locations of the plurality of landmarks, the objects of interest comprising at least one critical structure, at least one target, and at least one entry portal;
generating one or more pathways on the representation for a tool to travel based on the defined objects of interest;
receiving an indication of a selection of a pathway from the one or more pathways;
receiving signals indicating position and movement data for the tool with respect to the representation; and
overlaying an image representing the position and movement data onto the representation.
21. The non-transitory computer readable medium of claim 20, wherein the representation is generated from one of a CT scan, an MRI scan, a US scan, or a holographic image of the body.
22. The non-transitory computer readable medium of any of claims 20-21, the functions further comprising:
comparing elements of the data for a plurality of procedures;
determining common pathways traveled for the plurality of procedures; and
identifying differences in the data for the plurality of procedures.
23. The non-transitory computer readable medium of any of claims 20-21, the functions further comprising:
determining a cross-sectional area of the overall movement of the tool.
24. The non-transitory computer readable medium of claim 20, the functions for defining the objects of interest comprising:
estimating a first search region based on spatial relationships between the plurality of landmarks and orbits and nerves; and
generating a 2D slice from the representation that comprises the first search region.
25. The non-transitory computer readable medium of claim 24, the functions for defining the objects of interest further comprising:
determining locations of a second plurality of landmarks on the 2D slice; estimating a second search region based on spatial relationships between the second plurality of landmarks;
generating a pixelated image of the second search region;
defining an outline of an object of interest within the pixelated image; generating a 3D shape from the outline; and
marking the 3D shape on the representation as one of the objects of interest.
26. A computer-implemented method comprising:
detecting locations of a plurality of landmarks on a representation of a body of a subject;
defining objects of interest based on the locations of the plurality of landmarks, the objects of interest comprising at least one critical structure, at least one target, and at least one entry portal;
generating one or more pathways on the representation for a tool to travel based on the defined objects of interest;
receiving an indication of approval of a pathway of the one or more pathways;
receiving signals indicating position and movement data for the tool with respect to the representation; and
overlaying an image representing the indicated position and movement data onto the representation.
PCT/US2016/016555 2015-02-04 2016-02-04 Methods and systems for navigating surgical pathway WO2016126934A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201562112005P 2015-02-04 2015-02-04
US62/112,005 2015-02-04
US201562213738P 2015-09-03 2015-09-03
US62/213,738 2015-09-03

Publications (1)

Publication Number Publication Date
WO2016126934A1 (en) 2016-08-11

Family

ID=56564688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/016555 WO2016126934A1 (en) 2015-02-04 2016-02-04 Methods and systems for navigating surgical pathway

Country Status (1)

Country Link
WO (1) WO2016126934A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130158346A1 (en) * 2003-12-12 2013-06-20 University Of Washington Catheterscope 3D Guidance and Interface System
US20060004286A1 (en) * 2004-04-21 2006-01-05 Acclarent, Inc. Methods and devices for performing procedures within the ear, nose, throat and paranasal sinuses
US20100312096A1 (en) * 2009-06-08 2010-12-09 Michael Guttman Mri-guided interventional systems that can track and generate dynamic visualizations of flexible intrabody devices in near real time

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113008233A (en) * 2021-02-01 2021-06-22 北京中医药大学第三附属医院 Surgical instrument navigation method, device and system and storage medium
WO2022223042A1 (en) * 2021-04-23 2022-10-27 武汉联影智融医疗科技有限公司 Surgical path processing system, method, apparatus and device, and storage medium
CN113693721A (en) * 2021-07-14 2021-11-26 北京理工大学 Multi-condition constrained path planning method and device
CN116269155A (en) * 2023-03-22 2023-06-23 新光维医疗科技(苏州)股份有限公司 Image diagnosis method, image diagnosis device, and image diagnosis program
CN116269155B (en) * 2023-03-22 2024-03-22 新光维医疗科技(苏州)股份有限公司 Image diagnosis method, image diagnosis device, and image diagnosis program


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16747265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16747265

Country of ref document: EP

Kind code of ref document: A1