WO2008076079A1 - Methods and apparatuses for cursor control in image guided surgery - Google Patents

Methods and apparatuses for cursor control in image guided surgery

Info

Publication number
WO2008076079A1
Authority
WO
WIPO (PCT)
Prior art keywords
probe
virtual screen
screen
virtual
cursor
Prior art date
Application number
PCT/SG2007/000314
Other languages
French (fr)
Other versions
WO2008076079A8 (en)
Inventor
Xiaohong Liang
Original Assignee
Bracco Imaging S.P.A.
Priority date
Filing date
Publication date
Application filed by Bracco Imaging S.P.A. filed Critical Bracco Imaging S.P.A.
Publication of WO2008076079A1 publication Critical patent/WO2008076079A1/en
Publication of WO2008076079A8 publication Critical patent/WO2008076079A8/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00 Surgical instruments, devices or methods, e.g. tourniquets
    • A61B2017/00017 Electrical control of surgical instruments
    • A61B2017/00207 Electrical control of surgical instruments with hand gesture control or hand gesture recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/102 Modelling of surgical devices, implants or prosthesis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2055 Optical tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2074 Interface software
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/25 User interfaces for surgical systems
    • A61B2034/252 User interfaces for surgical systems indicating steps of a surgical procedure
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365 Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/25 User interfaces for surgical systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361 Image-producing devices, e.g. surgical cameras

Definitions

  • the invention relates to an apparatus and method for displaying a cursor on a display screen based on the position of a probe in relation to a virtual screen.
  • the invention has particular, but not exclusive, application in image-guided surgery.
  • Pre-operative planning for surgical procedures enhances the ease of navigation in a complex three-dimensional surgical space, since the complex anatomy may be obscured during the operative procedure due to lack of direct visibility.
  • Imaging for surgery planning can be performed using computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), magnetic resonance venography (MRV), functional MRI and CTA, positron emission tomography (PET), and/or single photon emission computed tomography (SPECT).
  • Some surgical planning environments allow the surgeon to interact with the 3D image. Using a stereoscopic imaging technology, depth information can be generated to enable 3D visualization to facilitate surgical planning.
  • a surgical planning environment may have a virtual control panel to control virtual tools to be used to perform operations and manipulations on objects displayed in 3D.
  • Image guidance systems have been widely adopted in neurosurgery and have been proven to increase the accuracy and reduce the invasiveness of a wide range of surgical procedures.
  • Typical image guided surgical systems are based on a series of images constructed from pre-operative imaging data that is gathered before the surgical operation, such as Magnetic Resonance Imaging (MRI) images, Computed Tomography (CT) images, X-ray images, ultrasound images and/or the like.
  • the pre-operative images are typically registered in relation with the patient in the physical world by means of an optical tracking system to provide guidance during the surgical operation.
  • Imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and three-dimensional Ultrasonography (3DUS), are currently available to collect volumetric internal images of a patient without a single incision.
  • the complex anatomical structures of a patient can be visualized and examined; critical structures can be identified, segmented and located; and a surgical approach can be planned.
  • the scanned images and surgical plan can be mapped to the actual patient on the operating table and a surgical navigation system can be used to guide the surgeon during the surgery.
  • U.S. Patent No. 5383454 discloses a system for indicating the position of a tip of a probe within an object on cross-sectional, scanned images of the object.
  • the position of the tip of the probe can be detected and translated to the coordinate system of cross-sectional images.
  • the cross-sectional image closest to the measured position of the tip of the probe can be selected; and a cursor representing the position of the tip of the probe can be displayed on the selected image.
  • U.S. Patent No. 6167296 describes a system for tracking the position of a pointer in real time by a position tracking system. Scanned image data of a patient is utilized to dynamically display 3-dimensional perspective images in real time of the patient's anatomy from the viewpoint of the pointer.
  • WO 02/100284 A1 discloses a guide system in which a virtual image and a real image are overlaid together to provide visualization of augmented reality.
  • the virtual image is generated by a computer based on CT and/or MRI images which are co-registered and displayed as a multi-modal stereoscopic object and manipulated in a virtual reality environment to identify relevant surgical structures for display as 3D objects.
  • the right and left eye projections of the stereo image generated by the computer are displayed on the right and left LCD screens of a head mounted display.
  • the right and left LCD screens are partially transparent such that the real world seen through the right and left LCD screens of the head mounted display is overlaid with the computer generated stereo image.
  • the stereoscopic video output of a microscope is combined, through the use of a video mixer, with the stereoscopic, segmented 3D imaging data of the computer for display in a head mounted display.
  • the crop plane used by the computer to generate the virtual image can be coupled to the focus plane of the microscope.
  • changing the focus value of the microscope can be used to slice through the virtual 3D model to see details at different planes.
  • WO 2005/000139 A1 discloses a surgical navigation imaging system, in which a micro-camera can be provided in a hand-held navigation probe.
  • Real time images of an operative scene from the viewpoint of the micro-camera can be overlaid with computer generated 3D graphics, which depicts structures of interest from the viewpoint of the micro-camera.
  • the computer generated 3D graphics are based on pre-operative scans. Depth perception can be enhanced through varying transparent settings of the camera image and the superimposed 3D graphics.
  • a virtual interface can be displayed adjacent to the combined image to facilitate user interaction.
  • the practitioner may need to control the content displayed on the monitor of the navigation system for optimal navigation.
  • a 2D user interface may be displayed for the adjustment of the controls.
  • U.S. Patent No. 5230623 describes an operating pointer used with interactive computer graphics.
  • the position of an operating pointer or arm apparatus is detected and read out on a computer and associated graphics display; and the pointer can be changed from its pointer function to a "3D mouse" so that it can alternately be used to control the functionality of the computer, such as calibration and display features.
  • One technique includes defining a virtual screen in a volume of a 3D tracking system, determining a position on the virtual screen according to a location of a probe in the volume as tracked by the 3D tracking system, and displaying a cursor on a display screen based on the position on the virtual screen, according to a mapping between the virtual screen and the display screen.
  • the virtual screen is defined to be substantially parallel to a predefined vector in the volume of the 3D tracking system.
  • a tip of the probe is located in a central region of the virtual screen when the virtual screen is defined.
  • the present disclosure includes methods and apparatuses which perform these methods, including processing systems which perform these methods, and computer readable media which when executed on processing systems cause the systems to perform these methods.
  • Figure 1 illustrates an example screen of a surgical navigation system in a navigation mode, according to one embodiment.
  • Figure 2 illustrates an example screen of a surgical system in a control mode showing tools to adjust parameters for the display of 3D images of augmented reality with a cursor, according to one embodiment.
  • Figure 3 illustrates a coordinate system of a display screen within which a cursor is displayed, according to one embodiment.
  • Figure 4 illustrates the spatial relation between a virtual screen and a probe when the virtual screen is created, according to one embodiment.
  • Figure 5 illustrates a position of a current virtual screen based on a current probe position, according to one embodiment.
  • Figure 6 illustrates a method to move a virtual screen along the shooting line of the probe, according to one embodiment.
  • Figures 7-9 illustrate a method to define a vector for defining the virtual screen, according to one embodiment.
  • Figure 10 illustrates a method to test cursor control, according to one embodiment.
  • Figure 11 illustrates a coordinate system of a virtual screen from which a probe location is to be mapped to the location of the cursor in the display screen, according to one embodiment.
  • Figure 12 illustrates an intersection point on a virtual screen based on the position of a probe, according to one embodiment.
  • Figure 13 illustrates a system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
  • Figure 14 illustrates another system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
  • Figure 15 is a block diagram of a data processing system used in some embodiments of cursor control.
  • Figure 16 is a block diagram illustrating an architecture of an apparatus for implementing one or more of the disclosed techniques.
  • the present disclosure includes methods and apparatuses for cursor control in image guided surgery.
  • 3D images can be provided based on the view point of a navigation instrument, such as a probe, to guide surgical navigation.
  • patient-specific data sets from one or more imaging techniques such as magnetic resonance imaging, magnetic resonance angiography, magnetic resonance venography, and computed tomography can be co-registered, integrated, and displayed in 3D.
  • the stereoscopic images of the 3D objects as captured on the images can be used to provide augmented reality based image guided surgical navigation.
  • the surgeon may wish to access a 2D interface (e.g., the control panel) for adjusting one or more parameters that control the display of augmented reality.
  • the surgeon may adjust parameters that control the color, brightness, viewpoint, resolution, contrast, etc.
  • the surgeon should not be in contact with non-sterilized objects to avoid contamination after the sterilization process.
  • the probe of the 3D image guided navigation system, which can be sterilized, can additionally be used to control the movement of the cursor on the screen.
  • the probe is a 3D position tracked instrument to navigate about a surgical environment.
  • the probe includes a mounted video camera to capture a point of view of the surgical environment from the viewpoint of the probe; and the stereoscopic images are provided according to the viewpoint of the probe to provide navigation guidance.
  • the probe when the system is in the navigation mode, the probe is used to provide the viewpoint of the image that is used to guide the surgical process; when the system is in the control mode, the probe for 3D image guided navigation system can be used as a cursor controller (e.g., to control the movement of a cursor on a control panel interface, which can be displayed as a 2D interface).
  • the system can be alternated between the navigation mode and the control mode via activating a switch.
  • the surgeon can use the probe to drive the cursor onto a graphical user interface element and use a foot switch to signal the system to perform a function associated with the graphical user interface element while the cursor is displayed on the graphical user interface element.
  • a 2-D interface may be a control panel with multiple user interface elements, such as buttons, sliders, etc., which can be selectively accessed based on the position of the cursor displayed on the control panel.
  • the control panel can be used to adjust the transparency of the displayed image, the color map, the intensity, etc.
  • the surgeon can scale the image and adjust the image resolution as desired for image guided surgery.
  • the user can control the display of various virtual objects for image guided surgery, such as the transparency of a virtual object or a real time video image for a display of augmented reality.
  • the user can utilize digital zooming to see an enlarged display.
  • the user can set a cutting plane relative to the tip of the probe.
  • the user may select between an Augmented Reality (AR) dominated display for navigation and an orthogonal slices dominated display for navigation.
  • At least one embodiment of the disclosure allows a surgeon to access a 2D user interface, from within the sterile field, via a cursor that is controlled by a position tracked probe for image guided navigation during a surgery process, without having to utilize other cursor controlling devices.
  • the benefits include convenience, no need for extra hands, and no need to touch a keyboard, mouse or draped monitor during surgery.
  • Pre-operative image data, such as a 3D image of the patient scanned before the patient enters the operating room (OR), can be used to generate virtual image data, such as 3D objects segmented from the pre-operative image data, surgical planning data, diagnosis information, etc.
  • an Image Guided Surgery navigation system can be operated in a navigation mode to provide image based guidance for the surgery process.
  • the navigation mode is a default mode.
  • a stereoscopic display of the virtual image data can be overlaid with a real time video image, captured by a camera mounted on the probe, to provide an augmented reality view of the surgical field from the view point of the probe (camera).
  • a representation of the probe is further overlaid on the real time video image and/or the virtual image to indicate the position and orientation of the probe in the displayed image.
  • the Image Guided Surgery navigation system can be switched into a control mode for accessing a 2D graphical user interface.
  • the user can press a footswitch briefly to swap the system from the navigation mode to the control mode.
  • the probe can be used as a cursor controller to control the movement of the cursor on the 2D graphical user interface.
  • the probe can be used to move the cursor on the icon button or slider; and a footswitch can be pressed to indicate the selection of the button or slider while the cursor is positioned on the icon button or slider.
  • the Image Guided Surgery navigation system can be switched back to the navigation mode from the control mode.
  • the probe can be used to move the cursor to the 3D window; and pressing the footswitch once while the cursor is within the 3D window signals the system to go from the control mode to the navigation mode.
  • the Image Guided Surgery navigation system can be swapped between the navigation mode and the control mode from time to time by repeating the operations described above.
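  • As an illustration of the mode-switching logic just described, the following is a minimal Python sketch; the helper names (in_3d_window, activate_element_at) and the window geometry are illustrative assumptions, not identifiers from the patent.

      # Hedged sketch of the footswitch-driven switching between navigation and control modes.
      NAVIGATION, CONTROL = "navigation", "control"

      def in_3d_window(cursor_pos, window=(0, 0, 640, 480)):
          # illustrative placeholder for a hit test against the 3D window
          x, y = cursor_pos
          x0, y0, w, h = window
          return x0 <= x < x0 + w and y0 <= y < y0 + h

      def activate_element_at(cursor_pos):
          # illustrative placeholder for activating the GUI element under the cursor
          print("activate GUI element at", cursor_pos)

      class NavigationUI:
          def __init__(self):
              self.mode = NAVIGATION               # the navigation mode is the default mode

          def on_footswitch(self, cursor_pos):
              if self.mode == NAVIGATION:
                  self.mode = CONTROL              # a brief press swaps the system into the control mode
              elif in_3d_window(cursor_pos):
                  self.mode = NAVIGATION           # press with the cursor in the 3D window: back to navigation
              else:
                  activate_element_at(cursor_pos)  # press over a button or slider: select it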
  • the image guided surgery may be performed without the real time video image; and the probe may not include the camera. In one embodiment, the image guided surgery may be performed with the real time video image but without the virtual image data.
  • FIG. 1 illustrates an example screen of a surgical navigation system in a navigation mode, according to one embodiment.
  • an Augmented Reality (AR) dominated display for navigation is illustrated.
  • the display screen includes a main window (501) for the display of augmented reality, including a real time video image from a video camera mounted on the probe, a tip portion (509) of the probe as captured in the video image, and virtual objects that are rendered based on the virtual image data registered to the patient in the operating room.
  • a 3D model of the tip portion (509) is also overlaid on the main window (501).
  • the 3D model of the tip portion (509) aligns with the tip portion (509) of the probe as captured in the video image; and misalignment can be easily observed and corrected to improve the navigation accuracy.
  • the display screen also includes three windows (503, 505 and 507), which display three orthogonal slices of the virtual image data to show the location of the tip of the probe relative to the patient.
  • the crosshair displayed in the windows (503, 505 and 507) indicates the position of the tip of probe.
  • the orthogonal slices are generated based on a pre-operative 3D image data set. In another embodiment, the orthogonal slices may be obtained in real time based on the tracked position of the probe.
  • Figure 2 illustrates an example screen of a surgical system in a control mode showing tools to adjust parameters for the display of 3D images of augmented reality, according to one embodiment.
  • the display screen includes a 3D window (513) showing a preview of the augmented reality window and the slice windows.
  • the preview is presented based on the image and data that were displayed before the system was switched to the control mode.
  • the display screen shows a cursor (511) which can be moved according to the movement of the probe in the operating room.
  • the cursor (511) can be moved around on the 2D graphical user interface to select buttons, sliders, etc.
  • the settings or parameters changed or adjusted via the 2D graphical user interface are applied to the preview shown in the 3D window (513).
  • the effect of the change or adjustment can be viewed in the 3D window (513) in the control mode, without having to switch back to the navigation mode for the purpose of observing the effect of the change. Details on the methods and systems for the control of the cursor using the probe are provided below.
  • a virtual screen is defined in the operating room; and the position of a point on the virtual screen that is pointed at by the probe is mapped to the display screen as the position of the cursor.
  • Figure 11 illustrates a coordinate system of a virtual screen from which a probe location is to be mapped to the location of the cursor in the display screen, according to one embodiment.
  • the position of the point (523) pointed at by the probe is determined from the intersection of the line of the probe, as tracked by a position tracking system in the operating room, and the virtual screen defined in the operating room, as illustrated in Figure 12.
  • the position of the probe relative to the virtual screen is determined by the orientation of the probe relative to the virtual screen.
  • Figure 12 illustrates an intersection point on a virtual screen based on the position of a probe, according to one embodiment.
  • the shooting line is a line along, and/or an axis of, the probe.
  • the shooting line corresponds with a longitudinal axis of the probe. This longitudinal axis may also define the z-axis Zp of the probe.
  • the position and orientation of the probe as tracked in the operating room determines the shooting line.
  • Figure 3 illustrates a coordinate system of a display screen within which a cursor is displayed, according to one embodiment.
  • the cursor (521) is to be displayed at a point based on the position of the point (523) in the virtual screen.
  • the position of the cursor (521) in the coordinate system of the real display screen can be determined.
  • the origin O_s of the display screen is located at the upper-left corner of the screen with an active area defined by the dimensions of (sizeX_s, sizeY_s).
  • the ratio between the width and the height of the screen can be 4:3 or other ratios.
  • the origin can also be set at a different location on the display screen.
  • a virtual screen is generated in the operating room when a surgical navigation system is swapped from a navigation mode to a control mode (e.g., via activation of a designated switch, such as a footswitch, or moving the probe outside a predetermined region, or moving the probe into a predetermined region).
  • a virtual screen is generated with the center located at the tip of the probe, as shown in Figure 4.
  • Figure 4 illustrates the spatial relation between a virtual screen and a probe at the time the virtual screen is generated according to one embodiment.
  • the coordinate system of the probe and the coordinate system of the virtual screen are illustrated.
  • the origins are substantially overlapped.
  • the origin of the coordinate of the probe can be located at the probe tip.
  • the tip of the probe is located in a central region of the virtual screen when the virtual screen is defined.
  • the shooting line of the probe is predefined as the z-axis of the probe.
  • the y-axis (O_vY_v) of the virtual screen is a pre-defined vector with an initial direction of (0.0, 1.0, 0.0).
  • This pre-defined vector can be defined on the spot as a vertical vector in the surgical environment in the operating room, or a vertical direction of the display screen. Alternatively, the pre-defined vector corresponds to a horizontal direction of the display screen.
  • the pre-defined vector can be defined once and used in later sessions. For example, the value of the pre-defined vector can be stored in a file on the hard disk and can be loaded when the surgical environment is initiated.
  • the coordinate system of the virtual screen can be determined by at least one of the direction of the shooting line of the probe, the position of the origin, and/or the predefined vector.
  • the coordinate system of the virtual screen can be determined from these quantities by a set of formulas, with the coordinate values expressed in the coordinate system of the tracking system; the plane of the virtual screen is then determined by computing the coefficients of its plane equation.
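  • A plausible reconstruction of such formulas, assuming the z-axis of the virtual screen is taken along the shooting line Z_p of the probe, the y-axis is derived from the pre-defined vector V, and the origin O_v is placed at the probe tip (a hedged sketch, not necessarily the patent's exact equation (1)):

      X_v = \frac{V \times Z_p}{\lVert V \times Z_p \rVert}, \qquad
      Y_v = Z_p \times X_v, \qquad
      Z_v = Z_p

      \text{plane of the virtual screen:}\quad a\,x + b\,y + c\,z + d = 0,
      \qquad (a, b, c) = Z_v, \qquad d = -\,Z_v \cdot O_v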
  • the virtual screen remains at substantially the same position and orientation until a different virtual screen is generated.
  • the orientation of the virtual screen is maintained substantially the same while the position of the virtual screen can be adjusted along its normal direction to maintain the distance between the virtual screen and the tip of the probe relatively constant.
  • the orientation of the virtual screen can be adjusted over time to follow the orientation of the screen.
  • an apparatus determines a position of a virtual screen according to a tracked location of a probe.
  • the virtual screen may have a predetermined orientation with respect to the probe, which changes as the position and/or the orientation of the probe changes.
  • the virtual screen has a pre-defined orientation until the probe is outside an area defined by the virtual screen.
  • a display generator in the apparatus generates a cursor for display on a display screen according to a mapping between the virtual screen and the display screen when the system is in a control mode. In the navigation mode, the position and orientation of the probe can define the view point of an augmented reality view.
  • a new virtual screen is generated to replace an old virtual screen if the cursor is located substantially close to the boundary of the display or is moving outwards. Therefore, the cursor can remain visible and be controlled by the probe without being limited by the area of the old virtual screen, thus expanding the region of operation. Automatic generation of the new virtual screen gives the user more freedom to control the cursor on the screen with the movement of the probe in the workspace.
  • the cursor may be automatically shifted to the center of the new virtual screen from the boundary of the old virtual screen when the new virtual screen is generated.
  • a new virtual screen can be defined based on the current location of the probe tip. For example, the system may be switched from the control mode to the navigation mode then back to the control mode by activating a footswitch a predetermined number of times and maintaining the probe in the desired position. Thus, a new virtual screen is generated at the tip of the probe.
  • the virtual screen is adjusted in the volume of the 3D tracking system to follow an orientation of the probe over a period of time without changing an intersection point between a projection line along the probe and the virtual screen, when the probe is stationary in the volume of the 3D tracking system during the period of time.
  • Figure 5 illustrates a position of a current virtual screen based on a current probe position, according to one embodiment.
  • a virtual screen can be used until the intersection point between the shooting line of the probe and the virtual screen approaches and/or exceeds a boundary of the virtual screen.
  • the virtual screen is dragged by the movement of the intersection point on the virtual screen so that the intersection point remains on the boundary of the virtual screen when the intersection point exceeds the boundaries of the old virtual screen.
  • a new virtual screen can be generated (or the virtual screen is moved/adjusted) at a new location such that the probe remains at the boundary point of the new virtual screen.
  • the new virtual screen may be generated when the probe approaches the boundary of the old virtual screen closely.
  • the virtual screen tracks the probe in a 'dragging' like motion.
  • the boundary of the virtual screen is defined such that the probe tip is located on the new virtual screen at substantially the same location that the probe tip exited the boundaries of the old virtual screen as if the probe tip is dragging the old virtual screen.
  • the position of the virtual screen remains substantially the same when the intersection point of the shooting line of the probe and the virtual screen is within the boundaries of the virtual screen.
  • the virtual screen is moved based on the movement of the intersection point if the intersection point approaches and exceeds the boundary of the virtual screen.
  • a zero displacement of the cursor is generated when the projection point of the probe is located substantially close to or beyond a boundary of the virtual screen.
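  • A minimal sketch of this dragging and clamping behaviour, assuming the virtual screen is represented by its origin and two in-plane unit axes (all names are illustrative, not from the patent):

      import numpy as np

      def drag_virtual_screen(origin, x_axis, y_axis, hit_point, half_w, half_h):
          # express the intersection (hit) point in virtual-screen coordinates
          d = hit_point - origin
          u, v = float(d @ x_axis), float(d @ y_axis)
          # amount by which the hit point overshoots the screen rectangle
          du = u - float(np.clip(u, -half_w, half_w))
          dv = v - float(np.clip(v, -half_h, half_h))
          # translate the screen in its own plane so the hit point stays on the boundary
          new_origin = origin + du * x_axis + dv * y_axis
          dragged = (du != 0.0) or (dv != 0.0)
          return new_origin, dragged     # while dragging, the cursor displacement is zero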
  • Figure 6 illustrates a method to move a virtual screen along the shooting line of the probe, according to one embodiment.
  • the virtual screen can be moved along the shooting line of the probe such that the probe tip stays on the virtual screen. Since the virtual screen is adjusted along the shooting line of the probe, the cursor position on the screen is not affected. In one embodiment, the virtual screen is dragged to the tip of the probe instantaneously.
  • the adjustment of the virtual screen towards the probe tip can be performed over a period of time (e.g., a pre-determined period of time, or a period of time scaled according to the speed of the probe), rather than instantaneously.
  • the adjustment of the cursor position may be faster when the probe movement is slow; and when the probe movement is faster than a threshold, the virtual screen is not adjusted.
  • the virtual screen is rotated relative to the current cursor position. For example, when the probe tip is on the virtual screen, the user can hold the probe at an angle with the virtual screen to rotate the virtual screen about the probe tip to obtain a better view of the scene. In one embodiment, the virtual screen is rotated over a pre-determined amount of time from an original orientation to a desired orientation.
  • the intersection point between the shooting line (projection point) of the probe and the plane of the virtual screen is determined to compute the cursor position on the display screen.
  • the position and/or orientation of the virtual screen can then be recomputed/adjusted according to the change of the position and orientation of the probe without affecting the relative position of the intersection point in the virtual screen. For example a new virtual screen is generated to replace the virtual screen in response to the projection point being substantially close to or beyond a boundary of the virtual screen.
  • the position of the virtual screen is adjustable to maintain an intersection point between a projection line along the probe and a plane of the virtual screen on the boundary of the virtual screen.
  • the virtual screen is moved along the projection line of the probe to maintain a constant distance between the virtual screen and the probe.
  • a virtual screen is moved along the projection line of the probe over a period of time to reduce a difference between a predetermined distance and a current distance between the virtual screen and the probe.
  • the period of time is based on a speed of probe movement in the volume of the 3D tracking system.
  • the period of time may decrease as the speed of the probe increases.
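  • A minimal sketch of such a time-scaled adjustment, where the adjustment period shrinks as the probe speed grows (the smoothing law and parameter names are illustrative assumptions, not the patent's):

      def settle_screen_distance(current_dist, target_dist, probe_speed, dt,
                                 base_period=1.0, speed_scale=0.1):
          # the adjustment period decreases as the probe speed increases
          period = base_period / (1.0 + speed_scale * probe_speed)
          alpha = min(1.0, dt / period)          # fraction of the remaining gap closed this frame
          return current_dist + alpha * (target_dist - current_dist)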
  • the virtual screen can be re-generated according to the position and orientation of the probe.
  • the virtual screen can be repositioned/adjusted according to an interpolation between the previous and current positions and orientations of the probe.
  • the adjusting the position of the virtual screen comprises dragging the virtual screen according to the movement of the intersection point perpendicular to an edge of the virtual screen and movement of the intersection point parallel to the edge of the virtual screen.
  • the adjusting the position of the virtual screen comprises dragging the virtual screen according to movement of the intersection point perpendicular to an edge of the virtual screen while allowing the intersection point to slide along the edge on the virtual screen.
  • the displacement of the probe can be determined based on probe movement. For example, suppose the probe moves from its original position "O_v", as illustrated in Figure 11, to a new position "Probe" shown in Figure 12. According to one embodiment, the intersection point between the shooting line of the probe and the virtual screen is point P and the shooting line has the same direction as the z-axis of the probe. The origin of the z-axis can be the probe tip.
  • the intersection point P' (not shown) between the shooting line of the probe and the virtual screen in the opposite direction of the shooting line can be determined.
  • the intersection point between the shooting line of the probe and the virtual screen is determined by solving a set of equations determined by the equation of the shooting line and that of the virtual plane. The method is described below. After the intersection point is obtained, the displacement of the probe on the virtual screen can be determined.
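  • A minimal sketch of that computation, assuming the shooting line is parameterised from the probe tip along the probe's z-axis and the virtual screen is the plane a*x + b*y + c*z + d = 0 (names are illustrative, not from the patent):

      import numpy as np

      def intersect_shooting_line(tip, z_dir, plane_abcd):
          tip = np.asarray(tip, float)
          z_dir = np.asarray(z_dir, float)
          n = np.asarray(plane_abcd[:3], float)        # plane normal (a, b, c)
          d = float(plane_abcd[3])
          denom = float(n @ z_dir)
          if abs(denom) < 1e-9:
              return None                              # shooting line parallel to the virtual screen
          t = -(float(n @ tip) + d) / denom            # solve n . (tip + t * z_dir) + d = 0
          return tip + t * z_dir                       # intersection point P (t < 0 gives the point P' behind the probe)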
  • the displacement Δd1 on the virtual screen can be calculated from the intersection point and the origin of the virtual screen.
  • the cursor displacement on the display is scaled by the ratio of the size of the display to the size of the virtual screen.
  • the cursor displacement Δd2 on the display can then be computed from Δd1 and this size ratio.
  • the cursor displacement Δd2 is then added to the initial (or previous) cursor position prevCursorPos to obtain the new cursor position after the probe has been moved.
  • One technique for doing this is defined in Equation 6:
  • Update cursor position: currCursorPos.x = prevCursorPos.x - Δx2 ; currCursorPos.y = prevCursorPos.y - Δy2
  • the displacement vector of the cursor has an opposite sign compared to the direction of the movement of the probe. For instance, '-Δx2' is used instead of '+Δx2' since the directions of the x-y axes (O_vX_v and O_vY_v) of the virtual screen are opposite to those of the display screen.
  • when the displacement of the probe exceeds the boundary of the virtual screen (e.g., the intersection point of the shooting line of the probe and the virtual screen exceeds the boundary of the virtual screen), the displacement of the probe Δd1 is recorded as zero.
  • the cursor also remains at its initial (or previous) position and no update would occur until the probe returns to a region within the boundaries of the virtual screen.
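  • A minimal sketch of this mapping, where (u, v) is the displacement Δd1 of the intersection point from the virtual screen origin, anchor_cursor is the cursor position recorded when the virtual screen was generated, and last_cursor is the most recent cursor position (names are illustrative assumptions):

      def update_cursor(anchor_cursor, last_cursor, u, v, size_v, size_s):
          half_w, half_h = size_v[0] / 2.0, size_v[1] / 2.0
          if abs(u) > half_w or abs(v) > half_h:
              return last_cursor                       # outside the virtual screen: Δd1 is recorded as zero
          dx2 = u * size_s[0] / size_v[0]              # Δd2 = Δd1 scaled by the display/virtual-screen size ratio
          dy2 = v * size_s[1] / size_v[1]
          # subtract because the virtual-screen axes run opposite to the display axes
          return (anchor_cursor[0] - dx2, anchor_cursor[1] - dy2)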
  • a virtual screen is defined in the 3D tracking system of the surgical navigation system.
  • the movement of the probe relative to this virtual screen can be mapped to the cursor movement to generate the cursor on the display screen with a 2D interface (e.g., a control panel).
  • the virtual screen can be defined as the X-Y plane of the 3D tracking system.
  • the position of the probe detected by the tracking system can be mapped to the cursor in the screen display.
  • the virtual screen is defined at the probe position.
  • the probe position relative to the virtual screen can be determined by the intersection position of a vector extending in a direction of the probe tip and the virtual screen.
  • the intersection position can be expressed in the virtual screen coordinates.
  • Another example is to calculate a 3D to 2D transfer matrix to transfer coordinates in the tracking coordinate system to the virtual screen and to map the probe tip's position to the virtual screen using the transfer matrix.
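  • A minimal sketch of this transfer-matrix alternative, assuming the virtual screen frame is given by its origin and three orthonormal axes expressed in tracking coordinates (names are illustrative, not from the patent):

      import numpy as np

      def world_to_screen_matrix(origin, x_axis, y_axis, z_axis):
          R = np.vstack([x_axis, y_axis, z_axis])      # rows: screen axes in tracking coordinates
          T = np.eye(4)
          T[:3, :3] = R
          T[:3, 3] = -R @ np.asarray(origin, float)    # move the screen origin to (0, 0, 0)
          return T

      def map_tip_to_screen(T, tip):
          u, v, w, _ = T @ np.append(np.asarray(tip, float), 1.0)   # w is the tip's distance from the plane
          return u, v                                  # the tip's 2D position on the virtual screen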
  • the virtual screen is generated to be perpendicular to the orientation of the probe when the system is switched from the navigation mode to the control mode. Therefore, the virtual screen may have the same coordinate system as that of the probe (e.g., three axes aligned with those of the probe respectively).
  • a pre-defined vector is determined to define the direction of the y-axis of the virtual screen so that the virtual screen can be vertically displayed (e.g., substantially perpendicular to the ground) in front of the operator independent of the orientation of the probe at the time the virtual screen is created.
  • the movement of the cursor on the screen can track the movement of the probe in the surgical navigation environment.
  • the virtual screen is defined to be substantially parallel to the pre-defined vector in the volume of the 3D tracking system.
  • the pre-defined vector corresponds to a vertical direction of the display screen.
  • the pre-defined vector is user defined prior to the set up of the surgical navigation system in the operating room.
  • the predefined vector can be defined on the spot in the operating room as well. The pre-defined vector can be saved in a file to be re-loaded anytime the system is used.
  • the position of the probe relative to the virtual screen is mapped as the corresponding position of the cursor relative to the real screen.
  • a direction of the probe is determined.
  • a plane that includes the pre-defined vector and a second vector that is perpendicular to the pre-defined vector and the direction of the probe is determined.
  • the virtual screen is defined in the plane.
  • the movement of the probe relative to the virtual screen can be mapped as the movement of the cursor relative to the real screen.
  • the virtual screen rotates as the probe rotates.
  • the virtual screen is dragged with a probe when the movement of the probe is outwards relative to the boundary of the virtual screen.
  • Figures 7 - 9 illustrate a method to define a vector for defining the virtual screen, according to one embodiment.
  • a key (e.g., 'F5') is pressed to initiate an interface for a user to define a vector for setting up a pre-defined vector.
  • the interface includes instructions to guide the user in the vector defining process where the pre-defined vector is defined by recording two or more positions of the probe in the volume of the 3D tracking system.
  • an upper endpoint of the vector is selected by the probe and recorded via clicking the 'record' button shown on the interface.
  • the 'clear' button can be used to delete the selected endpoints.
  • the interface includes a panel to indicate whether the endpoints of the vector have been successfully defined or if an error has occurred.
  • a lower end point of the vector can be selected by the probe and recorded via clicking the 'record' button shown on the interface.
  • the probe is shifted downwards from the upper endpoint until the desired location has been reached. To save the lower endpoint, the 'record' button is pressed while maintaining the probe in the desired location. Similarly, the 'clear' button can be used to delete the selected endpoints. As shown in the status indicator panel, the upper endpoint has been successfully selected.
  • the status panel indicates that the lower end point has been selected.
  • the direction of the resultant vector is determined and displayed in the status panel indicator.
  • the position of the two points defined using the probe defines the vector that can be used to define the orientation of the virtual screen.
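  • A minimal sketch of deriving and persisting the pre-defined vector from the two recorded probe positions (the file name is an illustrative assumption):

      import numpy as np

      def define_vertical_vector(upper_point, lower_point, path="predefined_vector.txt"):
          v = np.asarray(upper_point, float) - np.asarray(lower_point, float)
          n = np.linalg.norm(v)
          if n < 1e-6:
              raise ValueError("the two recorded endpoints are too close together")
          v /= n                          # normalise to a unit direction
          np.savetxt(path, v)             # save so the vector can be re-loaded in later sessions
          return v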
  • the 'return' button is utilized to switch from the control mode to the navigation mode where the user is able to navigate about the 3D image.
  • Figure 10 illustrates a method to test cursor control, according to one embodiment.
  • the defined vector can be tested via pressing the 'test' button to verify the pre-defined vector.
  • the 'test' button may trigger a 'test vertical vector' interface displayed with a set of grids.
  • the probe can be moved around in the workspace to evaluate whether the cursor movement tracks the movement of the probe.
  • the 'return' button can be pressed.
  • the same pre-defined vector can be used. However, if the movement of the cursor is unsatisfactory, an option to re-define the predefined vector may be available.
  • a virtual screen is generated.
  • the virtual screen can be generated with its center located at the tip of the probe with a coordinate system shown in Figure 11.
  • the directions of the X-axis and Y-axis of the virtual screen are opposite to those of the real screen as shown in Figure 3.
  • Other coordinate systems can also be used.
  • the orientation of the coordinate system of the virtual screen can be defined to be aligned with the coordinate system of the probe or of the tracking system.
  • the ratio between the width and the height of the virtual screen plane can be the same as the ratio of the real screen (e.g., 4:3). Other ratios may be used for the virtual screen.
  • a shooting line of the probe is a vector extending in the direction that the probe is pointed towards.
  • the virtual screen may intersect with the shooting line of the probe as the probe is being operated by a user.
  • the displacement Δd1 on the virtual screen between the intersection point and the origin of the virtual screen is scaled (e.g., the scale factor can depend on the size of the screen rectangle and that of the virtual screen) to obtain a new displacement Δd2 of the cursor on the display screen.
  • the position of the cursor when the virtual screen is generated can be recorded.
  • the new displacement Δd2 can be added onto the old cursor position to generate the new cursor position.
  • the cursor's movement tracks the movement of the probe.
  • when the probe is pointing at the same point on the virtual screen, the cursor is mapped to the same point on the real screen.
  • the velocity of the movement of the probe can be used to control the movement of the cursor (e.g., the scaling of the displacement may be weighted according to the speed of the probe movement).
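  • A minimal sketch of such speed-dependent weighting (the gain law and thresholds are illustrative assumptions, not the patent's): slow probe motion gives fine cursor control, faster motion gives coarser, larger cursor steps.

      def speed_weighted_gain(probe_speed, slow=20.0, fast=200.0, min_gain=0.5, max_gain=2.0):
          if probe_speed <= slow:
              return min_gain
          if probe_speed >= fast:
              return max_gain
          t = (probe_speed - slow) / (fast - slow)     # linear blend between the two gains
          return min_gain + t * (max_gain - min_gain)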
  • Image Guided Surgery. Figure 13 illustrates a system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
  • a computer 123 is used to generate a virtual image of a view, according to a viewpoint acquired by the video camera 103 to enhance the display image based on the real image.
  • the real image and the virtual image can be integrated in real time for display on the display device 125 (e.g., a monitor, or other display devices).
  • the computer 123 is to generate the virtual image based on the object model 121 which can be generated from scanned data of the patient and defined before the image guided procedure (e.g., a neurosurgical procedure).
  • the object model 121 can include diagnostic information, surgical plan, and/or segmented anatomical features that are captured from the scanned 3D image data.
  • a video camera 103 is mounted on a probe 101 such that at least a portion of the probe tip 115 is in the field of view 105 of the camera.
  • the video camera 103 has a predefined position and orientation with respect to the probe 101 such that the position and orientation of the video camera 103 can be determined from the position and the orientation of the probe 101.
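  • A minimal sketch of that composition, assuming 4x4 homogeneous transforms and a fixed, pre-calibrated probe-to-camera offset (names and the example offset are illustrative assumptions):

      import numpy as np

      def camera_pose(T_world_probe, T_probe_camera):
          # camera pose in tracking coordinates = tracked probe pose composed with the fixed offset
          return T_world_probe @ T_probe_camera

      # example: a camera mounted 50 mm ahead of the probe origin along the probe's z-axis
      T_probe_camera = np.eye(4)
      T_probe_camera[2, 3] = 50.0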
  • the probe 101 may include other instruments such as additional illuminating devices.
  • the probe 101 may not include a video camera.
  • a representation of the probe is overlaid on the scanned image of the patient based on the spatial relationship between the patient and the probe.
  • images used in navigation, obtained pre-operatively or intra-operatively from imaging devices such as ultrasonography, MRI, X-ray, etc., can be images of internal anatomies.
  • the tracked position of the navigation instrument can be indicated in the images of the body part.
  • the pre-operative images can be registered with the corresponding anatomic region of the patient.
  • the spatial relationship between the pre-operative images and the patient in the tracking system is determined.
  • the location of the navigation instrument as tracked by the tracking system can be spatially correlated with the corresponding locations in the pre-operative images.
  • a representation of the probe can be overlaid on the pre-operative images according to the relative position between the patient and the probe. Further, the system can determine the pose (position and orientation) of the video camera based on the tracked location of the probe. Thus, the images obtained from the video camera can be spatially correlated with the pre-operative images for the overlay of the video image with the pre-operative images.
  • one registration technique maps the image data of a patient to the patient using a number of anatomical features on the body surface of the patient by matching their positions identified and located in the scan images and the corresponding positions on the patient as determined using a tracked probe.
  • the registration accuracy can be further improved by mapping a surface of a body part of the patient generated from the imaging data to the surface data of the corresponding body part generated on the operating table.
  • the position tracking system 127 uses two tracking cameras 131 and 133 to capture the scene for position tracking.
  • a reference frame 117 with feature points is attached rigidly to the patient 111.
  • the feature points can be fiducial points marked with markers or tracking balls 112-114, or Light Emitting Diodes (LEDs).
  • the feature points are tracked by the position tracking system 127.
  • the spatial relationship between the set of feature points and the pre-operative images is determined.
  • the spatial relation between the pre-operative images which represent the patient and positions determined by the tracking system can be dynamically determined, using the tracked location of the feature points and the spatial relation between the set of feature points and the pre-operative images.
  • the probe 101 has feature points 107, 108 and 109 (e.g., tracking balls).
  • the image of the feature points in images captured by the tracking cameras 131 and 133 can be automatically identified using the position tracking system 127.
  • the position tracking system 127 Based on the positions of the feature points of the probe 101 in the video images of the tracking cameras, the position tracking system 127 can compute the position and orientation of the probe 101 in the coordinate system 135 of the position tracking system.
  • the location of the reference frame 117 is determined based on the tracked positions of the feature points 112-114; and the location of the tip 115 of the probe is determined based on the tracked positions of the feature points 107, 108 and 109.
  • the system can correlate the location of the reference frame, the position of the tip of the probe, and the position of the identified feature in the preoperative images.
  • the position of the tip of the probe can be expressed relative to the reference frame.
  • Three or more sets of such correlation data can be used to determine a transformation that maps between the positions as determined in the pre-operative images and positions as determined relative to the reference frame.
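  • A minimal sketch of computing such a transformation by a standard least-squares (SVD) fit of the corresponding point sets; this is a common approach, not necessarily the exact algorithm of the patent, and all names are illustrative:

      import numpy as np

      def register_points(image_pts, patient_pts):
          P = np.asarray(image_pts, float)     # Nx3 fiducial positions in the pre-operative images
          Q = np.asarray(patient_pts, float)   # Nx3 corresponding probe-tip measurements (N >= 3)
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)            # cross-covariance of the centred point sets
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
          R = Vt.T @ D @ U.T                   # rotation from image space to patient/tracking space
          t = cQ - R @ cP                      # translation
          T = np.eye(4)
          T[:3, :3], T[:3, 3] = R, t
          return T                             # maps image coordinates to positions relative to the tracker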
  • registration data representing the spatial relation between the positions as determined in the pre-operative images and positions as determined relative to the reference frame is stored after the registration.
  • the registration data is stored with identification information of the patient and the pre-operative images.
  • when a registration process is initiated, previously generated registration data is searched for the patient and the pre-operative images. If it is determined that the previously recorded registration data is found and valid, the registration data can be loaded into the computer process to eliminate the need to repeat the registration operations of touching the anatomical features with the probe tip.
  • the image data of a patient, including the various objects associated with the surgical plan which are in the same coordinate system as the image data, can be mapped to the patient on the operating table.
  • the position tracking system can determine a position based on the delay in the propagation of a signal, such as a radio signal, an ultrasound signal, or a laser beam.
  • a signal such as a radio signal, an ultrasound signal, or a laser beam.
  • a number of transmitters and/or receivers can be used to determine the propagation delays to a set of points to track the position of a transmitter (or a receiver).
  • the position tracking system can determine a position based on the positions of components of a supporting structure that can be used to support the probe.
  • Image based guidance can also be provided based on the real time position and orientation relation between the patient 111, the probe 101 and the object model 121.
  • the computer can generate a representation of the probe (e.g., using a 3D model of the probe) to show the relative position of the probe with respect to the object.
  • the computer 123 can generate a 3D model of the real time scene having the probe 101 and the patient 111, using the real time determined position and orientation relation between the patient 111 and the probe 101, a 3D model of the patient 111 generated based on the pre-operative image, a model of the probe 101 and the registration data.
  • the computer 123 can generate a stereoscopic view of the 3D model of the real time scene for any pairs of viewpoints specified by the user.
  • the pose of the virtual observer with the pair of viewpoints associated with the eyes of the virtual observer can have a pre-determined geometric relation with the probe 101, or be specified by the user in real time during the image guided procedure.
  • the object model 121 can be prepared based on scanned images prior to the performance of a surgical operation. For example, after the patient is scanned, such as by CT and/or MRI scanners, the scanned images can be used in a virtual reality (VR) environment for planning. Detailed information on the Dextroscope can be found in "Planning and Simulation of Neurosurgery in a Virtual Reality Environment" by Kockro et al.
  • scanned images from different imaging modalities can be co-registered and displayed as a multimodal stereoscopic object.
  • relevant surgical structures can be identified and isolated from scanned images. Additionally, landmarks and surgical paths can be marked. The positions of anatomical features in the images can also be identified. The identified positions of the anatomical features can be subsequently used in the registration process for correlating with the corresponding positions on the patient.
  • no video camera is mounted in the probe.
  • the video camera can be a separate device which can be tracked separately.
  • the video camera can be part of a microscope.
  • the video camera can be mounted on a head mounted display device to capture the images as seen by the eyes through the head mounted display device.
  • the video camera can be integrated with an endoscopic unit.
  • Figure 14 illustrates another system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
  • the system includes a stereo LCD head mounted display 201 (for example, a SONY LDI 100).
  • the head mounted display 201 can be worn by a user, or alternatively, it can be coupled to an operating microscope 203 supported by a structure 205.
  • a support structure allows the LCD display 201 to be mounted on top of the binocular during microscopic surgery.
  • the head mounted display 201 is partially transparent to allow the overlay of the image displayed on the head mounted display 201 onto the scene that is seen through the head mounted display 201.
  • the head mounted display 201 is not transparent; and a video image of the scene is captured and overlaid with graphics and/or images that are generated based on the pre-operative images.
  • the system further includes an optical tracking unit 207 to track the locations of a probe 209, the head mounted display 201, and/or the microscope 203.
  • the location of the head mounted display 201 can be tracked to determine the viewing direction of the head mounted display 201 and generate the image for display in the head mounted display 201 according to the viewing direction of the head mounted display 201.
  • the location of the probe 209 can be used to present a representation of the tip of the probe on the image displayed on head mounted display 201.
  • the location and the setting of the microscope 203 can be used in generating the image for display in the head mounted display 201 when the user views surgical environment via the microscope.
  • the location of the patient 221 is also tracked. Thus, even if the patient moves during the operation, the computer 211 can still overlay the virtual data on the real view accurately.
  • the tracking unit 207 operates by detecting three or more reflective spherical markers attached to an object.
  • the tracking unit 207 can operate by detecting the light from LEDs.
  • the location of the object can be determined in the 3D space covered by the two cameras of the tracking system.
  • three or more markers can be attached along the upper frontal edge of the head mounted display 201 (close to the forehead of the person wearing the display).
  • the microscope 203 can also be tracked by reflective markers mounted to a support structure attached to the microscope such that a free line of sight to the cameras of the tracking system is provided during most of the microscope movements.
  • the tracking unit 207 used in the system is available commercially, such as the Polaris from Northern Digital. Alternatively, other types of tracking units can also be used.
  • the system further includes a computer 211, which is capable of real time stereoscopic graphics rendering, and transmitting the computer-generated images to the head mounted display 201 via a cable 213.
  • the system may further include a footswitch 215, to transmit signals to the computer 211 via a cable 217.
  • a user can activate the footswitch to indicate to the computer that the probe tip is touching a fiducial point on the patient, at which moment the position of the probe tip represents the position of the fiducial point on the patient.
  • the settings of the microscope 203 are transmitted (as discussed below) to the computer 211 via cable 219.
  • the tracking unit 207 and the microscope 203 communicate with the computer 211 via a serial port in one embodiment.
  • the footswitch 215 can be connected to another computer port for interaction with the computer during the surgical procedure.
  • the head of the patient 221 is registered to the volumetric preoperative data with the aid of markers (fiducials) on the patient's skin or disposed elsewhere on or in the patient.
  • the fiducials can be glued to the skin before the imaging procedure and remain on the skin until the surgery starts. In some embodiments, four or more (e.g. six) fiducials are used.
  • the positions of the markers in the images are identified and marked.
  • a probe tracked by the tracking system is used to point to the fiducials in the real world (on the skin) that correspond to those marked on the images.
  • the 3D data is then registered to the patient.
  • the registration procedure yields a transformation matrix which can be used to map the positions as tracked in the real world to the corresponding positions in the images.
  • besides the point-based registration mentioned above, the registration method can be of another kind, such as surface-based registration.
  • the surgeon can wear the head mounted display 201 to examine the patient 221 through the semi-transparent screen of the display 201 where the stereoscopic reconstruction of the segmented imaging data can be displayed.
  • the surgeon can see the 3D image data overlaid directly on the actual patient.
  • the image of the 3D structures appearing "inside" the head can be viewed from different angles while the viewer is changing position.
  • registering image data with a patient involves providing a reference frame with a fixed position relative to the patient and determining the position and orientation of the reference frame using a tracking device. The image data is then registered to the patient relative to the reference frame.
  • a transformation matrix that represents the spatial relation between the coordinate system of the image data and a coordinate system based on the reference frame can be determined during the registration process and recorded (e.g., in a file on a hard drive, or other types of memory, of the computer (123 or 211)).
  • other types of registration data that can be used to derive the transformation matrix, such as the input data received during the registration, can be stored.
  • the module uses one or more rules to search and determine the validity of the registration data.
  • the name of the patient can be used to identify the patient.
  • other types of identifications can be used to identify the patient.
  • a patient ID number can be used to identify the patient.
  • the patient ID number can be obtained and/or derived from a Radio Frequency Identification (RFID) tag of the patient in an automated process.
  • the module determines the validity of the registration data based on a number of rules. For example, the module can be configured to reject registration data that is older than a pre-determined time period, such as 24 hours. In one embodiment, the module can further provide the user with the option to choose between using the existing registration data and starting a new registration process.
  • the system can assign identifications to image data, such that the registration data is recorded in association with the identification of the image data.
  • Figure 15 is a block diagram of a data processing system used in some embodiments of cursor control.
  • while Figure 15 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components. Other systems that have fewer or more components/modules can also be used.
  • the computer system 400 is one embodiment of a data processing system.
  • the system 400 includes an inter-connect 401 (e.g., bus and system core logic), which interconnects a microprocessor(s) 403 and memory 407.
  • the microprocessor (403) is coupled to cache memory 405, which can be implemented on a same chip as the microprocessor (403).
  • the inter-connect (401) interconnects the microprocessor(s) (403) and memory (407) (e.g., the volatile memory and/or the nonvolatile memory) together and also interconnects them to a display controller and display device (413) and to peripheral devices such as input/output (I/O) devices (409) through an input/output controller(s) (411).
  • I/O devices include mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices.
  • the inter-connect (401) can include one or more buses connected to one another through various bridges, controllers and/or adapters.
  • the I/O controller (411) includes a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals.
  • the inter-connect (401) can include a network connection.
  • the volatile memory includes RAM (Random Access Memory), which typically loses data after the system is restarted.
  • the non-volatile memory includes ROM (Read Only Memory), and other types of memories, such as hard drive, flash memory, floppy disk, etc.
  • Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory.
  • Non-volatile memory is typically a magnetic hard drive, flash memory, a magnetic optical drive, or an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after the main power is removed from the system.
  • the non-volatile memory can also be a random access memory.
  • the non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system.
  • a non-volatile memory that is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used.
  • Various embodiments can be implemented using hardware, programs of instruction, or combinations of hardware and programs of instructions.
  • a customized navigation system such as those described above may work together with another full-fledged navigation system via a connection protocol.
  • the visualization display of the full-fledged navigation system may offer restricted functionality, lacking some features such as stereoscopic display.
  • the customized navigation system can enhance the visualization display result with more sophisticated image processing procedures.
  • the customized navigation system can obviate the registration of the pre-operative images in relation to the patient in the physical world that would otherwise be required, since that registration is instead performed by the full-fledged navigation system.
  • the customized navigation system can retrieve tracking data, registration results, virtual patient data information, etc. from the full-fledged navigation system in real time.
  • users can still navigate the enhanced visualization display of the virtual patient dataset with the customized navigation system in the operating room.
  • the communication between the full-fledged navigation system and the customized one can be either one-way or two-way.
  • the image guided system may not perform the tracking process of the probe directly by itself.
  • the tracking data is retrieved from a third-party system.
  • the image guided system can still implement the cursor control techniques described above with the probe based on the tracking data that is retrieved from the third-party system.
  • routines executed to implement the embodiments can be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as "computer programs.”
  • the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
  • Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others.
  • the instructions can be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.
  • a machine readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods.
  • the executable software and data can be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data can be stored in any one of these storage devices.
  • a machine readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
  • aspects of the present disclosure can be embodied, at least in part, in software. That is, the techniques can be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
  • hardwired circuitry can be used in combination with software instructions to implement the embodiments.
  • the techniques are not limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
  • the apparatus 1600 comprises a definition module 1602 which defines a virtual screen in a volume of a 3D tracking system.
  • the virtual screen is generated responsive to a user activation of the probe.
  • Apparatus 1600 also comprises a tracking module 1604 for determining a position of a probe in the volume as tracked by the 3D tracking system.
  • tracking module 1604 receives tracking data from a 3D tracking system, and determines a position of the probe in the 3D volume of the tracking system.
  • Apparatus 1600 also comprises a display generator 1606 which displays a cursor on a display screen based on a position of the probe relative to the virtual screen, according to a mapping between the virtual screen and the display screen.

Abstract

Methods and apparatuses for cursor control in image guided surgery are disclosed. One embodiment includes defining a virtual screen in a volume of a 3D tracking system, determining a position of a probe in the volume as tracked by the 3D tracking system, and displaying a cursor on a display screen based on the position of the probe on or relative to the virtual screen, according to a mapping between the virtual screen and the display screen. Defining the virtual screen includes defining the virtual screen to be substantially parallel to a pre-defined vector in the volume of the 3D tracking system.

Description

METHODS AND APPARATUSES FOR CURSOR CONTROL IN IMAGE
GUIDED SURGERY
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims the priority of Provisional U.S. Patent Application Serial No. 60/870,809, filed December 19, 2006, the disclosure of which is incorporated herein by reference.
TECHNOLOGY FIELD
The invention relates to an apparatus and method for displaying a cursor on a display screen based on the position of a probe in relation to a virtual screen. The invention has particular, but not exclusive, application in image-guided surgery.
BACKGROUND
Pre-operative planning for surgical procedures enhances the ease of navigation in a complex three-dimensional surgical space, since the complex anatomy may be obscured during operative procedures due to lack of direct visibility.
Surgery planning has evolved from utilizing 2D images on a display and mentally visualizing a 3D image to the creation of a 3D interactive environment. Previously, surgeons relied on a stack of 2D images, such as those obtained from computed tomography (CT), to mentally rebuild the 3D structures. More recently, 3D images for surgery planning can be built using computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), magnetic resonance venography (MRV), functional MRI, computed tomography angiography (CTA), positron emission tomography (PET), and/or single photon emission computed tomography (SPECT).
Some surgical planning environments allow the surgeon to interact with the 3D image. Using a stereoscopic imaging technology, depth information can be generated to enable 3D visualization to facilitate surgical planning. A surgical planning environment may have a virtual control panel to control virtual tools to be used to perform operations and manipulations on objects displayed in 3D. Image guidance systems have been widely adopted in neurosurgery and have been proven to increase the accuracy and reduce the invasiveness of a wide range of surgical procedures. Typical image guided surgical systems (or "navigation systems") are based on a series of images constructed from pre-operative imaging data that is gathered before the surgical operation, such as Magnetic Resonance Imaging (MRI) images, Computed Tomography (CT) images, X-ray images, ultrasound images and/or the like. The pre-operative images are typically registered in relation with the patient in the physical world by means of an optical tracking system to provide guidance during the surgical operation.
During a surgical procedure, a surgeon cannot see beyond the exposed surfaces without the help of visualization equipment. Within the constraint of a limited surgical opening, the exposed visible field may lack the spatial clues needed to comprehend the surrounding anatomic structures. Visualization facilities may provide spatial clues that would not otherwise be available to the surgeon and thus allow Minimally Invasive Surgery (MIS) to be performed, dramatically reducing the trauma to the patient.
Many imaging techniques, such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and three-dimensional Ultrasonography (3DUS), are currently available to collect volumetric internal images of a patient without a single incision.
Using these scanned images, the complex anatomy structures of a patient can be visualized and examined; critical structures can be identified, segmented and located; and surgical approach can be planned.
The scanned images and surgical plan can be mapped to the actual patient on the operating table and a surgical navigation system can be used to guide the surgeon during the surgery.
U.S. Patent No. 5383454 discloses a system for indicating the position of a tip of a probe within an object on cross-sectional, scanned images of the object. The position of the tip of the probe can be detected and translated to the coordinate system of cross- sectional images. The cross-sectional image closest to the measured position of the tip of the probe can be selected; and a cursor representing the position of the tip of the probe can be displayed on the selected image.
U.S. Patent No. 6167296 describes a system for tracking the position of a pointer in real time by a position tracking system. Scanned image data of a patient is utilized to dynamically display 3-dimensional perspective images in real time of the patient's anatomy from the viewpoint of the pointer.
International Patent Application Publication No. WO 02/100284 Al discloses a guide system in which a virtual image and a real image are overlaid together to provide visualization of augmented reality. The virtual image is generated by a computer based on CT and/or MRI images which are co-registered and displayed as a multi-modal stereoscopic object and manipulated in a virtual reality environment to identify relevant surgical structures for display as 3D objects. In an example of see through augmented reality, the right and left eye projections of the stereo image generated by the computer are displayed on the right and left LCD screens of a head mounted display. The right and left LCD screens are partially transparent such that the real world seen through the right and left LCD screens of the head mounted display is overlaid with the computer generated stereo image. In an example of microscope assisted augmented reality, the stereoscopic video output of a microscope is combined, through the use of a video mixer, with the stereoscopic, segmented 3D imaging data of the computer for display in a head mounted display. The crop plane used by the computer to generate the virtual image can be coupled to the focus plane of the microscope. Thus, changing the focus value of the microscope can be used to slice through the virtual 3D model to see details at different planes.
International Patent Application Publication No. WO 2005/000139 Al discloses a surgical navigation imaging system, in which a micro-camera can be provided in a hand-held navigation probe. Real time images of an operative scene from the viewpoint of the micro-camera can be overlaid with computer generated 3D graphics, which depicts structures of interest from the viewpoint of the micro-camera. The computer generated 3D graphics are based on pre-operative scans. Depth perception can be enhanced through varying transparent settings of the camera image and the superimposed 3D graphics. A virtual interface can be displayed adjacent to the combined image to facilitate user interaction.
International Patent Application Publication No. WO 2005/000139 Al also suggests that the real time images as well as the virtual images can be stereoscopic, using a dual camera arrangement.
During an image guided surgery, the practitioner may need to control the content displayed on the monitor of the navigation system for optimal navigation. A 2D user interface may be displayed for the adjustment of the controls.
U.S. Patent No. 5230623 describes operating a pointer with interactive computer graphics. In U.S. Patent No. 5230623, the position of an operating pointer or arm apparatus is detected and read out on a computer and associated graphics display; and the pointer can be changed from its pointer function to a "3D mouse" so that it can alternately be used to control functions of the computer, such as calibration and display features.
SUMMARY
The invention is defined in the independent claims. Some optional features of the invention are defined in the dependent claims.
Methods and apparatuses for cursor control in image guided surgery are described here. Some techniques are summarized in this section.
One technique includes defining a virtual screen in a volume of a 3D tracking system, determining a position on the virtual screen according to a location of a probe in the volume as tracked by the 3D tracking system, and displaying a cursor on a display screen based on the position on the virtual screen, according to a mapping between the virtual screen and the display screen. In one technique, the virtual screen is defined to be substantially parallel to a predefined vector in the volume of the 3D tracking system. In one embodiment, a tip of the probe is located in a central region of the virtual screen when the virtual screen is defined.
The present disclosure includes methods and apparatuses which perform these methods, including processing systems which perform these methods, and computer readable media which when executed on processing systems cause the systems to perform these methods.
Other features will be apparent from the accompanying drawings and from the detailed description which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
Figure 1 illustrates an example screen of a surgical navigation system in a navigation mode, according to one embodiment.
Figure 2 illustrates an example screen of a surgical system in a control mode showing tools to adjust parameters for the display of 3D images of augmented reality with a cursor, according to one embodiment.
Figure 3 illustrates a coordinate system of a display screen within which a cursor is displayed, according to one embodiment.
Figure 4 illustrates the spatial relation between a virtual screen and a probe when the virtual screen is created, according to one embodiment.
Figure 5 illustrates a position of a current virtual screen based on a current probe position, according to one embodiment.
Figure 6 illustrates a method to move a virtual screen along the shooting line of the probe, according to one embodiment.
Figures 7-9 illustrate a method to define a vector for defining the virtual screen, according to one embodiment.
Figure 10 illustrates a method to test cursor control, according to one embodiment.
Figure 11 illustrates a coordinate system of a virtual screen from which a probe location is to be mapped to the location of the cursor in the display screen, according to one embodiment.
Figure 12 illustrates an intersection point on a virtual screen based on the position of a probe, according to one embodiment.
Figure 13 illustrates a system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
Figure 14 illustrates another system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
Figure 15 is a block diagram of a data processing system used in some embodiments of cursor control.
Figure 16 is a block diagram illustrating an architecture of an apparatus for implementing one or more of the disclosed techniques.
DETAILED DESCRIPTION
The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but not necessarily are, references to the same embodiment; and, such references mean at least one.
Reference in this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not other embodiments. Without the intent to limit the scope of the disclosure, exemplary instruments, apparatus, methods and their related results according to various embodiments are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Moreover, certain theories are proposed and disclosed herein; however, in no way should the disclosed theories, whether right or wrong, limit the scope of the disclosure, so long as the techniques can be practiced according to the disclosure without regard for any particular theory or scheme of action.
The present disclosure includes methods and apparatuses for cursor control in image guided surgery.
In a surgical system, 3D images can be provided based on the view point of a navigation instrument, such as a probe, to guide surgical navigation. For example, patient-specific data sets from one or more imaging techniques, such as magnetic resonance imaging, magnetic resonance angiography, magnetic resonance venography, and computed tomography can be co-registered, integrated, and displayed in 3D. The stereoscopic images of the 3D objects as captured on the images can be used to provide augmented reality based image guided surgical navigation.
During the surgical process, the surgeon may wish to access a 2D interface (e.g., the control panel) for adjusting one or more parameters that control the display of augmented reality. For example, the surgeon may adjust parameters that control the color, brightness, viewpoint, resolution, contrast, etc. In the operating room, the surgeon should not be in contact with non-sterilized objects to avoid contamination after the sterilization process.
In one embodiment, in order for the surgeon to access the control panel without using a non-sterilized mouse or keyboard, the probe for the 3D image guided navigation system, which can be sterilized, can additionally be used to control the movement of the cursor on the screen. In one embodiment, the probe is a 3D position tracked instrument used to navigate about a surgical environment. In one embodiment, the probe includes a mounted video camera to capture a point of view of the surgical environment from the viewpoint of the probe; and the stereoscopic images are provided according to the viewpoint of the probe to provide navigation guidance.
In one embodiment, when the system is in the navigation mode, the probe is used to provide the viewpoint of the image that is used to guide the surgical process; when the system is in the control mode, the probe for the 3D image guided navigation system can be used as a cursor controller (e.g., to control the movement of a cursor on a control panel interface, which can be displayed as a 2D interface).
In one embodiment, the system can be alternated between the navigation mode and the control mode by activating a switch. When in the control mode, the surgeon can use the probe to drive the cursor onto a graphical user interface element and use a foot switch to signal the system to perform a function associated with the graphical user interface element while the cursor is displayed on the graphical user interface element.
For example, a 2-D interface may be a control panel with multiple user interface elements, such as buttons, sliders, etc., which can be selectively accessed based on the position of the cursor displayed on the control panel. In one embodiment, the control panel can be used to control the transparency of the displayed image, adjustment of the color map, intensity, etc.
For example, by accessing the buttons on the control panel, tools for data fusion, segmentation, and surgical planning can be activated; and by operating sliders in the control panel, the surgeon can scale the image and adjust the image resolution as desired for image guided surgery. For example, the user can control the display of various virtual objects for image guided surgery, such as the transparency of a virtual object or a real time video image for a display of augmented reality. For example, the user can utilize digital zooming to see an enlarged display. For example, the user can set a cutting plane relative to the tip of the probe. For example, the user may select between an Augmented Reality (AR) dominated display for navigation and an orthogonal slices dominated display for navigation.
Thus, at least one embodiment of the disclosure allows a surgeon to access a 2D user interface, from within the sterile field, via a cursor that is controlled by a position tracked probe for image guided navigation during a surgical procedure, without having to utilize other cursor controlling devices. The benefits include convenience, no need for extra hands, and no need to touch a keyboard, mouse or draped monitor during surgery.
Event Flow
Pre-operative image data, such as a 3D image of the patient scanned before the patient enters the operating room (OR), can be used to generate virtual image data, such as 3D objects segmented from the pre-operative image data, surgical planning data, diagnosis information, etc.
In one embodiment, after the virtual image data is registered to the patient in the operating room (OR), an Image Guided Surgery navigation system can be operated in a navigation mode to provide image based guidance for the surgery process. In one embodiment, the navigation mode is a default mode.
In one embodiment, a stereoscopic display of the virtual image data can be overlaid with a real time video image, captured by a camera mounted on the probe, to provide an augmented reality view of the surgical field from the viewpoint of the probe (camera). In one embodiment, a representation of the probe is further overlaid on the real time video image and/or the virtual image to indicate the position and orientation of the probe in the displayed image. Some details of a stereoscopic display of the augmented reality for image guided surgery are provided further below. Further details can be found in co-pending U.S. Patent Application Serial No. 11/277,920, filed March 29, 2006 and entitled "Methods and Apparatuses for Stereoscopic Image Guided Surgical Navigation," the disclosure of which is incorporated herein by reference.
In one embodiment, the Image Guided Surgery navigation system can be switched into a control mode for accessing a 2D graphical user interface. For example, while in the navigation mode, the user can briefly press a footswitch to swap the system from the navigation mode to the control mode. When in the control mode, the probe can be used as a cursor controller to control the movement of the cursor on the 2D graphical user interface. For example, to activate an icon button or slider on the 2D graphical user interface, the probe can be used to move the cursor onto the icon button or slider; and a footswitch can be pressed to indicate the selection of the button or slider while the cursor is positioned on the icon button or slider.
After the interactions with the 2D graphical user interface (e.g. pressing certain buttons and/or moving certain sliders), the Image Guided Surgery navigation system can be switched back to the navigation mode from the control mode. For example, the probe can be used to move the cursor to the 3D window; and pressing the footswitch once while the cursor is within the 3D window signals the system to go from the control mode to the navigation mode.
During the surgical operation, the Image Guided Surgery navigation system can be swapped between the navigation mode and the control mode from time to time by repeating the operations described above.
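To make the mode-swapping flow described above concrete, the following Python sketch models the footswitch-driven swap between the navigation mode and the control mode; the class name and the callbacks are hypothetical, as the filing does not define a programming interface.

    # Hypothetical sketch of the navigation/control mode swap driven by a footswitch.
    class ModeController:
        NAVIGATION = "navigation"
        CONTROL = "control"

        def __init__(self, create_virtual_screen, activate_gui_element):
            self.mode = self.NAVIGATION              # navigation is the default mode
            self.create_virtual_screen = create_virtual_screen
            self.activate_gui_element = activate_gui_element

        def on_footswitch_press(self, cursor_in_3d_window):
            if self.mode == self.NAVIGATION:
                # a short press swaps the system into the control mode and defines
                # a virtual screen at the current probe tip
                self.mode = self.CONTROL
                self.create_virtual_screen()
            elif cursor_in_3d_window:
                # pressing while the cursor is inside the 3D window returns the
                # system to the navigation mode
                self.mode = self.NAVIGATION
            else:
                # otherwise the press selects the GUI element under the cursor
                self.activate_gui_element()

    # usage: controller = ModeController(lambda: None, lambda: None)
    # controller.on_footswitch_press(cursor_in_3d_window=False)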
In some embodiments, the image guided surgery may be performed without the real time video image; and the probe may not include the camera. In one embodiment, the image guided surgery may be performed with the real time video image but without the virtual image data.
Figure 1 illustrates an example screen of a surgical navigation system in a navigation mode, according to one embodiment. In Figure 1, an Augmented Reality (AR) dominated display for navigation is illustrated. The display screen includes a main window (501) for the display of augmented reality, including a real time video image from a video camera mounted on the probe, a tip portion (509) of the probe as captured in the video image, and virtual objects that are rendered based on the virtual image data registered to the patient in the operating room.
In one embodiment, a 3D model of the tip portion (509) is also overlaid on the main window (501). When the system is correctly registered, the 3D model of the tip portion (509) aligns with the tip portion (509) of the probe as captured in the video image; and misalignment can be easily observed and corrected to improve the navigation accuracy.
In Figure 1, the display screen also includes three windows (503, 505 and 507), which display three orthogonal slices of the virtual image data to show the location of the tip of the probe relative to the patient. The crosshair displayed in the windows (503, 505 and 507) indicates the position of the tip of the probe.
In one embodiment, the orthogonal slices are generated based on a pre-operative 3D image data set. In another embodiment, the orthogonal slices may be obtained in real time based on the tracked position of the probe.
Figure 2 illustrates an example screen of a surgical system in a control mode showing tools to adjust parameters for the display of 3D images of augmented reality, according to one embodiment. In Figure 2, the display screen includes a 3D window (513) showing a preview of the augmented reality window and the slice windows. In one embodiment, the preview is presented based on the image and data that was displayed before the system was switched to the control mode.
In Figure 2, the display screen shows a cursor (511) which can be moved according to the movement of the probe in the operating room. The cursor (511) can be moved around on the 2D graphical user interface to select buttons, sliders, etc. In one embodiment, the settings or parameters changed or adjusted via the 2D graphical user interface are applied to the preview shown in the 3D window (513). Thus, the effect of the change or adjustment can be viewed in the 3D window (513) in the control mode without having to switch back to the navigation mode to observe the effect of the change. Details on the methods and systems for the control of the cursor using the probe are provided below.
Mapping
In one embodiment, a virtual screen is defined in the operating room; and the position of a point on the virtual screen that is pointed at by the probe is mapped to the display screen as the position of the cursor.
Figure 11 illustrates a coordinate system of a virtual screen from which a probe location is to be mapped to the location of the cursor in the display screen, according to one embodiment. In Figure 11, the position of the point (523) as pointed at by the probe is determined from the intersection of the line of the probe, as tracked by a position tracking system in the operating room, and the virtual screen defined in the operating room, as illustrated in Figure 12. Thus, the position of the probe relative to the virtual screen is determined by the orientation of the probe relative to the virtual screen.
Figure 12 illustrates an intersection point on a virtual screen based on the position of a probe, according to one embodiment. In Figure 12, the shooting line is a line along, and/or an axis of, the probe. In Figure 12, the shooting line corresponds with a longitudinal axis of the probe. This longitudinal axis may also define the z-axis Zp of the probe. The position and orientation of the probe as tracked in the operating room determines the shooting line. The intersection point (P) between the shooting line and the virtual screen, as defined in the operating room, determines the position of the point (523) in the virtual screen, which can be calculated in the coordinate system of the virtual screen.
Figure 3 illustrates a coordinate system of a display screen within which a cursor is displayed, according to one embodiment. In Figure 3, the cursor (521) is to be displayed at a point based on the position of the point (523) in the virtual screen. Through mapping the virtual screen as illustrated in Figure 11 to the real screen as illustrated in Figure 3, the position of the cursor (521) in the coordinate system of the real display screen can be determined.
In one embodiment, the origin Os of the display screen is located at the upper-left corner of the screen with an active area defined by the dimensions of (sizeXs, sizeYs). The ratio between the width and the height of the screen can be 4:3 or other ratios. For the computation of the position of the cursor, the origin can also be set at a different location on the display screen.
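As an illustration of this mapping, the Python sketch below converts a point expressed in the virtual screen's own coordinate system (origin at its center, x-y axes assumed to run opposite to the display axes, as described later) into display pixel coordinates with the origin Os at the upper-left corner; the sizes used in the example are illustrative only.

    def virtual_to_display(xv, yv, size_xv, size_yv, size_xs, size_ys):
        # scale from virtual screen units to display pixels
        scale_x = size_xs / size_xv
        scale_y = size_ys / size_yv
        # flip the axes (assumed opposite to the display axes) and move the origin
        # from the centre of the virtual screen to the upper-left screen corner
        xs = size_xs / 2.0 - xv * scale_x
        ys = size_ys / 2.0 - yv * scale_y
        return xs, ys

    # the centre of the virtual screen maps to the centre of the display
    print(virtual_to_display(0.0, 0.0, 400.0, 300.0, 1024, 768))  # (512.0, 384.0)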
Virtual screen generation
In one embodiment, a virtual screen is generated in the operating room when a surgical navigation system is swapped from a navigation mode to a control mode (e.g., via activation of a designated switch, such as a footswitch, or moving the probe outside a predetermined region, or moving the probe into a predetermined region). In one embodiment, once the control mode has been activated, a virtual screen is generated with its center located at the tip of the probe, as shown in Figure 4.
Figure 4 illustrates the spatial relation between a virtual screen and a probe at the time the virtual screen is generated according to one embodiment.
The coordinate system of the probe and the coordinate system of the virtual screen are illustrated. In one embodiment, the origins are substantially overlapped. The origin of the coordinate of the probe can be located at the probe tip. For example, the tip of the probe is located in a central region of the virtual screen when the virtual screen is defined. In one embodiment, the shooting line of the probe is predefined as the z-axis of the probe.
In one embodiment, the y-axis (OvYv) of the virtual screen is a pre-defined vector with an initial direction of (0.0, 1.0, 0.0). This pre-defined vector can be defined on the spot as a vertical vector in the surgical environment in the operating room, or as a vertical direction of the display screen. Alternatively, the pre-defined vector corresponds to a horizontal direction of the display screen. In one embodiment, the pre-defined vector can be defined once and used in later sessions. For example, the value of the pre-defined vector can be stored in a file on the hard disk and can be loaded when the surgical environment is initiated.
Thus, the coordinate system of the virtual screen can be determined by at least one of the direction of the shooting line of the probe, the position of the origin, and/or the predefined vector. For example, the coordinate system can be determined by the set of formula (1) shown below.
[Formula (1), defining the origin Ov and the axes OvXv, OvYv and OvZv of the virtual screen, is reproduced as an image in the original filing.]
After the origin and the three axes of the virtual screen have been determined with formula (1), the virtual screen is determined. In one embodiment, these coordinate values are defined in the coordinate system of the tracking system. The plane of the virtual screen can then be determined by the coefficients of the following equation.
Virtual plane equation: Ax + By + Cz + D = 0 (2)
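Because formula (1) appears only as an image in the original filing, the Python sketch below shows one plausible construction of the virtual screen's origin, axes and plane coefficients from the probe tip, the shooting line and the pre-defined vector; the orthonormalization used here is an assumption rather than the exact formula of the filing.

    import numpy as np

    def make_virtual_screen(probe_tip, shooting_dir, predefined_up=(0.0, 1.0, 0.0)):
        """Return the origin Ov, the axes (Xv, Yv, Zv) and the plane (A, B, C, D)."""
        zv = np.asarray(shooting_dir, float)
        zv /= np.linalg.norm(zv)               # z-axis along the shooting line
        up = np.asarray(predefined_up, float)  # assumed not parallel to the shooting line
        xv = np.cross(up, zv)
        xv /= np.linalg.norm(xv)               # x-axis perpendicular to both
        yv = np.cross(zv, xv)                  # y-axis close to the pre-defined vector
        ov = np.asarray(probe_tip, float)      # origin at the probe tip
        a, b, c = zv                           # the plane normal is the z-axis, so that
        d = -float(np.dot(zv, ov))             # A*x + B*y + C*z + D = 0 on the plane
        return ov, (xv, yv, zv), (a, b, c, d)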
In one embodiment, the virtual screen remains at substantially the same position and orientation until a different virtual screen is generated. Alternatively, the orientation of the virtual screen is maintained substantially the same while the position of the virtual screen can be adjusted along its normal direction to maintain the distance between the virtual screen and the tip of the probe relatively constant. Alternatively, the orientation of the virtual screen can be adjusted over time to follow the orientation of the probe.
Virtual Screen Adjustment
In one embodiment, an apparatus determines a position of a virtual screen according to a tracked location of a probe. The virtual screen may have a predetermined orientation with respect to the probe, which changes as the position and/or the orientation of the probe changes. Alternatively, the virtual screen has a pre-defined orientation until the probe is outside an area defined by the virtual screen. In one embodiment, a display generator in the apparatus generates a cursor for display on a display screen according to a mapping between the virtual screen and the display screen when the system is in a control mode. In the navigation mode, the position and orientation of the probe can define the viewpoint of an augmented reality view.
An apparatus is illustrated in Figure 16, described below.
In one embodiment, a new virtual screen is generated to replace an old virtual screen if the cursor is located substantially close to the boundary of the display or is moving outwards from the display. Therefore, the cursor can remain visible and be controlled by the probe without being limited by the area of the old virtual screen, thus expanding the region of operation. Automatic generation of the new virtual screen gives the user more freedom to control the cursor on the screen with the movement of the probe in the workspace.
The cursor may be automatically shifted to the center of the new virtual screen from the boundary of the old virtual screen when the new virtual screen is generated. In one embodiment, if the cursor position is substantially exceeding the boundary of the virtual screen, a new virtual screen can be defined based on the current location of the probe tip. For example, the system may be switched from the control mode to the navigation mode then back to the control mode by activating a footswitch a predetermined number of times and maintaining the probe in the desired position. Thus, a new virtual screen is generated at the tip of the probe. In one embodiment, the virtual screen is adjusted in the volume of the 3D tracking system to follow an orientation of the probe over a period of time without changing an intersection point between a projection line along the probe and the virtual screen, when the probe is stationary in the volume of the 3D tracking system during the period of time.
Figure 5 illustrates a position of a current virtual screen based on a current probe position, according to one embodiment.
A virtual screen can be used until the intersection point between the shooting line of the probe and the virtual screen approaches and/or exceeds a boundary of the virtual screen. In one embodiment, the virtual screen is dragged by the movement of the intersection point on the virtual screen so that the intersection point remains on the boundary of the virtual screen when the intersection point exceeds the boundaries of the old virtual screen.
For example, a new virtual screen can be generated (or the virtual screen is moved/adjusted) at a new location such that the probe remains at the boundary point of the new virtual screen. The new virtual screen may be generated when the probe approaches the boundary of the old virtual screen closely. Thus, the virtual screen tracks the probe in a 'dragging' like motion.
In one embodiment, the boundary of the virtual screen is defined such that the probe tip is located on the new virtual screen at substantially the same location that the probe tip exited the boundaries of the old virtual screen as if the probe tip is dragging the old virtual screen.
In one embodiment, the position of the virtual screen remains substantially the same when the intersection point of the shooting line of the probe and the virtual screen is within the boundaries of the virtual screen. In one embodiment, the virtual screen is moved based on the movement of the intersection point if the intersection point approaches and moves beyond the boundary of the virtual screen. In one embodiment, a zero displacement of the cursor is generated when the projection point of the probe is located substantially close to or beyond a boundary of the virtual screen.
Figure 6 illustrates a method to move a virtual screen along the shooting line of the probe, according to one embodiment.
For example, if the probe tip leaves the virtual screen, the virtual screen can be moved along the shooting line of the probe such that the probe tip stays on the virtual screen. Since the virtual screen is adjusted along the shooting line of the probe, the cursor position on the screen is not affected. In one embodiment, the virtual screen is dragged to the tip of the probe instantaneously.
In one embodiment, the adjustment of the virtual screen towards the probe tip can be performed over a period of time (e.g., a pre-determined period of time, or a period of time scaled according to the speed of the probe), rather than instantaneously. For example, the adjustment may be faster when the probe movement is slow; and when the probe movement is faster than a threshold, the virtual screen is not adjusted.
In one embodiment, the virtual screen is rotated relative to the current cursor position. For example, when the probe tip is on the virtual screen, the user can hold the probe at an angle with the virtual screen to rotate the virtual screen about the probe tip to obtain a better view of the scene. In one embodiment, the virtual screen is rotated over a pre-determined amount of time from an original orientation to a desired orientation.
In one embodiment, when the probe is moved relative to the virtual screen, the intersection point between the shooting line (projection point) of the probe and the plane of the virtual screen is determined to compute the cursor position on the display screen. The position and/or orientation of the virtual screen can then be recomputed/adjusted according to the change of the position and orientation of the probe without affecting the relative position of the intersection point in the virtual screen. For example, a new virtual screen is generated to replace the virtual screen in response to the projection point being substantially close to or beyond a boundary of the virtual screen.
In one embodiment, the position of the virtual screen is adjustable to maintain an intersection point between a projection line along the probe and a plane of the virtual screen on the boundary of the virtual screen. In one embodiment, the virtual screen is moved along the projection line of the probe to maintain a constant distance between the virtual screen and the probe.
In one embodiment, a virtual screen is moved along the projection line of the probe over a period of time to reduce a difference between a predetermined distance and a current distance between the virtual screen and the probe. For example, the period of time is based on a speed of probe movement in the volume of the 3D tracking system. For example, the period of time may decrease as the speed of the probe increases.
The virtual screen can be re-generated according to the position and orientation of the probe. In one embodiment, the virtual screen can be repositioned/adjusted according to an interpolation between the previous and current positions and orientations of the probe.
In one embodiment, the adjusting the position of the virtual screen comprises dragging the virtual screen according to the movement of the intersection point perpendicular to an edge of the virtual screen and movement of the intersection point parallel to the edge of the virtual screen. For example, the adjusting the position of the virtual screen comprises dragging the virtual screen according to movement of the intersection point perpendicular to an edge of the virtual screen while allowing the intersection point to slide along the edge on the virtual screen.
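A minimal sketch of this dragging behavior, assuming the bookkeeping is done in the virtual screen's own 2D coordinates (all names are illustrative):

    def drag_virtual_screen(screen_center, point, half_width, half_height):
        """Translate the virtual screen so the intersection point stays on its boundary."""
        cx, cy = screen_center
        px, py = point
        dx = dy = 0.0
        if px - cx > half_width:
            dx = (px - cx) - half_width       # point exited through the right edge
        elif cx - px > half_width:
            dx = (px - cx) + half_width       # point exited through the left edge
        if py - cy > half_height:
            dy = (py - cy) - half_height      # point exited through the top edge
        elif cy - py > half_height:
            dy = (py - cy) + half_height      # point exited through the bottom edge
        # inside the boundary, dx and dy stay zero and the screen is not moved;
        # the point can still slide along an edge it has reached
        return (cx + dx, cy + dy)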
Displacement Calculation
After the virtual screen is generated, the displacement of the probe on the virtual screen can be determined from the probe movement. For example, suppose the probe moves from its original position "Ov" illustrated in Figure 11 to a new position "Probe" shown in Figure 12. According to one embodiment, the intersection point between the shooting line of the probe and the virtual screen is point P, and the shooting line has the same direction as the z-axis of the probe. The origin of the z-axis can be the probe tip.
In one embodiment, if the tip of the probe is at the other side of the virtual screen (e.g., the tip of the probe is behind the virtual screen), the intersection point P' (not shown) between the shooting line of the probe and the virtual screen in the opposite direction of the shooting line can be determined.
In one embodiment, the intersection point between the shooting line of the probe and the virtual screen is determined by solving a set of equations determined by the equation of the shooting line and that of the virtual plane. The method is described below. After the intersection point is obtained, the displacement of the probe on the virtual screen can be determined.
In Figure 12, suppose the direction of the shooting line OpZp is defined by a vector (l, m, n). The present position of the probe tip is Op(X0, Y0, Z0). In order to determine the intersection point P/P', the navigation system solves a set of equations of both the virtual screen and the shooting line, as below.
Compute the intersection point (3):
virtual plane: Ax + By + Cz + D = 0
shooting line: (x - X0) / l = (y - Y0) / m = (z - Z0) / n
After the intersection point P/P' is obtained, the displacement Δd1 on the virtual screen can be calculated with the following equation.
Compute the displacement on the virtual screen (4): Δd1 = P - Ov = (Δx1, Δy1)
In one embodiment, the cursor displacement on the display is scaled by the ratio of the size of the display screen and that of the virtual screen. For example, the cursor displacement Δd2 on the display screen can be computed with the following equation.
Compute the cursor displacement on the screen (5):
scale = sizeXs / sizeXv
Δd2 = Δd1 · scale = (Δx2, Δy2)
The cursor displacement Δd2 is then added to the initial (or previous) cursor position prevCursorPos to obtain the new cursor position after the probe has been moved. One technique for doing this is defined in Equation (6):
Update the cursor position on the screen (6):
currCursorPos.x = prevCursorPos.x - Δx2
currCursorPos.y = prevCursorPos.y - Δy2
In one embodiment, when the directions of the x-y axes of the virtual screen are opposite to those of the display screen, the displacement vector of the cursor has an opposite sign compared to the direction of the movement of the probe. For instance, '-Δx2' is used instead of '+Δx2' since the directions of the x-y axes (OvXv and OvYv) of the virtual screen are opposite to those (OsXs and OsYs) of the screen rectangle, respectively.
In one embodiment, if the displacement of the probe exceeds the boundary of the virtual screen (e.g., the intersection point of the shooting line of the probe and the virtual screen exceeds the boundary of the virtual screen), the displacement of the probe Δd1 is recorded as zero. Thus, the cursor also remains at its initial (or previous) position and no update would occur until the probe returns to a region within the boundaries of the virtual screen.
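Putting equations (3) through (6) together, the following Python sketch computes the cursor update from the tracked probe pose. It assumes the virtual screen frame (Ov, Xv, Yv, Zv) constructed earlier, treats equation (4) as the offset of the intersection point from the virtual screen origin, and returns a zero displacement when the intersection falls outside the virtual screen boundary; all names are illustrative.

    import numpy as np

    def update_cursor(prev_cursor, probe_tip, shooting_dir, ov, xv, yv, zv,
                      size_v, size_s):
        p0 = np.asarray(probe_tip, float)
        d = np.asarray(shooting_dir, float)
        n = np.asarray(zv, float)                  # plane normal (A, B, C)
        denom = float(np.dot(n, d))
        if abs(denom) < 1e-9:
            return prev_cursor                     # shooting line parallel to the plane
        t = float(np.dot(n, np.asarray(ov, float) - p0)) / denom
        p = p0 + t * d                             # equation (3): intersection point P
        offset = p - np.asarray(ov, float)
        dx1 = float(np.dot(offset, np.asarray(xv, float)))  # equation (4): displacement
        dy1 = float(np.dot(offset, np.asarray(yv, float)))  # Δd1 on the virtual screen
        if abs(dx1) > size_v[0] / 2 or abs(dy1) > size_v[1] / 2:
            return prev_cursor                     # outside the boundary: zero displacement
        scale = size_s[0] / size_v[0]              # equation (5)
        dx2, dy2 = dx1 * scale, dy1 * scale
        return (prev_cursor[0] - dx2, prev_cursor[1] - dy2)   # equation (6)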
Pre-defined vector
In one embodiment, a virtual screen is defined in the 3D tracking system of the surgical navigation system. The movement of the probe relative to this virtual screen can be mapped to the cursor movement to generate the cursor on the display screen with a 2D interface (e.g., a control panel).
The virtual screen can be defined as the X-Y plane of the 3D tracking system. Thus, the position of the probe detected by the tracking system can be mapped to the cursor in the screen display. In one embodiment, the virtual screen is defined at the probe position.
The probe position relative to the virtual screen can be determined by the intersection position of a vector extending in a direction of the probe tip and the virtual screen. The intersection position can be expressed in the virtual screen coordinates. Another example is to calculate a 3D to 2D transfer matrix to transfer coordinates in the tracking coordinate system to the virtual screen and to map the probe tip's position to the virtual screen using the transfer matrix.
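A hedged sketch of the transfer-matrix alternative mentioned above: a 2x4 matrix whose rows are the virtual screen's x- and y-axes (with the origin offset folded in) maps a homogeneous tracking-space point to its 2D coordinates on the virtual screen. The exact formulation is an assumption; the filing does not spell it out.

    import numpy as np

    def transfer_matrix(ov, xv, yv):
        """Build a 2x4 matrix mapping tracking-space points to virtual screen coordinates."""
        ov, xv, yv = (np.asarray(v, float) for v in (ov, xv, yv))
        return np.array([
            np.append(xv, -np.dot(xv, ov)),    # row 1: projection onto the x-axis
            np.append(yv, -np.dot(yv, ov)),    # row 2: projection onto the y-axis
        ])

    def to_virtual_screen(point_3d, matrix):
        # homogeneous coordinates: append 1 so the origin offset is applied
        return matrix @ np.append(np.asarray(point_3d, float), 1.0)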
In one embodiment, the virtual screen is generated to be perpendicular to the orientation of the probe when the system is switched from the navigation mode to the control mode. Therefore, the virtual screen may have the same coordinate system as that of the probe (e.g., its three axes aligned with those of the probe respectively).
In one embodiment, a pre-defined vector is determined to define the direction of the y-axis of the virtual screen so that the virtual screen can be vertically displayed (e.g., substantially perpendicular to the ground) in front of the operator, independent of the orientation of the probe at the time the virtual screen is created. Thus, the movement of the cursor on the screen can track the movement of the probe in the surgical navigation environment.
For example, when the probe moves horizontally in the workspace, the cursor also moves horizontally on the screen. In one embodiment, the virtual screen is defined to be substantially parallel to the pre-defined vector in the volume of the 3D tracking system. In one embodiment, the pre-defined vector corresponds to a vertical direction of the display screen. In one embodiment, the pre-defined vector is user defined prior to the set up of the surgical navigation system in the operating room. Similarly, the predefined vector can be defined on the spot in the operating room as well. The pre-defined vector can be saved in a file to be re-loaded anytime the system is used.
In one embodiment, the position of the probe relative to the virtual screen is mapped as the corresponding position of the cursor relative to the real screen. In response to a user input, a direction of the probe is determined. A plane that includes the pre-defined vector and a second vector that is perpendicular to the pre-defined vector and the direction of the probe is determined. In one embodiment, the virtual screen is defined in the plane.
Alternatively, the movement of the probe relative to the virtual screen can be mapped as the movement of the cursor relative to the real screen. In one embodiment, the virtual screen rotates as the probe rotates. In one embodiment, the virtual screen is dragged with a probe when the movement of the probe is outwards relative to the boundary of the virtual screen.
Figures 7 - 9 illustrate a method to define a vector for defining the virtual screen, according to one embodiment.
In one embodiment, a key (e.g., 'F5') is pressed to initiate an interface for a user to define a vector for setting up a pre-defined vector. As shown in the screenshot, the interface includes instructions to guide the user in the vector defining process where the pre-defined vector is defined by recording two or more positions of the probe in the volume of the 3D tracking system.
In one embodiment, an upper endpoint of the vector is selected by the probe and recorded via clicking the 'record' button shown on the interface. The 'clear' button can be used to delete the selected endpoints. In one embodiment, the interface includes a panel to indicate whether the endpoints of the vector have been successfully defined or if an error has occurred. A lower endpoint of the vector can be selected by the probe and recorded via clicking the 'record' button shown on the interface. In one embodiment, the probe is shifted downwards from the upper endpoint until the desired location has been reached. To save the lower endpoint, the 'record' button is pressed while maintaining the probe in the desired location. Similarly, the 'clear' button can be used to delete the selected endpoints. As shown in the status indicator panel, the upper endpoint has been successfully selected.
The status panel indicates that the lower endpoint has been selected. In one embodiment, the direction of the resulting vector is determined and displayed in the status indicator panel. The positions of the two points defined using the probe determine the vector that can be used to define the orientation of the virtual screen. In one embodiment, the 'return' button is used to switch from the control mode to the navigation mode, in which the user is able to navigate about the 3D image.
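As a minimal sketch of the recording step described above (an illustration, not the patent's implementation), the pre-defined vector can be computed from the two recorded probe positions and, as noted earlier, saved to a file for later re-loading; the function and parameter names below are assumptions.

```python
import numpy as np

def record_predefined_vector(upper_point, lower_point):
    """Derive the pre-defined (e.g., vertical) vector from the two probe
    positions recorded with the 'record' button, as a unit vector pointing
    from the lower to the upper endpoint."""
    v = np.asarray(upper_point, float) - np.asarray(lower_point, float)
    n = np.linalg.norm(v)
    if n < 1e-6:
        raise ValueError("endpoints too close together; re-record the points")
    return v / n

# The vector can be saved for re-use in later sessions, e.g.:
#   np.savetxt("predefined_vector.txt", record_predefined_vector(p_up, p_low))
```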
Figure 10 illustrates a method to test cursor control, according to one embodiment.
In one embodiment, the defined vector can be tested via pressing the 'test' button to verify the pre-defined vector. The 'test' button may trigger a 'test vertical vector' interface displayed with a set of grids. In the testing environment, the probe can be moved around in the workspace to evaluate whether the cursor movement tracks the movement of the probe. To return to the previous screen (e.g., the screen to define a vector), the 'return' button can be pressed.
If the cursor movement is satisfactory, the same pre-defined vector can be used. However, if the movement of the cursor is unsatisfactory, an option to re-define the predefined vector may be available.
In one embodiment, to map the movement of the probe to the displacement of the cursor on the screen, a virtual screen is generated. For example, the virtual screen can be generated with its center located at the tip of the probe with a coordinate system shown in Figure 11.
In one embodiment, the directions of the X-axis and Y-axis of the virtual screen are opposite to those of the real screen as shown in Figure 3. Other coordinate systems can also be used. In addition, the orientation of the coordinate system of the virtual screen can be defined to be aligned with the coordinate system of the probe or of the tracking system. Similarly, the ratio between the width and the height of the virtual screen plane can be the same as the ratio of the real screen (e.g., 4:3). Other ratios may be used for the virtual screen.
A shooting line of the probe is a vector extending in the direction that the probe is pointed towards. The virtual screen may intersect with the shooting line of the probe as the probe is being operated by a user. In one embodiment, the displacement Δd1 on the virtual screen between the intersection point and the origin of the virtual screen is scaled (e.g., the scale factor can depend on the size of the screen rectangle and that of the virtual screen) to obtain a new displacement Δd2 of the cursor on the display screen.
The position of the cursor when the virtual screen is generated can be recorded. To obtain the new position of the cursor, the new displacement Δd2 can be added onto the old cursor position to generate the new cursor position. Thus, the cursor's movement tracks the movement of the probe.
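The intersection-and-scaling step can be sketched as follows (Python/NumPy, for illustration only). The parameter names and the handling of a probe parallel to the screen are assumptions; the signs of `scale_x`/`scale_y` can absorb the axis convention mentioned above in connection with Figure 3.

```python
import numpy as np

def probe_to_cursor(probe_tip, probe_dir, screen_origin, x_axis, y_axis, normal,
                    old_cursor, scale_x, scale_y):
    """Map the probe's shooting line to a new cursor position.

    The shooting line (probe_tip + t * probe_dir) is intersected with the
    virtual screen plane; the in-plane displacement (delta d1) from the
    screen origin is scaled to a display displacement (delta d2) and added
    to the cursor position recorded when the virtual screen was created."""
    probe_tip = np.asarray(probe_tip, float)
    probe_dir = np.asarray(probe_dir, float)
    screen_origin = np.asarray(screen_origin, float)

    denom = np.dot(probe_dir, normal)
    if abs(denom) < 1e-9:
        return old_cursor                      # probe parallel to the screen
    t = np.dot(screen_origin - probe_tip, normal) / denom
    hit = probe_tip + t * probe_dir            # intersection with the plane

    d = hit - screen_origin                    # displacement on the virtual screen
    dx, dy = np.dot(d, x_axis), np.dot(d, y_axis)

    return (old_cursor[0] + scale_x * dx,      # scaled displacement added to the
            old_cursor[1] + scale_y * dy)      # previously recorded cursor position
```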
In one embodiment, when the probe is pointing at the same point on the virtual screen, the cursor is mapped to the same point on the real screen. In one embodiment, the velocity of the movement of the probe can be used to control the movement of the cursor (e.g., the scaling of the displacement may be weighted according to the speed of the probe movement).
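A possible speed-dependent gain is sketched below; the thresholds and multipliers are purely illustrative assumptions, since the description does not specify a particular weighting function.

```python
def speed_weighted_scale(base_scale, speed, slow=0.02, fast=0.2):
    """Illustrative gain curve: slower probe motion gives finer cursor
    control, faster motion gives coarser control (all constants assumed)."""
    if speed <= slow:
        return 0.5 * base_scale
    if speed >= fast:
        return 1.5 * base_scale
    frac = (speed - slow) / (fast - slow)   # linear blend between the two regimes
    return (0.5 + frac) * base_scale
```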
Image Guided Surgery
Figure 13 illustrates a system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
In one embodiment, a computer 123 is used to generate a virtual image of a view, according to the viewpoint of the video camera 103, to enhance the display image based on the real image. The real image and the virtual image can be integrated in real time for display on the display device 125 (e.g., a monitor, or other display devices). In one embodiment, the computer 123 generates the virtual image based on the object model 121, which can be generated from scanned data of the patient and defined before the image guided procedure (e.g., a neurosurgical procedure). For example, the object model 121 can include diagnostic information, a surgical plan, and/or segmented anatomical features that are captured from the scanned 3D image data.
In one embodiment, a video camera 103 is mounted on a probe 101 such that at least a portion of the probe tip 115 is in the field of view 105 of the camera. In one embodiment, the video camera 103 has a predefined position and orientation with respect to the probe 101 such that the position and orientation of the video camera 103 can be determined from the position and the orientation of the probe 101. The probe 101 may include other instruments, such as additional illuminating devices.
In one embodiment, the probe 101 may not include a video camera. In such an embodiment, a representation of the probe is overlaid on the scanned image of the patient based on the spatial relationship between the patient and the probe.
For example, images used in navigation, obtained pre-operatively or intra-operatively from imaging devices such as ultra-sonography, MRI, X-ray, etc., can be images of internal anatomies. To show a navigation instrument inside a body part of a patient, the tracked position of the navigation instrument can be indicated in the images of the body part.
For example, the pre-operative images can be registered with the corresponding anatomic region of the patient. In one embodiment, in the registration process, the spatial relationship between the pre-operative images and the patient in the tracking system is determined. Using the spatial relation determined in the registration process, the location of the navigation instrument as tracked by the tracking system can be spatially correlated with the corresponding locations in the pre-operative images.
In addition, a representation of the probe can be overlaid on the pre-operative images according to the relative position between the patient and the probe. Further, the system can determine the pose (position and orientation) of the video camera based on the tracked location of the probe. Thus, the images obtained from the video camera can be spatially correlated with the pre-operative images for the overlay of the video image with the pre-operative images.
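Once the registration is known, correlating a tracked position with the pre-operative images reduces to applying a homogeneous transform. A minimal sketch follows (assuming a 4x4 matrix `T_image_from_tracker` produced by registration; the naming is illustrative and not part of the original disclosure).

```python
import numpy as np

def to_image_space(T_image_from_tracker, p_tracker):
    """Map a point given in tracking-system coordinates into the coordinate
    frame of the pre-operative images using a 4x4 homogeneous transform."""
    p = np.append(np.asarray(p_tracker, float), 1.0)   # homogeneous coordinates
    return (T_image_from_tracker @ p)[:3]
```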
Various registration techniques can be used to determine the spatial relation between the pre-operative images and the patient. For example, one registration technique maps the image data of a patient to the patient using a number of anatomical features on the body surface of the patient, by matching their positions identified and located in the scan images with the corresponding positions on the patient as determined using a tracked probe.
The registration accuracy can be further improved by mapping a surface of a body part of the patient generated from the imaging data to the surface data of the corresponding body part generated on the operating table. Some example details on registration can be found in U.S. Patent Application No. 10/480,715, filed July 21, 2004, the disclosure of which is hereby incorporated herein by reference.
In one embodiment, the position tracking system 127 uses two tracking cameras 131 and 133 to capture the scene for position tracking. A reference frame 117 with feature points is attached rigidly to the patient 111. The feature points can be fiducial points marked with markers or tracking balls 112-114, or Light Emitting Diodes (LEDs). In one embodiment, the feature points are tracked by the position tracking system 127. In a registration process, the spatial relationship between the set of feature points and the pre-operative images is determined. Thus, even if the patient is moved during the surgery, the spatial relation between the pre-operative images which represent the patient and positions determined by the tracking system can be dynamically determined, using the tracked location of the feature points and the spatial relation between the set of feature points and the pre-operative images.
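One way to express the dynamic use of the reference frame described above is to compose the fixed registration transform with the currently tracked pose of the reference frame. The sketch below is illustrative; the 'A_from_B' naming convention and the assumption that poses are given as 4x4 matrices are not part of the original disclosure.

```python
import numpy as np

def image_from_tracker(T_tracker_from_ref_now, T_image_from_ref):
    """Compose transforms so the mapping to image space follows the patient.

    T_image_from_ref       : fixed result of registration (image <- reference
                             frame rigidly attached to the patient).
    T_tracker_from_ref_now : current tracked pose of the reference frame in
                             tracker coordinates (tracker <- reference).
    Returns a 4x4 transform mapping current tracker coordinates into the
    pre-operative image coordinates, even after the patient has moved."""
    T_ref_from_tracker_now = np.linalg.inv(T_tracker_from_ref_now)
    return T_image_from_ref @ T_ref_from_tracker_now
```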
For example, the probe 101 has feature points 107, 108 and 109 (e.g., tracking balls). The image of the feature points in images captured by the tracking cameras 131 and 133 can be automatically identified using the position tracking system 127. Based on the positions of the feature points of the probe 101 in the video images of the tracking cameras, the position tracking system 127 can compute the position and orientation of the probe 101 in the coordinate system 135 of the position tracking system.
In one embodiment, the location of the reference frame 117 is determined based on the tracked positions of the feature points 112-114; and the location of the tip 115 of the probe is determined based on the tracked positions of the feature points 107, 108 and 109. When the user signals (e.g., using a foot switch) that the probe tip is touching an anatomical feature (or a fiducial point) corresponding to an identified feature in the pre-operative images, the system can correlate the location of the reference frame, the position of the tip of the probe, and the position of the identified feature in the pre-operative images.
Thus, the position of the tip of the probe can be expressed relative to the reference frame. Three or more sets of such correlation data can be used to determine a transformation that maps between the positions as determined in the pre-operative images and positions as determined relative to the reference frame.
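The transformation determined from three or more correlated point pairs can be computed, for example, with a standard least-squares rigid fit (Kabsch/Horn method). The sketch below shows one such computation for illustration; the description does not prescribe this particular algorithm, and the names are assumptions.

```python
import numpy as np

def rigid_registration(points_ref, points_image):
    """Estimate the rigid transform mapping points expressed relative to the
    reference frame onto the corresponding fiducial positions identified in
    the pre-operative images (requires >= 3 non-collinear pairs)."""
    P = np.asarray(points_ref, float)     # N x 3, relative to the reference frame
    Q = np.asarray(points_image, float)   # N x 3, in image coordinates
    cp, cq = P.mean(axis=0), Q.mean(axis=0)

    # Cross-covariance of the centered point sets, then SVD (Kabsch).
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp

    T = np.eye(4)                         # 4x4: image <- reference frame
    T[:3, :3], T[:3, 3] = R, t
    return T
```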
In one embodiment, registration data representing the spatial relation between the positions as determined in the pre-operative images and positions as determined relative to the reference frame is stored after the registration. The registration data is stored with identification information of the patient and the pre-operative images. When a registration process is initiated, such previously generated registration data is searched for the patient and the pre-operative images. If the previously recorded registration data is found and valid, the registration data can be loaded into the computer process to eliminate the need to repeat the registration operations of touching the anatomical features with the probe tip.
Using the registration data, the image data of a patient, including the various objects associated with the surgical plan which are in the same coordinate systems as the image data, can be mapped to the patient on the operating table.
Although Figure 13 illustrates an example of using tracking cameras in the position tracking system, other types of position tracking systems can also be used. For example, the position tracking system can determine a position based on the delay in the propagation of a signal, such as a radio signal, an ultrasound signal, or a laser beam. A number of transmitters and/or receivers can be used to determine the propagation delays to a set of points to track the position of a transmitter (or a receiver). Alternatively, or in combination, for example, the position tracking system can determine a position based on the positions of components of a supporting structure that can be used to support the probe.
Image based guidance can also be provided based on the real time position and orientation relation between the patient 111, the probe 101 and the object model 121. For example, based on the known geometric relation between the viewpoint and the probe 101, the computer can generate a representation of the probe (e.g., using a 3D model of the probe) to show the relative position of the probe with respect to the object.
For example, the computer 123 can generate a 3D model of the real time scene having the probe 101 and the patient 111, using the real time determined position and orientation relation between the patient 111 and the probe 101, a 3D model of the patient 111 generated based on the pre-operative image, a model of the probe 101, and the registration data. With the 3D model of the scene, the computer 123 can generate a stereoscopic view of the 3D model of the real time scene for any pair of viewpoints specified by the user. Thus, the pose of the virtual observer, with the pair of viewpoints associated with the eyes of the virtual observer, can have a pre-determined geometric relation with the probe 101, or be specified by the user in real time during the image guided procedure.
In one embodiment, the object model 121 can be prepared based on scanned images prior to the performance of a surgical operation. For example, after the patient is scanned, such as by CT and/or MRI scanners, the scanned images can be used in a virtual reality (VR) environment, such as the Dextroscope, for planning. Detailed information on the Dextroscope can be found in "Planning and Simulation of Neurosurgery in a Virtual Reality Environment" by Kockro, et al., in Neurosurgery Journal, Vol. 46, No. 1, pp. 118-137, September 2000, and "Multimodal Volume-based Tumor Neurosurgery Planning in the Virtual Workbench," by Serra, et al., in Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Massachusetts Institute of Technology, Cambridge, Mass., USA, Oct. 11-13, 1998, pp. 1007-1016. The disclosures of these publications are incorporated herein by reference.
In one embodiment, scanned images from different imaging modalities can be co- registered and displayed as a multimodal stereoscopic object. During the planning session, relevant surgical structures can be identified and isolated from scanned images. Additionally, landmarks and surgical paths can be marked. The positions of anatomical features in the images can also be identified. The identified positions of the anatomical features can be subsequently used in the registration process for correlating with the corresponding positions on the patient.
In some embodiments, no video camera is mounted on the probe. The video camera can be a separate device which can be tracked separately. For example, the video camera can be part of a microscope. For example, the video camera can be mounted on a head mounted display device to capture the images as seen by the eyes through the head mounted display device. For example, the video camera can be integrated with an endoscopic unit.
Figure 14 illustrates another system to provide a display of augmented reality to guide a surgical procedure, according to one embodiment.
The system includes a stereo LCD head mounted display 201 (for example, a SONY LDI 100). The head mounted display 201 can be worn by a user, or alternatively, it can be coupled to an operating microscope 203 supported by a structure 205. In one embodiment, a support structure allows the LCD display 201 to be mounted on top of the binocular during microscopic surgery.
In one embodiment, the head mounted display 201 is partially transparent to allow the overlay of the image displayed on the head mounted display 201 onto the scene that is seen through the head mounted display 201. Alternatively, the head mounted display 201 is not transparent; and a video image of the scene is captured and overlaid with graphics and/or images that are generated based on the pre-operative images.
In one embodiment, the system further includes an optical tracking unit 207 to track the locations of a probe 209, the head mounted display 201, and/or the microscope 203. For example, the location of the head mounted display 201 can be tracked to determine the viewing direction of the head mounted display 201 and generate the image for display in the head mounted display 201 according to the viewing direction of the head mounted display 201.
In addition, the location of the probe 209 can be used to present a representation of the tip of the probe on the image displayed on the head mounted display 201. For example, the location and the settings of the microscope 203 can be used in generating the image for display in the head mounted display 201 when the user views the surgical environment through the microscope. In one embodiment, the location of the patient 221 is also tracked. Thus, even if the patient moves during the operation, the computer 211 can still overlay the virtual data on the real view accurately.
In one embodiment, the tracking unit 207 operates by detecting three or more reflective spherical markers attached to an object. Alternatively, the tracking unit 207 can operate by detecting the light from LEDs. By knowing and calibrating the shape of an object carrying the markers (such as the pen-shaped probe 209), the location of the object can be determined in the 3D space covered by the two cameras of the tracking system. To track the LCD display 201, three or more markers can be attached along its upper frontal edge (close to the forehead of the person wearing the display).
The microscope 203 can also be tracked by reflective markers mounted to a support structure attached to the microscope such that a free line of sight to the cameras of the tracking system is provided during most of the microscope movements. In one embodiment, the tracking unit 207 used in the system is commercially available, such as the Polaris from Northern Digital. Alternatively, other types of tracking units can also be used.
In one embodiment, the system further includes a computer 211, which is capable of real time stereoscopic graphics rendering, and transmitting the computer-generated images to the head mounted display 201 via a cable 213. The system may further include a footswitch 215, to transmit signals to the computer 211 via a cable 217. For example, during the registration process, a user can activate the footswitch to indicate to the computer that the probe tip is touching a fiducial point on the patient, at which moment the position of the probe tip represents the position of the fiducial point on the patient.
In one embodiment, the settings of the microscope 203 are transmitted (as discussed below) to the computer 211 via cable 219. The tracking unit 207 and the microscope 203 communicate with the computer 211 via a serial port in one embodiment. The footswitch 215 can be connected to another computer port for interaction with the computer during the surgical procedure.
In one example of neurosurgery, the head of the patient 221 is registered to the volumetric preoperative data with the aid of markers (fiducials) on the patient's skin or disposed elsewhere on or in the patient. For example, the fiducials can be glued to the skin before the imaging procedure and remain on the skin until the surgery starts. In some embodiments, four or more (e.g., six) fiducials are used. During the pre-operative planning phase, the positions of the markers in the images are identified and marked.
In the operating theatre, a probe tracked by the tracking system is used to point to the fiducials in the real world (on the skin) that correspond to those marked on the images. The 3D data is then registered to the patient. In one embodiment, the registration procedure yields a transformation matrix which can be used to map the positions as tracked in the real world to the corresponding positions in the images.
In some embodiments, other registration methods, such as surface-based registration, can be used in place of (or in addition to) the point-based registration described above.
In one embodiment, after the image-to-patient registration procedure, the surgeon can wear the head mounted display 201 to examine the patient 221 through the semi-transparent screen of the display 201, on which the stereoscopic reconstruction of the segmented imaging data can be displayed. The surgeon can see the 3D image data overlaid directly on the actual patient. The image of the 3D structures appearing "inside" the head can be viewed from different angles while the viewer is changing position.
In one embodiment, registering image data with a patient involves providing a reference frame with a fixed position relative to the patient and determining the position and orientation of the reference frame using a tracking device. The image data is then registered to the patient relative to the reference frame.
For example, a transformation matrix that represents the spatial relation between the coordinate system of the image data and a coordinate system based on the reference frame can be determined during the registration process and recorded (e.g., in a file on a hard drive, or other types of memory, of the computer (123 or 211)). Alternatively, other types of registration data that can be used to derive the transformation matrix, such as the input data received during the registration, can be stored. When the program for the image guided surgery system is re-started, it is automatically determined if the recorded registration data exists for the corresponding patient and image data. If registration data is available, the program can utilize the existing registration data and skip some of the registration operations.
In some embodiments, the module uses one or more rules to search for the registration data and determine its validity. For example, the name of the patient can be used to identify the patient. Alternatively, other types of identifications can be used to identify the patient. For example, a patient ID number can be used to identify the patient. Further, in some embodiments, the patient ID number can be obtained and/or derived from a Radio Frequency Identification (RFID) tag of the patient in an automated process.
In one embodiment, the module determines the validity of the registration data based on a number of rules. For example, the module can be configured to reject registration data that is older than a pre-determined time period, such as 24 hours. In one embodiment, the module can further provide the user the option to choose between using the existing registration data and starting a new registration process. The system can assign identifications to the image data, such that the registration data is recorded in association with the identification of the image data.
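A minimal validity check along the lines described above might look like the following; the record layout (a dict with 'patient_id', 'image_id' and 'timestamp' fields) and the helper name are assumptions, while the patient/image matching and the 24-hour rule come from the description.

```python
import time

MAX_AGE_SECONDS = 24 * 3600   # illustrative threshold taken from the description

def registration_is_valid(record, patient_id, image_id, now=None):
    """Check previously stored registration data before reusing it: it must
    belong to the same patient and image data, and not be too old."""
    now = time.time() if now is None else now
    return (record.get("patient_id") == patient_id
            and record.get("image_id") == image_id
            and (now - record.get("timestamp", 0)) <= MAX_AGE_SECONDS)
```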
Data Processing System
Figure 15 is a block diagram of a data processing system used in some embodiments of cursor control.
While Figure 15 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components. Other systems that have fewer or more components/modules can also be used.
The computer system 400 is one embodiment of a data processing system. The system 400 includes an inter-connect 401 (e.g., bus and system core logic), which interconnects a microprocessor(s) 403 and memory 407. The microprocessor (403) is coupled to cache memory 405, which can be implemented on a same chip as the microprocessor (403).
The inter-connect (401) interconnects the microprocessor(s) (403) and memory (407) (e.g., the volatile memory and/or the nonvolatile memory) together and also interconnects them to a display controller and display device (413) and to peripheral devices such as input/output (I/O) devices (409) through an input/output controller(s) (411). Typical I/O devices include mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices.
The inter-connect (401) can include one or more buses connected to one another through various bridges, controllers and/or adapters. In one embodiment the I/O controller (411) includes a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals. The inter-connect (401) can include a network connection.
In one embodiment, the volatile memory includes RAM (Random Access Memory), which typically loses data after the system is restarted. The non-volatile memory includes ROM (Read Only Memory), and other types of memories, such as hard drive, flash memory, floppy disk, etc.
Volatile RAM is typically implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, flash memory, a magnetic optical drive, or an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after the main power is removed from the system. The non-volatile memory can also be a random access memory.
The non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system. A non-volatile memory that is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used. Various embodiments can be implemented using hardware, programs of instructions, or combinations of hardware and programs of instructions.
In some implementations, a customized navigation system such as those described above may work together with another, full-fledged navigation system via a connection protocol. For example, the visualization display result of the full-fledged navigation system may lack certain features, such as stereoscopic display. In these implementations, the customized navigation system can enhance the visualization display result with more sophisticated image processing procedures. In this case, the customized navigation system does not need to perform its own registration of the pre-operative images to the patient in the physical world, since that registration is already performed by the full-fledged navigation system. Via the above-mentioned connection protocol, the customized navigation system can retrieve tracking data, registration results, virtual patient data information, etc., from the full-fledged navigation system in real time. Thus, users can still navigate the enhanced visualization display of the virtual patient dataset with the customized navigation system in the operating room. Depending on the connection protocol, the communication between the full-fledged navigation system and the customized one can be either one-way or two-way.
In such implementations, the image guided system may not perform the tracking of the probe directly by itself; instead, the tracking data is retrieved from a third-party system. The image guided system can still implement the cursor control techniques described above with the probe, based on the tracking data retrieved from the third-party system.
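For illustration, retrieving probe poses from the full-fledged system could be wrapped in a small adapter such as the one below, whose class name, method names and `connection` object are entirely hypothetical; the actual connection protocol is not specified here.

```python
class ThirdPartyTrackingAdapter:
    """Illustrative adapter: the customized navigation system pulls probe
    poses (and, if available, registration results) from a full-fledged
    navigation system over a connection protocol instead of tracking the
    probe itself."""

    def __init__(self, connection):
        self.connection = connection   # e.g., a socket or vendor API handle (assumed)

    def probe_pose(self):
        """Return (position, direction) of the probe in tracker coordinates,
        as reported by the third-party system in real time."""
        msg = self.connection.latest("probe")   # assumed protocol call
        return msg["position"], msg["direction"]
```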
In general, routines executed to implement the embodiments can be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions referred to as "computer programs." The computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer that, when read and executed by one or more processors in the computer, cause the computer to perform the operations necessary to execute elements involving the various aspects.
While some embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that various embodiments are capable of being distributed as a program product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
Examples of computer-readable media include, but are not limited to, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs), etc.), among others. The instructions can be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.
A machine readable medium can be used to store software and data which, when executed by a data processing system, causes the system to perform various methods. The executable software and data can be stored in various places including, for example, ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data can be stored in any one of these storage devices.
In general, a machine readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
Aspects of the present disclosure can be embodied, at least in part, in software. That is, the techniques can be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
In various embodiments, hardwired circuitry can be used in combination with software instructions to implement the embodiments. Thus, the techniques are not limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
One apparatus for performing the disclosed techniques is illustrated in Figure 16. The apparatus 1600 comprises a definition module 1602 which defines a virtual screen in a volume of a 3D tracking system. In one or more embodiments described above, the virtual screen is generated responsive to a user activation of the probe. Apparatus 1600 also comprises a tracking module 1604 for determining a position of a probe in the volume as tracked by the 3D tracking system. In one or more embodiments, tracking module 1604 receives tracking data from a 3D tracking system and determines a position of the probe in the 3D volume of the tracking system. Apparatus 1600 also comprises a display generator 1606 which displays a cursor on a display screen based on a position of the probe relative to the virtual screen, according to a mapping between the virtual screen and the display screen.
In this description, various functions and operations are described as being performed by or caused by software code to simplify the description. However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from execution of the code by a processor, such as a microprocessor.
Although some of the drawings illustrate a number of operations in a particular order, operations which are not order dependent can be reordered, and other operations can be combined or broken out. While some reorderings or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, and so the alternatives presented here are not exhaustive. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims

1. A method comprising: defining a virtual screen in a volume of a 3D tracking system; determining a position of a probe in the volume as tracked by the 3D tracking system; and displaying a cursor on a display screen based on a position of the probe relative to the virtual screen, according to a mapping between the virtual screen and the display screen.
2. The method of claim 1, wherein the mapping is carried out according to a position of a point on the virtual screen that is pointed at by the probe.
3. The method of claim 1 or claim 2, wherein displaying the cursor on the display screen based on the position of the probe relative to the virtual screen comprises determining an orientation of the probe relative to the virtual screen.
4. The method of claim 3, wherein determining an orientation of the probe relative to the virtual screen comprises determining an intersection point of a shooting line of the probe on the virtual screen.
5. The method of claim 4, wherein the shooting line of the probe is an axis of the probe, and determining an intersection point of the shooting line of the probe on the virtual screen comprises determining a point of intersection on the virtual screen of the axis of the probe.
6. The method of claim 4 or claim 5, wherein the shooting line of the probe is a line along the probe, and determining an intersection point of the shooting line of the probe on the virtual screen comprises determining a point of intersection on the virtual screen of the line along the probe.
7. A method, comprising: defining a virtual screen in a volume of a 3D tracking system; determining a position on the virtual screen according to a location of a probe in the volume as tracked by the 3D tracking system; and displaying a cursor on a display screen based on the position of the probe on the virtual screen, according to a mapping between the virtual screen and the display screen.
8. The method of any preceding claim, wherein the defining the virtual screen comprises defining the virtual screen to be substantially parallel to a pre-defined vector in the volume of the 3D tracking system.
9. The method of claim 8, wherein a tip of the probe is located in a central region of the virtual screen when the virtual screen is defined.
10. The method of claim 8, wherein the pre-defined vector corresponds to a vertical direction of the display screen.
11. The method of claim 8, wherein the pre-defined vector corresponds to a horizontal direction of the display screen.
12. The method of any preceding claim, further comprising, in response to a user input: determining a direction of the probe; determining a plane that includes the pre-defined vector and a second vector that is perpendicular to the pre-defined vector and the direction of the probe; wherein the virtual screen is defined in the plane.
13. The method of claim 12, further comprising recording two or more positions of the probe in the volume of the 3D tracking system to define the predefined vector.
14. The method of claim 12, further comprising determining the orientation of the pre-defined vector based on a computation of an average position of the tracked position of the probe during a predetermined period of time.
15. The method of claim 13, wherein the recording of the two or more positions of the probe determines a scale of the pre-defined vector.
16. The method of any preceding claim, further comprising adjusting the virtual screen in the volume of the 3D tracking system to follow an orientation of the probe over a period of time without changing an intersection point between a projection line along the probe and the virtual screen, when the probe is stationary in the volume of the 3D tracking system during the period of time.
17. The method of any preceding claim, further comprising: scaling a displacement of a projection point of the probe on the virtual screen based on a speed of the probe in the volume of the 3D tracking system to obtain a displacement of the cursor on the display screen.
18. The method of claim 17, further comprising generating a zero displacement of the cursor when the projection point of the probe is located substantially close to or beyond a boundary of the virtual screen.
19. The method of claim 17, further comprising, responsive to the projection point being located substantially close to or beyond a boundary of the virtual screen, generating a new virtual screen to replace the virtual screen.
20. The method of any preceding claim, further comprising adjusting the position of the virtual screen to maintain an intersection point between a projection line along the probe and a plane of the virtual screen on the boundary of the virtual screen, when the intersection point would move outside of the virtual screen if the position of the virtual screen were not adjusted.
21. The method of claim 20, further comprising moving the virtual screen along the projection line of the probe to maintain a constant distance between the virtual screen and the probe.
22. The method of claim 20, further comprising moving the virtual screen along the projection line of the probe over a period of time to reduce a difference between a predetermined distance and a current distance between the virtual screen and the probe.
23. The method of claim 22 wherein the period of time is based on a speed of probe movement in the volume of the 3D tracking system.
24. The method of claim 23, wherein the period of time decreases as the speed of the probe increases.
25. The method of any of claims 20 to 24, wherein the adjusting the position of the virtual screen comprises dragging the virtual screen according to movement of the intersection point perpendicular to an edge of the virtual screen and movement of the intersection point parallel to the edge of the virtual screen.
26. The method of any of claims 20 to 25, wherein the adjusting the position of the virtual screen comprises dragging the virtual screen according to movement of the intersection point perpendicular to an edge of the virtual screen while allowing the intersection point to slide along the edge on the virtual screen.
27. The method of claim 16 wherein the adjusting the virtual screen comprises rotating the virtual screen with a rotational movement of the probe.
28. An apparatus, comprising: a definition module for defining a virtual screen in a volume of a 3D tracking system; a tracking module for determining a position of a probe in the volume as tracked by the 3D tracking system; and a display generator for displaying a cursor on a display screen based on a position of the probe relative to the virtual screen, according to a mapping between the virtual screen and the display screen.
29. An apparatus, comprising: means for defining a virtual screen in a volume of a 3D tracking system; means for determining a position on the virtual screen according to a location of a probe in the volume as tracked by the 3D tracking system; and means for displaying a cursor on a display screen based on the position on the virtual screen, according to a mapping between the virtual screen and the display screen.
30. An apparatus, comprising: a probe; a tracking system coupled to the probe to track at least one of a position and orientation of the probe; a data processing system coupled to the tracking system to receive the tracked location of the probe; and a video camera to capture a real time image based on a location of the probe, the real time image to be integrated with information derived from pre-operative 3D images to generate an environment; wherein the probe is usable to control a cursor in the environment.
31. The apparatus of claim 30 wherein the video camera is installed on the probe.
32. The apparatus of claim 30 or 31 wherein the probe is usable to navigate about the environment in real time.
33. The apparatus of any of claims 30 to 32 further comprising an input device comprising at least one of a footswitch, a mouse, and a keyboard for receiving an input signal.
34. The apparatus of any of claims 30 to 33 wherein the data processing system is to map at least one of the position and orientation of the probe to a display screen.
PCT/SG2007/000314 2006-12-19 2007-09-17 Methods and apparatuses for cursor control in image guided surgery WO2008076079A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US87080906P 2006-12-19 2006-12-19
US60/870,809 2006-12-19

Publications (2)

Publication Number Publication Date
WO2008076079A1 true WO2008076079A1 (en) 2008-06-26
WO2008076079A8 WO2008076079A8 (en) 2008-09-12

Family

ID=39536578

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2007/000314 WO2008076079A1 (en) 2006-12-19 2007-09-17 Methods and apparatuses for cursor control in image guided surgery

Country Status (1)

Country Link
WO (1) WO2008076079A1 (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012056034A1 (en) * 2010-10-28 2012-05-03 Fiagon Gmbh Navigating attachment for optical devices in medicine, and method
FR2974997A1 (en) * 2011-05-10 2012-11-16 Inst Nat Rech Inf Automat Control system for information processing unit installed in surgery room for interventional radiology, has pedal operable by foot of surgeon to manage interactions with processing unit, where pedal is placed on floor in surgery room
US9164777B2 (en) 2011-08-30 2015-10-20 Microsoft Technology Licensing, Llc Determining the display of equal spacing guides between diagram shapes
US9323436B2 (en) 2012-04-05 2016-04-26 Microsoft Technology Licensing, Llc Utilizing drawing guides in determining the display of smart guides in a drawing program
US9498231B2 (en) 2011-06-27 2016-11-22 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
WO2017066373A1 (en) 2015-10-14 2017-04-20 Surgical Theater LLC Augmented reality surgical navigation
US9645831B2 (en) 2011-10-31 2017-05-09 Microsoft Technology Licensing, Llc Consolidated orthogonal guide creation
US20180046352A1 (en) * 2016-08-09 2018-02-15 Matthew Johnson Virtual cursor movement
WO2018156633A1 (en) 2017-02-21 2018-08-30 Novarad Corporation Augmented reality viewing and tagging for medical procedures
US10105149B2 (en) 2013-03-15 2018-10-23 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US10219811B2 (en) 2011-06-27 2019-03-05 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US20190201158A1 (en) * 2017-12-28 2019-07-04 Ethicon Llc Control of a surgical system through a surgical barrier
US10861236B2 (en) 2017-09-08 2020-12-08 Surgical Theater, Inc. Dual mode augmented reality surgical system and method
US10943505B2 (en) 2012-05-25 2021-03-09 Surgical Theater, Inc. Hybrid image/scene renderer with hands free control
US11024414B2 (en) 2011-03-30 2021-06-01 Surgical Theater, Inc. Method and system for simulating surgical procedures
US11116574B2 (en) 2006-06-16 2021-09-14 Board Of Regents Of The University Of Nebraska Method and apparatus for computer aided surgery
US11237627B2 (en) 2020-01-16 2022-02-01 Novarad Corporation Alignment of medical images in augmented reality displays
US11287874B2 (en) 2018-11-17 2022-03-29 Novarad Corporation Using optical codes with augmented reality displays
US11357574B2 (en) 2013-10-31 2022-06-14 Intersect ENT International GmbH Surgical instrument and method for detecting the position of a surgical instrument
US11430139B2 (en) 2019-04-03 2022-08-30 Intersect ENT International GmbH Registration method and setup
US11547499B2 (en) 2014-04-04 2023-01-10 Surgical Theater, Inc. Dynamic and interactive navigation in a surgical environment
US11793537B2 (en) 2017-10-30 2023-10-24 Cilag Gmbh International Surgical instrument comprising an adaptive electrical system
US11832899B2 (en) 2017-12-28 2023-12-05 Cilag Gmbh International Surgical systems with autonomously adjustable control programs
US11844545B2 (en) 2018-03-08 2023-12-19 Cilag Gmbh International Calcified vessel identification
US11844579B2 (en) 2017-12-28 2023-12-19 Cilag Gmbh International Adjustments based on airborne particle properties
US11857152B2 (en) 2017-12-28 2024-01-02 Cilag Gmbh International Surgical hub spatial awareness to determine devices in operating theater
US11864845B2 (en) 2017-12-28 2024-01-09 Cilag Gmbh International Sterile field interactive control displays
US11864728B2 (en) 2017-12-28 2024-01-09 Cilag Gmbh International Characterization of tissue irregularities through the use of mono-chromatic light refractivity
US11871901B2 (en) 2012-05-20 2024-01-16 Cilag Gmbh International Method for situational awareness for surgical network or surgical network connected device capable of adjusting function based on a sensed situation or usage
US11890065B2 (en) 2017-12-28 2024-02-06 Cilag Gmbh International Surgical system to limit displacement
US11896322B2 (en) 2017-12-28 2024-02-13 Cilag Gmbh International Sensing the patient position and contact utilizing the mono-polar return pad electrode to provide situational awareness to the hub
US11903587B2 (en) 2017-12-28 2024-02-20 Cilag Gmbh International Adjustment to the surgical stapling control based on situational awareness
US11911117B2 (en) 2011-06-27 2024-02-27 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US11925350B2 (en) 2019-02-19 2024-03-12 Cilag Gmbh International Method for providing an authentication lockout in a surgical stapler with a replaceable cartridge
US11931027B2 (en) 2018-03-28 2024-03-19 Cilag Gmbh Interntional Surgical instrument comprising an adaptive control system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5230623A (en) * 1991-12-10 1993-07-27 Radionics, Inc. Operating pointer with interactive computergraphics
WO1999023946A1 (en) * 1997-11-12 1999-05-20 Stereotaxis, Inc. Device and method for specifying magnetic field for surgical applications
DE10335369A1 (en) * 2003-07-30 2005-03-03 Carl Zeiss Mixed reality display for the operation and control of surgical instruments, and especially a microscope, is prepared by setting the operating keys and the pointer within the virtual space

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11857265B2 (en) 2006-06-16 2024-01-02 Board Of Regents Of The University Of Nebraska Method and apparatus for computer aided surgery
US11116574B2 (en) 2006-06-16 2021-09-14 Board Of Regents Of The University Of Nebraska Method and apparatus for computer aided surgery
WO2012056034A1 (en) * 2010-10-28 2012-05-03 Fiagon Gmbh Navigating attachment for optical devices in medicine, and method
US9641808B2 (en) 2010-10-28 2017-05-02 Fiagon Gmbh Navigating attachment for optical devices in medicine, and method
EP2632382B1 (en) 2010-10-28 2017-09-20 Fiagon AG Medical Technologies Navigating attachment for optical devices in medicine, and method
US11024414B2 (en) 2011-03-30 2021-06-01 Surgical Theater, Inc. Method and system for simulating surgical procedures
FR2974997A1 (en) * 2011-05-10 2012-11-16 Inst Nat Rech Inf Automat Control system for information processing unit installed in surgery room for interventional radiology, has pedal operable by foot of surgeon to manage interactions with processing unit, where pedal is placed on floor in surgery room
US9498231B2 (en) 2011-06-27 2016-11-22 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US11911117B2 (en) 2011-06-27 2024-02-27 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US10080617B2 (en) 2011-06-27 2018-09-25 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US10219811B2 (en) 2011-06-27 2019-03-05 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US9164777B2 (en) 2011-08-30 2015-10-20 Microsoft Technology Licensing, Llc Determining the display of equal spacing guides between diagram shapes
US9645831B2 (en) 2011-10-31 2017-05-09 Microsoft Technology Licensing, Llc Consolidated orthogonal guide creation
US10282219B2 (en) 2011-10-31 2019-05-07 Microsoft Technology Licensing, Llc Consolidated orthogonal guide creation
US9323436B2 (en) 2012-04-05 2016-04-26 Microsoft Technology Licensing, Llc Utilizing drawing guides in determining the display of smart guides in a drawing program
US11871901B2 (en) 2012-05-20 2024-01-16 Cilag Gmbh International Method for situational awareness for surgical network or surgical network connected device capable of adjusting function based on a sensed situation or usage
US10943505B2 (en) 2012-05-25 2021-03-09 Surgical Theater, Inc. Hybrid image/scene renderer with hands free control
US10105149B2 (en) 2013-03-15 2018-10-23 Board Of Regents Of The University Of Nebraska On-board tool tracking system and methods of computer assisted surgery
US11357574B2 (en) 2013-10-31 2022-06-14 Intersect ENT International GmbH Surgical instrument and method for detecting the position of a surgical instrument
US11547499B2 (en) 2014-04-04 2023-01-10 Surgical Theater, Inc. Dynamic and interactive navigation in a surgical environment
EP3361979A4 (en) * 2015-10-14 2019-06-26 Surgical Theater LLC Augmented reality surgical navigation
US11197722B2 (en) 2015-10-14 2021-12-14 Surgical Theater, Inc. Surgical navigation inside a body
WO2017066373A1 (en) 2015-10-14 2017-04-20 Surgical Theater LLC Augmented reality surgical navigation
US20180046352A1 (en) * 2016-08-09 2018-02-15 Matthew Johnson Virtual cursor movement
US11266480B2 (en) 2017-02-21 2022-03-08 Novarad Corporation Augmented reality viewing and tagging for medical procedures
WO2018156633A1 (en) 2017-02-21 2018-08-30 Novarad Corporation Augmented reality viewing and tagging for medical procedures
EP3585299A4 (en) * 2017-02-21 2021-05-05 Novarad Corporation Augmented reality viewing and tagging for medical procedures
US10861236B2 (en) 2017-09-08 2020-12-08 Surgical Theater, Inc. Dual mode augmented reality surgical system and method
US11532135B2 (en) 2017-09-08 2022-12-20 Surgical Theater, Inc. Dual mode augmented reality surgical system and method
US11925373B2 (en) 2017-10-30 2024-03-12 Cilag Gmbh International Surgical suturing instrument comprising a non-circular needle
US11793537B2 (en) 2017-10-30 2023-10-24 Cilag Gmbh International Surgical instrument comprising an adaptive electrical system
US11819231B2 (en) 2017-10-30 2023-11-21 Cilag Gmbh International Adaptive control programs for a surgical system comprising more than one type of cartridge
US11832899B2 (en) 2017-12-28 2023-12-05 Cilag Gmbh International Surgical systems with autonomously adjustable control programs
US11896322B2 (en) 2017-12-28 2024-02-13 Cilag Gmbh International Sensing the patient position and contact utilizing the mono-polar return pad electrode to provide situational awareness to the hub
US11844579B2 (en) 2017-12-28 2023-12-19 Cilag Gmbh International Adjustments based on airborne particle properties
US11857152B2 (en) 2017-12-28 2024-01-02 Cilag Gmbh International Surgical hub spatial awareness to determine devices in operating theater
US20190201158A1 (en) * 2017-12-28 2019-07-04 Ethicon Llc Control of a surgical system through a surgical barrier
US11864845B2 (en) 2017-12-28 2024-01-09 Cilag Gmbh International Sterile field interactive control displays
US11864728B2 (en) 2017-12-28 2024-01-09 Cilag Gmbh International Characterization of tissue irregularities through the use of mono-chromatic light refractivity
US11918302B2 (en) 2017-12-28 2024-03-05 Cilag Gmbh International Sterile field interactive control displays
US11890065B2 (en) 2017-12-28 2024-02-06 Cilag Gmbh International Surgical system to limit displacement
US11903587B2 (en) 2017-12-28 2024-02-20 Cilag Gmbh International Adjustment to the surgical stapling control based on situational awareness
US11896443B2 (en) * 2017-12-28 2024-02-13 Cilag Gmbh International Control of a surgical system through a surgical barrier
US11844545B2 (en) 2018-03-08 2023-12-19 Cilag Gmbh International Calcified vessel identification
US11931027B2 (en) 2018-03-28 2024-03-19 Cilag Gmbh Interntional Surgical instrument comprising an adaptive control system
US11287874B2 (en) 2018-11-17 2022-03-29 Novarad Corporation Using optical codes with augmented reality displays
US11925350B2 (en) 2019-02-19 2024-03-12 Cilag Gmbh International Method for providing an authentication lockout in a surgical stapler with a replaceable cartridge
US11430139B2 (en) 2019-04-03 2022-08-30 Intersect ENT International GmbH Registration method and setup
US11237627B2 (en) 2020-01-16 2022-02-01 Novarad Corporation Alignment of medical images in augmented reality displays

Also Published As

Publication number Publication date
WO2008076079A8 (en) 2008-09-12

Similar Documents

Publication Publication Date Title
WO2008076079A1 (en) Methods and apparatuses for cursor control in image guided surgery
US20210267698A1 (en) Graphical user interface for a surgical navigation system and method for providing an augmented reality image during operation
CA3099734C (en) Live 3d holographic guidance and navigation for performing interventional procedures
Sielhorst et al. Advanced medical displays: A literature review of augmented reality
CA2486525C (en) A guide system and a probe therefor
US9107698B2 (en) Image annotation in image-guided medical procedures
US20070236514A1 (en) Methods and Apparatuses for Stereoscopic Image Guided Surgical Navigation
EP1395194B1 (en) A guide system
US7774044B2 (en) System and method for augmented reality navigation in a medical intervention procedure
US20070238981A1 (en) Methods and apparatuses for recording and reviewing surgical navigation processes
US20080123910A1 (en) Method and system for providing accuracy evaluation of image guided surgery
JP2007512854A (en) Surgical navigation system (camera probe)
EP1011424A1 (en) Imaging device and method
Vogt et al. Reality augmentation for medical procedures: System architecture, single camera marker tracking, and system evaluation
US20210121238A1 (en) Visualization system and method for ent procedures
EP3907585B1 (en) Systems and methods of controlling an operating room display using an augmented reality headset
CN112888395A (en) Method and system for real-time updating of cross-camera placement
WO2018011105A1 (en) Systems and methods for three dimensional touchless manipulation of medical images
Shahidi et al. Volumetric image guidance via a stereotactic endoscope
Shahidi et al. Proposed simulation of volumetric image navigation using a surgical microscope
Salb et al. INPRES (intraoperative presentation of surgical planning and simulation results): augmented reality for craniofacial surgery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07808943

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07808943

Country of ref document: EP

Kind code of ref document: A1