|Publication number||US20050059886 A1|
|Publication type||Application|
|Application number||US 10/958,972|
|Publication date||17 Mar 2005|
|Filing date||5 Oct 2004|
|Priority date||24 Jul 1998|
|Also published as||US6081577, US6549607, US6801597, US20040008809, WO2000004830A1|
|Original assignee||Webber Richard L.|
This application is a continuation-in-part of co-pending application Ser. No. 09/252,632, entitled “Method And System For Creating Task-Dependent Three-Dimensional Images,” filed on Feb. 19, 1999, such application being incorporated herein by reference.
The present invention relates to a method and system for creating three-dimensional displays or images from a multiplicity of two-dimensional projected images and, more specifically, to a method and system for producing task-dependent radiographic images of an object of interest which are substantially free of blurring artifacts.
A variety of three-dimensional imaging modalities has been developed for medical applications, as well as for use in non-destructive testing of manufactured parts. In particular, a wide range of tomosynthetic imaging techniques has previously been demonstrated to be useful in examining three-dimensional objects by means of radiation. These imaging techniques differ in the size and configuration of the effective imaging aperture. At one extreme, the imaging aperture approaches zero (i.e., a pinhole) and the resulting display is characterized by images produced from a single transmission radiograph. This yields an infinitely wide depth of field and therefore no depth information can be extracted from the image. At the other extreme, the aperture approaches a surrounding ring delimiting an infinite numerical aperture resulting in projection angles orthogonal to the long axis of the irradiated object. This yields an infinitely narrow depth of field and hence no information about adjacent slices through the object can be ascertained. It therefore follows that a “middle ground” approach, which provides the ability to adapt a sampling aperture to a particular task, would be highly advantageous.
The key to achieving the full potential of diagnostic flexibility lies in the fact that perceptually meaningful three-dimensional reconstructions can be produced from optical systems having any number of different aperture functions. That fact can be exploited since any aperture can be approximated by summation of a finite number of appropriately distributed point apertures. The key is to map all incrementally obtained projective data into a single three-dimensional matrix. To accomplish this goal, one needs to ascertain all positional degrees of freedom existing between the object of interest, the source of radiation, and the detector.
In the past, the relative positions of the object, the source, and the detector have been determined by fixing the position of the object relative to the detector while the source of radiation is moved along a predetermined path, i.e. a path of known or fixed geometry. Projective images of the object are then recorded at known positions of the source of radiation. In this way, the relative positions of the source of radiation, the object of interest, and the detector can be determined for each recorded image.
A method and system which enables the source of radiation to be decoupled from the object of interest and the detector has been described in U.S. Pat. No. 5,359,637, that issued on Oct. 25, 1994, which is incorporated herein by reference. This is accomplished by fixing the position of the object of interest relative to the detector and providing a fiducial reference which is in a fixed position relative to the coupled detector and object. The position of the image of the fiducial reference in the recorded image then can be used to determine the position of the source of radiation. In addition, a technique for solving the most general application wherein the radiation source, the object of interest, and the detector are independently positioned for each projection has been described by us in co-pending U.S. patent application Ser. No. 09/034,922, filed on Mar. 5, 1998, which is also incorporated herein by reference.
Once the relative positions of the radiation source, the object, and the detector are determined, each incrementally obtained projective image is mapped into a single three-dimensional matrix. The mapping is performed by laterally shifting and summing the projective images to yield tomographic images at a selected slice position through the object of interest. A three-dimensional representation of the object can be obtained by repeating the mapping process for a series of slice positions through the object. However, the quality and independence of the tomographic images is compromised by blurring artifacts produced from unregistered details located outside the plane of reconstruction.
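The shift-and-add mapping described above can be sketched in Python. This is a minimal illustration, not the patented implementation: the function name `shift_and_add` is hypothetical, integer pixel shifts are assumed, and plain nested lists stand in for whatever image representation an actual system would use.

```python
def shift_and_add(projections, shifts):
    """Average laterally shifted projections to form one tomosynthetic slice.

    projections: list of 2-D grids (lists of lists) of pixel values.
    shifts: per-projection (row, col) integer shifts chosen so that details
            lying in the selected slice plane superimpose after shifting.
    Details in the slice plane reinforce; out-of-plane details spread out
    as blur.
    """
    rows = len(projections[0])
    cols = len(projections[0][0])
    slice_img = [[0.0] * cols for _ in range(rows)]
    for proj, (dr, dc) in zip(projections, shifts):
        for r in range(rows):
            for c in range(cols):
                sr, sc = r - dr, c - dc  # source pixel after lateral shift
                if 0 <= sr < rows and 0 <= sc < cols:
                    slice_img[r][c] += proj[sr][sc]
    n = len(projections)
    return [[v / n for v in row] for row in slice_img]
```

Repeating the same operation with shift sets corresponding to other slice positions yields the stack of slices that forms the three-dimensional representation.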
In addition, quantitative information has traditionally been difficult to determine from conventional tomography. Although many questions of medical interest are concerned with temporal changes of a structure (e.g., changes in the size and shape of a tumor over time), the ability to compare diagnostic measurements made over time is complicated by the fact that factors other than the parameter of diagnostic interest often contribute to the measured differences. For example, spatial variations produced from arbitrary changes in the observational vantage point(s) of the radiation source create differences between the measurements which are unrelated to temporal changes of the object being investigated. In addition, conventional X-ray sources produce radiation that varies with changes in tube potential, beam filtration, beam orientation, tube current, distance from the focal spot, and exposure time. The fluctuations in the output of radiation sources are therefore another factor that limits the ability to derive quantitative information from conventional tomography.
In light of the foregoing, it would be highly beneficial to provide a method for producing a three-dimensional representation of an object that is substantially free of blurring artifacts from unregistered details. In addition, the method should enable quantitative information related to temporal changes associated with the object to be measured.
The present invention relates to a system and a method for synthesizing an image slice through a selected object from a plurality of projected radiographic images of the selected object. The system comprises a radiation source for irradiating the object. The preferred radiation source depends upon the particular application. For example, the present invention may be practiced using x-rays, electron microscopy, ultrasound, visible light, infrared light, ultraviolet light, microwaves, or virtual radiation simulated by manipulation of magnetic fields (magnetic resonance imaging (MRI)). In one embodiment of the present invention, the position of the radiation source within a plane parallel to an image plane is determined from projected images of two object points associated with a fiducial reference which is maintained in fixed position relative to the selected object. Once the projected images are compensated for differences in magnification, the relative position of the radiation source within the plane parallel to the image plane is determined from an estimate of the actual distance between the two object points obtained from a sinusoidal fit of the distances between the projected images of the object points.
A recording medium or radiation detector is used to record a series of projected images of the selected object. The recording medium may be in the form of a photographic plate or a radiation-sensitive, solid-state image detector such as a charge-coupled device (CCD), or any other system capable of producing two-dimensional projections or images suitable for digitization or other analysis.
An image synthesizer is provided for transforming the series of projected images of the selected object into an image slice. The image slice consists of an array of pixels with each pixel having an associated attenuation value and corresponds to a cross-sectional slice through the selected object at a selected slice position. A three-dimensional representation of the object can be obtained by repeating the transformation at a series of slice positions through the object.
In addition, an optional source comparator is provided for adjusting the radiation source to enable meaningful quantitative comparisons between projected images recorded at different times and/or using different radiation sources. The source comparator is positionable between the radiation source and the radiographic medium for producing a gradient image indicative of characteristics associated with the output from the radiation source. In operation, the source comparator is used to record a first gradient image using a first radiation source at the same time that a first projected image or series of projected images is recorded. When a second projected image or series of projected images is to be recorded, the source comparator is used to record a second gradient image. The second gradient image is compared to the first gradient image, and differences between the two gradient images are noted. The beam energy, filtration, and beam exposure associated with the radiation source used to record the second gradient image are then adjusted to minimize the differences between the first gradient image and the second gradient image.
In one embodiment, the source comparator comprises two wedges or five-sided polyhedrons of equal dimension having a rectangular base and two right-triangular faces. The triangular faces lie in parallel planes at opposite edges of the base such that the triangular faces are oriented as mirror images of each other. As a result, each wedge has a tapered edge and provides a uniformly increasing thickness from the tapered edge in a direction parallel to the plane of the base and perpendicular to the tapered edge. The wedges are arranged with the base of one wedge adjacent to the base of the other wedge such that the tapered edges of the two wedges are at adjacent edges of the base. One wedge is formed from a uniform high attenuation material while the other wedge is formed from a uniform low attenuation material. Accordingly, when the source comparator is irradiated from a radiation source directed perpendicularly to the bases of the wedges, the resulting image will be a quadrilateral having an intensity gradient that is maximized in a particular direction.
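The intensity pattern such a two-wedge comparator casts can be approximated with a simple Beer-Lambert model. The sketch below is illustrative only: the name `wedge_pair_image` and its parameters are hypothetical, and it assumes perpendicular irradiation from a uniform source with linearly increasing wedge thickness.

```python
import math

def wedge_pair_image(width, height, mu_hi, mu_lo, max_thickness, i0=1.0):
    """Simulated transmission image of the two-wedge source comparator.

    Columns 0..width//2-1 lie under the high-attenuation wedge, the rest
    under the low-attenuation wedge.  Wedge thickness rises linearly with
    row index, so transmitted intensity i0*exp(-mu*t) falls monotonically
    along that direction (Beer-Lambert attenuation).
    """
    img = []
    for r in range(height):
        t = max_thickness * r / (height - 1)  # wedge thickness at this row
        row = []
        for c in range(width):
            mu = mu_hi if c < width // 2 else mu_lo
            row.append(i0 * math.exp(-mu * t))
        img.append(row)
    return img
```

The resulting grid exhibits the quadrilateral intensity gradient described above: intensity decreases steadily away from the tapered edges, more steeply under the high-attenuation wedge.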
In operation, the system of the present invention is used to produce an image slice through the selected object that is substantially free of blurring artifacts from unregistered details located outside a plane of reconstruction. The radiation source and recording medium are used to record a series of two-dimensional projected images of the selected object. The series of two-dimensional projected images are then shifted by an amount and in a direction required to superimpose the object images of the two-dimensional images. The shifted two-dimensional images can then be combined in a non-linear manner to generate a tomosynthetic slice through the selected object. In one embodiment, the two-dimensional images are combined by selecting details from a single projection demonstrating the most relative attenuation at each pixel. Alternatively, a different non-linear operator could be used wherein the two-dimensional images are combined by selecting details from a single projection demonstrating the least relative attenuation at each pixel in the reconstructed image. Optionally, a series of reconstructed images at varying slice positions through the selected object are determined to create a three-dimensional representation of the selected object.
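The non-linear combination step can be sketched as follows; `nonlinear_combine` is a hypothetical name, and the projections are assumed to hold per-pixel attenuation values already shifted into registration, so "most relative attenuation" corresponds to the per-pixel maximum and "least" to the minimum.

```python
def nonlinear_combine(shifted_projections, mode="max"):
    """Combine registered projections by keeping, at each pixel, the value
    from the single projection showing the most ("max") or least ("min")
    relative attenuation, instead of linearly averaging them.  Selecting
    rather than averaging suppresses blur from unregistered out-of-plane
    details."""
    pick = max if mode == "max" else min
    rows = len(shifted_projections[0])
    cols = len(shifted_projections[0][0])
    return [[pick(p[r][c] for p in shifted_projections) for c in range(cols)]
            for r in range(rows)]
```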
Alternatively, the system of the present invention is used to synthesize a three-dimensional reconstruction of the object from as few as two projected images of the object. A first projected image of the object is recorded in a first projection plane and a second projected image is recorded in a second projection plane. Each of the first and the second projected images are then rendered at a common magnification. Using a known angle between the first and the second projection planes, the first and the second projected images are transformed to occupy the same volume. The transformed first and second projected images are then combined into a three-dimensional representation of the selected object. Additional projected images are optionally combined with the three-dimensional representation to refine the three-dimensional representation.
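One crude way to combine two projections into a common volume, assuming a 90-degree angle between the projection planes and images already rendered at common magnification, is a silhouette-style per-voxel intersection. This is an editor's sketch of the general idea, not the disclosed transformation; the name and axis conventions are illustrative.

```python
def merge_two_projections(proj_a, proj_b):
    """Intersect two projected images recorded 90 degrees apart.

    proj_a[x][y] is the view along the z axis; proj_b[x][z] is the view
    along the y axis.  Each voxel of the returned volume[x][y][z] keeps
    the smaller of the two projected values, so only positions supported
    by both projections survive."""
    nx, ny = len(proj_a), len(proj_a[0])
    nz = len(proj_b[0])
    return [[[min(proj_a[x][y], proj_b[x][z]) for z in range(nz)]
             for y in range(ny)] for x in range(nx)]
```

Further projections would be intersected with the running volume in the same way to refine the representation.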
In yet another embodiment, the system of the present invention is used to synthesize a three-dimensional representation of the selected object from two or more sets of projected images of the selected object. The first and second sets of projected images are tomosynthetically transformed into a series of contiguous slices forming a first and a second three-dimensional volume, respectively, using previously disclosed methods (e.g., U.S. Pat. No. 5,668,844) or those in the public domain (e.g., tomosynthesis). The first and second three-dimensional volumes are then rendered at a common magnification. The second three-dimensional volume is then rotated by an angle corresponding to the angular disparity between the first and the second three-dimensional volumes. The rotated second three-dimensional volume is then merged with the first three-dimensional volume to produce a three-dimensional representation of the selected object.
Alternatively, the system of the present invention can be used to determine temporal changes in the selected object. The radiation source and recording medium are used to record a first series of two-dimensional projected images of the selected object. At some later time, the radiation source and recording medium are used to record a second series of two-dimensional projected images of the selected object. Both series are tomosynthetically converted into a series of slices via previously disclosed methods (TACT®) or those in the public domain (tomosynthesis). Each slice of the first series is then correlated with a corresponding slice of the second series to form pairs of correlated slices. Each pair of slices is then aligned to maximize the overlap between homologous structures. Each pair of correlated slices is then subtracted to produce a difference image. Each difference image is then displayed individually. Alternatively, all of the difference images can be overlapped to yield a complete difference image corresponding to the volumetric difference associated with the entire tomosynthetically reconstructed volume.
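The slice-by-slice subtraction can be sketched as below, assuming the two slice series have already been correlated and aligned; `difference_images` is a hypothetical name.

```python
def difference_images(slices_t1, slices_t2):
    """Subtract each pair of correlated, aligned tomosynthetic slices
    (second time point minus first) to expose temporal change, and also
    overlap all difference images into one volumetric difference image."""
    diffs = [[[b - a for a, b in zip(row1, row2)]
              for row1, row2 in zip(s1, s2)]
             for s1, s2 in zip(slices_t1, slices_t2)]
    rows, cols = len(diffs[0]), len(diffs[0][0])
    total = [[sum(d[r][c] for d in diffs) for c in range(cols)]
             for r in range(rows)]
    return diffs, total
```

Each entry of `diffs` can be displayed individually, while `total` corresponds to the complete difference image for the reconstructed volume.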
When a three-dimensional representation of the selected object is produced, the three-dimensional representation can be viewed holographically using a display in accordance with the present invention. The display comprises stereoscopic spectacles which are worn by an observer and a target operatively associated with the spectacles. Accordingly, as the observer changes his or her vantage point, movement of the spectacles translates into a corresponding movement of the target. A detector is operatively associated with the target for tracking movement of the target. The detector is connected to a monitor such that the monitor receives a signal from the detector indicative of movement of the target. In response to the signal from the detector, the monitor displays an image pair of the three-dimensional representation which, when viewed through the spectacles produces a stereoscopic effect. The image pair which is displayed is changed to compensate for changes in the vantage point of the observer.
The foregoing summary, as well as the following detailed description of the preferred embodiments of the present invention, will be better understood when read in conjunction with the accompanying drawings, in which:
The present invention generally relates to a system 20, as depicted schematically in
In general, the pattern of source 27 positions does not need to be in any fixed geometry or position. Indeed, the position of the source 27 may be totally arbitrary in translation and displacement relative to the object 21. Likewise, the recording medium 31 may also be arbitrarily movable relative to the object 21 by translation, displacement, tilting, or rotation. The only requirement is that for every degree of freedom in the system resulting from movement of the source 27 or the recording medium 31 relative to the object 21, the fiducial reference 22 must include sufficient measurable or defined characteristics, such as size, shape, or numbers of reference markers 23, to account for each degree of freedom.
The minimum number of reference markers required to completely determine the system depends on the constraints, if any, imposed on the relative positions of (1) the radiation source, (2) the object and fiducial reference, and (3) the recording medium. The system may have a total of nine possible relative motions (2 translations and 1 displacement for the radiation source relative to a desired projection plane and 2 translations, 1 displacement, 2 tilts, and 1 rotation for the recording medium relative to the desired projection plane). Each of these possible relative motions must be capable of analysis either by constraining the system and directly measuring the quantity, by providing a sufficient number of reference markers to enable the quantity to be determined, or by estimating the value of the quantity. Each unconstrained relative motion represents a degree of freedom for the system. For a system to be completely determined, the total number of degrees of freedom in the system must be less than or equal to the total number of degrees of freedom associated with the fiducial reference.
More than the minimum number of reference markers can be used. In such cases, the system is overdetermined and least-squares fitting can be used to improve the accuracy of the resulting image slices. If, however, fewer than the minimum number of reference markers are used, then the system is underdetermined and the unknown degrees of freedom must either be estimated or measured directly.
Although the reference markers can be essentially any size and shape, spherical reference markers of known diameter may be used. When using spherical reference markers of a finite size, a single reference marker can account for up to five degrees of freedom. When a spherical reference marker is projected obliquely onto the recording medium, the reference image cast by the spherical reference marker is elliptical and is independent of any rotation of the reference marker. Determining the position of the reference image in the projection plane (X- and Y-coordinates) and the magnitudes of the major and minor diameters of the elliptical image accounts for four degrees of freedom. Further, when the distance between the radiation source and the reference marker is sufficiently short, the reference image will be magnified relative to the actual size of the reference marker, thereby accounting for an additional degree of freedom. In contrast, only two degrees of freedom (the X- and Y-coordinates) are typically associated with the reference image of a point-size reference marker.
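The degree-of-freedom bookkeeping above can be expressed as a small check. The names and the lookup table below are illustrative only; the sketch assumes each finite spherical marker contributes its full five degrees of freedom and each point-size marker two.

```python
MARKER_DOF = {"point": 2, "sphere": 5}  # degrees of freedom per marker kind

def fiducial_dof(markers):
    """Total degrees of freedom supplied by a fiducial reference: a finite
    spherical marker accounts for up to five (X, Y, major diameter, minor
    diameter, magnification), a point-size marker for two (X, Y)."""
    return sum(MARKER_DOF[m] for m in markers)

def is_determined(system_dof, markers):
    """A system is completely determined when its unconstrained relative
    motions do not exceed the fiducial reference's degrees of freedom."""
    return system_dof <= fiducial_dof(markers)
```

For the fully unconstrained nine-degree-of-freedom case, two finite spherical markers suffice under these assumptions, while one sphere plus one point marker does not.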
The most complex, yet most generally applicable, arrangement is depicted in
One embodiment of the present invention that permits this general arrangement to be realized conveniently involves two-dimensional projected images from a system comprised of a fiducial reference having five point-size or finite reference markers. This approach conveniently facilitates three-dimensional reconstructions when exactly four reference markers are coplanar and no three or more reference markers are collinear. Under these conditions, only the projection from the non-coplanar marker need be distinguished from the other four because the projections from the latter always bear a fixed sequential angular arrangement relative to each other which simplifies identification of homologous points in all projections. For example, the reference markers can be placed at five contiguous vertices of a cube as shown in
The most general reconstruction task requiring information sufficient to determine all nine possible degrees of freedom requires computation of separate projective transformations for each projected image in each and every slice. However, by limiting the region of interest to a subvolume constrained such that the magnification across and between its slices may be considered constant, it is possible to generate veridical three-dimensional images within the volume much more efficiently. The increase in efficiency under these conditions results from the fact that all projections within this region can be mapped by a single fixed transformation, and that associated slice generation can be accomplished by simple tomosynthetic averaging of laterally shifted projections as described in U.S. Pat. No. 5,359,637.
Another useful arrangement of the fiducial reference comprising five reference markers is shown in
Alternatively, the six degrees of freedom for the radiation source 27 relative to the desired projection plane 37 (two translational, one displacement, two rotational, and one tilting degree of freedom) can be determined independently from the use of the fiducial reference when the orientation of the detector is fixed or known relative to either the object of interest or the radiation source. For example, the position of the radiation source 27 can be determined from multiple plane projections recorded from an arbitrarily positioned camera provided that the lens aperture is adjusted such that the entire object always appears in focus. The three relative angles associated with each projection are determined by attaching three orthogonally oriented angle sensing devices, such as gyroscopes, to the camera. The displacement of the radiation source relative to the object is determined using a range finder associated with the camera. Since the position of the camera within a plane parallel to the camera's projection plane is used only to determine the three-dimensional geometric relationships underlying the disparity observed between object images, the remaining degrees of freedom need only be measured relative to one another and, therefore, can be fixed from a geometric analysis of paired point projections. Referring to
A method for determining the position of the radiation source relative to the object using an arbitrarily positionable camera in accordance with the present invention is depicted in
At step 1004, it is determined whether additional object images are desired. If additional object images are desired, the camera is repositioned at step 1005 and the process returns to step 1002. It should be appreciated that a minimum of three object images is required to produce a meaningful sinusoidal regression, as discussed in detail below. If no additional object images are to be recorded, the recorded object images and data are optionally stored in a computer-readable format and the process proceeds to step 1007.
Each of the object images is then individually scaled to render all of the object images at the same magnification at step 1009. The scaling is possible using the range recorded for each object image because the linear magnification is inversely proportional to the range. By scaling the object images, an effective displacement between the camera and the object can be defined.
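Because linear magnification is inversely proportional to range, the per-image scale factors follow directly; the sketch below uses a hypothetical function name and assumes ranges were measured to a common reference point on the object.

```python
def common_magnification_scales(ranges, ref_range=None):
    """Per-image scale factors that render all object images at the same
    magnification.  Since magnification varies as 1/range, scaling each
    image by (its range / reference range) equalises magnification
    across the set; the first image's range is the default reference."""
    ref = ref_range if ref_range is not None else ranges[0]
    return [r / ref for r in ranges]
```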
At step 1011, a first object point, visible on all of the projected object images, is selected. A representative object image is then selected at step 1013. The representative object image should be the object image which best approximates the orientation to which desired reconstructed tomosynthetic slices are to be parallel.
Each object image is then rotated and translated, at step 1015, so that all of the object images are brought into tomosynthetic registration. Specifically, each object image is rotated by an amount sufficient to adjust the rotational orientation of the camera about an axis perpendicular to the projection plane to match that of the representative object image. Rotational adjustment of the object images assures that the registrations which follow will not exclude a second reference point, whose selection is discussed below. Each rotated object image is then translated both vertically and horizontally by an amount which causes superposition of the projected image of the first object point within each object image with the projected image of the first object point within the representative object image.
At step 1017, a second object point visible on all of the scaled, rotated, and translated object images is selected. The distance between the projected images of the second object point and the first object point is measured, at step 1019, for each of the object images. If the relative change in distance does not exceed a task-dependent threshold value and produce a well-distributed range of values, the accuracy of the subsequent non-linear regression may be compromised. Accordingly, at step 1021, it is determined whether the measured distances exceed the task-dependent threshold. If the threshold is not exceeded, a new second object point is selected at step 1017. If the threshold is exceeded, the process proceeds to step 1023.
At step 1023, the actual distance between the first object point and the second object point is estimated from the measured distance separating the projected images of the first and second object points in the recorded object images. The estimate of the actual distance is determined using the effective displacement of the camera from the object and a sinusoidal curve fitting procedure, as well as the projection angle defined by a line connecting the first and second object points and the plane of the representative object image.
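One way to realize the sinusoidal curve fitting is an ordinary linear least-squares fit of d_i ≈ a·cos(φ_i) + b·sin(φ_i), whose amplitude estimates the true separation. This is an editor's sketch under a simplifying assumption: the measured projected separations are taken to vary as the cosine of a known per-image angle φ_i. Two parameters are fitted, which is consistent with the minimum of three object images noted above.

```python
import math

def fit_separation(angles, distances):
    """Least-squares fit of d_i = a*cos(phi_i) + b*sin(phi_i), i.e.
    D*cos(phi_i - phi0).  Projected separations shrink by the cosine of
    the out-of-plane angle, so the fitted amplitude D = sqrt(a^2 + b^2)
    estimates the actual separation of the two object points."""
    scc = sum(math.cos(p) ** 2 for p in angles)
    sss = sum(math.sin(p) ** 2 for p in angles)
    scs = sum(math.cos(p) * math.sin(p) for p in angles)
    sdc = sum(d * math.cos(p) for d, p in zip(distances, angles))
    sds = sum(d * math.sin(p) for d, p in zip(distances, angles))
    # Solve the 2x2 normal equations for a and b.
    det = scc * sss - scs * scs
    a = (sdc * sss - sds * scs) / det
    b = (sds * scc - sdc * scs) / det
    return math.sqrt(a * a + b * b)
```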
Using affine projection geometry, the recorded angle data, and the recorded displacement data, each object image is remapped onto the plane defined by the representative object image selected above at step 1025. The remapping is performed using the first object point as the common point of superposition. At step 1027, the object images are then tomosynthetically reconstructed using the second object point as a disparity marker. The distances between object images are then calibrated, at step 1029, using the estimate for the distance between the first and second object points and trigonometrically correcting the object images for foreshortening caused by variations in the projection angle.
Reducing the uncertainty of the projection geometry through the constraint of one or more degrees of freedom reduces the complexity of the resulting reconstruction. An arrangement of the system of the present invention which is somewhat constrained is depicted in
The computational steps involved in synthesizing a three-dimensional image using three spherical, non-collinear reference markers in a system wherein the orthogonal distance between the radiation source and the recording medium is fixed at a distance short enough so that the images cast by the reference markers are magnified relative to the size of the actual reference markers (i.e., a system with eight degrees of freedom as depicted in
In the embodiment shown in
In one embodiment, the aiming device 250 comprises an X-ray source operated in an ultra-low exposure mode and the projected image is obtained using the same X-ray source operated in a full-exposure mode. Alternatively, a real-time ultra-low dose fluoroscopic video display can be mounted into the handle 248 of the source 227 via a microchannel plate (MCP) coupled to a CCD. The video display switches to a lower gain (high signal-to-noise) frame grabbing mode when the alignment is considered optimal and the trigger 246 is squeezed more tightly.
An alternate embodiment of an aiming device in accordance with the present invention is shown in
Referring again to
In one particular embodiment depicted in
Yet another embodiment is depicted in
In the embodiment shown in
Similarly, as depicted in
The present invention also relates to a method for creating a slice image through the object 21 of
At step 47, a fiducial reference 22 comprising at least two reference markers, 23 and 123, is selected which bears a fixed relationship to the selected object 21. Accordingly, the fiducial reference 22 may be affixed directly to the selected object 21. The minimum required number of reference markers 23 is determined by the number of degrees of freedom in the system, as discussed above. When the fiducial reference 22 comprises reference markers 23 of a finite size, the size and shape of the reference markers 23 are typically recorded.
The selected object 21 and fiducial reference 22 are exposed to radiation from any desired projection geometry at step 49 and a two-dimensional projected image 38 is recorded at step 51. Referring to
At step 53, it is determined whether additional projected images 38 are desired. The desired number of projected images 38 is determined by the task to be accomplished. Fewer images reduce the signal-to-noise ratio of the reconstructions and increase the intensities of component “blur” artifacts. Additional images provide information which supplements the information contained in the prior images, thereby improving the accuracy of the three-dimensional radiographic display. If additional projected images 38 are not desired, then the process continues at step 60.
If additional projected images 38 are desired, the system geometry is altered at step 55 by varying the relative positions of (1) the radiation source 27, (2) the selected object 21 and the fiducial reference 22, and (3) the recording medium 31. The geometry of the system can be varied by moving the radiation source 27 and/or the recording medium 31. Alternatively, the selected object 21 and the fiducial reference 22 can be moved relative to the source 27 and the recording medium 31. When the radiation source and recording medium produce images using visible light (e.g., a video camera), the geometry of the system must be varied to produce images from various sides of the object in order to obtain information about the entire object. After the system geometry has been varied, the process returns to step 49.
After all of the desired projected images have been recorded, a slice position is selected at step 60. The slice position corresponds to the position at which the image slice is to be generated through the object.
After the slice position has been selected, each projected image 38 is projectively warped onto a virtual projection plane 37 at step 65. The warping procedure produces a virtual image corresponding to each of the actual projected images. Each virtual image is identical to the image which would have been produced had the projection plane been positioned at the virtual projection plane with the projection geometry for the radiation source 27, the selected object 21, and the fiducial reference 22 of the corresponding actual projected image. The details of the steps involved in warping the projection plane 37 are shown in
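A projective warp of this kind maps every pixel coordinate through a 3x3 projective transformation (a homography). The sketch below shows only the point mapping, with a hypothetical name; a full warp would apply it to each pixel of the projected image and resample.

```python
def apply_homography(h, x, y):
    """Map a point (x, y) through a 3x3 projective transformation h,
    expressed in homogeneous coordinates; the division by w is what
    distinguishes a projective warp from an affine one."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)
```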
At step 72, a virtual projection plane 37 is selected. In most cases it is possible to arrange for one of the projected images to closely approximate the virtual projection plane position. That image can then be used as the basis for transformation of all the other images 38. Alternatively, as shown for example in
One of the recorded projected images 38 is selected at step 74 and the identity of the reference images 39 cast by each reference marker 23 is determined at step 76. In the specialized case, such as the one shown in
The position of each reference image 39 cast by each reference marker 23 is measured at step 78. When a spherical reference marker 23 is irradiated by source 27, the projected center 41 of the reference marker 23 does not necessarily correspond to the center 42 of the reference image 39 cast by that reference marker 23. Accordingly, the projected center 41 of the reference marker 23 must be determined. One method of determining the projected center 41 of the reference marker 23 is shown in
Returning to step 78 of
Because the attenuation of a spherical reference marker 23 to X-rays approaches zero at tangential extremes, the projected minor diameter of resulting elliptical reference images 39 will be slightly smaller than that determined geometrically by projection of the reference marker's actual diameter. The amount of the resulting error is a function of the energy of the X-ray beam and the spectral sensitivity of the recording medium 31. This error can be eliminated by computing an effective radiographic diameter of the reference marker 23 as determined by the X-ray beam energy and the recording medium sensitivity in lieu of the actual diameter.
One method of obtaining the effective radiographic diameter is to generate a series of tomosynthetic slices through the center of the reference marker 23 using a range of values for the reference marker diameter decreasing systematically from the actual value and noting when the gradient of the reference image 39 along the minor diameter is a maximum. The value for the reference marker diameter resulting in the maximum gradient is the desired effective radiographic diameter to be used for computing magnification.
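The diameter search described above can be sketched as follows. This is a minimal illustration, not the patented implementation: `toy_reconstruction` is a hypothetical stand-in for the tomosynthetic slice generator, built so that the reference-image edge is sharpest when the assumed diameter equals a notional true effective diameter of 0.8 units.

```python
import numpy as np

def edge_gradient(profile):
    """Maximum absolute finite-difference gradient along a 1-D intensity profile."""
    return np.max(np.abs(np.diff(profile)))

def effective_diameter(reconstruct_slice, actual_diameter, steps=50):
    """Scan assumed marker diameters downward from the actual value and keep
    the one whose reconstructed minor-diameter profile shows the sharpest edge."""
    best_d, best_g = actual_diameter, -np.inf
    for d in np.linspace(actual_diameter, 0.5 * actual_diameter, steps):
        g = edge_gradient(reconstruct_slice(d))
        if g > best_g:
            best_d, best_g = d, g
    return best_d

# Hypothetical stand-in for the reconstruction: the edge of the reference
# image defocuses as the assumed diameter departs from the "true" value.
def toy_reconstruction(d, true_d=0.8):
    x = np.linspace(-1.0, 1.0, 201)
    blur = 0.02 + abs(d - true_d)
    return 1.0 / (1.0 + np.exp((np.abs(x) - true_d / 2.0) / blur))

d_eff = effective_diameter(toy_reconstruction, actual_diameter=1.0)
```

The scan direction (downward from the actual diameter) follows the text; the stopping criterion is simply the global gradient maximum over the scanned range.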
Further, each projected image can be scaled by an appropriate magnification. For fiducial references 22 comprising spherical reference markers 23, the minor diameter of the reference image 39 is preferably used to determine the magnification since the minor diameter does not depend on the angle between the source 27 and the recording medium 31. Accordingly, the magnification of a spherical reference marker 23 can be determined from the measured radius of the reference marker 23, the minor diameter of the reference image 39 on the recording medium 31, the vertical distance between the center of the reference marker 23 and the recording medium 31, and the vertical distance between the recording medium 31 and the virtual projection plane 37.
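Under a simple point-source projection model (an assumption for illustration, not the patent's exact derivation), the quantities listed above suffice: magnification grows linearly with distance from the source, and the unknown source distance can be recovered from the marker's measured magnification and its height above the recording medium.

```python
def magnification_at_medium(minor_diameter, marker_radius):
    """Magnification at the marker's depth: image minor diameter over the
    sphere's true diameter (the minor diameter is unaffected by obliquity)."""
    return minor_diameter / (2.0 * marker_radius)

def magnification_at_virtual_plane(m_medium, h_marker, h_virtual):
    """Propagate the magnification to the virtual projection plane. With a
    point source, magnification is 1 + depth/source-distance, so
    m_medium = 1 + h_marker/S yields the source distance S, which then
    cancels out of the final ratio."""
    s = h_marker / (m_medium - 1.0)
    return 1.0 + (h_marker + h_virtual) / s

# Worked check: source 100 units above the marker center, marker 10 units
# above the recording medium, virtual plane 5 units beyond it. A sphere of
# radius 1 then casts a 2.2-unit minor diameter (magnification 110/100 = 1.1),
# and the virtual-plane magnification should be 115/100 = 1.15.
m = magnification_at_medium(minor_diameter=2.2, marker_radius=1.0)
m_virtual = magnification_at_virtual_plane(m, h_marker=10.0, h_virtual=5.0)
```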
At step 84, it is determined whether all of the projected images 38 have been analyzed. If all of the projected images 38 have not been analyzed, the process returns to step 74, wherein an unanalyzed image 38 is selected. If no additional projected images 38 are to be analyzed, then the process proceeds through step 85 of
After each image has been warped onto the virtual projection plane, an image slice through the object 21 at the selected slice position is generated at step 90. An algorithm, such as that described in U.S. Pat. No. 5,359,637, which is incorporated herein by reference, can be used for that purpose. The positions of the reference images cast by the alignment marker or markers 23 in each projected image 38 are used as the basis for application of the algorithm to generate the image slices.
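The slice-generation step can be illustrated with a minimal shift-and-add back-projection, the basic operation underlying tomosynthesis; this is not a reproduction of the referenced patented algorithm, and the shifts are assumed to have already been derived from the reference-marker positions.

```python
import numpy as np

def shift_and_add(projections, shifts):
    """One tomosynthetic slice: translate each projection by the shift that
    registers the chosen focal plane, then average the stack. Structures in
    the focal plane reinforce; structures at other depths smear into blur."""
    acc = np.zeros_like(projections[0], dtype=float)
    for img, (dy, dx) in zip(projections, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(projections)

# Toy check: a single in-plane point, viewed from three source positions,
# is restored exactly once the per-projection shifts are undone.
base = np.zeros((9, 9))
base[4, 4] = 1.0
shifts = [(0, 0), (1, 2), (-2, 1)]
projections = [np.roll(np.roll(base, -dy, axis=0), -dx, axis=1) for dy, dx in shifts]
slice_img = shift_and_add(projections, shifts)
```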
By generating image slices at more than one slice position, a true three-dimensional representation can be synthesized. Accordingly, it is determined whether an additional slice position is to be selected at step 92. If an additional slice position is not desired, the process proceeds to step 94. If a new slice position is to be selected, the process returns to step 60.
If image slices at multiple slice positions have been generated, the entire set of image slices is integrated into a single three-dimensional representation at step 94. Alternative bases for interactively analyzing and displaying the three-dimensional data can be employed using any number of well-established three-dimensional recording and displaying methods. Additionally, the three-dimensional representation can be displayed using the display device depicted in
Instead of creating a slice image or a three-dimensional representation from one or more series of two-dimensional images, a nearly isotropic three-dimensional image can be created from a single pair of two-dimensional projections as depicted in
The steps of a method for producing a three-dimensional image of an object from a single pair of two-dimensional projections are shown in
A first projected image is then produced on a first projection plane at step 1102. The relative positions of the object, the radiation source, and the detector are then altered so that a second projected image can be recorded on a second projection plane at step 1104. The second projection plane must be selected so that it intersects the first projection plane at a known angle. However, for the resultant three-dimensional representation to be mathematically well conditioned, the angle should equal, or at least approach, 90°.
At step 1106, a projective transformation of each projected image is performed to map the images of the fiducial reference on each face into an orthogonal, affine representation of the face. For example, when a cubic fiducial reference is used, the projective transformation amounts to converting the identifiable corners of the image of fiducial reference corresponding to a projected face of the fiducial reference into a perfect square having the same dimensions as a face of the fiducial reference.
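The projective transformation of step 1106 amounts to a four-point homography. The sketch below solves the direct linear transform for the 3×3 matrix mapping the four imaged corners of a face onto a perfect square; the corner coordinates here are purely illustrative.

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: the 3x3 projective matrix taking four source
    points to four destination points, normalized so H[2, 2] = 1."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        rhs.extend([u, v])
    h = np.linalg.solve(np.array(rows, dtype=float), np.array(rhs, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Apply a homography to a 2-D point via homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Illustrative corners: a skewed imaged face mapped onto a perfect
# 10 x 10 square having the face's true dimensions.
src = [(0.0, 0.0), (10.0, 1.0), (11.0, 12.0), (-1.0, 9.0)]
dst = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
H = homography(src, dst)
```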
Each of the transformed projected images is then extruded, at step 1108, such that both projected images occupy the same virtual volume. The extrusion step is equivalent to the creation of a virtual volume having the same dimensions as the fiducial reference containing the sum of the transformed projected images. At step 1110, an optional non-linear filtering technique is used to limit visualization of the three-dimensional representation to the logical intersection of the transformed projected images.
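The extrusion and optional intersection of steps 1108 and 1110 can be sketched with boolean silhouettes; treating the projections as binary masks is a simplification of the grey-level images the method actually uses.

```python
import numpy as np

def extrude_and_intersect(front, side, depth):
    """front: (ny, nx) silhouette seen along z; side: (ny, nz) silhouette
    seen along x. Each is extruded along its viewing axis into a common
    (ny, nx, nz) volume; their logical AND is the non-linear filtering
    limiting visualization to the intersection (step 1110)."""
    vol_front = np.repeat(front[:, :, None], depth, axis=2)          # extrude along z
    vol_side = np.repeat(side[:, None, :], front.shape[1], axis=1)   # extrude along x
    return vol_front & vol_side

# Toy check: one occupied pixel in each view localizes a single voxel.
front = np.zeros((3, 3), dtype=bool)
front[1, 1] = True
side = np.zeros((3, 3), dtype=bool)
side[1, 2] = True
vol = extrude_and_intersect(front, side, depth=3)
```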
The three-dimensional representation can be refined by optionally recording additional projected images. At step 1112, it is determined whether additional projected images are to be recorded. If additional projected images are desired, the process returns to step 1104. However, if additional projected images are not desired, the three-dimensional representation is displayed at step 1114.
The present invention also relates to a method for reducing distortions in the three-dimensional representation. Tomosynthesis uses two-dimensional image projections constrained within a limited range of angles relative to the irradiated object to produce a three-dimensional representation of the object. The limited range of angles precludes complete and uniform sampling of the object. This results in incomplete three-dimensional visualization of spatial relationships hidden in the resulting undersampled shadows or null spaces. Another limiting factor which interferes with artifact-free tomosynthetic reconstruction is the change in slice magnification with depth caused by the relative proximity of the source of radiation. These distortions can be reduced by merging independently generated sets of tomosynthetic image slices, as shown in
At step 1120, a fiducial reference is functionally associated with the object and at least two independent sets of image slices are recorded. The angular disparity between the sets of image slices is noted. For example, the first set of image slices may comprise multiple anterior-posterior projections while the second set of image slices comprises multiple lateral projections. The sets of image slices are then integrated to create a first and a second three-dimensional tomosynthetic matrix volume at step 1122.
At step 1124, the resulting three-dimensional matrix volumes are affinized to counteract the effects of having a finite focal-object distance. Affinization is accomplished by first identifying the reference images of the appropriate reference markers of the fiducial reference. Once the reference images have been identified, the three-dimensional matrix volumes are shifted and scaled in order to correct for geometrical and surface imperfections. The transformation of the first three-dimensional matrix volumes is carried out in accordance with the following equation:
A′=C·A
where A is the first three-dimensional matrix volume, A′ is the shifted and scaled first three-dimensional matrix volume, and C is the affine correction matrix for the first three-dimensional matrix volume. The affine correction matrix C is determined by the number of slices comprising the three-dimensional matrix volume, the correlation angle (i.e., the greatest angle of the projection sequence in the range
measured from an axis normal to the detector surface), and the correlation distance (i.e., the apex-to-apex distance created by the intersection of the most disparate projections of the sequence). The transformation of the second three-dimensional matrix volume is analogously determined in accordance with the following equation:
L′=D·L
where L is the second three-dimensional matrix volume, L′ is the shifted and scaled second three-dimensional matrix volume, and D is the affine correction matrix for the second three-dimensional matrix volume.
At step 1126, the second three-dimensional matrix volume is rotated by an angle φ. The angle φ is defined as the angular disparity between the first and the second three-dimensional matrix volumes. Specifically, the shifted and scaled second three-dimensional matrix volume, L′, is rotated in accordance with the following equation:
L″=Rφ·L′
where L″ is the rotated, shifted, and scaled second three-dimensional matrix volume and Rφ is the rotational transform matrix.
The transformed matrix volumes, A′ and L″, are then merged using matrix averaging at step 1128. The matrix averaging is accomplished in accordance with the following equation:
M=(A′+L″)/2
where M is the averaged matrix of the two component transformed matrix volumes, A′ and L″. Alternatively, a non-linear combination of the transformed matrix volumes, A′ and L″, is performed.
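A compact sketch of steps 1124 through 1128, under the simplifying assumption that the affine corrections C and D and the rotation Rφ act as plain 3×3 matrices (the actual corrections operate on full tomosynthetic matrix volumes, and rotation about the z axis is chosen here for illustration):

```python
import numpy as np

def merge_matrix_volumes(A, L, C, D, phi):
    """Affinize both volumes, rotate the second by the angular disparity
    phi, then average, mirroring the equations in the text."""
    A_p = C @ A                                  # A' = C.A
    L_p = D @ L                                  # L' = D.L
    R_phi = np.array([[np.cos(phi), -np.sin(phi), 0.0],
                      [np.sin(phi),  np.cos(phi), 0.0],
                      [0.0,          0.0,         1.0]])
    L_pp = R_phi @ L_p                           # L'' = R_phi.L'
    return 0.5 * (A_p + L_pp)                    # M = (A' + L'')/2

# Degenerate check: identity corrections and a 90-degree angular disparity
# (e.g., anterior-posterior versus lateral sets).
M = merge_matrix_volumes(np.eye(3), np.eye(3), np.eye(3), np.eye(3), np.pi / 2.0)
```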
The present invention further relates to a method for generating tomosynthetic images optimized for a specific diagnostic task. A task-dependent method for tomosynthetic image reconstruction can be used to mitigate the effects of ringing artifacts from unregistered details located outside the focal plane of reconstruction, which are intrinsic to the tomosynthetic reconstruction process. The production and elimination of blurring artifacts is depicted schematically in
The non-linear tomosynthetic approach in accordance with the present invention is beneficial when, for example, physicians want to know with relative assurance whether a lesion or tumor has encroached into a vital organ. When viewing a linear tomosynthetic reconstruction of the general region in three dimensions, the ringing artifacts tend to blur the interface between the lesion or tumor and the surrounding tissues. However, since tumors are typically more dense than the tissues that are at risk of invasion, the non-linear tomosynthetic reconstruction can be employed such that only the relatively radiopaque tumor structures of interest are retained in the reconstructed image. Similarly, a different non-linear operator could be used such that only relatively radiolucent structures of interest are retained in the reconstructed image to determine whether a lytic process is occurring in relatively radiopaque tissues.
The use of non-linear operators to reduce the effects of ringing artifacts is effective because images of many structures of radiographic interest have projection patterns determined almost entirely by discrete variations in mass or thickness of relatively uniform materials. Under these conditions, changes in radiographic appearance map closely with simple changes in either material thickness or density. In other words, complicating attributes associated with visual images, such as specular reflections, diverse energy-dependent (e.g., color) differences, etc., do not contribute significantly to many diagnostic radiographic applications. This simplification assures that many tissues can be identified easily by their position in a monotonic range of X-ray attenuations. Accordingly, selection of only projections yielding maximum or minimum attenuations when performing tomosynthetic reconstructions derived from such structures assures that resulting image slices yield results characterized by only extremes of a potential continuum of display options. Such displays make sense when the diagnostic task is more concerned with specificity (i.e., a low likelihood of mistaking an artifact for a diagnostic signal) than sensitivity (i.e., a low likelihood of missing a diagnostic signal).
A method for task-dependent tomosynthetic image reconstruction is depicted in the flow chart of
Once the projected images have been acquired and appropriately shifted, the type and degree of task-dependent processing is chosen. At step 906, it is determined whether only those features characterized by a relatively high attenuation are to be unequivocally identified. If only features having a high attenuation are to be identified, a pixel value corresponding to a desired minimum attenuation is selected. The selected pixel value is used as a minimum threshold value whereby each projected image is analyzed, pixel by pixel, and all pixels having an associated attenuation value below the selected pixel value are disregarded when an image slice is generated.
If, however, at step 906, it is determined that features having a low attenuation are to be identified or that the entire range of attenuating structures is to be identified, then it is determined at step 910 whether only features characterized by a relatively low attenuation are to be unequivocally identified. If only features having a low attenuation are to be identified, a pixel value corresponding to a desired maximum attenuation is selected. The selected pixel value is used as a maximum threshold value whereby each projected image is analyzed, pixel by pixel, and all pixels having an associated attenuation value above the selected pixel value are disregarded when an image slice is generated.
If it is determined at step 910 that features having a low attenuation are not to be identified or that the entire range of attenuating structures is to be identified, then it is determined at step 916 whether an unbiased estimate of the three-dimensional configuration of the entire range of attenuating structures is to be identified. If the entire range of attenuating structures is to be identified, then conventional tomosynthesis is performed at step 918, whereby the attenuation values from all of the projected images are averaged.
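The three branches (steps 906, 910, and 918) reduce the registered projection stack with different per-pixel operators. A minimal sketch, assuming the projections have already been shifted into register and that higher pixel values denote higher attenuation:

```python
import numpy as np

def task_dependent_slice(shifted_projections, mode):
    """Per-pixel reduction of the registered projections: 'max' retains only
    the highest attenuation seen at each pixel (radiopaque features, step
    906), 'min' the lowest (radiolucent features, step 910), and 'mean' is
    conventional linear tomosynthesis (step 918)."""
    stack = np.stack(shifted_projections)
    return {"max": np.max, "min": np.min, "mean": np.mean}[mode](stack, axis=0)

# Two toy 2 x 2 registered projections.
p1 = np.array([[1.0, 2.0], [3.0, 4.0]])
p2 = np.array([[4.0, 1.0], [2.0, 3.0]])
opaque_only = task_dependent_slice([p1, p2], "max")
lucent_only = task_dependent_slice([p1, p2], "min")
unbiased = task_dependent_slice([p1, p2], "mean")
```

The max reducer condenses the thresholding of step 906 to its limiting case; a per-pixel threshold mask could be applied to the stack before reduction to match the text's intermediate degrees of processing.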
If the features having a high attenuation, the features having a low attenuation, and the features covering the entire range of attenuations are not to be identified, then it is determined at step 920 whether the user desires to restart the selection of features to be identified. If the user wants to restart the identification process, then the method returns to step 906. If the user decides not to restart the identification process, then the method ends at step 922.
Once it has been determined which features are to be identified, then an image slice is generated at a selected slice position at step 924. The process for generating the image slice at step 924 is essentially the same as discussed previously in connection with step 90 of
In another aspect of the present invention, a method is provided for determining temporal changes in three dimensions. The method enables two or more sets of image data collected at different times to be compared by adjusting the recorded sets of image data for arbitrary changes in the vantage points from which the image data were recorded. The method takes advantage of the fact that a single three-dimensional object will present a variety of different two-dimensional projection patterns, depending on the object's orientation to the projection system. Most of this variety is caused by the fact that a three-dimensional structure is being collapsed into a single two-dimensional image by the projection system. Limiting projection options to only two-dimensional slices precludes this source of variation. The result is a much reduced search space for appropriate registration of the images required to accomplish volumetrically meaningful subtraction.
A flow chart showing the steps involved in the method for determining temporal changes in three-dimensions of the present invention is depicted in
At step 1184, the first set of image slices is spatially cross-correlated with the second set of image slices. The cross-correlation is accomplished by individually comparing each image slice comprising the first set of image slices with the individual image slices comprising the second set of image slices. The comparison is performed in order to determine which image slice in the second set of image slices corresponds to a slice through the object at approximately the same relative position through the object as that of the image slice of the first set of image slices to which the comparison is being made.
After each of the image slices in the first set of image slices is correlated to an image slice in the second set of image slices, each of the correlated pairs of image slices is individually aligned at step 1186. The alignment is performed in order to maximize the associated cross-correlations by maximizing the overlap between the image slices comprising the correlated pairs of image slices. The cross-correlations are maximized by shifting the image slices relative to one another until the projected image of the object on one image slice is optimally aligned with the projected image of the object on the other image slice. Once each correlated pair of image slices has been aligned, the image slices from one set of image slices are subtracted from the image slices from the other set of image slices at step 1188 to form a set of difference images.
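Steps 1186 and 1188 can be sketched with a brute-force correlation search; a practical implementation would use a more efficient correlator, and the small test arrays here are purely illustrative:

```python
import numpy as np

def best_shift(a, b, max_shift=3):
    """Brute-force search for the 2-D shift of b that maximizes its
    overlap (correlation) with a."""
    best, best_c = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            c = np.sum(a * np.roll(np.roll(b, dy, axis=0), dx, axis=1))
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best

def difference_image(slice_t0, slice_t1):
    """Align the later slice to the earlier one, then subtract, so that
    only temporal change survives."""
    dy, dx = best_shift(slice_t0, slice_t1)
    aligned = np.roll(np.roll(slice_t1, dy, axis=0), dx, axis=1)
    return aligned - slice_t0

# Toy check: the same anatomy recorded from a shifted vantage point, plus a
# new 0.5-intensity lesion; after alignment only the lesion remains.
earlier = np.zeros((8, 8))
earlier[3, 3] = 1.0
later = np.roll(np.roll(earlier, 1, axis=0), 2, axis=1)
later[6, 6] += 0.5
diff = difference_image(earlier, later)
```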
At step 1190, the difference images are displayed. The difference images can be presented as a series of individual differences corresponding to various different slice positions. Alternatively, the individual difference images can be integrated to yield a composite difference representing a three-dimensional image of the temporal changes associated with the selected object.
The present invention further relates to a source comparator and a method for matching radiation sources for use in quantitative radiology. Meaningful quantitative comparisons of different image data can be made only when the radiation source or sources used to record the image data is very nearly unchanged. However, conventional radiation sources produce radiation that varies with changes in tube potential, beam filtration, beam orientation with respect to the radiation target, tube current, and distance from the focal spot. The source comparator and method of the present invention enable the radiation output from one radiation source to be matched to that of another radiation source or to that of the same radiation source at a different time.
The source comparator 1200 for matching radiation sources in accordance with the present invention is depicted in
The source comparator 1200 of
When a second set of data images is to be recorded, the source settings for the radiation source to be used to record the second set of data images are adjusted to match the settings used for recording the first set of data images. At step 1222, the source comparator is positioned between the radiation source and the detector and a first gradient image is recorded. The source comparator is then rotated perpendicularly to the detector by an angle of 180° and a second gradient image recorded at step 1224. The first and second gradient images are compared and the source comparator oriented to produce the smaller gradient at step 1226. By so doing, it is assured that the source comparator bears the same relative relationship to the radiation source for both sets of data and, thereby, eliminates the potential for confounding the data by spatial variations in the cross-sectional intensity of the output from the radiation source.
The individual settings on the radiation source are then iteratively adjusted. At step 1230, the beam energy is matched by adjusting the kVp on the radiation source so that the measured gradient value approaches the gradient value of the original gradient image. The beam quality is then matched at step 1232 by adjusting the filtration of the radiation source so that the angle of the maximum gradient relative to the edge of the source comparator approaches that of the original gradient image. The beam exposures are then estimated by integrating the detector response across a fixed region of the source comparator and matched at step 1234 by adjusting the mAs of the radiation source so that the exposure approaches that of the original gradient image. At step 1236 it is determined whether the gradient image is substantially the same as the original gradient image. If the two images are significantly different, the beam energy, beam quality, and exposure are readjusted. If, however, asymptotic convergence has been reached and the two gradient images are substantially the same, the radiation sources are matched and the process ends at step 1238. Once the radiation sources have been matched, the second set of data images can be recorded and quantitatively compared to the first set of data images.
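The convergence loop of steps 1230 through 1238 has roughly the following shape. Everything below is a hypothetical stand-in: `toy_measure` replaces the detector readout, and each setting is assumed (for illustration only) to control one measured quantity linearly, so that a simple proportional correction converges.

```python
def match_source(settings, measure, reference, gain=0.5, tol=1e-3, max_iter=100):
    """Iteratively adjust kVp, filtration, and mAs until the measured gradient
    value, gradient angle, and integrated exposure match the reference image."""
    for _ in range(max_iter):
        measured = measure(settings)
        errors = {k: reference[k] - measured[k] for k in reference}
        if all(abs(e) < tol for e in errors.values()):
            return settings, True                      # asymptotic convergence
        settings = {"kvp": settings["kvp"] + gain * errors["gradient"],
                    "filtration": settings["filtration"] + gain * errors["angle"],
                    "mas": settings["mas"] + gain * errors["exposure"]}
    return settings, False

# Hypothetical linear detector model: each quantity responds to one setting.
def toy_measure(s):
    return {"gradient": 0.8 * s["kvp"],
            "angle": 1.2 * s["filtration"],
            "exposure": 0.5 * s["mas"]}

reference = toy_measure({"kvp": 80.0, "filtration": 2.5, "mas": 12.0})
matched, converged = match_source({"kvp": 70.0, "filtration": 2.0, "mas": 10.0},
                                  toy_measure, reference)
```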
In the embodiment shown in
Radiation from the source 627 passes through collimator 647, irradiates object 621, and produces an object image on the primary imager 632. In addition, radiation from the source 627 which impinges upon the radiopaque shield 633 passes through the aperture 636 to produce a ring-shaped reference image of the aperture 636 on the secondary imager 634. Since the secondary imager 634 is not used to record object images, the secondary imager 634 can be a low quality imager such as a low resolution CCD. Alternatively, a lower surface of the primary imager 632 can be coated with a phosphorescent material 635, so that radiation impinging upon the primary imager 632 causes the phosphorescent material 635 to phosphoresce. The phosphorescence passes through the aperture 636 to produce the reference image on the secondary imager 634.
In operation, the reference image produced using the system depicted in
The center of the fitted circle can be determined as follows. A pixel or point on the secondary imager 634 that lies within the fitted circle is selected as a seed point. For convenience, the center pixel of the secondary imager 634 can be selected, since the center point will typically lie within the fitted circle. A point R is determined by propagating from the seed point towards the right until the fitted circle is intersected. Similarly, a point L is determined by propagating from the seed point towards the left until the fitted circle is intersected. For each pixel along the arc L-R, the average of the number of pixels traversed by propagating from that pixel upwardly until the fitted circle is intersected and the number of pixels traversed by propagating from that pixel downwardly until the fitted circle is intersected is determined. Any statistical outliers from the averages can be discarded and the average of the remaining values calculated. This average represents the row address of the fitted circle's center. To obtain the column address, the entire reference image is rotated by 90° and the process is repeated. The row address and column address together represent the position of the center of the fitted circle.
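The center-finding procedure described above can be sketched directly. Two assumptions: the mask is a filled binary rendering of the fitted circle, and the median stands in for the statistical outlier rejection described in the text.

```python
import numpy as np

def circle_center_row(mask, seed):
    """Row address of the circle's center: walk left/right from an interior
    seed to the boundary, then for every column of that chord take the
    midpoint of the upward and downward runs; every chord midpoint of a
    circle lies on the center row."""
    r0, c0 = seed
    row = mask[r0]
    left = c0
    while left - 1 >= 0 and row[left - 1]:
        left -= 1
    right = c0
    while right + 1 < mask.shape[1] and row[right + 1]:
        right += 1
    midrows = []
    for c in range(left, right + 1):
        up = r0
        while up - 1 >= 0 and mask[up - 1, c]:
            up -= 1
        down = r0
        while down + 1 < mask.shape[0] and mask[down + 1, c]:
            down += 1
        midrows.append(0.5 * (up + down))
    return float(np.median(midrows))

def circle_center(mask, seed):
    r = circle_center_row(mask, seed)
    c = circle_center_row(mask.T, seed[::-1])   # "rotate by 90 deg" and repeat
    return r, c

# Toy check: a filled disk of radius 6 centered at row 10, column 12.
yy, xx = np.mgrid[0:21, 0:21]
mask = (yy - 10) ** 2 + (xx - 12) ** 2 <= 36
row, col = circle_center(mask, (10, 12))
```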
Although the above embodiments have been described in relation to projected images of objects produced using X-rays, the present invention is equally applicable to images produced using a variety of technologies, such as visible light, ultrasound, or electron microscopy images. Specifically, intermediate voltage electron microscope (IVEM) images can be used to provide quantitative three-dimensional ultrastructural information. Further, the present invention can also be used to reconstruct three-dimensional images of objects which either emit or scatter radiation.
When IVEM images are used, the present invention allows cellular changes to be detected and quantified in an efficient and cost-effective manner. Quantitation of three-dimensional structure facilitates comparison with other quantitative techniques, such as biochemical analysis. For example, increases in the Golgi apparatus in cells accumulating abnormal amounts of cholesterol can be measured and correlated with biochemically measured increases in cellular cholesterol.
When photographic images are used, it is possible to create a true three-dimensional model of a diffusely illuminated fixed scene from any number of arbitrary camera positions and angles. The resulting three-dimensional image permits inverse engineering of structural sizes and shapes, and may be expressed as a series of topographic slices or as a projective model that can be manipulated interactively. This capability is particularly useful in retrofitting existing structures or quantifying three-dimensional attributes using non-invasive methods. In addition, the present invention can be applied to construct topological images of geological structures by recording images of the structure created by the sun.
Representative lumpectomy specimens containing cancer from human breasts were radiographed using a digital mammographic machine (Delta 16, Instrumentarium, Inc.). Exposure parameters were regulated by an automatic exposure control mechanism built into the unit. Seven distinct projections of each specimen were made using a swing arm containing the tube head that swept across each specimen in a single arched path. This resulted in mammographic projections having angular disparities of 15, 10, 5, 0, −5, −10, and −15 degrees from vertical. These data were processed to yield a series of tomosynthetic slices distributed throughout the breast tissues in three ways: 1) conventional linear summation of all seven appropriately shifted projections (
All five radiologists preferred the nonlinearly generated tomosynthetic mammograms over those produced conventionally (with or without subsequent deblurring via iterative deconvolution). A similar statistically significant result (p<0.05) was produced when the performance of the hole-depth experiment was objectively determined.
This approach is very efficient: it is simpler to implement than conventional tomosynthetic back-projection methods, and it produces sharp-appearing images that do not require additional computationally intensive inverse filtering or iterative deconvolution schemes. Therefore, it has the potential for implementation with full-field digital mammograms using only modest computer processing resources that lie well within the current state of the art. For certain tasks that are unduly compromised by tomosynthetic blurring, a simple nonlinear tomosynthetic reconstruction algorithm may improve diagnostic performance over the status quo with no increase in cost or complexity.
Although the above discussion has centered around computed tomography, it will be appreciated by those skilled in the art that the present invention is useful for other three-dimensional imaging modalities. For example, the present invention is also intended to relate to images obtained using magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT), positron emission tomography (PET), conventional tomography, tomosynthesis, and tuned-aperture computed tomography (TACT), as well as microscopic methods including confocal optical schemes.
It will be recognized by those skilled in the art that changes or modifications may be made to the above-described embodiments without departing from the broad inventive concepts of the invention. It should therefore be understood that this invention is not limited to the particular embodiments described herein, but is intended to include all changes and modifications that are within the scope and spirit of the invention as set forth in the claims.
|Cited patent||Filing date||Publication date||Applicant||Title|
|US4662379 *||27 Aug 1986||5 May 1987||Stanford University||Coronary artery imaging system using gated tomosynthesis|
|US4722056 *||18 Feb 1986||26 Jan 1988||Trustees Of Dartmouth College||Reference display systems for superimposing a tomagraphic image onto the focal plane of an operating microscope|
|US4920491 *||16 May 1988||24 Apr 1990||General Electric Company||Enhancement of image quality by utilization of a priori information|
|US4941164 *||29 Jan 1988||10 Jul 1990||The Governors Of The University Of Alberta||Method and apparatus for improving the alignment of radiographic images|
|US5008947 *||10 Oct 1989||16 Apr 1991||Kabushiki Kaisha Toshiba||Method and apparatus for correcting extension rates of images|
|US5051904 *||24 Mar 1988||24 Sep 1991||Olganix Corporation||Computerized dynamic tomography system|
|US5070454 *||30 Apr 1990||3 Dec 1991||Olganix Corporation||Reference marker orientation system for a radiographic film-based computerized tomography system|
|US5081577 *||22 Dec 1989||14 Jan 1992||Harris Corporation||State controlled device driver for a real time computer control system|
|US5227969 *||23 May 1991||13 Jul 1993||W. L. Systems, Inc.||Manipulable three-dimensional projection imaging method|
|US5299254 *||19 Oct 1992||29 Mar 1994||Technomed International||Method and apparatus for determining the position of a target relative to a reference of known co-ordinates and without a priori knowledge of the position of a source of radiation|
|US5319550 *||5 Jan 1990||7 Jun 1994||Olganix Corporation||High resolution digital image registration|
|US5359637 *||28 Apr 1992||25 Oct 1994||Wake Forest University||Self-calibrated tomosynthetic, radiographic-imaging system, method, and device|
|US5446548 *||8 Oct 1993||29 Aug 1995||Siemens Medical Systems, Inc.||Patient positioning and monitoring system|
|US5642293 *||3 Jun 1996||24 Jun 1997||Camsys, Inc.||Method and apparatus for determining surface profile and/or surface strain|
|US5668844 *||20 Oct 1994||16 Sep 1997||Webber; Richard L.||Self-calibrated tomosynthetic, radiographic-imaging system, method, and device|
|US5751787 *||25 Sep 1996||12 May 1998||Nanoptics, Inc.||Materials and methods for improved radiography|
|US5755725 *||6 Sep 1994||26 May 1998||Deemed International, S.A.||Computer-assisted microsurgery methods and equipment|
|US5828722 *||14 May 1997||27 Oct 1998||Sirona Dental Systems Gmbh & Co., Kg||X-ray diagnostic apparatus for tomosynthesis having a detector that detects positional relationships|
|US5872828 *||22 Jul 1997||16 Feb 1999||The General Hospital Corporation||Tomosynthesis system for breast imaging|
|US5878104 *||16 May 1997||2 Mar 1999||Sirona Dental Systems Gmbh & Co. Kg||Method for producing tomosynthesis exposures employing a reference object formed by a region of the examination subject|
|US5964530 *||15 Dec 1997||12 Oct 1999||Fons; Lloyd C.||Method for compensating earth surface temperatures for the skyward effect thereon|
|US6081577 *||19 Feb 1999||27 Jun 2000||Wake Forest University||Method and system for creating task-dependent three-dimensional images|
|US6118845 *||29 Jun 1998||12 Sep 2000||Surgical Navigation Technologies, Inc.||System and methods for the reduction and elimination of image artifacts in the calibration of X-ray imagers|
|US6120180 *||16 Oct 1998||19 Sep 2000||Siemens Aktiengesellschaft||X-ray exposure system for 3D imaging|
|US6122541 *||10 Dec 1996||19 Sep 2000||Radionics, Inc.||Head band for frameless stereotactic registration|
|US6146390 *||25 Feb 2000||14 Nov 2000||Sofamor Danek Holdings, Inc.||Apparatus and method for photogrammetric surgical localization|
|US6249568 *||18 Jun 1999||19 Jun 2001||Commissariat A L'energie Atomique||Process for improving a signal/noise ratio of the image of a moving object|
|US6275725 *||5 May 1997||14 Aug 2001||Radionics, Inc.||Stereotactic optical navigation|
|US6289235 *||5 Mar 1998||11 Sep 2001||Wake Forest University||Method and system for creating three-dimensional images using tomosynthetic computed tomography|
|US6351573 *||7 Oct 1996||26 Feb 2002||Schneider Medical Technologies, Inc.||Imaging device and method|
|US6405072 *||1 Dec 1997||11 Jun 2002||Sherwood Services Ag||Apparatus and method for determining a location of an anatomical target with reference to a medical apparatus|
|US6549607 *||28 Apr 2000||15 Apr 2003||Wake Forest University||Method and system for creating task-dependent three-dimensional images|
|US6684098 *||4 Jun 1997||27 Jan 2004||Brigham And Women's Hospital, Inc.||Versatile stereotactic device and methods of use|
|US6801597 *||14 Apr 2003||5 Oct 2004||Wake Forest University Health Sciences||Method and system for creating task-dependent three-dimensional images|
|US6810278 *||21 May 2001||26 Oct 2004||Wake Forest University||Method and system for creating three-dimensional images using tomosynthetic computed tomography|
|US20010034482 *||21 May 2001||25 Oct 2001||Webber Richard L.||Method and system for creating three-dimensional images using tomosynthetic computed tomography|
|US20030026469 *||6 Nov 2001||6 Feb 2003||Accuimage Diagnostics Corp.||Methods and systems for combining a plurality of radiographic images|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7110807||25 Oct 2004||19 Sep 2006||Wake Forest University Health Sciences||Method and system for creating three-dimensional images using tomosynthetic computed tomography|
|US7519415 *||17 Dec 2004||14 Apr 2009||Siemens Aktiengesellschaft||Method and apparatus for image support of an operative procedure implemented with a medical instrument|
|US7620209 *||14 Oct 2004||17 Nov 2009||Stevick Glen R||Method and apparatus for dynamic space-time imaging system|
|US7722565||4 Nov 2005||25 May 2010||Traxtal, Inc.||Access system|
|US7751868||14 Nov 2005||6 Jul 2010||Philips Electronics Ltd||Integrated skin-mounted multifunction device for use in image-guided surgery|
|US7801587||16 Aug 2006||21 Sep 2010||Wake Forest University Health Sciences||Method and system for creating three-dimensional images using tomosynthetic computed tomography|
|US7805269||14 Nov 2005||28 Sep 2010||Philips Electronics Ltd||Device and method for ensuring the accuracy of a tracking device in a volume|
|US7840254||18 Jan 2006||23 Nov 2010||Philips Electronics Ltd||Electromagnetically tracked K-wire device|
|US8611983||18 Jan 2006||17 Dec 2013||Philips Electronics Ltd||Method and apparatus for guiding an instrument to a target in the lung|
|US8632461||21 Jun 2006||21 Jan 2014||Koninklijke Philips N.V.||System, method and apparatus for navigated therapy and diagnosis|
|US8633967 *||23 Apr 2007||21 Jan 2014||Expert Treuhand Gmbh||Method and device for the creation of pseudo-holographic images|
|US8884958 *||7 Jun 2012||11 Nov 2014||Kabushiki Kaisha Toshiba||Image processing system and method thereof|
|US9053522 *||18 May 2011||9 Jun 2015||Nec Corporation||Image processing device, image processing method, and image processing program|
|US9083963 *||22 Nov 2013||14 Jul 2015||Expert Treuhand Gmbh||Method and device for the creation of pseudo-holographic images|
|US20050163279 *||17 Dec 2004||28 Jul 2005||Matthias Mitschke||Method and apparatus for image support of an operative procedure implemented with a medical instrument|
|US20050182319 *||17 Feb 2005||18 Aug 2005||Glossop Neil D.||Method and apparatus for registration, verification, and referencing of internal organs|
|US20060082590 *||14 Oct 2004||20 Apr 2006||Stevick Glen R||Method and apparatus for dynamic space-time imaging system|
|US20060122497 *||14 Nov 2005||8 Jun 2006||Glossop Neil D||Device and method for ensuring the accuracy of a tracking device in a volume|
|US20060173269 *||14 Nov 2005||3 Aug 2006||Glossop Neil D||Integrated skin-mounted multifunction device for use in image-guided surgery|
|US20060173291 *||18 Jan 2006||3 Aug 2006||Glossop Neil D||Electromagnetically tracked K-wire device|
|US20060184016 *||18 Jan 2006||17 Aug 2006||Glossop Neil D||Method and apparatus for guiding an instrument to a target in the lung|
|US20070032723 *||21 Jun 2006||8 Feb 2007||Glossop Neil D||System, method and apparatus for navigated therapy and diagnosis|
|US20070055128 *||24 Ago 2006||8 Mar 2007||Glossop Neil D||System, method and devices for navigated flexible endoscopy|
|US20100171811 *||23 Abr 2007||8 Jul 2010||Expert Treuhand Gmbh||Method and device for the creation of pseudo-holographic images|
|US20120313943 *||13 Dec 2012||Toshiba Medical Systems Corporation||Image processing system and method thereof|
|US20130064430 *||18 May 2011||14 Mar 2013||Nec Corporation||Image processing device, image processing method, and image processing program|
|US20140152782 *||22 Nov 2013||5 Jun 2014||Expert Treuhand Gmbh||Method and device for the creation of pseudo-holographic images|
|WO2008045016A2 *||21 Jun 2006||17 Apr 2008||Traxtal Inc||Device and method for a trackable ultrasound|
|U.S. Classification||600/426|
|Cooperative Classification||A61B6/5258, G01N2223/419, Y10S378/901, G01N23/046, A61B6/548|