|Publication number||USRE37088 E1|
|Publication type||Grant|
|Application number||US 09/078,378|
|Publication date||6 Mar 2001|
|Filing date||13 May 1998|
|Priority date||30 Aug 1994|
|Also published as||DE19531419A1, DE19531419B4, US5515856|
|Publication numbers||078378, 09078378, US RE37088 E1, US RE37088E1, US-E1-RE37088, USRE37088 E1, USRE37088E1|
|Inventors||Bjorn Olstad, Eivind Holm, James Ashman|
|Original assignee||Vingmed Sound A/S|
This invention relates to a method for generating anatomical M-Mode displays in ultrasonic investigation of living biological structures during movement, for example a heart function, employing an ultrasonic transducer.
The invention describes a technique for obtaining anatomically meaningful M-Mode displays by data extraction from 2D (two-dimensional) and 3D (three-dimensional) ultrasonic imaging. Conventional M-Mode is acquired along one acoustical beam of the ultrasonic transducer employed, displaying the time-variant data in a display unit with time along the x-axis and depth along the y-axis. The localization of the M-Mode line in conventional M-Mode is limited to the set of beam directions that can be generated (scanned) by the transducer.
In cardiology, the use of the M-Mode method is fairly standardized, requiring specific cuts through the heart at standard positions and angles. To be able to perform a good M-Mode measurement, important criteria are:
1. Image quality. The borders and interfaces between different structures of the heart must be clearly visible. One of the most important factors to achieve this, is to position the ultrasound transducer on the body concerned at a point where the acoustic properties are optimum. These places are often referred to as “acoustic windows”. On older patients, these windows are scarce, and hard to find.
2. Alignment. The standardized M-Mode measurements require that the recording is taken at specific angles, usually 90 degrees relative to the heart structure being investigated.
3. Motion. As the heart moves inside the chest during contraction and relaxation, a correct M-Mode line position at one point in the heart cycle may be wrong at another point in the same heart cycle. This is very difficult to compensate for manually, since the probe would have to be moved synchronously with the heartbeats. Therefore, most sonographers settle for a fixed, compromise direction of the M-Mode line, i.e. transducer beam.
4. Wall thickening analysis. With coronary diseases, an important parameter to observe is the thickening of the left ventricular muscle at various positions.
In many cases there can be problems getting the correct alignment at a good acoustical window. Often, the good acoustic windows give bad alignment, and vice versa. Hence, the sonographer or user spends much time and effort trying to optimize the image for the two criteria (alignment, image quality).
With the advent of high-performance digital front-end control for phased transducer array probes, the possibility exists for acquiring 2D images at very high frame rates (less than 10 ms per 2D image). These 2D data are stored in computer RAM, with enough storage capacity to hold one or more full heart cycles of 2D recordings. M-Mode displays can be generated from these recordings with an adequate temporal resolution. According to the present invention this allows for complete flexibility in the positioning of the M-Mode lines. The invention describes how this flexibility can be utilized to improve the anatomical information content in the extracted M-Mode displays.
The invention also applies to extraction of M-Mode displays from a time series with 3D images. In 3D it is possible to compensate for the true 3D motion of the ventricle. Based on 2D recordings the operator will be limited to compensate for the movements that can be measured in the imaged plane. The invention also describes how local M-Mode information extracted from 3D acquisitions can be utilized to obtain a color encoding of the ventricle wall providing information about wall thickening.
The anatomical M-Mode displays can be generated in real-time during scanning of a 2D image or during real-time volumetric scanning. The invention then describes how multiple M-Mode displays can be maintained together with the live 2D or 3D image. These M-Mode displays can also be freely positioned and even allowed to track the location and direction of the ventricle wall during the cardiac cycle. During real-time scanning, time resolution of anatomical M-Mode displays may be increased by constraining the 2D or volumetric scanning to the area defined by the ultrasound probe and the M-Mode line. This requires complete control of the ultrasound scanner front-end.
The anatomical M-Mode can also be used as a post-processing tool, where the user acquires the 2D/3D image sequence at super-high framerates, without making any M-Mode recordings. As long as the 2D data includes an adequate cut/view through the heart, the user may use the anatomical M-Mode to do the M-Mode analysis later.
The computer processing of data sets is previously known, as for example described in: J. D. Foley, A. van Dam, S. K. Feiner, J. F. Hughes, "Computer Graphics: Principles and Practice", Addison Wesley, U.S.A. (1990). Among other things, line drawing algorithms are described in this reference. Thus, such computer processing, operations and steps are not explained in detail in the following description. Other references relating more specifically to techniques of particular interest here are the following:
B. Olstad, “Maximizing image variance in rendering of volumetric data sets,” Journal of Electronic Imaging, 1:245-265, July 1992.
E. Steen and B. Olstad, “Volume rendering in medical ultrasound imaging”. Proceedings of 8th Scandinavian Conference on Image Analysis. Tromsø, Norway May 1993.
G. Borgefors, “Distance transformations in digital images”, Computer vision, graphics and image processing 34, 1986, pp. 344-371.
Peter Seitz, “Optical Superresolution Using Solid State Cameras and Digital Signal Processing”, Optical Engineering 27(7) July 1988.
On the background of known techniques this invention takes as a starting-point methods for computation of conventional M-Mode and established clinical procedures for utilization of M-Mode imaging. The invention includes new techniques for the computation of anatomical M-Mode displays based on a time series of 2D or 3D ultrasonic images. The anatomical M-Mode is derived as a virtual M-Mode measurement along an arbitrary or virtual, tilted M-Mode line. What is novel and specific in the method according to the invention is defined more specifically in the appended claims.
Some of the advantages obtained with this invention can be summarized as follows: Multiple M-Mode displays with arbitrary positioning can be computed on the basis of a 2D or 3D acquisition. The position of the M-Mode line is not limited to the scanning geometry and can be freely positioned. Global heart movements can be compensated for by moving the M-Mode line according to the motion of the heart during the cardiac cycle. Wall thickening analysis is improved due to the possibility of keeping the M-Mode line perpendicular to the ventricle wall during the entire cardiac cycle. Reference points in the scene can be fixed at a given y-coordinate in the M-Mode display, hence improving the visual interpretability of relative motion/thickening phenomena. 3D acquisitions can be visualized by mapping properties extracted from local M-Mode lines into a color encoding of the ventricle wall.
The invention shall be described in more detail in the following description of various embodiments with reference to the drawings, in which:
FIG. 1 schematically illustrates the computation of M-Mode displays according to the prior art.
FIG. 2 schematically illustrates the inventive concept of a tilted anatomical or virtual M-Mode line for computation of corresponding M-Mode displays.
FIG. 3 indicates a setting with multiple M-Mode lines, according to an embodiment of this invention.
FIG. 4 illustrates how movement of the position of the M-Mode line as a function of the position in the cardiac cycle can be used to obtain motion correction.
FIG. 5 illustrates an anatomical M-Mode where no reference point is specified.
FIG. 6 illustrates an anatomical M-Mode line when a reference point has been specified and fixed to a given vertical position in the display of the anatomical M-Mode.
FIG. 7 illustrates wall thickening analysis in a setting with 3 simultaneous anatomical M-Mode displays.
FIG. 8 indicates how the anatomical M-Mode displays are computed in a situation where the position of the M-Mode line is fixed during the cardiac cycle.
FIG. 9 schematically illustrates how a color encoding of the ventricle wall representing wall thickening can be computed in 4D ultrasonic imaging.
FIG. 10 schematically illustrates how the acquisition of the ultrasound data can be optimized for use in anatomical M-Mode, reducing the amount of data used for each image and enabling more images to be acquired during a given time span.
FIG. 1 illustrates conventional M-Mode imaging. An ultrasound transducer 11 is schematically indicated in relation to an ultrasonic image 12 obtained by angular scanning of the acoustical beam of the transducer. In this conventional method the M-Mode line, or corresponding acoustical beam 13, is fixed at a given position and the ultrasonic signal along the beam is mapped as a function of time in the M-Mode display 14. Extreme temporal resolution can be achieved with this prior art because a new time sample can be generated as soon as the data for one beam has been gathered. This prior art for M-Mode imaging will on the other hand limit the positioning of the M-Mode line 13 according to the acoustic windows and scanning geometry.
This invention relates to how M-Mode images can be generated by extraction of interpolated displays from time series of 2D or 3D images. The concept of a “tilted” M-Mode display 24 is illustrated in FIG. 2. The “virtual” M-Mode line 23 is in this case freely moveable, not being restricted to coincide with one acoustic beam (transducer 21) originating at the top of the 2D image(s) 22.
FIG. 3 illustrates an example where two tilted M-Mode displays 34A, 34B have been computed or calculated from a single 2D sequence or image 32, with corresponding virtual, tilted M-Mode lines indicated at 33A and 33B respectively. Basing the generation of M-Mode displays on 2D or 3D images, any number of M-Mode displays can be generated, enabling analysis of various dimensions from the same heartbeat. Thus, acquired time series as indicated at 1, 2, 3, 4 in FIG. 2 are arranged to constitute data sets, at least one virtual M-Mode line, 23 in FIG. 2 or 33A, 33B in FIG. 3, is provided and co-registered with the data sets, and these are then subjected to computer processing with interpolation along the virtual M-Mode line concerned. The importance of interpolation will be explained further below.
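The extraction step described above, sampling each stored frame along a freely positioned line, can be sketched in code. This is an illustrative reconstruction, not the implementation of the invention: the function name, the (T, H, W) array layout and the bilinear interpolation scheme are assumptions made for the example.

```python
import numpy as np

def anatomical_mmode(frames, p0, p1, n_samples=64):
    """Extract an anatomical M-Mode display from a time series of 2D frames.

    frames: array of shape (T, H, W) -- scan-converted 2D images over time.
    p0, p1: (row, col) endpoints of the virtual, tilted M-Mode line.
    Returns an (n_samples, T) image: depth along the line vs. time.
    """
    T = frames.shape[0]
    t = np.linspace(0.0, 1.0, n_samples)
    rows = p0[0] + t * (p1[0] - p0[0])
    cols = p0[1] + t * (p1[1] - p0[1])
    # Bilinear interpolation weights at the sample points.
    r0 = np.floor(rows).astype(int)
    c0 = np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, frames.shape[1] - 1)
    c1 = np.minimum(c0 + 1, frames.shape[2] - 1)
    fr = rows - r0
    fc = cols - c0
    out = np.empty((n_samples, T))
    for k in range(T):
        # One column of the M-Mode display per stored frame.
        f = frames[k]
        out[:, k] = (f[r0, c0] * (1 - fr) * (1 - fc) +
                     f[r1, c0] * fr * (1 - fc) +
                     f[r0, c1] * (1 - fr) * fc +
                     f[r1, c1] * fr * fc)
    return out
```

Because the line is defined in image coordinates rather than beam coordinates, nothing restricts it to originate at the transducer apex, which is the essential difference from the conventional M-Mode of FIG. 1.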
As the heart moves inside the chest during contraction and relaxation, a correct M-Mode line position at one point in the heart cycle may be wrong at another point in the same heart cycle. This is very difficult to compensate for manually, since the probe would have to be moved synchronously with the heartbeats.
The anatomical M-Modes according to this invention can compensate for this motion. FIG. 4 illustrates this concept. The user defines the position of the M-Mode line, 43A and 43B respectively, at different points in the heart cycle, such as by scrolling a 2D cineloop and fixing a new M-Mode line position. Appropriate computer operations or software, available and known to those of ordinary skill in this field as shown in the above references, are utilized to interpolate the M-Mode line positions between the "fixed" M-Mode lines 43A and 43B, and to generate an M-Mode display 44 where each vertical line in the M-Mode display is extracted along the local definition of the M-Mode line.
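The interpolation between user-fixed M-Mode line positions can be sketched as follows. This is only an illustrative linear-interpolation sketch; the keyframe representation and function name are assumptions, and the patent does not prescribe a particular interpolation scheme.

```python
import numpy as np

def interpolate_line(keyframes, t):
    """Interpolate the virtual M-Mode line between user-fixed keyframes.

    keyframes: list of (time, p0, p1) in increasing time order, where p0
               and p1 are the line endpoints fixed by the user at that
               instant of the cardiac cycle.
    t: query time; clamped to the keyframe range.
    Returns the interpolated endpoints (p0, p1) at time t.
    """
    times = np.array([k[0] for k in keyframes], dtype=float)
    p0s = np.array([k[1] for k in keyframes], dtype=float)
    p1s = np.array([k[2] for k in keyframes], dtype=float)
    t = np.clip(t, times[0], times[-1])
    # Locate the surrounding pair of keyframes and blend linearly.
    i = np.searchsorted(times, t, side='right') - 1
    i = min(i, len(times) - 2)
    w = (t - times[i]) / (times[i + 1] - times[i])
    return ((1 - w) * p0s[i] + w * p0s[i + 1],
            (1 - w) * p1s[i] + w * p1s[i + 1])
```

Each vertical line of the display 44 would then be extracted along the line returned for that column's time instant.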
In this manner the position and/or orientation of the virtual M-Mode line can be movable in response to other rhythmic movements in the biological structure or body concerned, other than the heartbeats referred to in the description of FIG. 4.
When studying an organ's time-variant dimensions in a living body, there is often a wish to study the different structures' dimensions relative to each other, without observing the whole organ's displacement inside the body. This is especially interesting when looking at the heart's ventricular contractions and relaxations, where the thickening of the muscle tissue is the important parameter to observe.
To enhance the relative variations, according to an embodiment of this invention, the user can define a reference point on the "fixed" M-Mode lines described in the previous paragraph on motion correction. Typically, this point will correspond to an easily defined clinical structure. FIGS. 5 and 6 illustrate M-Mode generation without and with fixation of a given reference point 66 in the imaged scene 62. Thus, on the basis of the reference point 66 associated with the interpolated M-Mode line positions 63A to 63B shown in FIG. 6, an M-Mode display 64 is generated with this point 66 appearing as a straight line 67 (no motion), i.e. at a chosen vertical coordinate in the display. Alternatively, a given y-coordinate can be tracked in the M-Mode display and the M-Mode display regenerated by sliding the position of the M-Mode lines at the various time locations such that the tracked image structure appears as a horizontal structure in the final M-Mode display.
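The column-sliding operation that pins the reference point to a fixed vertical coordinate can be sketched as below. This is an illustrative reconstruction under the assumption that the reference row has already been tracked per column; the function name and the zero-padding of shifted-in samples are choices made for the example.

```python
import numpy as np

def fix_reference(mmode, ref_rows, target_row=None):
    """Shift each time column of an M-Mode display so that a tracked
    reference structure appears as a horizontal line.

    mmode: (depth, T) anatomical M-Mode image.
    ref_rows: length-T sequence, row of the reference point per column.
    target_row: row at which the reference is pinned
                (default: its position in the first column).
    """
    depth, T = mmode.shape
    ref_rows = np.asarray(ref_rows)
    if target_row is None:
        target_row = int(ref_rows[0])
    out = np.zeros_like(mmode)
    for k in range(T):
        shift = target_row - int(ref_rows[k])
        # np.roll preserves column length; wrapped-around samples
        # carry no information here, so they are zeroed.
        col = np.roll(mmode[:, k], shift)
        if shift > 0:
            col[:shift] = 0
        elif shift < 0:
            col[shift:] = 0
        out[:, k] = col
    return out
```

After this step the reference structure traces the straight line 67 of FIG. 6, so only motion relative to it remains visible.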
With coronary diseases, an important parameter to observe is the thickening of the left ventricular muscle at various positions. Combining the techniques described in the previous paragraphs, this invention provides an especially useful tool for left ventricle thickening analysis, as illustrated by FIG. 7.
Each M-Mode display 74A, 74B and 74C represents the regional wall thickening and contraction of one part of the ventricle 70, each part being penetrated by a corresponding virtual M-Mode line 73A, 73B and 73C respectively. FIG. 7 shows a short axis view of the left ventricle 70 and three anatomical M-Mode displays 74A, 74B, 74C generated with the techniques described in the previous paragraphs.
The sequence of 2D/3D frames is stored in the scanner/computer employed as a 3- or 4-dimensional array or data set(s) of ultrasound samples. This array may have different geometric properties, depending on the transducer probe geometry used, and whether images have been scanconverted to a rectangular format prior to storing. For illustration, in the setting shown in FIG. 8, we use an example where the 2D sector data have been scanconverted previously (typically using an ultrasound scanner's hardware scanconverter) and stored to disk/memory in a rectangular data set format, as a 3D-array 82 of samples with the dimensions being [x,y,t].
Generating an M-Mode display 84 can then be viewed upon as cutting a plane 88 through the 3D data set 82, interpolating and resampling the data to fit into the desired display rectangle 84. The motion correction techniques described above will modify the cutting plane 88 to a curved surface that is linear in the intersections with the [x,y] planes. It is of primary importance that adequate interpolation techniques are applied both in the spatial and temporal dimension. Such interpolation can to some extent compensate for inferior resolution compared with conventional M-Mode along the acoustical beams generated by the transducer as shown in FIG. 1.
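The cutting-plane view of FIG. 8 with interpolation in both the spatial and the temporal dimension can be sketched as follows. This is a trilinear-interpolation sketch with a line that is fixed in time for simplicity; in the motion-corrected case the rows/cols sample positions would themselves vary with time, giving the curved surface described above. The function name and the (T, H, W) layout are assumptions.

```python
import numpy as np

def sample_plane(volume, rows, cols, times):
    """Resample a cutting surface through an [t, y, x] data set with
    trilinear interpolation (spatial and temporal).

    volume: (T, H, W) array of scan-converted frames.
    rows, cols: depth sample positions along the M-Mode line.
    times: possibly fractional frame indices for the display's x-axis.
    Returns an (n_depth, n_time) M-Mode display.
    """
    T, H, W = volume.shape
    out = np.empty((len(rows), len(times)))
    for j, t in enumerate(times):
        # Temporal interpolation between the two neighboring frames.
        t0 = int(np.floor(t))
        t1 = min(t0 + 1, T - 1)
        ft = t - t0
        for i, (r, c) in enumerate(zip(rows, cols)):
            r0 = int(np.floor(r)); r1 = min(r0 + 1, H - 1); fr = r - r0
            c0 = int(np.floor(c)); c1 = min(c0 + 1, W - 1); fc = c - c0

            def bilin(f):
                # Spatial (bilinear) interpolation within one frame.
                return (f[r0, c0] * (1 - fr) * (1 - fc) +
                        f[r1, c0] * fr * (1 - fc) +
                        f[r0, c1] * (1 - fr) * fc +
                        f[r1, c1] * fr * fc)

            out[i, j] = (1 - ft) * bilin(volume[t0]) + ft * bilin(volume[t1])
    return out
```

Resampling the time axis this way is what lets the display rectangle 84 have a temporal sampling independent of the acquisition frame times.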
Temporal resolution of the M-Mode displays may be increased by controlling the image acquisition to encompass only the necessary area. In FIG. 10, the virtual M-Mode line 101 defines the minimum necessary image area 104. By controlling the front-end of the ultrasound scanner to acquire only the necessary acoustical beams 102, and not the data 103 outside the virtual M-Mode line, the ultrasound scanner uses less time for acquiring each image, and the time saved is used to improve the temporal resolution of the time series. This enhancement comes at the cost of the freedom to position other virtual M-Mode lines during post-processing.
According to an embodiment of the invention it is an additional and advantageous step to let the result of the above computer processing including interpolation, be subjected to an image processing as known per se for edge enhancement, to produce the resulting computed anatomical M-Mode display.
All the techniques described here apply both to a sequence of 2D and a sequence of 3D ultrasonic images. 3D acquisitions further improve the potential of the motion correction described, because the true 3D motion of the heart can be estimated.
In addition to the actual generation of M-Mode displays, the techniques according to this invention can be further utilized to extract anatomical M-Modes for all points across the endocard surface in the left ventricle. This setting is illustrated with an example in FIG. 9. A 4-dimensional ultrasound data set 92 is assumed, consisting of m short axis planes and n 3D cubes recorded during the cardiac cycle. For simplicity, only three virtual M-Mode lines 93A, 93B, 93C with the associated M-Mode displays 94A, 94B and 94C, respectively, have been drawn in the figure, but similar M-Mode displays should be associated with every point or position on the endocard surface in the ventricle 90.
Each of the individual M-Mode displays 94A, 94B, 94C . . . is then processed in order to obtain a characterization that can be visualized as a color encoding of the associated location on the ventricle wall. The mapping strategy is illustrated in FIG. 9 and is similar to the approach found in the Olstad (1992) and Steen and Olstad (1993) references identified previously. The characterization routine thus operates on an anatomical M-Mode display and generates a single value or a color index that reflects physiological properties derived from the M-Mode image. One of these properties is a quantification of wall thickening by estimation of thickening variations during the cardiac cycle. Each of the anatomical M-Mode displays 94A, 94B and 94C is in this case analyzed. The wall is located in the said M-Mode displays with methods such as those described by Seitz (1988), identified previously, for superresolution edge localization at the various time instances, and the thickness variations are used to define the said estimated quantification of wall thickening. A second property is given by a characterization of the temporal signal characteristics at a given spatial coordinate, or for a range of spatial coordinates, in the M-Mode displays 94A, 94B and 94C.
An alternative is to use only two cubes that are either temporal neighbors or that are located at End-Systole and End-Diastole. The associated M-Modes will in this case reduce to simply two samples in the temporal direction. This approach is more easily computed, and will provide differential thickening information across the ventricle wall if the cubes are temporal neighbors. The wall thickening analysis is in this case a comparison of two one-dimensional signals, where thickenings can be estimated with the methods described by Seitz (1988) for superresolution edge localization.
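The two-cube comparison reduces to measuring wall thickness in two 1D profiles. The following sketch is illustrative only: a simple intensity threshold stands in for the sub-pixel edge localization of the Seitz reference, and the function name and return convention are assumptions.

```python
import numpy as np

def wall_thickening(profile_ed, profile_es, threshold):
    """Estimate fractional wall thickening from two 1D M-Mode profiles
    taken at End-Diastole and End-Systole.

    The wall is segmented here by a crude intensity threshold (the text
    points to superresolution edge localization for real use).
    Returns (thickness_ed, thickness_es, thickening_fraction).
    """
    def thickness(profile):
        # Extent between the first and last above-threshold samples.
        inside = np.asarray(profile) > threshold
        idx = np.flatnonzero(inside)
        if idx.size == 0:
            return 0.0
        return float(idx[-1] - idx[0] + 1)

    t_ed = thickness(profile_ed)
    t_es = thickness(profile_es)
    frac = (t_es - t_ed) / t_ed if t_ed > 0 else 0.0
    return t_ed, t_es, frac
```

The resulting fraction is the kind of single value that the characterization routine can map to a color index on the ventricle wall.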
The color encodings described for 3D also apply to 2D imaging, but the color encodings are in this case associated with the boundary of the blood area in the 2D image. FIG. 7 illustrates such a 2D image sequence. The figure includes only three virtual M-Mode lines 73A, 73B and 73C with the associated M-Mode displays 74A, 74B and 74C, respectively, but similar M-Mode displays should be associated with every point or position on the endocard surface in the ventricle 70. Each of the individual M-Mode displays 74A, 74B and 74C is then processed with the same techniques as described above for the corresponding M-Mode displays 94A, 94B and 94C in the three-dimensional case.
The M-Mode lines in this embodiment of the invention are associated with each point or position identified on the surface of the ventricle wall, and the direction is computed to be perpendicular to the ventricle wall. The direction of the local M-Modes is computed as the direction obtained in a 2- or 3-dimensional distance transform of a 2- or 3-dimensional binary image representing the position of the points on the ventricle wall. See the Borgefors (1986) reference identified previously for information on a suitable distance transform.
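The distance-transform step can be sketched as below. This is an illustrative sketch only: scipy's Euclidean distance transform stands in for the Borgefors chamfer transform the text cites, and the function name and return layout are assumptions. The vector from each pixel to its nearest wall point gives the local perpendicular direction along which an M-Mode line can be placed.

```python
import numpy as np
from scipy import ndimage

def wall_normals(wall_mask):
    """Compute a unit direction toward the wall at every pixel from the
    distance transform of a binary wall image.

    wall_mask: 2D bool array, True on the ventricle wall.
    Returns (dist, normals) where normals has shape (H, W, 2) and points
    from each pixel toward its nearest wall point (zero on the wall).
    """
    # distance_transform_edt measures distance to the nearest zero
    # pixel, so feed it the complement of the wall mask.
    dist, (ir, ic) = ndimage.distance_transform_edt(
        ~wall_mask, return_indices=True)
    rows, cols = np.indices(wall_mask.shape)
    vec = np.stack([ir - rows, ic - cols], axis=-1).astype(float)
    norm = np.linalg.norm(vec, axis=-1, keepdims=True)
    normals = np.divide(vec, norm, out=np.zeros_like(vec), where=norm > 0)
    return dist, normals
```

Evaluating these directions just off the wall surface yields, for each surface point, the perpendicular along which its local anatomical M-Mode is extracted.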
In summary this invention as described above provides a method for computation of anatomical M-Mode displays based on a time series of 2D or 3D ultrasonic images. The method is used for the investigation of living biological structures during movement, for example a heart function. The main application will be in hospitals and the like. The anatomical M-Mode displays can be computed in real-time during the image acquisition or by postprocessing of a 2D or 3D cineloop. The anatomical M-Mode is derived as a virtual M-Mode measurement along an arbitrary tilted M-Mode line. Multiple, simultaneous M-Mode lines and displays can be specified. The arbitrary positioning of the M-Mode line allows for anatomically meaningful M-Mode measurements that are independent of acoustic windows that limit the positioning of M-Modes in the prior art. The positioning of the M-Mode line can be changed as a function of time to compensate for global motion. The M-Mode line can in this way be made perpendicular to the heart wall during the entire heart cycle. This property increases the value of M-Modes in wall thickening analysis because erroneous thickenings caused by inclined measurements can be avoided. Furthermore, reference points in the image scene can be fixed in the M-Mode display such that the visual interpretation of relative variations can be improved. In 3D cineloops the M-Modes can be computed locally at all points in the ventricle wall along M-Mode lines that are perpendicular to the endocard surface. These local M-Modes are exploited to assess wall thickening and to utilize these measurements in a color encoding of the endocard surface.
|Cited patent||Filing date||Publication date||Applicant||Title|
|US3955561||16 Sep 1974||11 May 1976||Indianapolis Center For Advanced Research, Inc.||Cardioscan probe|
|US4271842||3 Mar 1978||9 Jun 1981||Smith Kline Instruments, Inc.||Apparatus and method for providing multiple ultrasonic sector image displays|
|US4413521||22 Apr 1982||8 Nov 1983||U.S. Philips Corporation||Apparatus for examining an object by means of ultrasonic waves|
|US4501277||28 Sep 1982||26 Feb 1985||Tokyo Shibaura Denki Kabushiki Kaisha||Selected beam marking system for rapid ultrasound measurements|
|US4735211||31 Jan 1986||5 Apr 1988||Hitachi, Ltd.||Ultrasonic measurement apparatus|
|US4932414||2 Nov 1987||12 Jun 1990||Cornell Research Foundation, Inc.||System of therapeutic ultrasound and real-time ultrasonic scanning|
|US5097836||15 Feb 1990||24 Mar 1992||Fujitsu Limited||Ultrasound diagnostic equipment for calculating and displaying integrated backscatter or scattering coefficients by using scattering power or scattering power spectrum of blood|
|US5105813||23 Aug 1990||21 Apr 1992||Kabushiki Kaisha Toshiba||Ultrasonic diagnosing apparatus with steerable ultrasonic beams|
|US5127409||25 Apr 1991||7 Jul 1992||Daigle Ronald E||Ultrasound Doppler position sensing|
|US5195521||9 Nov 1990||23 Mar 1993||Hewlett-Packard Company||Tissue measurements|
|US5285788||16 Oct 1992||15 Feb 1994||Acuson Corporation||Ultrasonic tissue imaging method and apparatus with doppler velocity and acceleration processing|
|US5355887||30 Oct 1992||18 Oct 1994||Fujitsu Limited||Ultrasonic diagnostic apparatus|
|US5375599||26 Apr 1993||27 Dec 1994||Shimadzu Corporation||Organically responsive scrolling in ultrasonic diagnostic equipment|
|1||"New Transforming System From Tomogram Echocardiography To M-Mode", 40-C-5.1, 7pgs.|
|2||Borgefors, G., "Distance Transformations In Digital Images"., Computer Vision, Graphics And Image Processing 34, 1986, pp. 344-371.|
|3||Foley, et al., "Computer Graphics: Principles And Practice," Addison Wesley USA (1990) (only bibliographic pages included).|
|4||Olstad, B., "Maximizing Image Variance In Rendering Of Volumetric Data Sets", Journal Of Electronic Imaging, 1:245-265, Jul. 1992.|
|5||R. Omoto, Y. Yokote, et al. "B/M Conversion System with Free Setting of Cursor Line: Clinical Applications Thereof ", 41-PA-31, 3pp.|
|6||R. Omoto, Y. Yokote, et al. "B/M Conversion System With Free Setting Of Cursor Line: Clinical Applications Thereof", 41-PA-31, 3pgs. (Saitama Medical School).|
|7||R. Omoto, Y. Yokote, et al. "New System for Converting From Tomogram Echocardiography To M-Mode Freely Set Cursor Line", 40-C-51, 2pgs. (Saitama Medical School).|
|8||R. Omoto, Y. Yokote, et al. "New System For Converting From Tomogram Echocardiography To M-Mode Freely Set Cursor Line", 40-C-51, 2pp.|
|9||Seitz, P., "Optical Superresolution Using Solid State Cameras And Digital Signal Processing", Optical Engineering 27(7), Jul. 1988, pp. 535-540.|
|Citing patent||Filing date||Publication date||Applicant||Title|
|US6863655 *||10 Jun 2002||8 Mar 2005||Ge Medical Systems Global Technology Company, Llc||Ultrasound display of tissue, tracking and tagging|
|US7803112 *||8 Dec 2004||28 Sep 2010||Medison Co., Ltd.||Apparatus and method for displaying sectional planes of target object utilizing 3-dimensional ultrasound data|
|US8059876 *||27 May 2008||15 Nov 2011||Fujifilm Corporation||Cardiac function analysis apparatus, method and program|
|US8142358||7 Jun 2006||27 Mar 2012||Esaote S.P.A.||Measurement method of time varying events in a target body and a method for displaying measurement data of different parameters of a target in which time dependent events occur|
|US8287457 *||17 May 2007||16 Oct 2012||Koninklijke Philips Electronics N.V||3D echocardiographic shape analysis|
|US8585598||21 Dic 2011||19 Nov 2013||Inneroptic Technology, Inc.||Systems, methods, apparatuses, and computer-readable media for image guided surgery|
|US8641621||26 Ene 2011||4 Feb 2014||Inneroptic Technology, Inc.||Systems, methods, apparatuses, and computer-readable media for image management in image-guided medical procedures|
|US8670816||29 Ene 2013||11 Mar 2014||Inneroptic Technology, Inc.||Multiple medical device guidance|
|US8690776||9 Feb 2010||8 Apr 2014||Inneroptic Technology, Inc.||Systems, methods, apparatuses, and computer-readable media for image guided surgery|
|US8900149||2 Apr 2004||2 Dec 2014||Teratech Corporation||Wall motion analyzer|
|US9107698||7 Oct 2013||18 Aug 2015||Inneroptic Technology, Inc.||Image annotation in image-guided medical procedures|
|US20030013964 *||10 Jun 2002||16 Jan 2003||Steinar Bjaerum||Ultrasound display of tissue, tracking and tagging|
|US20050187474 *||8 Dec 2004||25 Aug 2005||Medison Co., Ltd.||Apparatus and method for displaying sectional planes of target object utilizing 3-dimensional ultrasound data|
|US20060281993 *||7 Jun 2005||14 Dec 2006||Gianni Pedrizzetti||Measurement method of time varying events in a target body and a method for displaying measurement data of different parameters of a target in which time dependent events occur|
|US20080312527 *||27 May 2008||18 Dec 2008||Jun Masumoto||Cardiac function analysis apparatus, method and program|
|U.S. Classification||600/440|
|International Classification||G01S7/52, G01S15/89|
|Cooperative Classification||G01S7/52071, A61B8/486, G01S15/8993, G01S7/52074, G01S7/52066|
|European Classification||A61B8/48H, G01S7/52S8B6, G01S15/89D9, G01S7/52S8B2D|
|30 Jun 2003||FPAY||Fee payment|
Year of fee payment: 8
|31 May 2007||FPAY||Fee payment|
Year of fee payment: 12
|14 Apr 2009||CC||Certificate of correction|