US20020012478A1 - Image processing electronic device for detecting dimensional variations - Google Patents

Image processing electronic device for detecting dimensional variations

Info

Publication number
US20020012478A1
US20020012478A1 (application US09/214,929)
Authority
US
United States
Prior art keywords
interest
image
image data
areas
sets
Prior art date
Legal status
Granted
Application number
US09/214,929
Other versions
US6373998B2 (en)
Inventor
Jean-Philippe Thirion
Guillaume Calmon
Current Assignee
Institut National de Recherche en Informatique et en Automatique INRIA
Original Assignee
Institut National de Recherche en Informatique et en Automatique INRIA
Priority date
Filing date
Publication date
Application filed by Institut National de Recherche en Informatique et en Automatique (INRIA)
Assigned to INRIA INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE. Assignors: CALMON, GUILLAUME; THIRION, JEAN-PHILIPPE
Publication of US20020012478A1
Application granted
Publication of US6373998B2
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • G06T3/14
    • G06T5/80
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10: TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S: TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S128/00: Surgery
    • Y10S128/92: Computer assisted medical diagnostics
    • Y10S128/922: Computer assisted medical diagnostics including image analysis

Definitions

  • the invention concerns the field of processing of comparable digital images, for the purpose of detecting (or determining) dimensional variations. They may be “two-dimensional” (2D) images, in which case the variation will be termed surface variation, or “three-dimensional” (3D) images, and in this case the variation will be termed volume variation.
  • 2D two-dimensional
  • 3D three-dimensional
  • the invention applies more particularly, but not exclusively, to images termed medical images, and especially to the analysis of comparable digital images of regions of the brain, in order to study areas of interest comprising, for example, lesions or tumours, or active anatomical structures such as the heart or the ventricles of the brain.
  • By comparable images there is meant images taken either of substantially identical regions of the same “subject” at different moments, or of substantially identical regions of two separate “subjects”, or even of a single image and the associated image symmetrized with respect to a plane (also termed “chiral”), when the region analyzed has a certain degree of symmetry.
  • nD n-dimensional
  • MRI nuclear magnetic resonance apparatuses
  • the 3D image of a region observed consists of a multiplicity of stacked 2D sections, in which the variations in intensity represent the proton density of the tissues.
  • segmentation techniques consist in delineating (or attributing a contour to) an area of interest on two images of an active region, which are spaced in time, then subtracting the “volumes” contained within the two contours in order to estimate the variation in volume of the area of interest within the time interval separating the two images.
  • volume measurement is carried out by counting reference volume elements (voxels), of very small size, contained in a closed contour of an area of interest, the dimension of which is generally very large compared with that of a voxel.
  • This counting can only be carried out by (semi-)automatic methods such as, for example, that termed “3D snakes”, which are difficult to put into practice for the non-specialist such as is generally the practitioner who carries out the analysis of the images.
  • the aim of the present invention is therefore to improve the situation in this field of processing of digital images of active regions.
  • an electronic image processing device which comprises:
  • registration means making it possible to determine a registration transformation between one of the images and the other, starting from the two sets of image data
  • sampling means operating according to this registration in order to re-sample a first of the two sets of image data into a third set of image data relating to the same image, and able to be superposed directly, sample by sample, on the second set of image data, and
  • processing means which operate starting from the second and third sets of image data in order to obtain therefrom at least one set of difference data, representing differences between superposable areas of interest of the images constituted respectively by the said second and third sets of image data.
  • the expression “difference” should be taken in the wider sense, that is to say that it may be a question of the appearance of a new area of interest, or of a modification/transformation of a known area of interest. More generally, any type of difference between the two images is concerned here.
  • the processing means comprise a calculation module to determine firstly a deformation vector field, from the second and third sets of image data, in such a manner as to make it possible to provide the set of difference data.
  • the processing means comprise first calculation means for applying to the deformation vector field at least a first operator so as to provide the set of difference data, which is then termed a first set of difference data.
  • the processing means may also comprise second calculation means for applying to the deformation vector field a second operator, different from the first operator, so as to provide another set of difference data, which is then termed a second set of difference data.
  • the processing means may additionally comprise third calculation means for applying to the deformation vector field a third operator, a composition of the first and second operators, so as to provide another set of difference data, which is then termed a third set of difference data.
  • the first and second operators are advantageously selected from a group comprising an operator of the modulus type and an operator based on partial derivatives, of the divergence or Jacobian type, for example.
  • the modulus type operator will provide information more particularly representing movements, while the operator based on partial derivatives will provide information representing more particularly growth or diminution (volume variation or mass effect).
  • the processing means may comprise detection means in order to transform each first, second and third set of difference data into a fourth set of image data forming a card.
  • the detection means will be arranged either to allow manual selection by a user, from one of the cards, of the areas of interest, or to carry out automatic selection of the areas of interest in one of the cards.
  • the detection means are capable of determining the closed contours which respectively delimit selected parts of the areas of interest. This determination may be effected by approximation by spheres or by ellipsoids.
  • the processing means may comprise, separately, or in parallel with the detection means, quantification means for determining, from the deformation vector field and the second and third sets of image data, volume data representing differences of the volume variation type, so as to form the set of difference data, which is then termed a set of volume data.
  • This determination of the volume variations in an area of interest preferably comprises:
  • the reference contour may be substantially identical to the shape of the area of interest, or may be spherical, or even ellipsoidal,
  • the quantification means calculate, in each area of interest, a multiplicity of volume variations of the selected area of interest, for reference contours which are closed and nested in one another, and comprised between the contour comparable with a point of zero dimension and the reference contour, then determine from this multiplicity of volume variations that which is the most probable. This makes it possible to improve further the accuracy of the volume variation calculation.
  • the processing means comprise both quantification means and detection means
  • the quantification means operate on closed contours determined by the detection means in the areas of interest selected by the latter. This makes it possible to reduce the processing time very significantly, without thereby reducing the quality and accuracy of the results obtained, since it is not necessary to carry out quantification everywhere in the image.
  • segmentation means can be provided which are intended to supply the quantification module with the areas of interest, from the second set of image data.
  • the invention applies more particularly to medical digital images, and most particularly to three-dimensional medical images of regions of a living being (animal or human), which regions comprise areas of interest including lesions or tumours, active or not, or active anatomical structures such as the heart or the ventricles of the brain.
  • the second image may be deduced from the first image by a symmetry with respect to a plane.
  • the invention also proposes a method for processing comparable digital images, which comprises the following steps:
  • FIGS. 1A and 1B are two views in section of the same region of a human brain affected by an active lesion, which are obtained at different times;
  • FIGS. 2A to 2D are processed images representing an area of interest in FIG. 1A, based on its position in this image 1A, after subtraction of the images 1A and 1B, after application to the deformation vector field of a first operator of the modulus type, after application to the deformation vector field of a second operator of the divergence type, and after application to the deformation vector field of a third operator produced from the first and second operators;
  • FIGS. 3A and 3B illustrate diagrammatically an area of interest before and after evolution of the central deformation type with change of intensity
  • FIGS. 4A and 4B illustrate diagrammatically an area of interest before and after evolution of the diffuse deformation type without change of intensity
  • FIGS. 5A and 5B illustrate diagrammatically an area of interest before and after evolution of the transformation type without displacement, but with variation of intensity
  • FIG. 6 is a flow chart illustrating the general operation of the device
  • FIG. 7 is a flow chart illustrating the operation of the detection module of the device.
  • FIG. 8 illustrates a graphic example of calculation of volume variation in the case of a points distribution of the network type
  • FIG. 9 illustrates a two-dimensional (2D) example of a family of closed and nested forms
  • FIG. 10 illustrates a three-dimensional (3D) example of deformation vector field
  • FIG. 11 is a diagram representing the estimation of the volume variation of a lesion according to the radius of a reference sphere which encompasses it.
  • The image of FIG. 1B will be called the first image, and the image of FIG. 1A the second image. In these two images there is framed by dash/dotted lines an area termed area of interest, containing an active lesion induced by a disease of the multiple sclerosis type.
  • FIGS. 1A and 1B in fact represent a two-dimensional (2D) part of a three-dimensional (3D) image of a region of the brain, the other parts forming with the 2D part illustrated a stack of 2D image sections. Such sections may be obtained, in particular, by magnetic resonance imaging (MRI).
  • MRI magnetic resonance imaging
  • Each image slice in fact constitutes an intensity card representing the proton density of the constituents of the region, here the tissues and lesions.
  • a three-dimensional image is consequently constituted by a set of digital image data, each image datum representing the position of an image voxel with respect to a three-dimensional point of reference, and the intensity of the voxel, which is generally between the values 0 and 1.
  • the image data form an ordered list (or table), and the position of the datum in this list implicitly provides the position co-ordinates.
  • the device according to the invention is suitable for processing such sets of image data representing, respectively, comparable digital images.
  • By comparable there is meant here images of the same region taken at different times.
  • it could be a question of images of identical regions of different patients, or of different subjects, or even of a first image of a region exhibiting a certain degree of symmetry and of a second “symmetrized” (or chiral) image.
  • the principal object of the device according to the invention is to process two sets of image data representing two digital images, at least one of which contains at least one area of interest including at least one active structure, in such a manner as to quantify the differences which might exist between the two images.
  • the modifications which may appear in an “active” area may be of several types. It may be a question (see FIGS. 3A and 3B) of a modification of the central deformation type with change of intensity.
  • the area of interest comprises healthy tissues T in the centre of which there is a lesion L, the volume of which increases, or decreases, in the course of time, thus causing displacement of the tissues.
  • the modification may also be of the diffuse deformation type without change of intensity.
  • the lesion L which is located in the centre of healthy tissues T is not visible, and only displacements of the tissues which surround it reveal its presence.
  • the modification may also be of the transformation type without displacement, but with change of intensity, as is illustrated in FIGS. 5A and 5B.
  • the lesion, visible or not, which is located at the centre of the healthy tissues T increases or decreases without causing displacement of the said healthy tissues. Combinations of these different types of modifications may of course also occur.
  • the device according to the invention comprises, for the purpose of processing the first and second sets of image data, a certain number of modules which co-operate with one another.
  • a registration module 10 is charged with receiving the first and second sets of image data in order to determine a registration transformation termed “rigid” TR between the first and second images.
  • the rigid registration is described in particular in Patent Application 92 03900 of the Applicant, and also in the publication “New feature points based on geometric invariance for 3D image registration”, in the International Journal of Computer Vision, 18(2), pp. 121-137, May 1996, by J-P. Thirion.
  • This rigid registration operation makes it possible to obtain a first superposition of the two initial images having an accuracy which may reach a tenth of an image volume element (or voxel).
  • the registration transformation TR makes it possible to pass from the image 1 in FIG. 1B to the image 2 in FIG. 1A (which here serves as a reference image).
  • the registration transformation applied to the image 1 then feeds a sampling module 20 intended to re-sample the image 1 processed by the registration transformation TR so as to form a third set of image data representing a third image able to be precisely superposed, sample by sample, on the second image (reference image).
  • this superposition is effective everywhere except in the areas which have undergone evolution (or transformation) from one image to the other (that is to say, here, the areas of interest comprising lesions).
  • the second and third sets of image data representing, respectively, the second image and the first image which has been processed by registration and re-sampling (or third image), are addressed to a processing module 30 , and more precisely to a deformation field calculation module 35 which the said processing module 30 comprises.
  • a deformation processing termed “non-rigid”, which is described in particular in the publication “Non-rigid matching using demons”, in Computer Vision and Pattern Recognition, CVPR'96, San Francisco, Calif., USA, June 1996, by J-P. Thirion.
  • This technique resembles the technique of optical flow, when the deformations considered are very small. It makes it possible to determine a deformation vector field D representing the displacement vector distribution (3D), during the passage from the second image to the third image (transform of the first image), based on each image element or voxel of the second image.
  • a deformation vector field D is illustrated in FIG. 10.
  • the deformation vector field D therefore indicates, by means of a vector for each image voxel, the direction and sense of displacement of the voxel, and also the variation in intensity undergone by this voxel associated with the said vector, when considering its passage from the image 3 (transform of the image 1) to the image 2 (reference image), based on that same image 2.
  • the deformation vector field D determined by the module 35 is used by a quantification module 60 intended to determine the volume variations of the areas of interest of the images, and preferably integrated with the processing module 30.
  • the processing which it carries out in order to do this will be explained with more particular reference to FIG. 8.
  • the quantification carried out by the quantification module 60 consists firstly of encompassing the active lesion, contained in an area of interest, within a reference shape FR delimited by a reference contour; this reference contour may either have a shape similar to that of the shape F of the active lesion, itself delimited by a closed contour (the calculation of which will be described in detail hereinafter), or be ellipsoidal, or even of spherical or cubic shape (as illustrated in FIG. 8).
  • the closed contour as well as the reference contour are topologically closed and orientated surfaces, in other words surfaces having an interior in the mathematical sense of the term (the shape F of the active lesion) and an exterior. Moreover, the deformation vector field D is assumed to be continuous and bijective.
  • the term D2,1 will be given to the deformation vector field D making it possible to pass from image 2 to image 1 (by way of its transform, image 3).
  • F1 and F2 are the respective shapes of the active lesions in the first and second images.
  • F1 and F2 belong to the same space E as D2,1.
  • F1 = D2,1(F2)
  • the shape F1 of the active lesion of the first image is equal to the transform by the deformation vector field D2,1 of the shape F2 of the same active lesion of the second image (reference image).
  • the reference shape FR which is determined by the quantification module 60 is selected such that F1 ⊂ FR and F2 ⊂ FR.
  • the volume VR of the reference shape FR is known, since it has been attributed by the quantification module 60.
  • the volume V2 may advantageously be evaluated by a stochastic method of the Monte Carlo type, which consists in taking NR points randomly within the shape FR, for a constant distribution density.
  • then N2, which is the number of points falling within the shape F2, is measured, and the relationship:
  • V2 = V(F2) ≅ (N2/NR) × VR
  • the sign “≅” means that (N2/NR) × VR tends towards V2 when NR tends towards infinity.
  • V1 = V(F1) ≅ (N1/NR) × VR
  • this point P belongs to the first shape F1 if, and only if, the transform of the said point P by the deformation vector field D1,2 (the inverse of D2,1) belongs to the transform by the same deformation vector field D1,2 of the first shape F1, which is equal to the second shape F2.
  • the number N1 of points which fall within the shape F1 is equal to the number of points taken randomly within the reference shape FR verifying the relationship D1,2(P) ∈ F2.
  • the deformation field D1,2 is not always continuous, but its representation may be discretized (denoted D̃1,2). In such a case, only those co-ordinates of the volume elements which correspond to points of a regular grid G, that is to say, the points D̃1,2(G), are available.
  • VR ≅ the number of points of the regular grid G which fall within the reference shape FR, multiplied by the volume of the mesh;
  • V2 ≅ the number of points of the regular grid G which fall within the second shape F2, multiplied by the volume of the mesh;
  • V1 ≅ the number of points of D̃1,2(G) which fall within the second shape F2, multiplied by the volume of the lattice mesh.
  • quantification consists in carrying out the following steps (see FIG. 8):
  • the number of cubic mesh nodes comprised within the sphere F forming the closed contour is equal to approximately 21 before the application of the deformation field D, and this number of nodes is equal to no more than approximately 9 after the application of the same deformation field D.
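As an illustration of the counting just described, the sketch below places a regular grid inside a cubic reference shape, counts the grid nodes lying inside the closed contour F2, displaces the nodes with a sampled field D1,2, counts again, and multiplies the difference of counts by the mesh volume. The grid extent, the spherical contour and the contracting toy field are assumptions made for the example, not details taken from the patent.

```python
import numpy as np

def volume_variation_on_grid(inside_f2, deform, spacing=1.0):
    """Estimate a volume variation by counting regular-grid nodes.

    inside_f2 : function mapping an (N, 3) array of points to a boolean
                mask, True when a point lies inside the closed contour F2.
    deform    : function mapping (N, 3) points of image 2 to their
                displaced positions (a sampled D_{1,2} field).
    spacing   : grid step; the mesh volume is spacing**3.
    """
    # Regular grid G covering the reference cube F_R (here [0, 16]^3).
    axis = np.arange(0.0, 16.0 + spacing, spacing)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

    mesh_volume = spacing ** 3
    n2 = np.count_nonzero(inside_f2(grid))          # nodes of G inside F2
    n1 = np.count_nonzero(inside_f2(deform(grid)))  # nodes of D(G) inside F2
    return (n1 - n2) * mesh_volume                  # estimate of V1 - V2

# Toy example: F2 is a sphere of radius 4 centred in the cube, and D_{1,2}
# contracts points towards the centre, so its pre-image F1 is larger than F2
# and the estimated V1 - V2 is positive (the lesion has shrunk over time).
centre = np.array([8.0, 8.0, 8.0])
inside_f2 = lambda p: np.linalg.norm(p - centre, axis=1) <= 4.0
contract = lambda p: centre + 0.75 * (p - centre)
print(volume_variation_on_grid(inside_f2, contract))
```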
  • the volume variations thus determined form a set of difference data, in which the differences are volume variations; the set is then termed a “set of volume data”.
  • This set may be put into the form of a set of image data with a view to displaying it on an intensity card, for example, based on the second image; the intensity differences represent the volume variation amplitudes.
  • a distance card may be computed, for example by means of the chamfer method, or by Gaussian smoothing, and a set of closed and nested shapes then defined in the form of iso-surfaces of the distance card.
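A minimal sketch of that construction is given below, assuming a binary lesion mask and using an exact Euclidean distance transform in place of the chamfer approximation mentioned above; thresholding the distance map at increasing levels yields the closed, nested shapes.

```python
import numpy as np
from scipy import ndimage

def nested_shapes_from_mask(lesion_mask, levels=(1.0, 2.0, 4.0, 8.0)):
    """Build a family of closed, nested shapes around a binary lesion mask.

    A Euclidean distance map stands in here for the chamfer method named in
    the text; thresholding it at increasing distances gives nested regions,
    each one containing the previous.
    """
    # Distance, in voxels, from every background voxel to the lesion.
    dist = ndimage.distance_transform_edt(~lesion_mask)
    # Level 0 is the lesion itself; each further level is a dilated shell.
    return [dist <= level for level in (0.0,) + tuple(levels)]

# Toy 3D mask: a small blob in a 32^3 volume.
mask = np.zeros((32, 32, 32), dtype=bool)
mask[14:18, 14:18, 14:18] = True
shapes = nested_shapes_from_mask(mask)
print([int(s.sum()) for s in shapes])   # strictly increasing voxel counts
```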
  • a volume variation calculation was made starting, in particular, from the closed contour of a lesion.
  • This closed lesion contour may be determined in three ways: by a manual or automatic segmentation module 70 , or directly by the quantification module 60 , or by a detection module 50 from difference data provided by a calculation module 40 .
  • the segmentation methods are well known to a person skilled in the art.
  • An example of such a method is described, for example, in the publication by Isaac COHEN, Laurent COHEN, and Nicholas AYACHE, “Using deformable surfaces to segment 3D images and infer differential structures”, in CVGIP: Image Understanding '92, September 1992.
  • This method consists in determining areas of interest directly from the second image.
  • the quantification module 60 is fed by the module for calculating the deformation field 35 and by the segmentation module 70 which provides the areas of interest in which quantification is to be carried out.
  • the segmentation module 70 may be integrated in the processing module 30 .
  • the calculation module 40 is intended to transform the deformation vector field D, provided by the module for determining the deformation field 35, into at least one set of difference data, preferably by the application of at least a first operator.
  • the calculation module 40 applies two operators, in parallel, to the deformation vector field D in order to determine a first and a second set of difference data.
  • the first and second operators are selected from a group comprising a modulus type operator and an operator based on partial derivatives.
  • the first operator may be of the modulus type (‖·‖) while the second operator is based on partial derivatives.
  • the modulus operator consists in transforming the vectors representing the field D into intensities ‖D‖ based on the second image, so as to form a first set of difference data which can then be transformed into a first “fourth” set of image data, forming an intensity card representing modifications of the tissue displacement type.
  • the operator based on partial derivatives is preferably of the divergence type (Div), but it may also be of the Jacobian type.
  • the application of such an operator to the deformation vector field D makes it possible to transform the vectors representing the field D into intensities Div D based on the second image, so as to form a second set of difference data which can then be transformed into a second “fourth” set of image data, forming another intensity card representing modifications of the volume variation type.
  • the sign of the divergence of the field D (Div D) at a given point makes it possible to indicate whether the lesion is in a growth phase or in a diminution phase.
  • This third operator provides a third set of difference data which can then be transformed into a third “fourth” set of image data, forming yet another intensity card representing, for each voxel based on the second image, both displacement areas and volume variation areas.
  • the device according to the invention may also comprise a comparison module 80 , dependent or not dependent on the processing module 30 , to provide another set of difference data from the subtraction of the second and third sets of image data.
  • This other set may also give an intensity card representing differences, in the first sense of the word, between the images 2 and 3 .
  • FIGS. 2A to 2D show, by way of comparison, the different intensity image cards obtained by direct subtraction of the second and third sets of image data, after application to the field D of a first operator of the modulus type, after application to the deformation vector field of a second operator of the divergence type, and after application to that same field D of a third operator produced from the first and second operators.
  • Starting from at least one of the fourth sets of image data, or more directly from the corresponding set of difference data, the device will make it possible to determine the closed contours of the lesions contained in the areas of interest.
  • Detection may be either automatic or manual (intervention of a technician or of the practitioner interested in the images). It is clear that in the manual case, detection/selection can be carried out only from the display of an intensity card (fourth set of image data). In either case, detection is made possible by a module for detecting areas of interest 50 which forms part of the processing module 30.
  • a user interface may be provided, such as, for example, a mouse, in order to make it easier to select from images of the processed deformation vector field D and from the second and third images.
  • the device according to the invention, and more particularly its detection module 50, is then capable of determining the closed contour of the lesion contained in the selected area or areas of interest.
  • the shape of the closed contour is either similar to that of the active lesion within the area of interest, or ellipsoidal, or even spherical.
  • the detection module 50 is arranged to search, among the neighbouring two-dimensional images of the three-dimensional stack forming the 3D image, for the parts comprising the active lesion.
  • the detection module 50 determines the different areas of interest and, consequently, determines a closed contour for each active lesion that they respectively contain, just as in the manual procedure.
  • this automatic detection of the areas of interest is carried out by means of a technique termed “connected components” (or connected-parts search), which is well known to a person skilled in the art.
  • the selection/determination of the areas of interest comprises first of all the production of a mask 51 from the second image (reference image), and the combination of this mask, by a logic operation of the “AND” type 52, with one of the sets of difference data resulting from the application to the deformation vector field D of at least one of the operators.
  • the result of this logic operation between a set of difference data (or the associated fourth set of image data) and the mask of the second image provides a “masked” image which makes it possible to locate, in the mask of the second image, the areas of difference determined by the application of the operator or operators to the deformation field.
  • the mask may correspond, for example, to the white matter of the brain.
  • once the masked image has been “de-noised”, a search is made for the connected parts which it contains 54, so as to determine the shapes of the lesions contained in the areas of interest, or an approximate spherical or ellipsoidal shape, from a calculation of moments of order 0 or order 1.
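The following sketch assembles those detection steps in the same order: masking, de-noising, search for connected parts, then moments of order 0 and 1 for each part. The median filter, the threshold value and the equivalent-volume sphere radius are illustrative choices, not details taken from the patent.

```python
import numpy as np
from scipy import ndimage

def detect_areas_of_interest(difference_map, tissue_mask, threshold=0.5):
    """Rough sketch of the detection step: mask, de-noise, connected parts.

    difference_map : e.g. |D| or Div D sampled on the reference image.
    tissue_mask    : binary mask of the tissue of interest (e.g. white matter).
    Returns, for every connected component, its centroid (order-1 moments)
    and the radius of a sphere of equivalent volume (order-0 moment).
    """
    masked = np.abs(difference_map) * tissue_mask                    # "AND"
    candidates = ndimage.median_filter(masked, size=3) > threshold   # de-noise
    labels, n = ndimage.label(candidates)                            # connected parts

    areas = []
    for i in range(1, n + 1):
        voxels = np.count_nonzero(labels == i)             # order-0 moment
        centroid = ndimage.center_of_mass(labels == i)     # order-1 moments
        radius = (3.0 * voxels / (4.0 * np.pi)) ** (1.0 / 3.0)
        areas.append({"centroid": centroid, "radius_vox": radius})
    return areas
```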
  • the closed contours of each active lesion and their location relative to the second image are then addressed to the quantification module 60 , when, of course, the device comprises one, so that quantification is carried out only from data corresponding to the areas of interest and more particularly to the closed contours contained therein. It is clear that the device according to the invention can function without the calculation module 40 and detection module 50 .
  • the main object of the detection of the areas of interest is to avoid the quantification of the volume variations being carried out on the entirety of a set of difference data.
  • quantification is carried out only on one or more parts (or areas of interest) of the sets (second and third) of image data. This makes it possible to reduce very significantly the processing time for quantification, without thereby reducing the quality and the accuracy of the results obtained.
  • the areas of interest may be obtained by the quantification module 60 from the second image.
  • quantification is carried out by means of a sphere of a given radius centred on each voxel, then the volume variation value thus measured (image datum) is attributed to the corresponding voxel of a new image.
  • the device according to the invention may be installed in a memory, for example a mass memory, of a work station, in the form of software.
  • the processing of two medical images obtained at different moments has been described above. But the processing may equally apply to images in another field, such as, for example, that of high precision welding. Moreover, the processing may also be carried out starting from a first image and from its image symmetrized with respect to a plane, when the first image is sufficiently symmetrical for this to be done.
  • a device comprising both calculation and detection means and quantification means. But it is clear that a device according to the invention may comprise only calculation means (application of one or more operators), or only calculation means and detection means, or even only quantification means.

Abstract

A device comprises means (10) for determining a registration transformation between a first set of data of a first image and a second set of data of a second image, means (20) for re-sampling the first set of data into a third set of data able to be superposed directly, sample by sample, on the second set of data, processing means (30) for determining, starting from the second and third set of data, a set of difference data representing differences between superposable areas of the images constituted by the second and third sets of data.

Description

  • The invention concerns the field of processing of comparable digital images, for the purpose of detecting (or determining) dimensional variations. They may be “two-dimensional” (2D) images, in which case the variation will be termed surface variation, or “three-dimensional” (3D) images, and in this case the variation will be termed volume variation. [0001]
  • The invention applies more particularly, but not exclusively, to images termed medical images, and especially to the analysis of comparable digital images of regions of the brain, in order to study areas of interest comprising, for example, lesions or tumours, or active anatomical structures such as the heart or the ventricles of the brain. By comparable images, there is meant images taken either of substantially identical regions of the same “subject” at different moments, or of substantially identical regions of two separate “subjects”, or even of a single image and the associated image symmetrized with respect to a plane (or also termed “chiral”), when the region analyzed has a certain degree of symmetry. [0002]
  • In many fields it is very important to make comparative analyses of regions in order to see their evolution over time. This is especially the case in the field of high precision welding. But it is even more the case in the medical field, where the detection of lesions and/or following the course of their evolution is absolutely essential in order to adapt a treatment to a patient or to carry out clinical tests, for example. By evolution, there is meant any modification of a region, whether it is of the deformation type (mass effect) and/or of the transformation type (structural modification without deformation). [0003]
  • In the medical field, a set of image data forming an n-dimensional (nD) image is obtained by means of such apparatus as X-ray scanners or nuclear magnetic resonance apparatuses (MRI), or more generally any type of apparatus capable of acquiring images with variations in intensity. Each elementary part of a region represented by an nD image is defined by n spatial co-ordinates and an intensity (measured magnitude). [0004]
  • Thus, in the case of an MRI, the 3D image of a region observed consists of a multiplicity of stacked 2D sections, in which the variations in intensity represent the proton density of the tissues. [0005]
  • Techniques are already known which make it possible to detect and/or estimate variations in volume in active regions: [0006]
  • S. A. Roll, A. C. F. Colchester, L. D. Griffin, P. E. Summers, F. Bello, B. Sharrack, and D. Leibfritz, “Volume estimation of synthetic multiple sclerosis lesions: An evaluation of methods”, in the 3rd Annual Meeting of the Society of Magnetic Resonance, p. 120, Nice, France, August 1994; and [0007]
  • C. Roszmanith, H. Handels, S. J. Pöppl, E. Rinast, and H. D. Weiss, “Characterization and classification of brain tumours in three-dimensional MR image sequences”, in Visualization in Biomedical Computing, VBC'96, Hamburg, Germany, September 1996. [0008]
  • These techniques, termed “segmentation” techniques, consist in delineating (or attributing a contour to) an area of interest on two images of an active region, which are spaced in time, then subtracting the “volumes” contained within the two contours in order to estimate the variation in volume of the area of interest within the time interval separating the two images. [0009]
  • These techniques are particularly difficult to put into practice in the case of 3D images, owing to the difficulty encountered when delineating the area of interest. Moreover, the volume measurement is carried out by counting reference volume elements (voxels), of very small size, contained in a closed contour of an area of interest, the dimension of which is generally very large compared with that of a voxel. This counting can only be carried out by (semi-)automatic methods such as, for example, that termed “3D snakes”, which are difficult to put into practice for the non-specialist such as is generally the practitioner who carries out the analysis of the images. [0010]
  • The result is that the uncertainty of the measurement of the volume of an area of interest is very often greater than the estimated variation in volume, which reduces the interest of such volume measurements to a considerable extent. The accuracy of these measurements is even poorer when man has to intervene, since the measurement is then dependent on the observer. [0011]
  • Moreover, the areas of interest are frequently difficult to detect, owing to the fact that the materials of which they consist are not always well contrasted in the images. [0012]
  • The aim of the present invention is therefore to improve the situation in this field of processing of digital images of active regions. [0013]
  • To this end, it proposes an electronic image processing device which comprises: [0014]
  • registration means making it possible to determine a registration transformation between one of the images and the other, starting from the two sets of image data, [0015]
  • sampling means operating according to this registration in order to re-sample a first of the two sets of image data into a third set of image data relating to the same image, and able to be superposed directly, sample by sample, on the second set of image data, and [0016]
  • processing means which operate starting from the second and third sets of image data in order to obtain therefrom at least one set of difference data, representing differences between superposable areas of interest of the images constituted respectively by the said second and third sets of image data. [0017]
  • Here, the expression “difference” should be taken in the wider sense, that is to say that it may be a question of the appearance of a new area of interest, or of a modification/transformation of a known area of interest. More generally, any type of difference between the two images is concerned here. [0018]
  • According to another feature of the invention, the processing means comprise a calculation module to determine firstly a deformation vector field, from the second and third sets of image data, in such a manner as to make it possible to provide the set of difference data. [0019]
  • Preferably, the processing means comprise first calculation means for applying to the deformation vector field at least a first operator so as to provide the set of difference data, which is then termed a first set of difference data. [0020]
  • The processing means may also comprise second calculation means for applying to the deformation vector field a second operator, different from the first operator, so as to provide another set of difference data, which is then termed a second set of difference data. [0021]
  • In this way, two sets of difference data are obtained which include complementary information on the areas of interest. [0022]
  • The processing means may additionally comprise third calculation means for applying to the deformation vector field a third operator, a composition of the first and second operators, so as to provide another set of difference data, which is then termed a third set of difference data. This makes it possible to obtain other information on the areas of interest, complementary to those obtained with a single operator, and moreover much less subject to noise interference, and consequently more precise, owing to the fact that the respective contributions of the “noise” generated by the application of these operators are decorrelated. [0023]
  • Consequently, the contrast of the areas of interest is significantly improved, which makes it possible to detect them more easily. [0024]
  • The first and second operators are advantageously selected from a group comprising an operator of the modulus type and an operator based on partial derivatives, of the divergence or Jacobian type, for example. [0025]
  • The modulus type operator will provide information more particularly representing movements, while the operator based on partial derivatives will provide information representing more particularly growth or diminution (volume variation or mass effect). [0026]
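A short sketch of these two operators applied to a sampled deformation field follows; the component-first array layout and the toy radial field are assumptions of the example.

```python
import numpy as np

def modulus_and_divergence(field, spacing=(1.0, 1.0, 1.0)):
    """Apply the two operators discussed above to a sampled deformation field.

    field : array of shape (3, Z, Y, X); field[c] is the c-th component of
            the displacement, expressed in the same units as `spacing`.
    Returns (|D|, Div D), two scalar volumes based on the reference image:
    the modulus highlights displacements, the divergence highlights local
    growth (positive) or shrinkage (negative).
    """
    modulus = np.sqrt(np.sum(field ** 2, axis=0))
    divergence = sum(
        np.gradient(field[c], spacing[c], axis=c) for c in range(3)
    )
    return modulus, divergence

# Toy field: radial expansion around the centre of a 16^3 volume,
# whose divergence is positive everywhere.
coords = np.indices((16, 16, 16), dtype=float)
field = 0.05 * (coords - 7.5)
mod, div = modulus_and_divergence(field)
print(mod.max(), div.mean())   # divergence close to 3 * 0.05 = 0.15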
  • According to yet another feature of the invention, the processing means may comprise detection means in order to transform each first, second and third set of difference data into a fourth set of image data forming a card. [0027]
  • Depending on the variants, the detection means will be arranged either to allow manual selection by a user, from one of the cards, of the areas of interest, or to carry out automatic selection of the areas of interest in one of the cards. [0028]
  • In the case of automatic selection, it is of advantage that this selection is effected by analysis of the connected-components type. [0029]
  • Advantageously, the detection means are capable of determining the closed contours which respectively delimit selected parts of the areas of interest. This determination may be effected by approximation by spheres or by ellipsoids. [0030]
  • According to yet another feature of the invention, the processing means may comprise, separately, or in parallel with the detection means, quantification means for determining, from the deformation vector field and the second and third sets of image data, volume data representing differences of the volume variation type, so as to form the set of difference data, which is then termed a set of volume data. [0031]
  • This determination of the volume variations in an area of interest preferably comprises: [0032]
  • the association with a closed contour, representing the area of interest, of a reference contour encompassing this closed contour; the reference contour may be substantially identical to the shape of the area of interest, or may be spherical, or even ellipsoidal, [0033]
  • the breaking down into elements, by means of a points distribution, of the space contained in the reference contour; this breaking down of the space may be effected by means of a regular points distribution, forming a lattice, or stochastically by means of a random points distribution, [0034]
  • the counting of the elements contained within the closed contour of the area of interest, [0035]
  • the application to this points distribution of the deformation vector field, without deforming the closed contour of the area of interest, [0036]
  • the counting of the remaining elements within the closed contour of the area of interest, and [0037]
  • the subtraction of the two numbers of elements so as to determine the image data of the set of volume data representing volume variations of the area of interest. [0038]
  • Preferably, the quantification means calculate, in each area of interest, a multiplicity of volume variations of the selected area of interest, for reference contours which are closed and nested in one another, and comprised between the contour comparable with a point of zero dimension and the reference contour, then determine from this multiplicity of volume variations that which is the most probable. This makes it possible to improve further the accuracy of the volume variation calculation. [0039]
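The paragraph above leaves open how the "most probable" variation is extracted from the family of nested reference contours; FIG. 11, which plots the estimated variation against the radius of the reference sphere, suggests reading it off the stable part of that curve. The sketch below follows that reading (plateau of the ΔV(r) curve); the plateau criterion and the helper estimate_dv named in the usage comment are assumptions of the example, not elements of the patent.

```python
import numpy as np

def most_probable_variation(delta_v_of_radius, radii):
    """Pick a single volume variation from a family of nested reference spheres.

    delta_v_of_radius : callable returning the estimated variation Delta V
                        when the reference contour is a sphere of given radius.
    radii             : increasing radii of the nested reference spheres.

    One plausible reading of "most probable" is used: the value taken on the
    flattest part (plateau) of the Delta V(r) curve; other estimators could
    be substituted.
    """
    values = np.array([delta_v_of_radius(r) for r in radii])
    slopes = np.abs(np.gradient(values, radii))
    return values[np.argmin(slopes)]

# Usage sketch with a hypothetical per-radius estimator (not from the patent):
# estimate = most_probable_variation(lambda r: estimate_dv(field, centre, r),
#                                    radii=np.arange(2.0, 20.0, 1.0))
```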
  • When the processing means comprise both quantification means and detection means, it is particularly advantageous that the quantification means operate on closed contours determined by the detection means in the areas of interest selected by the latter. This makes it possible to reduce the processing time very significantly, without thereby reducing the quality and accuracy of the results obtained, since it is not necessary to carry out quantification everywhere in the image. [0040]
  • Moreover, when the device does not comprise detection means, segmentation means can be provided which are intended to supply the quantification module with the areas of interest, from the second set of image data. [0041]
  • The invention applies more particularly to medical digital images, and most particularly to three-dimensional medical images of regions of a living being (animal or human), which regions comprise areas of interest including lesions or tumours, active or not, or active anatomical structures such as the heart or the ventricles of the brain. The second image may be deduced from the first image by a symmetry with respect to a plane. [0042]
  • The invention also proposes a method for processing comparable digital images, which comprises the following steps: [0043]
  • determining a registration transformation between one of the images and the other, starting from the two sets of image data, [0044]
  • re-sampling a first of the two sets of image data, representing the registration image, into a third set of image data relating to the same image and able to be superposed directly, sample by sample, on the second set of image data, [0045]
  • determining, from the second and third sets of image data, at least one set of difference data representing differences between superposable areas of the images constituted, respectively, by the said second and third sets of image data. [0046]
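Read as a data flow, the three steps chain together as in the sketch below; the four callables stand in for the registration, sampling, deformation-field and operator modules described further on, so only the order of operations is asserted here.

```python
def process_pair(image_1, image_2, register, resample, estimate_field, operator):
    """Data flow of the three claimed steps (sketch only).

    register       : returns a registration transformation between the images.
    resample       : re-samples image 1 through that transformation onto the
                     grid of image 2, giving the superposable "third image".
    estimate_field : computes a deformation field between images 2 and 3.
    operator       : turns the field into a set of difference data.
    """
    transform = register(image_1, image_2)           # registration step
    image_3 = resample(image_1, transform, image_2)  # re-sampling step
    field = estimate_field(image_2, image_3)         # deformation field
    return image_3, field, operator(field)           # set(s) of difference data
```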
  • Other features and advantages of the invention will be revealed on examination of the detailed description which follows, and of the appended drawings, in which:[0047]
  • FIGS. 1A and 1B are two views in section of the same region of a human brain affected by an active lesion, which are obtained at different times; [0048]
  • FIGS. 2A to 2D are processed images representing an area of interest in FIG. 1A, based on its position in this image 1A, after subtraction of the images 1A and 1B, after application to the deformation vector field of a first operator of the modulus type, after application to the deformation vector field of a second operator of the divergence type, and after application to the deformation vector field of a third operator produced from the first and second operators; [0049]
  • FIGS. 3A and 3B illustrate diagrammatically an area of interest before and after evolution of the central deformation type with change of intensity; [0050]
  • FIGS. 4A and 4B illustrate diagrammatically an area of interest before and after evolution of the diffuse deformation type without change of intensity; [0051]
  • FIGS. 5A and 5B illustrate diagrammatically an area of interest before and after evolution of the transformation type without displacement, but with variation of intensity; [0052]
  • FIG. 6 is a flow chart illustrating the general operation of the device; [0053]
  • FIG. 7 is a flow chart illustrating the operation of the detection module of the device; [0054]
  • FIG. 8 illustrates a graphic example of calculation of volume variation in the case of a points distribution of the network type; [0055]
  • FIG. 9 illustrates a two-dimensional (2D) example of a family of closed and nested forms; and [0056]
  • FIG. 10 illustrates a three-dimensional (3D) example of deformation vector field; and [0057]
  • FIG. 11 is a diagram representing the estimation of the volume variation of a lesion according to the radius of a reference sphere which encompasses it.[0058]
  • The drawings are essentially of a definite nature. Consequently they form an integral part of the present description. They may therefore serve not only to allow better understanding of the invention, but also to contribute to the definition of the latter. [0059]
  • Reference will be made hereinafter to the processing of medical digital images, and more particularly, but only by way of example, to images of regions of the brain of the type which are illustrated partially in FIGS. 1A and 1B and which have been obtained from the same human subject at an interval of approximately two months. [0060]
  • The image of FIG. 1B will be called the first image, and the image of FIG. 1A the second image. In these two images there is framed by dash/dotted lines an area termed area of interest, containing an active lesion induced by a disease of the multiple sclerosis type. [0061]
  • FIGS. 1A and 1B in fact represent a two-dimensional (2D) part of a three-dimensional (3D) image of a region of the brain, the other parts forming with the 2D part illustrated a stack of 2D image sections. Such sections may be obtained, in particular, by magnetic resonance imaging (MRI). Each image slice in fact constitutes an intensity card representing the proton density of the constituents of the region, here the tissues and lesions. [0062]
  • A three-dimensional image is consequently constituted by a set of digital image data, each image datum representing the position of an image voxel with respect to a three-dimensional point of reference, and the intensity of the voxel, which is generally between the values 0 and 1. In fact, to be more precise, the image data form an ordered list (or table), and the position of the datum in this list implicitly provides the position co-ordinates. [0063]
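Concretely, such a set of image data can be held as a 3D array in which a voxel's index supplies the implicit position and the stored value its intensity; the array shape and the sample voxel below are arbitrary.

```python
import numpy as np

# A 3D image as described: intensities in [0, 1], one value per voxel.
# The voxel's (z, y, x) index in the array plays the role of the implicit
# position in the ordered list mentioned above.
image = np.random.default_rng(0).random((64, 256, 256)).astype(np.float32)

z, y, x = 10, 128, 200          # position of one voxel
intensity = image[z, y, x]      # its intensity, between 0 and 1
print(intensity)
```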
  • The device according to the invention is suitable for processing such sets of image data representing, respectively, comparable digital images. By comparable, there is meant here images of the same region taken at different times. But, in other image processing applications, it could be a question of images of identical regions of different patients, or of different subjects, or even of a first image of a region exhibiting a certain degree of symmetry and of a second “symmetrized” (or chiral) image. [0064]
  • The principal object of the device according to the invention is to process two sets of image data representing two digital images, at least one of which contains at least one area of interest including at least one active structure, in such a manner as to quantify the differences which might exist between the two images. [0065]
  • As is illustrated diagrammatically in FIGS. 3 to 5, the modifications which may appear in an “active” area may be of several types. It may be a question (see FIGS. 3A and 3B) of a modification of the central deformation type with change of intensity. In this case, the area of interest comprises healthy tissues T in the centre of which there is a lesion L, the volume of which increases, or decreases, in the course of time, thus causing displacement of the tissues. The modification may also be of the diffuse deformation type without change of intensity. In this case, as illustrated in FIGS. 4A and 4B, the lesion L which is located in the centre of healthy tissues T is not visible, and only displacements of the tissues which surround it reveal its presence. The modification may also be of the transformation type without displacement, but with change of intensity, as is illustrated in FIGS. 5A and 5B. In this case, the lesion, visible or not, which is located at the centre of the healthy tissues T increases or decreases without causing displacement of the said healthy tissues. Combinations of these different types of modifications may of course also occur. [0066]
  • The device according to the invention comprises, for the purpose of processing the first and second sets of image data, a certain number of modules which co-operate with one another. [0067]
  • A registration module 10 is charged with receiving the first and second sets of image data in order to determine a registration transformation termed “rigid” TR between the first and second images. The rigid registration is described in particular in Patent Application 92 03900 of the Applicant, and also in the publication “New feature points based on geometric invariance for 3D image registration”, in the International Journal of Computer Vision, 18(2), pp. 121-137, May 1996, by J-P. Thirion. [0068]
  • This rigid registration operation makes it possible to obtain a first superposition of the two initial images having an accuracy which may reach a tenth of an image volume element (or voxel). In the example illustrated in FIG. 6, the registration transformation TR makes it possible to pass from the image 1 in FIG. 1B to the image 2 in FIG. 1A (which here serves as a reference image). [0069]
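The rigid registration itself relies on the feature-point method cited above, which is not reproduced here; the sketch below only illustrates the generic core of landmark-based rigid registration, the least-squares (SVD) estimation of a rotation R and translation t from already matched 3D points. The feature extraction and matching of the cited method are assumed to have been done beforehand.

```python
import numpy as np

def rigid_from_landmarks(points_1, points_2):
    """Least-squares rigid transform (R, t) mapping points_1 onto points_2.

    points_1, points_2 : (N, 3) arrays of matched 3D landmarks.
    This is the classical SVD (Kabsch) solution, shown only to illustrate
    what a rigid registration T_R produces.
    """
    c1, c2 = points_1.mean(axis=0), points_2.mean(axis=0)
    h = (points_1 - c1).T @ (points_2 - c2)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c2 - r @ c1
    return r, t                                      # x2 ~= R @ x1 + t
```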
  • The registration transformation applied to the image 1 then feeds a sampling module 20 intended to re-sample the image 1 processed by the registration transformation TR so as to form a third set of image data representing a third image able to be precisely superposed, sample by sample, on the second image (reference image). Obviously, this superposition is effective everywhere except in the areas which have undergone evolution (or transformation) from one image to the other (that is to say, here, the areas of interest comprising lesions). [0070]
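Given that rigid transform expressed in voxel coordinates, the re-sampling of image 1 onto the grid of image 2 can be sketched as below with trilinear interpolation. The (R, t) convention (image 1 towards image 2) follows the text, while the use of scipy's affine_transform and of first-order interpolation are assumptions of the example.

```python
import numpy as np
from scipy import ndimage

def resample_onto_reference(image_1, r, t, reference_shape):
    """Re-sample image 1 through the rigid transform (R, t) so that the
    result (the "third image") is superposable voxel-by-voxel on image 2.

    affine_transform pulls values back: for every output voxel x of the
    reference grid it reads image_1 at r_inv @ x - r_inv @ t, i.e. the
    inverse of the voxel-space transform is passed to it.
    """
    r_inv = np.linalg.inv(r)
    t_inv = -r_inv @ t
    return ndimage.affine_transform(image_1, r_inv, offset=t_inv,
                                    output_shape=reference_shape, order=1)
```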
  • The second and third sets of image data representing, respectively, the second image and the first image which has been processed by registration and re-sampling (or third image), are addressed to a processing module 30, and more precisely to a deformation field calculation module 35 which the said processing module 30 comprises. There is firstly applied to them a deformation processing termed “non-rigid”, which is described in particular in the publication “Non-rigid matching using demons”, in Computer Vision and Pattern Recognition, CVPR'96, San Francisco, Calif., USA, June 1996, by J-P. Thirion. [0071]
  • This technique resembles the technique of optical flow, when the deformations considered are very small. It makes it possible to determine a deformation vector field D representing the displacement vector distribution (3D), during the passage from the second image to the third image (transform of the first image), based on each image element or voxel of the second image. An example of a deformation vector field D is illustrated in FIG. 10. [0072]
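For orientation only, the sketch below implements a heavily reduced demons-like iteration: a per-voxel, optical-flow-style update of the displacement followed by Gaussian smoothing. The cited publication describes the actual method, whose refinements (multi-resolution, handling of bijectivity, and so on) are omitted; the iteration count and smoothing width are arbitrary.

```python
import numpy as np
from scipy import ndimage

def demons_like_field(fixed, moving, iterations=50, sigma=1.5):
    """Heavily reduced sketch of a demons-like estimation of the field D.

    fixed, moving : 3D arrays of identical shape (image 2 and image 3).
    Returns a displacement field of shape (3,) + fixed.shape, expressed in
    voxels of the fixed (reference) image.
    """
    field = np.zeros((3,) + fixed.shape)
    grad = np.stack(np.gradient(fixed))      # gradient of the reference image
    grad_sq = np.sum(grad ** 2, axis=0)
    coords = np.indices(fixed.shape, dtype=float)

    for _ in range(iterations):
        # Warp the moving image through the current field.
        warped = ndimage.map_coordinates(moving, coords + field, order=1)
        diff = warped - fixed
        # Optical-flow-like per-voxel update (classical demons force).
        step = diff / (grad_sq + diff ** 2 + 1e-9)
        field -= step * grad
        # Regularize by Gaussian smoothing of each field component.
        field = ndimage.gaussian_filter(field, sigma=(0.0, sigma, sigma, sigma))
    return field
```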
  • The deformation vector field D therefore indicates, by means of a vector for each image voxel, the direction and sense of displacement of the voxel, and also the variation in intensity undergone by this voxel associated with the said vector, when considering its passage from the image 3 (transform of the image 1) to the image 2 (reference image), based on that same image 2. [0073]
  • The deformation vector field D determined by the module 35 is used by a quantification module 60 intended to determine the volume variations of the areas of interest of the images, and preferably integrated with the processing module 30. The processing which it carries out in order to do this will be explained with more particular reference to FIG. 8. [0074]
  • The quantification carried out by the quantification module 60 consists firstly of encompassing the active lesion, contained in an area of interest, within a reference shape FR delimited by a reference contour; this reference contour may either have a shape similar to that of the shape F of the active lesion, itself delimited by a closed contour (the calculation of which will be described in detail hereinafter), or be ellipsoidal, or even of spherical or cubic shape (as illustrated in FIG. 8). [0075]
  • The closed contour as well as the reference contour are topologically closed and orientated surfaces, in other words surfaces having an interior in the mathematical sense of the term (the shape F of the active lesion) and an exterior. Moreover, the deformation vector field D is assumed to be continuous and bijective. [0076]
  • In the following, for reasons of convenience, the term D2,1 will be given to the deformation vector field D making it possible to pass from image 2 to image 1 (by way of its transform, image 3). [0077]
  • F[0078] 1 and F2 are the respective shapes of the active lesions in the first and second images. F1 and F2 belong to the same space E as D2,1. There is therefore the following relationship: F1=D2,1 (F2), which means that the shape F1 of the active lesion of the first image is equal to the transform by the deformation vector field D2,1 of the shape F2 of the same active lesion of the second image (reference image).
  • The reference shape FR which is determined by the quantification module 60 is selected such that F1 ⊂ FR and F2 ⊂ FR. The volume VR of the reference shape FR is known, since it has been attributed by the quantification module 60. [0079]
  • In order to determine the volume variation ΔV between the shapes F2 and F1, it is necessary to determine the respective volumes V1 and V2 of the active lesions of shapes F1 and F2. [0080]
  • The volume V2 may advantageously be evaluated by a stochastic method of the Monte Carlo type, which consists in taking NR points randomly within the shape FR, for a constant distribution density. [0081]
  • Then N2, which is the number of points falling within the shape F2, is measured, and the relationship: [0082]
  • V2=V(F2)≅(N2/NR)×VR
  • is obtained. [0083]
  • Here, the sign “≅” means that (N2/NR)×VR tends towards V2 when NR tends towards infinity. [0084]
  • Similarly, the number of points N1 which fall within the shape F1 is measured, and: [0085]
  • V1=V(F1)≅(N1/NR)×VR
  • is obtained. [0086]
  • Therefore, taking a point P belonging to the reference shape FR, this point P belongs to the first shape F1 if, and only if, the transform of the said point P by the deformation vector field D1,2 (equal to the inverse of D2,1) belongs to the transform by that same deformation vector field D1,2 of the first shape F1, which is equal to the second shape F2. The relationship [0087]
  • P ∈ F1 ⇔ D1,2(P) ∈ D1,2(F1) = F2
  • is obtained. [0088]
  • This relationship is made possible by the fact that the deformation vector field D2,1 is continuous and bijective or, in other words, that D1,2 is equal to the inverse of D2,1. [0089]
  • Consequently, the number N1 of points which fall within the shape F1 is equal to the number of points taken randomly within the reference shape FR verifying the relationship D1,2(P) ∈ F2. [0090]
  • The Applicant observed that it was more advantageous to use the latter property to evaluate N1, since it is then sufficient to determine a single shape, F2, and not the two shapes F1 and F2. [0091]
  • There is then deduced therefrom the volume variation ΔV of the active lesion observed on the first (third) and second images: [0092]
  • ΔV=V1−V2=(N1−N2)×VR/NR
  • In the above formula, it is possible to replace the ratio VR/NR by the constant distribution density d of points contained in FR, which is substantially equivalent to the said ratio VR/NR. [0093]
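By way of illustration, a minimal Python sketch of this Monte Carlo evaluation is given below. The membership test inside_f2 and the point mapping apply_d12 (the field D1,2 applied to points) are assumed to be supplied by the surrounding processing, and the reference shape FR is taken here as a simple bounding box; all names and values are assumptions of this sketch.

```python
import numpy as np

def volume_variation(inside_f2, apply_d12, box_min, box_max, n_points=200_000, seed=0):
    """Monte Carlo estimate of ΔV = V1 - V2 = (N1 - N2) × VR / NR,
    using the property P ∈ F1  <=>  D1,2(P) ∈ F2."""
    rng = np.random.default_rng(seed)
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    v_r = float(np.prod(box_max - box_min))              # volume of the reference shape FR
    points = rng.uniform(box_min, box_max, size=(n_points, 3))
    n2 = np.count_nonzero(inside_f2(points))             # points P falling within F2
    n1 = np.count_nonzero(inside_f2(apply_d12(points)))  # points with D1,2(P) in F2
    return (n1 - n2) * v_r / n_points

# Illustrative check: F2 a sphere of radius 10 and D1,2 a pure scaling by 1.1,
# so that F1 is a sphere of radius 10/1.1; the true ΔV is about -1040.
inside_f2 = lambda p: np.sum(p ** 2, axis=-1) < 100.0
apply_d12 = lambda p: 1.1 * p
print(volume_variation(inside_f2, apply_d12, [-12.0] * 3, [12.0] * 3))
```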
  • As is illustrated in FIG. 8, instead of a random (or stochastic) points distribution, it is possible to take a regular distribution forming a grid G defined in the space E. It is not of course obligatory for the elementary mesh of the grid G to be of the cubic type as illustrated in FIG. 8. Any type of lattice (network) may be envisaged. [0094]
  • For a regular lattice, the value of the volume variation ΔV tends towards the true value when the resolution of the grid (the volume of its mesh) tends towards the value 0. [0095]
  • In practice, the deformation field D1,2 is not always continuous, but its representation may be discretized (denoted D̄1,2). In such a case, only those co-ordinates of the volume elements which correspond to points of a regular grid G, that is to say the points D̄1,2(G), are available. [0096]
  • Under these conditions: [0097]
  • VR ≅ the number of points of the regular grid G which fall within the reference shape FR, multiplied by the volume of the mesh; [0098]
  • V2 ≅ the number of points of the regular grid G which fall within the second shape F2, multiplied by the volume of the mesh; and [0099]
  • V1 ≅ the number of points of D̄1,2(G) which fall within the second shape F2, multiplied by the volume of the lattice mesh. [0100]
  • In the case of a discretized representation D̄1,2(G) of the deformation field, it is also possible to use a stochastic distribution of points Pl, the co-ordinates of which are floating in the space E. It is then sufficient, in order to evaluate D1,2(Pl), to use the discretized field D̄1,2 and an n-linear interpolation of the discrete field within the mesh i of the grid G in which the point Pl falls. In this mesh i, the point Pl has co-ordinates αl, βl and γl, all between 0 and 1 (inclusive). The interpolation will be 2-linear (bilinear) in the case of a 2D image and 3-linear (trilinear) in the case of a 3D image. [0101]
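By way of illustration, a minimal Python sketch of this 3-linear (trilinear) interpolation is given below. The field layout (one 3-vector per grid node, stored as an array of shape (3, X, Y, Z)) and the helper name are assumptions of this sketch.

```python
import numpy as np

def interpolate_field(field, point):
    """Trilinearly interpolate a discretized displacement field at a floating
    point; `point` must lie strictly inside the grid so that a full 2x2x2
    neighbourhood of nodes exists around it."""
    i0 = np.floor(point).astype(int)              # lower corner of the mesh i
    a, b, g = np.asarray(point, dtype=float) - i0 # fractional coords (αl, βl, γl) in [0, 1]
    x, y, z = i0
    corners = field[:, x:x + 2, y:y + 2, z:z + 2] # the 8 corner vectors, shape (3, 2, 2, 2)
    wx = np.array([1 - a, a])
    wy = np.array([1 - b, b])
    wz = np.array([1 - g, g])
    weights = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
    return np.sum(corners * weights, axis=(1, 2, 3))   # interpolated vector D1,2(Pl)
```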
  • To sum up, quantification consists in carrying out the following steps (see FIG. 8): [0102]
  • firstly, associating with a closed contour F, representing an active lesion of an area of interest, a reference contour FR (square in the example in FIG. 8) which encompasses the closed contour F, [0103]
  • then, by means of a points distribution which may be stochastic or regular, breaking down into simple elements (for example into meshes) the space contained in the reference contour FR; [0104]
  • then counting the meshes (or elements, or mesh nodes) which are contained within the closed contour F of the area of interest; [0105]
  • then, applying to the points distribution (here the cubic mesh grid) the deformation vector field D1,2, without deforming the closed contour F; [0106]
  • then counting the meshes (or elements, or nodes) remaining within the closed contour F; [0107]
  • then carrying out the subtraction between the two numbers of meshes (or elements, or nodes) thus determined so as to determine the volume variation of the active lesion of the area of interest analyzed. [0108]
  • In the example illustrated in FIG. 8, the number of cubic mesh nodes comprised within the sphere F forming the closed contour is equal to approximately 21 before the application of the deformation field D, and this number of nodes is equal to no more than approximately 9 after the application of the same deformation field D. [0109]
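By way of illustration, a minimal Python sketch of this regular-grid counting is given below, mirroring the steps summarized above. The membership test inside_f (for the closed contour F), the point mapping apply_d12 and the lattice step are assumptions of this sketch.

```python
import numpy as np

def grid_volume_variation(inside_f, apply_d12, box_min, box_max, step=1.0):
    """Count lattice nodes inside F before and after applying D1,2 to the
    lattice (the contour F itself is not deformed), as in FIG. 8."""
    axes = [np.arange(lo, hi + step / 2, step) for lo, hi in zip(box_min, box_max)]
    nodes = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    mesh_volume = step ** 3
    n_before = np.count_nonzero(inside_f(nodes))            # e.g. ~21 in FIG. 8
    n_after = np.count_nonzero(inside_f(apply_d12(nodes)))  # e.g. ~9 in FIG. 8
    return (n_after - n_before) * mesh_volume               # volume variation ΔV
```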
  • The volume variations thus determined form a set of difference data, in which the differences are volume variations; the set is then termed a “set of volume data”. This set may be put into the form of a set of image data with a view to displaying it on an intensity card, for example, based on the second image; the intensity differences represent the volume variation amplitudes. [0110]
  • According to the invention, it is possible to improve the calculation of the volume variation of an area of interest. In order to do this, the quantification module 60 may be arranged to calculate volume variations Δi for a whole family of closed and nested shapes i comprised between a zero volume (comparable to a geometric point) and the volume VR of the reference shape FR which encompasses a closed contour of an area of interest (see FIG. 9). For each shape of the family, the number CG(i) of meshes (or elements) which fall between two successive surfaces i and i+1, which delimit a “shell”, is counted. The contributions of the shells are then summated as follows: NG(i) = Σl=1…i CG(l), that is to say NG(i+1) = NG(i) + CG(i+1). [0111]
  • Starting from a shape F, it is possible to calculate a distance card, for example by means of the chamfer method, or by Gaussian smoothing, and then define a set of closed and nested shapes in the form of iso-surfaces defined from the distance card. [0112]
  • This provides a volume variation curve of the type represented in FIG. 11. The maximum of this curve provides the most probable value of the volume variation of the active lesion analyzed. This makes it possible to improve the accuracy and robustness of this measurement of volume variation. [0113]
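By way of illustration, a minimal Python sketch of this refinement is given below. It uses a Euclidean distance transform rather than the chamfer method or Gaussian smoothing, and volume_variation_for_shape is an assumed wrapper around one of the counting schemes sketched earlier; these choices are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def best_volume_variation(mask_f, apply_d12, radii, volume_variation_for_shape):
    """Evaluate ΔV on a family of closed, nested shapes (iso-surfaces of a
    signed distance map of F) and keep the extremum of the resulting curve."""
    # signed distance: negative inside F, positive outside
    dist = (ndimage.distance_transform_edt(~mask_f)
            - ndimage.distance_transform_edt(mask_f))
    variations = []
    for r in radii:                              # r < 0: shrunk shape, r > 0: dilated shape
        nested_mask = dist <= r                  # one closed, nested shape of the family
        variations.append(volume_variation_for_shape(nested_mask, apply_d12))
    variations = np.asarray(variations)
    return variations[np.argmax(np.abs(variations))]   # extremum of the ΔV curve
```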
  • In the preceding description, a volume variation calculation was made starting, in particular, from the closed contour of a lesion. This closed lesion contour may be determined in three ways: by a manual or automatic segmentation module 70, or directly by the quantification module 60, or by a detection module 50 from difference data provided by a calculation module 40. [0114]
  • The segmentation methods are well known to a person skilled in the art. An example of such a method is described, for example, in the publication by Isaac COHEN, Laurent COHEN and Nicholas AYACHE, “Using deformable surfaces to segment 3D images and infer differential structures”, in CVGIP: Image Understanding '92, September 1992. This method consists in determining areas of interest directly from the second image. In the diagram illustrated in FIG. 6, when a detection module 50 is not used, the quantification module 60 is fed by the module 35 for calculating the deformation field and by the segmentation module 70 which provides the areas of interest in which quantification is to be carried out. The segmentation module 70 may be integrated in the processing module 30. [0115]
  • The calculation module 40 is intended to transform the deformation vector field D, provided by the module 35 for determining the deformation field, into at least one set of difference data, preferably by the application of at least a first operator. [0116]
  • Preferably, but this is in no way obligatory, the calculation module 40 applies two operators, in parallel, to the deformation vector field D in order to determine a first and a second set of difference data. Preferably, the first and second operators are selected from a group comprising a modulus type operator and an operator based on partial derivatives. Thus, the first operator may be of the modulus type (∥ ∥) while the second operator is based on partial derivatives. [0117]
  • The modulus operator consists in transforming the vectors representing the field D into intensities ∥D∥ based on the second image, so as to form a first set of difference data which can then be transformed into a first “fourth” set of image data, forming an intensity card representing modifications of the tissue displacement type. [0118]
  • The operator based on partial derivatives is preferably of the divergence type (Div), but it may also be of the Jacobian type. The application of such an operator to the deformation vector field D makes it possible to transform the vectors representing the field D into intensities Div D based on the second image, so as to form a second set of difference data which can then be transformed into a second “fourth” set of image data, forming another intensity card representing modifications of the volume variation type. When the second operator is of the divergence type, the sign of the divergence of the field D (Div D) at a given point makes it possible to indicate whether the lesion is in a growth phase or in a diminution phase. [0119]
  • The Applicant also observed, in particular in the characterization of the lesions induced by multiple sclerosis, that it was advantageous to apply to D a third operator, a composition of the first and second operators. In other words, it is of particular interest that the calculation module 40 of the processing module 30 effects the product of the modulus of the deformation vector field D and of the divergence of that same vector field D, that is to say, ∥D∥ * (Div D). The result of the application of this third operator provides a third set of difference data which can then be transformed into a third “fourth” set of image data, forming yet another intensity card representing, for each voxel based on the second image, both displacement areas and volume variation areas. [0120]
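By way of illustration, a minimal numpy sketch of the three operators is given below, for a field stored as one 3-vector per voxel (shape (3, X, Y, Z)). The finite differences via np.gradient stand in for whatever derivative scheme the actual device uses; the helper name is an assumption of this sketch.

```python
import numpy as np

def operator_maps(field):
    """Return the three intensity maps derived from the deformation field D:
    the modulus ||D||, the divergence Div D, and their product ||D|| * Div D."""
    modulus = np.sqrt(np.sum(field ** 2, axis=0))       # ||D||: displacement amplitude
    divergence = sum(np.gradient(field[i], axis=i)      # Div D = Σ ∂D_i/∂x_i
                     for i in range(field.shape[0]))
    product = modulus * divergence                      # ||D|| * (Div D)
    # The sign of Div D at a voxel distinguishes local growth from diminution,
    # as discussed in the description above.
    return modulus, divergence, product
```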
  • Moreover, since the respective digital noises of the first and second sets of difference data, obtained by application of the first and second operators, are generally decorrelated, the composition of their difference data makes it possible to eliminate the noise almost completely. This makes it possible to improve very significantly the contrast of the intensity card, compared with that obtained by application either of the first operator alone or of the second operator alone. [0121]
  • The device according to the invention may also comprise a comparison module 80, which may or may not be dependent on the processing module 30, to provide another set of difference data from the subtraction of the second and third sets of image data. This other set may also give an intensity card representing differences, in the first sense of the word, between the images 2 and 3. [0122]
  • FIGS. 2A to 2D show, by way of comparison, the different intensity image cards obtained by direct subtraction of the second and third sets of image data, after application to the field D of a first operator of the modulus type, after application to the deformation vector field of a second operator of the divergence type, and after application to that same field D of a third operator produced from the first and second operators. [0123]
  • These four intensity image cards, obtained from different fourth sets of image data, make it possible to obtain substantially complementary information, and consequently to display and/or characterize better the areas of interest, whether or not they include active lesions. [0124]
  • It is clear that the object of the transformations of the field D into a set of difference data, then into a fourth set of image data, is to allow the display, on a video screen or a work station terminal, of the differences (in the wider sense of the word) between the images (also called areas of interest), when the device according to the invention is incorporated therein. This incorporation may take place, for example, in the mass memory managed by the operating system of the work station which is operated by a technician or a practitioner. [0125]
  • Starting from at least one of the fourth sets of image data, or more directly from the corresponding set of difference data, the device will make it possible to determine the closed contours of the lesions contained in the areas of interest. [0126]
  • Detection may be either automatic or manual (intervention of a technician or of the practitioner interested in the images). It is clear that, in the manual case, detection/selection can be carried out only from the display of an intensity card (fourth set of image data). In either case, detection is made possible by a module 50 for detecting areas of interest, which forms part of the processing module 30. [0127]
  • When it is the technician who selects the areas of interest manually, a user interface may be provided, such as, for example, a mouse, in order to make it easier to select from images of the treated deformation vector field D and from the second and third images. The device according to the invention, and more particularly its detection module 50, is then capable of determining the closed contour of the lesion contained in the selected area or areas of interest. Depending on the variants, the shape of the closed contour is either similar to that of the active lesion within the area of interest, or ellipsoidal, or even spherical. [0128]
  • It is clear that, in the case of three-dimensional images, even if the selection of an area of interest is carried out on one of the two-dimensional images of the three-dimensional region analyzed, the detection module 50 is arranged to search, among the nearby two-dimensional images of the three-dimensional stack forming the 3D image, for the parts comprising the active lesion. [0129]
  • In the case of automatic selection, it is the detection module 50 which determines the different areas of interest and which, consequently, determines a closed contour for each active lesion that they respectively contain, just as in the manual procedure. [0130]
  • Preferably, this automatic detection of the areas of interest is carried out by means of a technique termed “by connex elements” (a connected-components search), which is well known to a person skilled in the art. [0131]
  • More precisely (see FIG. 7), the selection/determination of the areas of interest comprises first of all the production of a mask 51 from the second image (reference image), and the combination of this mask, by a logic operation of the “AND” type 52, with one of the sets of difference data resulting from the application to the deformation vector field D of at least one of the operators. The result of this logic operation between a set of difference data (or the associated fourth set of image data) and the mask of the second image provides a “masked” image which makes it possible to locate, in the mask of the second image, the areas of difference determined by the application of the operator or operators to the deformation field. It is therefore a procedure tending to allow the location of the different areas of difference (or areas of interest) relative to the second image. In the case of images of the brain, and more particularly of multiple sclerosis, the mask may correspond, for example, to the white matter of the brain. [0132]
  • The data constituting this masked image are then subjected to a processing 53 termed thresholding by hysteresis, which makes it possible to retain in the masked image all the connected components above a first selected minimum threshold which contain at least one point of intensity above a second threshold, higher than the first threshold. This allows the electronic noise of this masked image to be reduced. [0133]
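By way of illustration, a minimal Python sketch of the masking (logic “AND”) and hysteresis-thresholding steps is given below. The two threshold values are parameters to be chosen for the data at hand, and the scipy-based connected-component labelling is an assumption of this sketch rather than the device's actual implementation.

```python
import numpy as np
from scipy import ndimage

def mask_and_hysteresis(intensity_map, mask, low, high):
    """Restrict the intensity map to the mask, then keep only the connected
    components above `low` that contain at least one point above `high`."""
    masked = np.where(mask, intensity_map, 0.0)          # "AND" with the mask of image 2
    labels, n = ndimage.label(masked > low)              # connected components above low
    if n == 0:
        return np.zeros(mask.shape, dtype=bool)
    maxima = np.asarray(ndimage.maximum(masked, labels, index=np.arange(1, n + 1)))
    strong = np.flatnonzero(maxima > high) + 1           # labels containing a strong point
    return np.isin(labels, strong)                       # "de-noised" masked image
```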
  • Once the masked image is “de-noised”, a search 54 is made for the connected parts which it contains, so as to determine the shapes of the lesions contained in the areas of interest, or an approximated spherical or ellipsoidal shape, from a calculation of moments of order 0 or order 1. The closed contours of each active lesion and their location relative to the second image are then addressed to the quantification module 60, when, of course, the device comprises one, so that quantification is carried out only from data corresponding to the areas of interest and more particularly to the closed contours contained therein. It is clear that the device according to the invention can function without the calculation module 40 and the detection module 50. In fact, the main object of the detection of the areas of interest is to avoid the quantification of the volume variations being carried out on the entirety of a set of difference data. Thus, quantification is carried out only on one or more parts (or areas of interest) of the (second and third) sets of image data. This makes it possible to reduce very significantly the processing time for quantification, without thereby reducing the quality and the accuracy of the results obtained. [0134]
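By way of illustration, a minimal Python sketch of the connected-parts search and of the order-0 / order-1 moment computation is given below. Approximating each detected component by the sphere of equal volume centred on its centre of gravity is one of the simple shapes mentioned above; the helper names and the voxel-volume parameter are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def lesion_spheres(de_noised, voxel_volume=1.0):
    """For each connected part of the de-noised masked image, return an
    approximating sphere (centre, radius) derived from its moments."""
    labels, n = ndimage.label(de_noised)             # one label per candidate area of interest
    spheres = []
    for lab in range(1, n + 1):
        coords = np.argwhere(labels == lab)
        volume = len(coords) * voxel_volume          # order-0 moment (voxel count x voxel volume)
        centre = coords.mean(axis=0)                 # order-1 moments / order-0 moment
        radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)   # sphere of equal volume
        spheres.append((centre, radius))
    return spheres                                   # closed contours handed to the quantification step
```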
  • Similarly, in the absence of the calculation module 40 and detection module 50, or of the segmentation module 70, the areas of interest may be obtained by the quantification module 60 from the second image. In order to do this, for each voxel of the second image, quantification is carried out by means of a sphere of a given radius centred on the said voxel, then the volume variation value thus measured (image datum) is attributed to the corresponding voxel of a new image. [0135]
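By way of illustration, a minimal and deliberately unoptimized Python sketch of this fall-back quantification is given below. The sphere radius, the KD-tree-based counting and the helper names are assumptions of this sketch; a practical implementation would need to be considerably more efficient.

```python
import numpy as np
from scipy.spatial import cKDTree

def volume_variation_map(shape, apply_d12, radius=5.0, voxel_volume=1.0):
    """For each voxel of the second image, count grid nodes within a sphere of
    the given radius before and after applying D1,2 to the grid, and write the
    resulting volume variation to that voxel of a new image."""
    nodes = np.stack(np.indices(shape), axis=-1).reshape(-1, 3).astype(float)
    tree_before = cKDTree(nodes)                       # regular grid nodes
    tree_after = cKDTree(apply_d12(nodes))             # nodes displaced by D1,2
    n_before = [len(idx) for idx in tree_before.query_ball_point(nodes, r=radius)]
    n_after = [len(idx) for idx in tree_after.query_ball_point(nodes, r=radius)]
    delta = (np.asarray(n_after) - np.asarray(n_before)) * voxel_volume
    return delta.reshape(shape)                        # per-voxel volume variation image
```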
  • The device according to the invention may be installed in a memory, for example a mass memory, of a work station, in the form of software. [0136]
  • For information, it is stated that more detailed descriptive elements were filed on Feb. 11, 1997, under confidential cover, at the Société des Gens de Lettres, under reference No. 1997.02.0216/00216. This document, entitled “Deformation analysis to detect and quantify active lesions in 3D medical image sequences”, Research Report No. 3101 of INRIA, February 1997, authors Jean-Philippe Thirion and Guillaume Calmon, will be made public after the present Patent Application has been filed. [0137]
  • The invention is not limited to the embodiment described above, but encompasses all the variants which a person skilled in the art may develop within the framework of the claims which follow. [0138]
  • Thus, the processing of two medical images obtained at different moments has been described above. But the processing may equally apply to images in another field, such as, for example, that of high precision welding. Moreover, the processing may also be carried out starting from a first image and from its image symmetrized with respect to a plane, when the first image is sufficiently symmetrical for this to be done. [0139]
  • Moreover, a device has been described comprising both calculation and detection means and quantification means. But it is clear that a device according to the invention may comprise only calculation means (application of one or more operators), or only calculation means and detection means, or even only quantification means. [0140]
  • Finally, a device has been described in which the processing means calculate a deformation vector field from the second and third sets of image data in such a manner as to determine a set of difference data. But it is clear that another vector field, different from a deformation field, could be calculated. [0141]

Claims (31)

1. Electronic image processing device, intended to receive two sets of image data representing, respectively, two comparable digital images, characterized in that it comprises:
registration means (10) capable of determining, starting from these two sets of image data, a registration transformation (TR) between one of the images and the other,
sampling means (20) operating according to this registration in order to re-sample a first of the two sets of image data into a third set of image data relating to the same image, and able to be superposed directly, sample by sample, on the second set of image data, and
processing means (30) operating from the second and third sets of image data in order to take therefrom at least one set of difference data, representing differences between superposable areas of interest of the images constituted respectively by the said second and third sets of image data.
2. Device according to claim 1, characterized in that the processing means (30) are arranged to determine a deformation vector field from the second and third sets of image data in such a manner as to make it possible to provide the set of difference data.
3. Device according to claim 2, characterized in that the processing means (30) comprise first calculation means (40) capable of applying to the said deformation vector field at least a first operator in order to provide a set of difference data termed a first set of difference data.
4. Device according to one of claims 2 and 3, characterized in that the processing means (30) comprise second calculating means (40) capable of applying to the said deformation vector field a second operator, different from the first operator, in order to provide another set of difference data termed a second set of difference data.
5. Device according to claim 3 in combination with claim 4, characterized in that the processing means (30) comprise third calculating means (40) capable of applying to the said deformation vector field a third operator, a composition of the first and second operators, in order to provide another set of difference data termed a third set of difference data.
6. Device according to one of claims 3 to 5, characterized in that the first operator is selected from a group comprising an operator of the modulus type and an operator based on partial derivatives.
7. Device according to claim 6, characterized in that the second operator is selected from the said group comprising an operator of the modulus type and an operator based on partial derivatives.
8. Device according to either of claims 6 and 7, characterized in that the operator based on partial derivatives is of the divergence type.
9. Device according to either of claims 6 and 7, characterized in that the operator based on partial derivatives is of the Jacobian type.
10. Device according to one of claims 3 to 9, characterized in that the processing means (30) comprise detection means (50) capable of transforming the first, second and third sets of difference data into a fourth set of image data forming a card.
11. Device according to claim 10, characterized in that the detection means (50) are arranged to permit manual selection by a user, from the said card, of the areas of interest.
12. Device according to claim 10, characterized in that the detection means (50) are arranged to carry out automatic selection of the areas of interest in the said card.
13. Device according to claim 12, characterized in that selection is carried out by analysis of the connex elements type.
14. Device according to one of claims 10 to 13, characterized in that the detection means (50) are capable of determining for each selected area of interest a closed contour which delimits it.
15. Device according to claim 14, characterized in that the detection means (50) are capable of attributing a spherical shape to the closed contours of the selected areas of interest.
16. Device according to claim 14, characterized in that the detection means (50) are capable of attributing an ellipsoidal form to the closed contours of the selected areas of interest.
17. Device according to one of claims 2 to 16, characterized in that the processing means (30) comprise quantification means (60) capable of determining, starting from the deformation vector field and the second and third sets of image data, volume data representing differences of the volume variation type, so as to form the said set of difference data, which is then termed a set of volume data.
18. Device according to claim 17, characterized in that the quantification means (60) are arranged to:
associate with a closed contour, representing an area of interest, a reference contour encompassing the said closed contour,
break down into elements, by means of a points distribution, the space contained in the said reference contour,
count the elements contained within the closed contour of the area of interest,
apply to the said points distribution the deformation vector field, without deforming the said closed contour of the area of interest,
count the remaining elements within the closed contour of the area of interest, and
carry out subtraction between the two numbers of elements so as to determine the volume data of the set of volume data which represent volume variations of the area of interest.
19. Device according to claim 18, characterized in that the quantification means (60) are arranged to attribute a spherical shape to the reference contours of the areas of interest.
20. Device according to claim 19, characterized in that the quantification means (60) are arranged to attribute an ellipsoidal shape to the reference contours of the areas of interest.
21. Device according to one of claims 18 to 20, characterized in that the quantification means (60) are arranged to calculate, in each area of interest, a multiplicity of volume variations for reference contours, closed and nested in one another, and comprised between a contour comparable with a point of zero dimension and the selected reference contour, and to determine, from the said multiplicity of volume variations, a volume variation value which is the most probable for each area of interest.
22. Device according to one of claims 18 to 21, characterized in that the quantification means (60) are arranged to break down the space by means of a regular points distribution, forming a lattice.
23. Device according to one of claims 18 to 21, characterized in that the quantification means (60) are arranged to break down the space stochastically by means of a random points distribution.
24. Device according to one of claims 14 to 16 in combination with one of claims 18 to 23, characterized in that the quantification means operate on closed contours determined by the said detection means in the selected areas of interest.
25. Device according to one of claims 17 to 24, characterized in that it comprises segmentation means (70) capable of supplying the said areas of interest to the said quantification module from the second set of image data.
26. Device according to one of the preceding claims, characterized in that the comparable digital images are medical images.
27. Device according to claim 26, characterized in that the comparable digital images are three-dimensional medical images of regions of the brain of a living being.
28. Device according to one of the preceding claims, characterized in that the areas of interest are active anatomical structures.
29. Device according to one of claims 1 to 28, characterized in that the areas of interest are active lesions.
30. Device according to one of the preceding claims, characterized in that the second image is an image deduced from the first image by symmetry with respect to a plane.
31. Method for processing two sets of image data representing, respectively, two comparable digital images, characterized in that it comprises the following steps:
determining a registration transformation between one of the images and the other, starting from the two sets of image data,
re-sampling a first of the two sets of image data, representing the registered image, into a third set of image data relating to the same image and able to be superposed directly, sample by sample, on the second set of image data,
determining from the second and third sets of image data at least one set of difference data representing differences between superposable areas of the images constituted respectively by the said second and third sets of image data.
US09/214,929 1997-05-21 1998-05-15 Image processing electronic device for detecting dimensional variations Expired - Fee Related US6373998B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR9706190 1997-05-21
FR9706190A FR2763721B1 (en) 1997-05-21 1997-05-21 ELECTRONIC IMAGE PROCESSING DEVICE FOR DETECTING DIMENSIONAL VARIATIONS
PCT/FR1998/000978 WO1998053426A1 (en) 1997-05-21 1998-05-15 Image processing electronic device for detecting dimensional variations

Publications (2)

Publication Number Publication Date
US20020012478A1 true US20020012478A1 (en) 2002-01-31
US6373998B2 US6373998B2 (en) 2002-04-16

Family

ID=9507067

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/214,929 Expired - Fee Related US6373998B2 (en) 1997-05-21 1998-05-15 Image processing electronic device for detecting dimensional variations

Country Status (6)

Country Link
US (1) US6373998B2 (en)
EP (1) EP0927405B1 (en)
CA (1) CA2261069A1 (en)
DE (1) DE69811049T2 (en)
FR (1) FR2763721B1 (en)
WO (1) WO1998053426A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040120577A1 (en) * 2002-10-06 2004-06-24 Igor Touzov Digital video based array detector for parallel process control and radiation driver for micro-scale devices
US20050075846A1 (en) * 2003-09-22 2005-04-07 Hyeung-Yun Kim Methods for monitoring structural health conditions
US20060257027A1 (en) * 2005-03-04 2006-11-16 Alfred Hero Method of determining alignment of images in high dimensional feature space
US20060287842A1 (en) * 2003-09-22 2006-12-21 Advanced Structure Monitoring, Inc. Methods of networking interrogation devices for structural conditions
US7536912B2 (en) 2003-09-22 2009-05-26 Hyeung-Yun Kim Flexible diagnostic patches for structural health monitoring
US20090175519A1 (en) * 2008-01-09 2009-07-09 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program storage medium
US20100021082A1 (en) * 2008-07-24 2010-01-28 Siemens Medical Solutions Usa, Inc. Interactive Manual Deformable Registration of Images
US7729035B2 (en) 2003-09-22 2010-06-01 Hyeung-Yun Kim Acousto-optic modulators for modulating light signals
US20100284582A1 (en) * 2007-05-29 2010-11-11 Laurent Petit Method and device for acquiring and processing images for detecting changing lesions
CN102037492A (en) * 2008-05-23 2011-04-27 澳大利亚国立大学 Image data processing
US20140228676A1 (en) * 2011-05-17 2014-08-14 Brainlab Ag Determination of a physically-varying anatomical structure
US20170128032A1 (en) * 2015-11-06 2017-05-11 jung diagnostics GmbH Imaging-based biomarker for characterizing the structure or function of human or animal brain tissue and related uses and methods

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847490B1 (en) * 1997-01-13 2005-01-25 Medispectra, Inc. Optical probe accessory device for use in vivo diagnostic procedures
EP1161178A2 (en) * 1998-12-23 2001-12-12 Medispectra Inc. Systems and methods for optical examination of samples
EP1139864A1 (en) * 1998-12-23 2001-10-10 Medispectra Inc. Optical methods and systems for cervical screening
US7212201B1 (en) 1999-09-23 2007-05-01 New York University Method and apparatus for segmenting an image in order to locate a part thereof
US7260248B2 (en) * 1999-12-15 2007-08-21 Medispectra, Inc. Image processing using measures of similarity
US7187810B2 (en) * 1999-12-15 2007-03-06 Medispectra, Inc. Methods and systems for correcting image misalignment
US20020007122A1 (en) * 1999-12-15 2002-01-17 Howard Kaufman Methods of diagnosing disease
GB0009668D0 (en) * 2000-04-19 2000-09-06 Univ Manchester Non-Parametric image subtraction using grey level scattergrams
US6631202B2 (en) * 2000-12-08 2003-10-07 Landmark Graphics Corporation Method for aligning a lattice of points in response to features in a digital image
US6839661B2 (en) * 2000-12-15 2005-01-04 Medispectra, Inc. System for normalizing spectra
US6701016B1 (en) * 2000-12-22 2004-03-02 Microsoft Corporation Method of learning deformation models to facilitate pattern matching
US7282723B2 (en) * 2002-07-09 2007-10-16 Medispectra, Inc. Methods and apparatus for processing spectral data for use in tissue characterization
US7136518B2 (en) * 2003-04-18 2006-11-14 Medispectra, Inc. Methods and apparatus for displaying diagnostic data
US7309867B2 (en) * 2003-04-18 2007-12-18 Medispectra, Inc. Methods and apparatus for characterization of tissue samples
US6818903B2 (en) * 2002-07-09 2004-11-16 Medispectra, Inc. Method and apparatus for identifying spectral artifacts
US7459696B2 (en) * 2003-04-18 2008-12-02 Schomacker Kevin T Methods and apparatus for calibrating spectral data
US20040208390A1 (en) * 2003-04-18 2004-10-21 Medispectra, Inc. Methods and apparatus for processing image data for use in tissue characterization
US7469160B2 (en) * 2003-04-18 2008-12-23 Banks Perry S Methods and apparatus for evaluating image focus
US6768918B2 (en) * 2002-07-10 2004-07-27 Medispectra, Inc. Fluorescent fiberoptic probe for tissue health discrimination and method of use thereof
WO2005079306A2 (en) * 2004-02-13 2005-09-01 University Of Chicago Method, system, and computer software product for feature-based correlation of lesions from multiple images
US7711405B2 (en) * 2004-04-28 2010-05-04 Siemens Corporation Method of registering pre-operative high field closed magnetic resonance images with intra-operative low field open interventional magnetic resonance images
DE102004059133B4 (en) * 2004-12-08 2010-07-29 Siemens Ag Method for supporting an imaging medical examination method
WO2007033206A2 (en) 2005-09-13 2007-03-22 Veran Medical Technologies, Inc. Apparatus and method for image guided accuracy verification
US20070066881A1 (en) 2005-09-13 2007-03-22 Edwards Jerome R Apparatus and method for image guided accuracy verification
US7945117B2 (en) * 2006-08-22 2011-05-17 Siemens Medical Solutions Usa, Inc. Methods and systems for registration of images
US8433159B1 (en) * 2007-05-16 2013-04-30 Varian Medical Systems International Ag Compressed target movement model using interpolation
US20100207962A1 (en) * 2009-02-02 2010-08-19 Calgary Scientific Inc. Image data transmission from GPU to system memory
US10699469B2 (en) 2009-02-03 2020-06-30 Calgary Scientific Inc. Configurable depth-of-field raycaster for medical imaging
US9082191B2 (en) 2009-09-25 2015-07-14 Calgary Scientific Inc. Level set segmentation of volume data
EP2605693B1 (en) 2010-08-20 2019-11-06 Veran Medical Technologies, Inc. Apparatus for four dimensional soft tissue navigation
WO2012138871A2 (en) 2011-04-08 2012-10-11 Algotec Systems Ltd. Image analysis for specific objects
CA2840310A1 (en) 2011-06-29 2013-01-03 Calgary Scientific Inc. Method for cataloguing and accessing digital cinema frame content
EP2816966B1 (en) 2012-02-22 2023-10-25 Veran Medical Technologies, Inc. Steerable surgical catheter comprising a biopsy device at the distal end portion thereof
US10088658B2 (en) * 2013-03-18 2018-10-02 General Electric Company Referencing in multi-acquisition slide imaging
US20150305650A1 (en) 2014-04-23 2015-10-29 Mark Hunter Apparatuses and methods for endobronchial navigation to and confirmation of the location of a target tissue and percutaneous interception of the target tissue
US20150305612A1 (en) 2014-04-23 2015-10-29 Mark Hunter Apparatuses and methods for registering a real-time image feed from an imaging device to a steerable catheter
GB201416416D0 (en) * 2014-09-17 2014-10-29 Biomediq As Bias correction in images

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5185809A (en) * 1987-08-14 1993-02-09 The General Hospital Corporation Morphometric analysis of anatomical tomographic data
US5647360A (en) * 1995-06-30 1997-07-15 Siemens Corporate Research, Inc. Digital subtraction angiography for 3D diagnostic imaging
US5768413A (en) * 1995-10-04 1998-06-16 Arch Development Corp. Method and apparatus for segmenting images using stochastically deformable contours
AU2928097A (en) * 1996-04-29 1997-11-19 Government Of The United States Of America, As Represented By The Secretary Of The Department Of Health And Human Services, The Iterative image registration process using closest corresponding voxels
US6009212A (en) * 1996-07-10 1999-12-28 Washington University Method and apparatus for image registration
US6044181A (en) * 1997-08-01 2000-03-28 Microsoft Corporation Focal length estimation method and apparatus for construction of panoramic mosaic images

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040120577A1 (en) * 2002-10-06 2004-06-24 Igor Touzov Digital video based array detector for parallel process control and radiation driver for micro-scale devices
US7590510B2 (en) 2003-09-22 2009-09-15 Advanced Structure Monitoring, Inc. Systems and methods for identifying damage in a structure
WO2005031502A3 (en) * 2003-09-22 2007-03-29 Kim Hyeung-Yun Methods for monitoring structural health conditions
US7596470B2 (en) 2003-09-22 2009-09-29 Advanced Structure Monitoring, Inc. Systems and methods of prognosticating damage for structural health monitoring
US20060287842A1 (en) * 2003-09-22 2006-12-21 Advanced Structure Monitoring, Inc. Methods of networking interrogation devices for structural conditions
US7729035B2 (en) 2003-09-22 2010-06-01 Hyeung-Yun Kim Acousto-optic modulators for modulating light signals
JP2007511741A (en) * 2003-09-22 2007-05-10 ヒョン−ユン,キム Structural health status monitoring method
US7286964B2 (en) 2003-09-22 2007-10-23 Advanced Structure Monitoring, Inc. Methods for monitoring structural health conditions
WO2005031502A2 (en) * 2003-09-22 2005-04-07 Kim Hyeung-Yun Methods for monitoring structural health conditions
US7668665B2 (en) 2003-09-22 2010-02-23 Advanced Structure Monitoring, Inc. Methods of networking interrogation devices for structural conditions
US7536912B2 (en) 2003-09-22 2009-05-26 Hyeung-Yun Kim Flexible diagnostic patches for structural health monitoring
US7584075B2 (en) 2003-09-22 2009-09-01 Advanced Structure Monitoring, Inc. Systems and methods of generating diagnostic images for structural health monitoring
US20050075846A1 (en) * 2003-09-22 2005-04-07 Hyeung-Yun Kim Methods for monitoring structural health conditions
US20060257027A1 (en) * 2005-03-04 2006-11-16 Alfred Hero Method of determining alignment of images in high dimensional feature space
US7653264B2 (en) 2005-03-04 2010-01-26 The Regents Of The University Of Michigan Method of determining alignment of images in high dimensional feature space
US20100284582A1 (en) * 2007-05-29 2010-11-11 Laurent Petit Method and device for acquiring and processing images for detecting changing lesions
US9147098B2 (en) * 2008-01-09 2015-09-29 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program storage medium
JP2009160314A (en) * 2008-01-09 2009-07-23 Canon Inc Image processing apparatus, image processing method, and computer program
US20090175519A1 (en) * 2008-01-09 2009-07-09 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program storage medium
CN102037492A (en) * 2008-05-23 2011-04-27 澳大利亚国立大学 Image data processing
US20100021082A1 (en) * 2008-07-24 2010-01-28 Siemens Medical Solutions Usa, Inc. Interactive Manual Deformable Registration of Images
US8165425B2 (en) * 2008-07-24 2012-04-24 Siemens Medical Solutions Usa, Inc. Interactive manual deformable registration of images
US20140228676A1 (en) * 2011-05-17 2014-08-14 Brainlab Ag Determination of a physically-varying anatomical structure
US9449383B2 (en) * 2011-05-17 2016-09-20 Brainlab Ag Determination of a physically-varying anatomical structure
US20170128032A1 (en) * 2015-11-06 2017-05-11 jung diagnostics GmbH Imaging-based biomarker for characterizing the structure or function of human or animal brain tissue and related uses and methods
US10638995B2 (en) * 2015-11-06 2020-05-05 jung diagnostics GmbH Imaging-based biomarker for characterizing the structure or function of human or animal brain tissue and related uses and methods

Also Published As

Publication number Publication date
FR2763721B1 (en) 1999-08-06
CA2261069A1 (en) 1998-11-26
FR2763721A1 (en) 1998-11-27
EP0927405A1 (en) 1999-07-07
DE69811049T2 (en) 2003-09-04
DE69811049D1 (en) 2003-03-06
WO1998053426A1 (en) 1998-11-26
US6373998B2 (en) 2002-04-16
EP0927405B1 (en) 2003-01-29

Similar Documents

Publication Publication Date Title
US6373998B2 (en) Image processing electronic device for detecting dimensional variations
US6754374B1 (en) Method and apparatus for processing images with regions representing target objects
US6909794B2 (en) Automated registration of 3-D medical scans of similar anatomical structures
EP1941453B1 (en) Method of registering images, algorithm for carrying out the method of registering images, a program for registering images using the said algorithm and a method of treating biomedical images to reduce imaging artefacts caused by object movement
Saha et al. Fuzzy distance transform: theory, algorithms, and applications
US9262827B2 (en) Lung, lobe, and fissure imaging systems and methods
US4914589A (en) Three-dimensional images obtained from tomographic data using a variable threshold
US5830141A (en) Image processing method and device for automatic detection of regions of a predetermined type of cancer in an intensity image
Zhang et al. A three-dimensional fractal analysis method for quantifying white matter structure in human brain
US7136516B2 (en) Method and system for segmenting magnetic resonance images
WO2007026598A1 (en) Medical image processor and image processing method
MXPA02001035A (en) Automated image fusion alignment system and method.
US20070160276A1 (en) Cross-time inspection method for medical image diagnosis
US4953087A (en) Three-dimensional images obtained from tomographic data having unequally spaced slices
JP4823204B2 (en) Medical image processing device
Kapouleas Automatic detection of white matter lesions in magnetic resonance brain images
Kawata et al. An approach for detecting blood vessel diseases from cone-beam CT image
Dong et al. Multiresolution cube propagation for 3-D ultrasound image reconstruction
CN1836258B (en) Method and system for using structure tensors to detect lung nodules and colon polyps
US8165375B2 (en) Method and system for registering CT data sets
WO2008002325A2 (en) Cross-time inspection method for medical diagnosis
EP1141894B1 (en) Method and apparatus for processing images with regions representing target objects
Friboulet et al. Three-dimensional curvature features of the left ventricle from CT volumic images
Kumar A method of segmentation in 3d medical image for selection of region of interest (ROI)
Jamwal et al. Classification of Multimodal Brain Images employing a novel Ridgempirical Transform

Legal Events

Date Code Title Description
AS Assignment

Owner name: INRIA INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQ

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:THIRION, JEAN-PHILIPE;CALMON, GUILLAUME;REEL/FRAME:010136/0707

Effective date: 19981229

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20060416