WO2010134013A1 - Interactive image registration - Google Patents

Interactive image registration

Info

Publication number
WO2010134013A1
WO2010134013A1 (PCT/IB2010/052169)
Authority
WO
WIPO (PCT)
Prior art keywords
images
image
mask
input
registering
Prior art date
Application number
PCT/IB2010/052169
Other languages
French (fr)
Inventor
Jens Von Berg
Ulrich Neitzel
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. and Philips Intellectual Property & Standards Gmbh
Publication of WO2010134013A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20101 Interactive definition of point of interest, landmark or seed

Definitions

  • the invention relates to interactively registering at least two input images.
  • Image registration occurs, for example, in temporal subtraction of thorax radiography images.
  • the images are registered with each other such that the anatomical objects are aligned, and only the clinically relevant changes are visible in the subtraction image.
  • it is not always possible to compensate for the shifts of all the objects, for example if two overlapping objects have moved in different directions. In such a case, subtraction artifacts remain.
  • the displacement and deformation of structure outside the lung field may be different from, and not related to, the displacement of the ribs in the lung field.
  • these image regions are excluded in order not to interfere with proper alignment of the ribs themselves. Consequently, an approximate segmentation of the lung field perimeter is performed.
  • a system for registering at least two input images.
  • a system comprises a graphical user interface for receiving a user input indicating a portion of at least one of the input images; segmenting means for segmenting an object represented by the input image, based on the portion; mask identifying means for identifying a mask based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based; and registration means for registering the images, taking into account the mask, for obtaining at least two registered images.
  • the input images may comprise projection images.
  • different overlapping objects may give contradicting registration information, for example if the overlapping objects have moved differently between the two image acquisitions. Consequently, it is especially advantageous in the case of projection images if the user can select on which segmented object the mask should be based.
  • the invention applies also to, for example, three-dimensional images.
  • different objects may give contradicting registration information.
  • the system may comprise subtracting means for subtracting one registered image from another registered image. Subtracted images thus obtained are highly sensitive to registration errors.
  • the system set forth allows the user to interactively choose on which objects in the image the registration mask should be based. This may be used to indicate which objects are to be taken into account in the registration and/or to selectively reduce artifacts.
  • the image may comprise a medical image.
  • the object may comprise an anatomical object.
  • applications involving medical images would benefit from the system set forth.
  • the mask identifying means may be arranged for substantially or completely including the segmented object in the portion of the input image whereon the registering is to be based. This means that the mask is indicative of a portion of the input image substantially or completely including the segmented object.
  • the mask identifying means may also be arranged for creating a mask indicative of a portion of the image substantially corresponding to the segmented object. These examples allow the user to indicate an image region corresponding to an object which is to be taken into account in the registration.
  • the mask identifying means may be arranged for substantially or completely excluding the segmented object in the portion of the input image whereon the registering is to be based. This allows the user to indicate an object which is not to be taken into account in the registration.
  • the graphical user interface may be arranged for visualizing an output image based on the registered images.
  • the graphical user interface may be further arranged for receiving further input indicating a further portion of at least one of the input images.
  • the segmenting means may be arranged for segmenting a further object represented by the input image based on the further portion.
  • the mask identifying means may be arranged for identifying a further mask based on the further object.
  • the registration means may be arranged for registering the images taking into account the further mask, for obtaining at least two further registered images. This allows interactively refining or changing the registration by identifying a further object via the user interface.
  • the graphical user interface may be arranged for receiving the further input by enabling a user to indicate at least one position in the visualized output image and identifying the further portion of at least one of the input images based on the position in the visualized output image. This allows the user to interact with the final result, which is efficient because the user is interested in inspecting the final result anyway.
  • the mask identifying means may be arranged for identifying the further mask based on both the object and the further object. For example, starting from the original mask, the mask identifying means may exclude the further object or remove the further object from the mask.
  • the mask may define a weight distribution, the weight distribution indicating how heavily different image elements of an input image should be taken into account by the registration means. This allows a smoother weighting compared to a binary mask.
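  • To illustrate how such a weight distribution might enter the similarity computation, the following sketch evaluates a weighted sum-of-squared-differences between two images. This is a hypothetical example (the patent does not prescribe a particular similarity measure); NumPy and the function name `weighted_ssd` are assumptions for illustration:

```python
import numpy as np

def weighted_ssd(fixed, moving, weights):
    """Weighted sum of squared differences: image elements with
    weight 0 are ignored, fractional weights contribute partially."""
    diff = (fixed.astype(float) - moving.astype(float)) ** 2
    return float(np.sum(weights * diff))

# Two tiny "images" differing only in the right column.
fixed = np.array([[1.0, 5.0], [1.0, 5.0]])
moving = np.array([[1.0, 9.0], [1.0, 9.0]])

binary_mask = np.array([[1.0, 0.0], [1.0, 0.0]])   # exclude right column
soft_mask = np.array([[1.0, 0.5], [1.0, 0.5]])     # down-weight it instead

assert weighted_ssd(fixed, moving, binary_mask) == 0.0
assert weighted_ssd(fixed, moving, soft_mask) == 16.0
```

A binary mask is the special case where all weights are exactly 0 or 1; the soft mask lets the mismatching region still influence the cost, but less strongly.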
  • a method of interactively registering at least two input images may comprise receiving a user input indicating a portion of at least one of the input images; segmenting an object represented by the input image based on the portion; identifying a mask based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based; and registering the images, taking into account the mask, for obtaining at least two registered images.
  • a computer program product may comprise instructions for causing a processor system to perform the steps of the method set forth. It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful.
  • the method may be applied to multidimensional image data, e.g., to 2-dimensional (2-D), 3-dimensional (3-D) or 4-dimensional (4-D) images, acquired by various acquisition modalities such as, but not limited to, standard X-ray imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
  • CT Computed Tomography
  • MRI Magnetic Resonance Imaging
  • US Ultrasound
  • PET Positron Emission Tomography
  • SPECT Single Photon Emission Computed Tomography
  • NM Nuclear Medicine
  • Fig. 1 shows a block diagram of a system for registering at least two input images
  • Fig. 2 shows a block diagram of a method of registering at least two input images
  • Fig. 3 shows a projection X-ray image of a thorax phantom
  • Fig. 4 shows another projection X-ray image of the same thorax phantom, with an anterior inclination with respect to the projection X-ray image shown in Fig. 3.
  • Fig. 1 illustrates an example system for registering at least two input images.
  • the input images may originate from an image scanning device, for example a digital optical camera.
  • the image scanning device may comprise a medical image acquisition device such as a projection X-ray imaging apparatus or a CT scanner or MRI scanner, for example.
  • the images may be two-dimensional or three-dimensional.
  • projection X-ray images are usually two-dimensional
  • CT or MRI images are three-dimensional.
  • CT and MRI images may be two-dimensional slices or computed projections of three-dimensional CT or MRI images.
  • the input images may be stored in a memory 8.
  • This memory 8 may comprise a RAM, ROM, flash memory and/or magnetic disc memory.
  • the system may have a communications port (not shown) for receiving the images from another device such as a scanner or a central image repository such as a PACS.
  • the system may comprise a graphical user interface 1.
  • the graphical user interface may comprise a user input subsystem 7.
  • This user input subsystem 7 enables a user to indicate a portion of at least one of the input images.
  • the graphical user interface 1 is arranged for receiving, via the user input subsystem 7, a user input indicating a portion of at least one of the input images.
  • This portion may be, for example, a point in the image. Alternatively, the portion can comprise a region of interest.
  • Information relating to the indicated portion is forwarded to a segmenting means 2.
  • the segmenting means 2 is arranged for segmenting an object represented by the input image based on the portion. In this way, a segmented object is obtained. Segmentation may be performed in a way known in the art per se.
  • the segmenting means 2 may be arranged for using the indicated portion to identify the location of the to-be-segmented object.
  • the segmenting means 2 may be arranged for performing model-based segmentation by deformable contours, which is a technique known in the art per se. By incorporating prior knowledge about the objects, such as the spatial relation to each other, the segmentation quality of the model-based segmentation may be improved. Alternatively, region growing may be applied, starting from the user-indicated portion. Yet alternatively, the user may indicate a region in the image, and the segmenting means 2 may be arranged for restricting the segmentation to the indicated region, with the result that only an object or objects within the indicated region are segmented.
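  • Region growing from a user-indicated seed point, mentioned above as one segmentation option, can be sketched as a minimal intensity-tolerance flood fill. This is an illustrative toy (real model-based segmentation by deformable contours is considerably more involved); NumPy and the name `region_grow` are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, accepting 4-connected neighbours
    whose intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    seed_val = image[seed]
    segmented = np.zeros((h, w), dtype=bool)
    segmented[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not segmented[ny, nx]:
                if abs(int(image[ny, nx]) - int(seed_val)) <= tol:
                    segmented[ny, nx] = True
                    queue.append((ny, nx))
    return segmented

# A dark object (left two columns) next to a bright one (right column):
img = np.array([[10, 12, 90],
                [11, 13, 95],
                [10, 14, 92]])
obj = region_grow(img, seed=(0, 0), tol=5)
assert obj[:, :2].all() and not obj[:, 2].any()  # bright column excluded
```

Clicking a different seed, e.g. `(0, 2)`, would instead select the bright structure, which is exactly the interactive choice the system exposes.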
  • the system may comprise mask identifying means 3 for identifying a mask based on the segmented object.
  • the mask may be indicative of a portion of the input image whereon the registering is to be based.
  • the mask identifying means 3 masks the whole image except for the segmented object. A margin around the segmented object may also be excluded from the mask.
  • the mask identifying means 3 masks only the segmented object, optionally with a margin around the segmented object. Accordingly, the mask identifying means 3 may be arranged for either substantially including or substantially excluding the segmented object in the portion of the input image whereon the registration is to be based.
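  • The optional margin around the segmented object, described above, could be obtained by dilating the binary object mask before including it in (or excluding it from) the registration mask. A dependency-free sketch in NumPy (one pixel of growth per iteration; the helper name `add_margin` is illustrative):

```python
import numpy as np

def add_margin(mask, margin=1):
    """Dilate a boolean mask by `margin` pixels (4-connectivity),
    e.g. to exclude a safety zone around an object from registration."""
    out = mask.copy()
    for _ in range(margin):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow rightward
        grown[:, :-1] |= out[:, 1:]   # grow leftward
        out = grown
    return out

obj = np.zeros((5, 5), dtype=bool)
obj[2, 2] = True                       # a one-pixel "segmented object"
with_margin = add_margin(obj, margin=1)
assert with_margin.sum() == 5          # centre plus its 4 neighbours

registration_mask = ~with_margin       # exclude object and margin
assert registration_mask.sum() == 20
```

The final negation corresponds to the excluding variant; omitting it gives the including variant, where only the object (with margin) drives the registration.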
  • the system may comprise registration means 4 for registering the images. Registering two images can be done in two steps: First, identifying transformation vectors representing shifts of objects between the two images, and second, resampling the image while applying the transformation vectors to the image elements.
  • the registration means 4 may be arranged for taking into account the mask in particular in the step of identifying transformation vectors. The masked portion of the image may not be taken into account in the first step of the registration process.
  • the transformation vectors may be applied to the whole image or only the portion of the image for which the transformation vectors have been identified. The result is that at least two registered images are obtained.
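  • The two-step registration described above (first estimate the transformation using only unmasked elements, then resample) can be sketched for the simplest possible case: a pure integer translation found by exhaustive search over a masked sum of squared differences. This is illustrative only; the patent does not prescribe any particular registration algorithm, and the function name is an assumption:

```python
import numpy as np

def find_shift(fixed, moving, mask, max_shift=3):
    """Step 1: find the integer (dy, dx) minimising the sum of squared
    differences, computed only over image elements where mask is 1."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            cost = np.sum(mask * (fixed - shifted) ** 2)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

# A single bright point moves from (4, 5) in `moving` to (3, 3) in `fixed`.
fixed = np.zeros((8, 8))
fixed[3, 3] = 1.0
moving = np.zeros((8, 8))
moving[4, 5] = 1.0
mask = np.ones((8, 8))          # trivial mask: every element participates

dy, dx = find_shift(fixed, moving, mask)
assert (dy, dx) == (-1, -2)

# Step 2: resample the moving image by applying the found translation.
registered = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
assert registered[3, 3] == 1.0
```

Zeroing part of `mask` would make the corresponding image elements invisible to step 1 while step 2 still resamples the whole image, which is exactly the division of labour the two steps describe.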
  • the system may comprise a post-processing subsystem for performing a postprocessing operation on the registered images.
  • Such post-processing may comprise generating a combination of the two registered images, for example by means of overlay.
  • the post-processing may result in an image which may be displayed by a visualization subsystem 6 of the graphical user interface 1.
  • the visualization subsystem 6 is arranged for generating a visualization of the post-processed data.
  • the visualization subsystem 6 may also be arranged for displaying at least one or all of the registered images.
  • Post-processed images and/or the registered images may be stored in the memory 8.
  • An example post-processing subsystem is subtracting means 5.
  • the subtracting means 5 is arranged for subtracting one registered image from another registered image. This subtraction may be performed on an image-element-by-image-element basis, for example a pixel-by-pixel basis.
  • the input of the subtracting means 5 comprises at least two registered images, and the output of the subtracting means comprises a subtracted image.
  • the subtracted image may be stored in the memory 8.
  • the image may be a medical image.
  • the object may comprise an anatomical object.
  • the object may comprise a bone or a group of bones such as the ribs.
  • Other anatomical objects are also possible, for example an organ such as a lung.
  • the system may be arranged for operating in an interactive mode in which the user may select a plurality of objects to be segmented and included in or excluded from the mask. This selection may be performed by enabling the user to indicate a plurality of portions, for example points, in the image.
  • the graphical user interface 1 may be arranged for visualizing an output image based on the registered images.
  • the graphical user interface 1 may be arranged for receiving a further input indicating a further portion of at least one of the input images.
  • the segmenting means 2 may be arranged for segmenting a further object represented by the input image based on the further portion.
  • the mask identifying means 3 may be arranged for identifying a further mask based on the further object.
  • the registration means 4 may be arranged for registering the images taking into account the further mask, for obtaining at least two further registered images.
  • the post-processing means may be arranged for further post-processing the two further registered images; for example, the subtracting means 5 may be arranged for subtracting the further registered images.
  • the graphical user interface 1 may be arranged for receiving the further input by enabling a user to indicate at least one position in the visualized output image and identifying the further portion of at least one of the input images based on the position in the visualized output image.
  • the user may indicate the portion by interacting with the subtracted image.
  • the user may indicate a portion such as a point in the subtracted image. This portion corresponds to a portion of the registered images. Using the registration transformation, the corresponding portion of the original input images can be established.
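  • The mapping described above, from a position indicated in the visualized output image back to the corresponding positions in the original input images, can be sketched for the simplest case of a pure translation. The coordinate convention and function name are illustrative assumptions, not from the patent:

```python
def map_click_to_inputs(click_yx, translation_yx):
    """The output/subtracted image shares the fixed image's coordinate
    frame, so the fixed-image position equals the clicked position; the
    moving-image position is found by undoing the registration shift."""
    fixed_pos = click_yx
    moving_pos = (click_yx[0] - translation_yx[0],
                  click_yx[1] - translation_yx[1])
    return fixed_pos, moving_pos

# A click at (10, 20) in the output image, where the moving image was
# shifted by (-1, -2) during registration:
fixed_pos, moving_pos = map_click_to_inputs((10, 20), (-1, -2))
assert fixed_pos == (10, 20)
assert moving_pos == (11, 22)
```

For a non-rigid registration the same idea holds, but the inverse of the local deformation field replaces the simple coordinate subtraction.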
  • the mask identifying means 3 may be arranged for identifying the further mask based on both the object and the further object, for example, by including the further object in the mask which was the result of the (first) object.
  • the mask may define a weight distribution, the weight distribution indicating how heavily different image elements of an input image should be taken into account by the registration means.
  • the mask may be a binary mask, which means that it just indicates which image elements are to be taken into account in the registration and which image elements are not, without giving any further weight to individual image elements.
  • the system may be incorporated in an image acquisition apparatus. The images acquired by the image acquisition apparatus can be used as input images for the registration system. Such an image acquisition apparatus may comprise a detector for acquiring a plurality of input images.
  • the system may also be incorporated in a medical imaging workstation.
  • a medical imaging workstation may comprise a computer, a monitor, and a keyboard and/or a mouse.
  • the computer may comprise a network connection for communicating with an image acquisition apparatus and/or a central image database, for example.
  • Fig. 2 illustrates a method of interactively registering at least two input images.
  • the method may be performed by means of the registration system set forth, for example.
  • in step 201, the method starts by receiving a user input indicating a portion of at least one of the input images.
  • in step 202, the method proceeds by segmenting an object represented by the input image based on the portion.
  • in step 203, a mask is identified based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based.
  • the method proceeds by registering the images, taking into account the mask, for obtaining at least two registered images.
  • the method may comprise a further post-processing step (not shown).
  • the method may be implemented as a computer program product.
  • not every anatomical object may contribute to a successful registration of the target structure. Also, it may not be easy to segment these anatomical objects in the image. Moreover, the target structure may not be clearly determined.
  • the registration may be disturbed if anatomical objects which are close to each other have a different displacement. Such a disturbance may occur, for example, in 3D volume images such as those from CT or MR, or selected 2D slices from a 3D volume.
  • An example registration application is a registration of inhale with exhale lung CT images of a patient to estimate the local lung motion. The ribs move in the opposite direction to the adjacent lung parenchyma and may degrade the registration quality of the lung regions close to the ribs.
  • in a projection geometry (e.g. radiography, fluoroscopy, scintigraphy), the problem of distracting anatomical objects is even more severe, because there may be different displacement vectors for a single image position i, one for each object that projects to i and has a different displacement in the projection plane.
  • Reasons for several overlaid objects having different displacements in the projection plane may be (i) subject changes between the two acquisitions, such as respiration, but also (ii) differences in the acquisition geometry, that is to say perspective changes (this mainly occurs in follow-up registration).
  • the user may be allowed to interactively select anatomical structures and include them in or exclude them from the computation of the similarity function which is used in the registration.
  • the registration may be immediately recalculated and visualized. Any processing of the registered images, such as subtraction or a specific visualization or measurement, may also be performed on the newly recalculated registered images and presented to the user immediately. This makes it possible to see updates of the end result while including or excluding anatomical structures from the computation. Segmentation methods allow the selection of anatomical structures through one or more interactions (e.g. a mouse click into the structure in the image).
  • the specification, selection, and modification of the mask may be implemented as follows.
  • the user may select image regions to be included or excluded.
  • the mask may be visualized in the navigation view, e.g. by an overlay.
  • in the anatomical mode, selection of anatomical structures is possible, e.g. the clavicle or the ribs. Automatic image segmentation is used to assess the extent of the selected structure.
  • a fine-tuning step is possible, for example mask enlargement, shrinkage, or region growing. Such fine-tuning may not be explicitly bounded by anatomical structures, but by image features.
  • the fine-tuning mode may be used, for example, to include boundaries (edges) of the segmented objects in or exclude them from the mask.
  • a weighting parameter may be assigned to each image element (e.g. pixel or voxel) in the image, the weighting parameter determining the contribution of that image location to the registration result.
  • Certain predefined weight distributions may be selected to be applied to a selected anatomical structure. This way, for example a 'smooth decay towards the boundary' or by contrast a 'sharp boundary' can be defined by appropriately selecting the weighting distribution.
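  • A 'smooth decay towards the boundary' weighting, as mentioned above, could be derived from each image element's depth inside the selected structure. The sketch below uses repeated binary erosion as a crude depth measure; it is an illustrative assumption (a real implementation would more likely use a distance transform), and both helper names are hypothetical:

```python
import numpy as np

def erode(mask):
    """One step of 4-connected binary erosion of a boolean mask."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]
    out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    out[0, :] = out[-1, :] = out[:, 0] = out[:, -1] = False
    return out

def smooth_decay_weights(mask, depth=3):
    """Weight 0 outside the object, rising towards 1 in its interior:
    each erosion layer a pixel survives adds 1/depth to its weight."""
    layers = np.zeros(mask.shape, dtype=int)
    layer = mask.copy()
    for _ in range(depth):
        layers += layer
        layer = erode(layer)
    return layers / depth

obj = np.zeros((7, 7), dtype=bool)
obj[1:6, 1:6] = True                 # a 5x5 square "structure"
w = smooth_decay_weights(obj, depth=3)
assert w[0, 0] == 0.0                # outside: excluded
assert w[3, 3] == 1.0                # deep interior: full weight
assert 0 < w[1, 1] < 1.0             # boundary: reduced weight
```

A 'sharp boundary' is then simply the mask itself, i.e. the degenerate case `depth=1`.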
  • the method is applicable to multi-image registration (e.g. time series) as well as to registration of two images.
  • Fig. 3 and Fig. 4 illustrate two temporally sequential projection X-ray images of a thorax phantom.
  • the anterior inclination between the two images is about 6 degrees.
  • the temporal subtraction visualization technique is used, for example, in follow-up thorax radiographs. Temporal subtraction results can be improved by non-rigid registration of the current radiograph of a patient with a previous radiograph of the same patient. In practice, differences in patient pose often counteract this registration, especially differences in the anterior-posterior inclination of the patient. In the projection geometry, an anterior inclination of a standing patient could lower the projection of the anterior clavicle, but not that of the posterior ribs. Pulmonary structures may also project differently with respect to these two structures.
  • the posterior ribs may project at different positions in the head-to-foot extent within the lung field.
  • in Fig. 3 it is visible that the clavicle 301 widely overlaps the second ribs 302, while the clavicle 301 mostly projects onto the intercostal space 303 in Fig. 4.
  • Registration of the area with the clavicles 301 may lead to artifacts for at least one of the two structures (the clavicles 301, or the ribs 302 with the intercostal space 303).
  • the user may interactively exclude either the clavicle or the ribs from a registration and thus allow a proper registration of the remaining structures.
  • for interval change detection by temporal subtraction, such an interactive choice is a benefit because it is not known a priori which structures need to be aligned to properly detect the interval changes.
  • the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice.
  • the program may be in the form of a source code, an object code, a code intermediate source and object code such as a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention.
  • a program may have many different architectural designs.
  • a program code implementing the functionality of the method or system according to the invention may be subdivided into one or more subroutines. Many different ways to distribute the functionality among these subroutines will be apparent to the skilled person.
  • the subroutines may be stored together in one executable file to form a self-contained program.
  • Such an executable file may comprise computer executable instructions, for example processor instructions and/or interpreter instructions (e.g. Java interpreter instructions).
  • one or more or all of the subroutines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time.
  • the main program contains at least one call to at least one of the subroutines.
  • the subroutines may comprise function calls to each other.
  • An embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the processing steps of at least one of the methods set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically.
  • Another embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the means of at least one of the systems and/or products set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically.
  • the carrier of a computer program may be any entity or device capable of carrying the program.
  • the carrier may include a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disc or hard disk.
  • the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means.
  • the carrier may be constituted by such a cable or other device or means.
  • the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant method.

Abstract

A system for registering at least two input images is described. The input images comprise projection images. The system comprises a graphical user interface (1) for receiving a user input indicating a portion of at least one of the input images. The system comprises segmenting means (2) for segmenting an object represented by the input image, based on the portion, to obtain a segmented object. The system comprises mask identifying means (3) for identifying a mask based on the segmented object, the mask being indicative of a portion of the input image whereon the registering is to be based. The system comprises registration means (4) for registering the images, taking into account the mask, for obtaining at least two registered images. The system further comprises subtracting means (5) for subtracting one registered image from another registered image.

Description

Interactive image registration
FIELD OF THE INVENTION
The invention relates to interactively registering at least two input images.
BACKGROUND OF THE INVENTION Image registration occurs, for example, in temporal subtraction of thorax radiography images. To compensate for shifts of anatomical objects in successive radiography images, the images are registered with each other such that the anatomical objects are aligned, and only the clinically relevant changes are visible in the subtraction image. However, it is not always possible to compensate for the shifts of all the objects, for example if two overlapping objects have moved in different directions. In such a case, subtraction artifacts remain.
The article "Temporal subtraction of Thorax CR Images Using a Statistical Deformation Model", by D. Loeckx et al., IEEE Transactions on Medical Imaging, Vol. 22, No. 11, November 2003, discloses a voxel-based non-rigid registration algorithm for temporal subtraction of two-dimensional thorax X-ray computed radiography images of the same subject. The aim is global rib alignment to minimize subtraction artifacts within the lung field without obliterating interval changes of clinically relevant soft-tissue abnormalities. The displacement and deformation of structure outside the lung field, such as the abdominal structures, the diaphragm or parts of the limbs, in two subsequent X-ray chest images, may be different from, and not related to, the displacement of the ribs in the lung field. Hence, these image regions are excluded in order not to interfere with proper alignment of the ribs themselves. Consequently, an approximate segmentation of the lung field perimeter is performed.
SUMMARY OF THE INVENTION
It would be advantageous to have an improved system for registering at least two input images. To better address this concern, in a first aspect of the invention a system is presented that comprises a graphical user interface for receiving a user input indicating a portion of at least one of the input images; segmenting means for segmenting an object represented by the input image, based on the portion; mask identifying means for identifying a mask based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based; and registration means for registering the images, taking into account the mask, for obtaining at least two registered images.
Using the system set forth, it is possible to segment an object by indicating a corresponding portion of the image, and have the registration performed using a mask based on the segmented object. This way, the user can choose which object is to be taken into account for identifying the mask. This has the advantage that the user can change or influence the registration result by defining a mask based on an object of choice. This change or influence can be effected in an easy way, namely by indicating a corresponding portion in the image.
The input images may comprise projection images. In projection images, different overlapping objects may give contradicting registration information, for example if the overlapping objects have moved differently between the two image acquisitions. Consequently, it is especially advantageous in the case of projection images if the user can select on which segmented object the mask should be based. However, the invention applies also to, for example, three-dimensional images. For example, in the case of rigid or affine registration, different objects may give contradicting registration information.
The system may comprise subtracting means for subtracting one registered image from another registered image. Subtracted images thus obtained are highly sensitive to registration errors. The system set forth allows the user to interactively choose on which objects in the image the registration mask should be based. This may be used to indicate which objects are to be taken into account in the registration and/or to selectively reduce artifacts.
The image may comprise a medical image. The object may comprise an anatomical object. Applications involving medical images would benefit from the system set forth.
The mask identifying means may be arranged for substantially or completely including the segmented object in the portion of the input image whereon the registering is to be based. This means that the mask is indicative of a portion of the input image substantially or completely including the segmented object. The mask identifying means may also be arranged for creating a mask indicative of a portion of the image substantially corresponding to the segmented object. These examples allow the user to indicate an image region corresponding to an object which is to be taken into account in the registration. Alternatively, the mask identifying means may be arranged for substantially or completely excluding the segmented object in the portion of the input image whereon the registering is to be based. This allows the user to indicate an object which is not to be taken into account in the registration.
The graphical user interface may be arranged for visualizing an output image based on the registered images. The graphical user interface may be further arranged for receiving further input indicating a further portion of at least one of the input images. The segmenting means may be arranged for segmenting a further object represented by the input image based on the further portion. The mask identifying means may be arranged for identifying a further mask based on the further object. The registration means may be arranged for registering the images taking into account the further mask, for obtaining at least two further registered images. This allows interactively refining or changing the registration by identifying a further object via the user interface.
The graphical user interface may be arranged for receiving the further input by enabling a user to indicate at least one position in the visualized output image and identifying the further portion of at least one of the input images based on the position in the visualized output image. This allows the user to interact with the final result, which is efficient because the user is interested in inspecting the final result anyway.
The mask identifying means may be arranged for identifying the further mask based on both the object and the further object. For example, starting from the original mask, the mask identifying means may exclude the further object or remove the further object from the mask.
The mask may define a weight distribution, the weight distribution indicating how heavily different image elements of an input image should be taken into account by the registration means. This allows a smoother weighting compared to a binary mask.
A method of interactively registering at least two input images may comprise: receiving a user input indicating a portion of at least one of the input images; segmenting an object represented by the input image based on the portion; identifying a mask based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based; and registering the images, taking into account the mask, for obtaining at least two registered images.
A computer program product may comprise instructions for causing a processor system to perform the steps of the method set forth. It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful.
Modifications and variations of the image acquisition apparatus, of the workstation, of the system, and/or of the computer program product, which correspond to the described modifications and variations of the system, can be carried out by a person skilled in the art on the basis of the present description.
A person skilled in the art will appreciate that the method may be applied to multidimensional image data, e.g., to 2-dimensional (2-D), 3-dimensional (3-D) or 4-dimensional (4-D) images, acquired by various acquisition modalities such as, but not limited to, standard X-ray imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be further elucidated and described with reference to the drawings, in which
Fig. 1 shows a block diagram of a system for registering at least two input images;
Fig. 2 shows a block diagram of a method of registering at least two input images;
Fig. 3 shows a projection X-ray image of a thorax phantom; and
Fig. 4 shows another projection X-ray image of the same thorax phantom, with an anterior inclination with respect to the projection X-ray image shown in Fig. 3.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 1 illustrates an example system for registering at least two input images. The input images may originate from an image scanning device, for example a digital optical camera. The image scanning device may comprise a medical image acquisition device such as a projection X-ray imaging apparatus or a CT scanner or MRI scanner, for example. Depending on the origin of the images, the images may be two-dimensional or three-dimensional. For example, projection X-ray images are usually two-dimensional, whereas CT or MRI images are three-dimensional. However, CT and MRI images may be two-dimensional slices or computed projections of three-dimensional CT or MRI images. The input images may be stored in a memory 8. This memory 8 may comprise a RAM, ROM, flash memory and/or magnetic disc memory. Furthermore, the system may have a communications port (not shown) for receiving the images from another device such as a scanner or a central image repository such as a PACS.
As illustrated in the Figure, the system may comprise a graphical user interface 1. The graphical user interface may comprise a user input subsystem 7. This user input subsystem 7 enables a user to indicate a portion of at least one of the input images. The graphical user interface 1 is arranged for receiving, via the user input subsystem 7, a user input indicating a portion of at least one of the input images. This portion may be, for example, a point in the image. Alternatively, the portion can comprise a region of interest. Information relating to the indicated portion is forwarded to a segmenting means 2. The segmenting means 2 is arranged for segmenting an object represented by the input image based on the portion. In this way, a segmented object is obtained. Segmentation may be performed in a way known in the art per se. The segmenting means 2 may be arranged for using the indicated portion to identify the location of the to-be-segmented object. The segmenting means 2 may be arranged for performing model-based segmentation by deformable contours, which is a technique known in the art per se. By incorporating prior knowledge about the objects, such as the spatial relation to each other, the segmentation quality of the model-based segmentation may be improved. Alternatively, region growing may be applied, starting from the user- indicated portion. Yet alternatively, the user may indicate a region in the image, and the segmenting means 2 may be arranged for restricting the segmentation to the indicated region, with the result that only an object or objects within the indicated region are segmented.
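The region-growing variant mentioned above can be illustrated with a short sketch. This is not the patent's implementation; the function name `region_grow`, the 4-connectivity, and the fixed intensity tolerance are illustrative assumptions:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a segmented region from a user-indicated seed point.

    Starting at `seed` (row, col), 4-connected neighbours whose intensity
    lies within `tol` of the seed intensity are added to the region.
    Returns a boolean mask of the segmented object.
    """
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

In practice, the seed would come from the mouse click described above, and the tolerance could be derived from local image statistics rather than being fixed.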
The system may comprise mask identifying means 3 for identifying a mask based on the segmented object. The mask may be indicative of a portion of the input image whereon the registering is to be based. In one example, the mask identifying means 3 masks the whole image except for the segmented object. A margin around the segmented object may also be excluded from the mask. In another example, the mask identifying means 3 masks only the segmented object, optionally with a margin around the segmented object. Accordingly, the mask identifying means 3 may be arranged for either substantially including or substantially excluding the segmented object in the portion of the input image whereon the registration is to be based.
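The include/exclude behaviour of the mask identifying means, with an optional margin, can be sketched as follows. The function name `mask_from_object` is hypothetical, and the margin is realized here by a simple 4-connected dilation:

```python
import numpy as np

def mask_from_object(object_mask, margin=2, exclude=False):
    """Build a registration mask from a segmented object (boolean array).

    The object is first dilated by `margin` pixels.  In include mode the
    registration is based only on that region; in exclude mode the region
    (object plus margin) is removed from the mask instead.
    """
    dilated = object_mask.copy()
    for _ in range(margin):
        p = np.pad(dilated, 1)  # pad with False so borders stay valid
        dilated = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                   | p[1:-1, :-2] | p[1:-1, 2:])
    return ~dilated if exclude else dilated
```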
The system may comprise registration means 4 for registering the images. Registering two images can be done in two steps: First, identifying transformation vectors representing shifts of objects between the two images, and second, resampling the image while applying the transformation vectors to the image elements. The registration means 4 may be arranged for taking into account the mask in particular in the step of identifying transformation vectors. The masked portion of the image may not be taken into account in the first step of the registration process. During the second step of the registration process, the transformation vectors may be applied to the whole image or only the portion of the image for which the transformation vectors have been identified. The result is that at least two registered images are obtained.
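A minimal sketch of the first registration step, restricted to pure translation for brevity (the text also covers more general, e.g. non-rigid, transformations): masked-out image elements are simply excluded from the similarity computation by indexing with the mask. The function name and the exhaustive search are illustrative assumptions:

```python
import numpy as np

def register_translation(fixed, moving, mask, max_shift=5):
    """Step 1 of the registration: find the integer shift (dy, dx) that
    minimizes the sum of squared differences over the masked region only.
    Step 2 would then resample `moving` with the found shift.
    """
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum(((fixed.astype(float) - shifted) ** 2)[mask])
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

A real implementation would use a non-rigid transformation model and an optimizer instead of brute force, but the role of the mask — restricting which image elements drive the transformation estimate — is the same.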
The system may comprise a post-processing subsystem for performing a post-processing operation on the registered images. Such post-processing may comprise generating a combination of the two registered images, for example by means of overlay. The post-processing may result in an image which may be displayed by a visualization subsystem 6 of the graphical user interface 1. The visualization subsystem 6 is arranged for generating a visualization of the post-processed data. The visualization subsystem 6 may also be arranged for displaying at least one or all of the registered images. Post-processed images and/or the registered images may be stored in the memory 8.
An example post-processing subsystem is subtracting means 5. The subtracting means 5 is arranged for subtracting one registered image from another registered image. This subtraction may be performed on an element-by-element basis, for example pixel by pixel. The input of the subtracting means 5 comprises at least two registered images, and the output of the subtracting means comprises a subtracted image. The subtracted image may be stored in the memory 8.
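Element-wise subtraction is trivial, but one practical detail is worth noting: pixel data is often stored unsigned, so casting to a signed type first avoids wrap-around of negative differences. A sketch (function name assumed):

```python
import numpy as np

def subtract_registered(a, b):
    """Pixel-by-pixel subtraction of two registered images.

    Cast to a signed integer type first, so that negative differences are
    preserved rather than wrapped (as they would be for uint8/uint16 data).
    """
    return a.astype(np.int32) - b.astype(np.int32)
```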
The image may be a medical image. Moreover, the object may comprise an anatomical object. For example, the object may comprise a bone or a group of bones such as the ribs. Other anatomical objects are also possible, for example an organ such as a lung. The system may be arranged for operating in an interactive mode in which the user may select a plurality of objects to be segmented and included in or excluded from the mask. This selection may be performed by enabling the user to indicate a plurality of portions, for example points, in the image. The graphical user interface 1 may be arranged for visualizing an output image based on the registered images. The graphical user interface 1 may be arranged for receiving a further input indicating a further portion of at least one of the input images. The segmenting means 2 may be arranged for segmenting a further object represented by the input image based on the further portion. The mask identifying means 3 may be arranged for identifying a further mask based on the further object. The registration means 4 may be arranged for registering the images taking into account the further mask, for obtaining at least two further registered images. The post-processing means may be arranged for further post-processing the two further registered images; for example, the subtracting means 5 may be arranged for subtracting the further registered images.
The graphical user interface 1 may be arranged for receiving the further input by enabling a user to indicate at least one position in the visualized output image and identifying the further portion of at least one of the input images based on the position in the visualized output image. In the example of subtraction of registered images, the user may indicate the portion by interacting with the subtracted image. For example, the user may indicate a portion such as a point in the subtracted image. This portion corresponds to a portion of the registered images. Using the registration transformation, the corresponding portion of the original input images can be established.
The mask identifying means 3 may be arranged for identifying the further mask based on both the object and the further object, for example, by including the further object in the mask which was the result of the (first) object. The mask may define a weight distribution, the weight distribution indicating how heavily different image elements of an input image should be taken into account by the registration means. Alternatively, the mask may be a binary mask, which means that it just indicates which image elements are to be taken into account in the registration and which image elements are not, without giving any further weight to individual image elements. The system may be incorporated in an image acquisition apparatus. The images acquired by the image acquisition apparatus can be used as input images for the registration system. Such image acquisition apparatus may comprise a detector for acquiring a plurality of input images.
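A weighted mask generalizes a binary one in the similarity measure: each image element contributes in proportion to its weight, with a binary mask as the special case of weights in {0, 1}. A sketch using a weighted sum of squared differences (the actual similarity function is not prescribed by the text; the function name is assumed):

```python
import numpy as np

def weighted_ssd(fixed, moving, weights):
    """Weighted sum-of-squared-differences similarity measure.

    `weights` indicates how heavily each image element is taken into
    account; zero-weight elements contribute nothing, matching a binary
    mask when weights are exactly 0 or 1.
    """
    diff = (fixed.astype(float) - moving.astype(float)) ** 2
    return float(np.sum(weights * diff) / max(np.sum(weights), 1e-12))
```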
The system may also be incorporated in a medical imaging workstation. Such a workstation may comprise a computer, a monitor, and a keyboard and/or a mouse. The computer may comprise a network connection for communicating with an image acquisition apparatus and/or a central image database, for example.
Fig. 2 illustrates a method of interactively registering at least two input images. The method may be performed by means of the registration system set forth, for example. In step 201, the method starts by receiving a user input indicating a portion of at least one of the input images. After that, in step 202, the method proceeds by segmenting an object represented by the input image based on the portion. In step 203, a mask is identified based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based. In step 204, the method proceeds by registering the images, taking into account the mask, for obtaining at least two registered images. The method may comprise a further post-processing step (not shown). The method may be implemented as a computer program product.
When performing image registration using a mask to indicate which portion or portions of the image should be taken into account in the registration, the following issues may arise. First, it may not be a priori clear which anatomical objects may contribute to a successful registration of the target structure. Also, it may not be easy to segment these anatomical objects in the image. Moreover, the target structure may not be clearly determined. The registration may be disturbed if anatomical objects which are close to each other have a different displacement. Such a disturbance may occur, for example, in 3D volume images such as those from CT or MR, or selected 2D slices from a 3D volume. An example registration application is a registration of inhale with exhale lung CT images of a patient to estimate the local lung motion. The ribs move in the opposite direction to the adjacent lung parenchyma and may degrade the registration quality of the lung regions close to the ribs.
In a projection geometry (e.g. radiography, fluoroscopy, scintigraphy), where different structures are overlaid with each other in the image, the problem of distracting anatomical objects is even more severe, because there may be different displacement vectors for a single image position i, one for each object that projects to i and that has a different displacement in the projection plane. Reasons for several overlaid objects having different displacements in the projection plane may be (i) subject changes between the two acquisitions, such as respiration, and (ii) differences in the acquisition geometry, that is to say, perspective changes (the latter mainly occurs in follow-up registration).
To overcome this, the user may be allowed to interactively select anatomical structures and include them in or exclude them from the computation of the similarity function which is used in the registration. The registration may be immediately recalculated and visualized. Any processing of the registered images, such as subtraction or a specific visualization or measurement, may also be performed on the newly recalculated registered images and presented to the user immediately. This makes it possible to see updates of the end result while including or excluding anatomical structures from the computation. Segmentation methods allow the selection of anatomical structures through one or more interactions (e.g. a mouse click into the structure in the image).
The specification, selection, and modification of the mask may be implemented as follows. There may be provided a navigation view that presents the image to be registered. The user may select image regions to be included or excluded. The mask may be visualized in the navigation view, e.g. by an overlay. There may be at least two working modes: the anatomical mode and the image mode. In the anatomical mode, selection of anatomical structures is possible, e.g. the clavicle or the ribs. Automatic image segmentation is used to assess the extent of the selected structure. In the image mode, a fine-tuning step is possible, for example mask enlargement, shrinkage, or region growing. Such fine-tuning may not be explicitly bounded by anatomical structures, but by image features. The fine-tuning mode may be used, for example, to include boundaries (edges) of the segmented objects in the mask or to exclude them from it. In addition to a binary mask, a weighting parameter may be assigned to image elements (e.g. pixels or voxels) in the image, the weighting parameter determining the contribution of the respective image location to the registration result. Certain predefined weight distributions may be selected to be applied to a selected anatomical structure. In this way, for example, a 'smooth decay towards the boundary' or, by contrast, a 'sharp boundary' can be defined by appropriately selecting the weight distribution.
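The 'smooth decay towards the boundary' weight distribution can be sketched as a linear fall-off over successive dilation rings around the object. The function name, the linear profile, and the decay width are assumptions, not prescribed by the text:

```python
import numpy as np

def smooth_decay_weights(mask, decay=3):
    """Weights of 1.0 inside the object, falling off linearly over `decay`
    dilation rings outside its boundary; decay=0 reproduces the 'sharp
    boundary' binary case.
    """
    weights = mask.astype(float)
    current = mask.copy()
    for step in range(1, decay + 1):
        p = np.pad(current, 1)
        grown = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
                 | p[1:-1, :-2] | p[1:-1, 2:])
        ring = grown & ~current          # pixels newly reached this step
        weights[ring] = 1.0 - step / (decay + 1)
        current = grown
    return weights
```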
The method is applicable to multi-image registration (e.g. time series) as well as to registration of two images.
Fig. 3 and Fig. 4 illustrate two temporally sequential projection X-ray images of a thorax phantom. The anterior inclination between the two images is about 6 degrees. The temporal subtraction visualization technique is used, for example, in follow-up thorax radiography. Temporal subtraction results can be improved by non-rigid registration of the current radiograph of a patient with a previous radiograph of the patient. In practice, differences in patient poses often hamper this registration, especially differences in the anterior-posterior inclination of the patient. In the projection geometry, an anterior inclination of a standing patient could lower the projection of the anterior clavicle, but not of the posterior ribs. Pulmonary structures may also project differently with respect to these two structures. Also, the posterior ribs may project at different positions in the head-to-foot extent within the lung field. In Fig. 3 it is visible that the clavicle 301 widely overlaps with the second ribs 302, while the clavicle 301 mostly projects to the intercostal space 303 in Fig. 4. Registration of the area with the clavicles 301 may lead to artifacts for at least one of the two structures (the clavicles 301 or the ribs 302 with intercostal space 303). Using the interactive techniques described herein, the user may interactively exclude either the clavicle or the ribs from a registration and thus allow a proper registration of the remaining structures. In a clinical application, such as interval change detection by temporal subtraction, such an interactive choice is a benefit because it is not a priori known which structures need to be aligned to properly detect the interval changes.
It will be appreciated that the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source code, object code, a code intermediate between source and object code such as a partially compiled form, or any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be subdivided into one or more subroutines. Many different ways to distribute the functionality among these subroutines will be apparent to the skilled person. The subroutines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer executable instructions, for example processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the subroutines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the subroutines. Also, the subroutines may comprise function calls to each other. An embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the processing steps of at least one of the methods set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the means of at least one of the systems and/or products set forth.
These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically.
The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disc or hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant method.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. The several means described in the application and claims for a particular purpose may be implemented as instrumentality for that particular purpose or as a mechanism for that particular purpose.

Claims

CLAIMS:
1. A system for registering at least two input images, comprising: a graphical user interface (1) for receiving a user input indicating a portion of at least one of the input images; segmenting means (2) for segmenting an object represented by the input image, based on the portion, to obtain a segmented object; mask identifying means (3) for identifying a mask based on the segmented object, the mask being indicative of a portion of the input image whereon the registering is to be based; and registration means (4) for registering the images, taking into account the mask, for obtaining at least two registered images.
2. The system according to claim 1, the input images comprising projection images.
3. The system according to claim 2, further comprising subtracting means (5) for subtracting one registered image from another registered image.
4. The system according to claim 1, the image comprising a medical image and the object comprising an anatomical object.
5. The system according to claim 1, the mask identifying means (3) being arranged for either substantially including or substantially excluding the segmented object in the portion of the input image whereon the registration is to be based.
6. The system according to claim 1, the graphical user interface (1) being arranged for visualizing an output image based on the registered images; the graphical user interface (1) further being arranged for receiving further input indicating a further portion of at least one of the input images; the segmenting means (2) being arranged for segmenting a further object represented by the input image, based on the further portion; the mask identifying means (3) being arranged for identifying a further mask based on the further object; - the registration means (4) being arranged for registering the images, taking into account the further mask, for obtaining at least two further registered images.
7. The system according to claim 6, the graphical user interface (1) being arranged for receiving the further input by enabling a user to indicate at least one position in the visualized output image and identifying the further portion of at least one of the input images based on the position in the visualized output image.
8. The system according to claim 6, the mask identifying means (3) being arranged for identifying the further mask based on both the object and the further object.
9. The system according to claim 1, the mask defining a weight distribution, the weight distribution indicating how heavily different image elements of an input image should be taken into account by the registration means.
10. An image acquisition apparatus comprising a detector for acquiring a plurality of input images and a system according to claim 1.
11. A medical workstation comprising a system according to claim 1.
12. A method of interactively registering at least two input images, comprising: receiving a user input indicating a portion of at least one of the input images; segmenting an object represented by the input image, based on the portion; identifying a mask based on the object, the mask being indicative of a portion of the input image whereon the registering is to be based; and - registering the images, taking into account the mask, for obtaining at least two registered images.
13. A computer program product comprising instructions for causing a processor system to perform the steps of the method according to claim 12.
PCT/IB2010/052169 2009-05-20 2010-05-17 Interactive image registration WO2010134013A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09160741.6 2009-05-20
EP09160741 2009-05-20

Publications (1)

Publication Number Publication Date
WO2010134013A1 true WO2010134013A1 (en) 2010-11-25

Family

ID=42333556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/052169 WO2010134013A1 (en) 2009-05-20 2010-05-17 Interactive image registration

Country Status (1)

Country Link
WO (1) WO2010134013A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070244393A1 (en) * 2004-06-03 2007-10-18 Mitsuhiro Oshiki Image Diagnosing Support Method and Image Diagnosing Support Apparatus
US20070280556A1 (en) * 2006-06-02 2007-12-06 General Electric Company System and method for geometry driven registration
US20080025638A1 (en) * 2006-07-31 2008-01-31 Eastman Kodak Company Image fusion for radiation therapy
US20080049994A1 (en) * 2004-08-09 2008-02-28 Nicolas Rognin Image Registration Method and Apparatus for Medical Imaging Based on Multiple Masks
US20080063301A1 (en) * 2006-09-12 2008-03-13 Luca Bogoni Joint Segmentation and Registration
US20080298656A1 (en) * 2006-02-10 2008-12-04 Yim Peter J Precision subtraction computed tomographic angiography


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
D. LOECKX ET AL.: "Temporal subtraction of Thorax CR Images Using a Statistical Deformation Model", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 22, no. 11, November 2003 (2003-11-01)

Similar Documents

Publication Publication Date Title
EP2916738B1 (en) Lung, lobe, and fissure imaging systems and methods
JP5643304B2 (en) Computer-aided lung nodule detection system and method and chest image segmentation system and method in chest tomosynthesis imaging
CN109389655B (en) Reconstruction of time-varying data
US9471987B2 (en) Automatic planning for medical imaging
US9659390B2 (en) Tomosynthesis reconstruction with rib suppression
CN109419526B (en) Method and system for motion estimation and correction in digital breast tomosynthesis
JP6204927B2 (en) Method and computing system for generating image data
EP2449530B1 (en) Quantitative perfusion analysis
EP3079589B1 (en) Three dimensional (3d) pre-scan based volumetric image data processing
CN105074775A (en) Registration of medical images
US8682051B2 (en) Smoothing of dynamic data sets
JP7292942B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, METHOD AND PROGRAM
US20180064409A1 (en) Simultaneously displaying medical images
US20110274326A1 (en) Cardiac image processing and analysis
US8559758B2 (en) Apparatus for determining a modification of a size of an object
US11270434B2 (en) Motion correction for medical image data
WO2019092167A1 (en) Method of segmenting a 3d object in a medical radiation image
KR101028798B1 (en) Method for detection of hepatic tumors using registration of multi-phase liver CT images
JP2005270635A (en) Method for processing image and device for processing image
EP2449527B1 (en) Digital image subtraction
US11423554B2 (en) Registering a two-dimensional image with a three-dimensional image
WO2010134013A1 (en) Interactive image registration
Martin et al. Segmenting and tracking diaphragm and heart regions in gated-CT datasets as an aid to developing a predictive model for respiratory motion-correction
EP4231234A1 (en) Deep learning for registering anatomical to functional images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10726272

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10726272

Country of ref document: EP

Kind code of ref document: A1