WO2005101314A2 - Method and apparatus for processing images in a bowel subtraction system - Google Patents


Info

Publication number
WO2005101314A2
Authority
WO
WIPO (PCT)
Prior art keywords
boundary
image
value
colon
pixels
Prior art date
Application number
PCT/US2005/012325
Other languages
French (fr)
Other versions
WO2005101314A3 (en)
Inventor
Michael E. Zalis
Gavriel D. Kohlberg
Original Assignee
The General Hospital Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The General Hospital Corporation filed Critical The General Hospital Corporation
Priority to JP2007508461A priority Critical patent/JP2007532251A/en
Priority to EP05736176A priority patent/EP1735750A2/en
Publication of WO2005101314A2 publication Critical patent/WO2005101314A2/en
Publication of WO2005101314A3 publication Critical patent/WO2005101314A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/68 Analysis of geometric attributes of symmetry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20156 Automatic seed setting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30028 Colon; Small intestine
    • G06T 2207/30032 Colon polyp

Definitions

  • the present invention relates generally to colonoscopy techniques and more particularly to a system and method for processing an image of a bowel and for detecting polyps in the image.
  • a colonoscopy refers to a medical procedure for examining a colon to detect abnormalities such as polyps, tumors or inflammatory processes in the anatomy of the colon.
  • the colonoscopy is a procedure which includes a direct endoscopic examination of the colon with a flexible tubular structure known as a colonoscope which typically has imaging (e.g. fiber optic) or video recording capabilities at one end thereof.
  • the colonoscope is inserted through the patient's anus and directed along the length of the colon, thereby permitting direct endoscopic visualization of colon polyps and tumors and in some cases, providing a capability for endoscopic biopsy and polyp removal.
  • While colonoscopy provides a precise means of colon examination, it is time-consuming, expensive to perform, and requires great care and skill by the examiner. The procedure also requires thorough patient preparation including ingestion of purgatives and enemas, and usually moderate anesthesia. Also, since colonoscopy is an invasive procedure, there is a significant risk of injury to the colon and the possibility of colon perforation and peritonitis, which can be fatal.
  • a virtual colonoscopy makes use of images generated by computed tomography (CT) imaging systems (also referred to as computer assisted tomography (CAT) imaging systems).
  • CT computed tomography
  • CAT computer assisted tomography
  • a computer is used to produce an image of cross-sections of regions of the human body by measuring attenuation of X-rays through a cross-section of the body.
  • A CT imaging system generates two-dimensional images of the inside of an intestine. A series of such two-dimensional images can be combined to provide a three-dimensional image of the colon.
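As an illustration (not part of the patent disclosure), combining a series of two-dimensional slices into a three-dimensional volume can be sketched with NumPy; the tiny 4 x 4 arrays below are toy stand-ins for real CT image data:

```python
import numpy as np

# A series of two-dimensional axial CT slices can be combined into a single
# three-dimensional volume by stacking them along a new (slice) axis.
slices = [np.full((4, 4), float(i)) for i in range(3)]  # three toy slices
volume = np.stack(slices, axis=0)                       # shape (slice, row, column)
```

Indexing the resulting volume as `volume[slice, row, column]` then gives access to any voxel of the reconstructed colon.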
  • colons tend to have folded regions (or more simply, "folds").
  • the folds are sometimes difficult to distinguish from the bowel contents and thus are sometimes inadvertently labeled or "tagged” as bowel contents.
  • When this occurs and the bowel contents are subtracted, the fold region is also subtracted. This results in the processed image (i.e. the image of the colon from which contents have been digitally removed) having gaps or other artifacts due to the unintentional subtraction of a fold. Such gaps or artifacts are distracting to a person (e.g. a doctor or other medical practitioner) examining the image.
  • a fold processing system includes a fold processor which detects folds in a bowel and identifies the fold as a portion of the bowel in an image.
  • the fold processing system identifies a boundary in the digital image (e.g. an air-water boundary) and uses symmetry to determine whether a fold exists in the image. If a fold is found, the fold is identified (or labeled or tagged) as being a portion of the bowel rather than bowel contents. Thus, when the bowel contents are digitally subtracted from the image, the fold region is left in the image.
  • a system for processing a fold region in a digital image of a colon includes a boundary processor which receives a first digital image and identifies in the image a boundary between a first substance having a first density and a second substance having a second, different density, and a symmetry processor which processes one or more portions of the image about the boundary to determine whether symmetry exists about the boundary and identifies regions in the image having symmetry about the boundary.
  • a system and technique for identifying a colon centerline in an image of an uncleansed colon includes identifying a first point (or seed point) known to be within the colon. The regions around the seed point are then labeled (e.g. identified as containing either high density or low density material). Once the image regions are labeled, the colon region is identified by finding the seed point (i.e. the point known to be in the colon) and determining what label is assigned to the seed point. Regions around the seed point with the same label are then identified as being within the colon. The next colon image is processed by labeling regions in the image (e.g. as containing either high density or low density material).
  • the seed point may be manually identified or automatically identified. Automatic identification may be accomplished by first processing an image corresponding to an inferior aspect of the colon and using information concerning anatomical structure which should appear in the image.
  • a system and technique for detecting objects (including polyps) in a colon image dataset which has been cleaned electronically includes separating the colon surface from the rest of the image dataset.
  • the portion of the image data set which corresponds to the separated colon surface is then processed to generate a planar map of the colon in which the value of each pixel in the planar map corresponds to its radial distance from a central axis in the planar map.
  • a segregation of features in the planar map of the colon (a process referred to as segmentation) is then performed.
  • Objects, including polyps, identified by the segmentation process can then be described and classified.
  • Statistical or correlation techniques can be used to identify and/or classify objects (including polyps) within the image.
  • FIG. 1 is a block diagram of a system for digital bowel subtraction and automatic polyp detection
  • FIGs. 2 and 2A are a series of diagrams illustrating a technique for detecting and processing a bowel fold
  • FIG. 3 is a flow diagram showing a process for processing a bowel fold region in an image
  • FIGs. 4 and 4A are a series of flow diagrams showing a process for finding a centerline in a colon
  • FIG. 5 is a diagram of a colon having a centerline
  • Fig. 6 is a cross sectional view of a colon taken across lines 6-6 on Fig. 5;
  • Figs. 7 and 7A are cross sectional views of a colon taken across lines 6-6 on Fig. 5;
  • Fig. 8 is a pair of images aligned to extend the colon identification from a first image to a second image
  • Fig. 9 is a diagram of a colon showing the direction of processing used to define colon regions in sequential images of the colon;
  • Fig. 10 is a diagram of a colon having a centerline
  • Fig. 10A is a colon map generated after a colon centerline has been identified and which can be used for polyp detection.
  • Figs. 11-11C are a series of diagrams which illustrate the mapping between colon centerline and 3D datasets.
  • DBSP digital bowel subtraction processor
  • APDP automated polyp detection processor
  • a computed tomography (CT) system generates signals which can be stored as a matrix of digital values in a storage device of a computer or other digital processing device. As described herein, the CT image is divided into a two-dimensional array of pixels, each represented by a digital word.
  • the two-dimensional array of pixels can be combined to form a three-dimensional array of pixels.
  • the value of each digital word corresponds to the intensity of the image at that pixel.
  • the array of digital data values is generally referred to as a "digital image" or more simply an "image" and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of density values in a scene.
  • the term "original image" refers to an image provided from the representational matrices that are output from a CT or other type of scanner.
  • Each of the numbers in the array can be expressed as a digital word typically referred to as a "picture element" or a "pixel" or as "image data."
  • the image may be divided into a two dimensional array of pixels with each of the pixels represented by a digital word.
  • a pixel represents a single instantaneous value which is located at specific spatial coordinates in the image.
  • It should be appreciated that the digital word is comprised of a certain number of bits and that the techniques of the present invention can be used on digital words having any number of bits.
  • the digital word may be provided as an eight-bit binary value, a twelve-bit binary value, a sixteen-bit binary value, a thirty-two-bit binary value, a sixty-four-bit binary value or as a binary value having any other number of bits (e.g. one hundred twenty-eight or more bits). More or fewer than each of the above-specified numbers of bits may also be used.
  • the techniques described herein may be applied equally well to either grayscale images or color images.
  • each digital word corresponds to the intensity of the pixel and thus the image at that particular pixel location.
  • each pixel being represented by a predetermined number of bits (e.g. eight bits) which represent the color red (R bits), a predetermined number of bits (e.g. eight bits) which represent the color green (G bits) and a predetermined number of bits (e.g. eight bits) which represent the color blue (B-bits) using the so-called RGB color scheme in which a color and luminance value for each pixel can be computed from the RGB values.
  • R bits color red
  • G bits color green
  • B bits color blue
  • RGB red, green, blue
  • HSB hue, saturation, brightness
  • CMYK cyan, magenta, yellow, black
  • the techniques described herein are applicable to a plurality of color schemes including but not limited to the above mentioned RGB, HSB, CMYK schemes as well as the Luminosity and color axes a & b (Lab) YUV color difference color coordinate system, the Karhunen-Loeve color coordinate system, the retinal cone color coordinate system and the X, Y, Z scheme.
  • An "image region” or more simply a “region” is a portion of an image. For example, if an image is provided as a 32 X 32 pixel array, a region may correspond to a 4 X 4 portion of the 32 X 32 pixel array.
  • the local window is thought of as "sliding" across the image because the local window is placed above one pixel, then moves and is placed above another pixel, and then another, and so on. Sometimes the "sliding" is made in a raster pattern. It should be noted, though, that other patterns can also be used.
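The raster-pattern sliding window described above can be sketched as a small generator; `slide_raster` and the window size are illustrative names and values, not terms from the patent:

```python
import numpy as np

def slide_raster(image, win=3):
    """Yield (row, col, window) for a win x win local window slid across the
    image in a raster pattern: left to right, then top to bottom."""
    h, w = image.shape
    half = win // 2
    for r in range(half, h - half):          # top to bottom
        for c in range(half, w - half):      # left to right within each row
            yield r, c, image[r - half:r + half + 1, c - half:c + half + 1]

img = np.arange(25).reshape(5, 5)
centers = [(r, c) for r, c, _ in slide_raster(img)]
```

Each yielded window can then be examined (thresholded, correlated with a kernel, etc.) before the window moves on to the next pixel.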
  • While the detection techniques described herein are described in the context of detecting polyps in a colon, those of ordinary skill in the art should appreciate that the detection techniques can also be used to search for and detect structures other than polyps and that the techniques may find application in regions of the body other than the bowel or colon.
  • One approach to subtracting contents from the colon is to first identify the intersection of a morphologic dilation of an air region, a dilated fecal matter region and a dilated edge region (which can be found by using a gradient finding function and morphologic dilation). The system then approximates the intersection of these three regions to be the residue that preferably would be removed. While this technique provides satisfactory results in terms of digitally subtracting the contents of the bowel from an image, it can lead to a problem of over subtraction, because sometimes folds that should remain in the image pass through the residue area that is removed thus causing the folds to also be removed.
  • volume averaging can cause the soft tissue region (which should be expressed as image regions having low pixel values) to be assigned pixel values which approximate the pixel values which represent bowel contents (i.e. image regions having high pixel values).
  • One approach which allows subtraction of the bowel contents while allowing the fold regions to remain in the image is referred to as the fold symmetry processing approach.
  • In this approach, the symmetry of the objects in the image is used to identify fold regions in the image.
  • In the symmetry approach it is recognized that if a point lies in the intersection of the residue (also referred to as bowel contents) and a fold, then it should be surrounded by soft-tissue-like pixels on all sides. Thus, the variance between this pixel and the pixels around it should be relatively low.
  • Otherwise (i.e. for a point not lying in a fold), the variance of the pixel and the surrounding pixels should be relatively high.
  • the process involves searching for pixels having a relatively low variance. These pixels are then put back into the image (rather than subtracted).
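A minimal sketch of this low-variance search, assuming a toy image; the function name, window size, and variance threshold are illustrative (the patent does not give numeric values):

```python
import numpy as np

def low_variance_mask(image, boundary_mask, win=5, var_thresh=100.0):
    """Flag boundary pixels whose surrounding win x win window has low variance.

    Low variance near the air/contents boundary suggests soft-tissue-like
    pixels on all sides, i.e. the point lies where a fold crosses the residue
    and should be put back into the image rather than subtracted."""
    half = win // 2
    out = np.zeros_like(boundary_mask, dtype=bool)
    for r, c in zip(*np.nonzero(boundary_mask)):
        if half <= r < image.shape[0] - half and half <= c < image.shape[1] - half:
            window = image[r - half:r + half + 1, c - half:c + half + 1]
            out[r, c] = window.var() < var_thresh
    return out

# Toy slice: air (0) above the boundary, tagged contents (200) below, and a
# five-pixel-wide fold of uniform soft tissue (90) crossing the boundary.
img = np.zeros((11, 11))
img[6:, :] = 200.0
img[:, 3:8] = 90.0
boundary = np.zeros((11, 11), dtype=bool)
boundary[5, :] = True
fold_pixels = low_variance_mask(img, boundary)
```

Only the pixel at the center of the fold is flagged: its window sees uniform soft-tissue values, while windows straddling the air/contents transition have high variance.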
  • a system for performing virtual colonoscopy 10 includes a computed tomography (CT) imaging system 12 having a database 14 coupled thereto.
  • CT computed tomography
  • the CT system 12 produces two-dimensional images of cross-sections of regions of the human body by measuring attenuation of X-rays through a cross-section of the body.
  • the images are stored as digital images or image data in the image database 14.
  • a series of such two-dimensional images can be combined using known techniques to provide a three-dimensional image of a colon.
  • a user interface 16 allows a user to operate the CT system and also allows the user to access and view the images stored in the image database.
  • a digital bowel subtraction processor (DBSP) 18 is coupled to the image database 14 and the user interface 16.
  • the DBSP receives image data from the image database and processes the image data to digitally remove the contents of the bowel from the digital image.
  • the DBSP can then store the image back into the image database 14.
  • the particular manner in which the DBSP processes the images to subtract or remove the bowel contents from the image will be described in detail below in conjunction with Figs. 2-6. Suffice it here to say that since the DBSP digitally subtracts or otherwise removes the contents of the bowel from the image provided to the DBSP, the patient undergoing the virtual colonoscopy need not purge the bowel in the conventional manner, which is known to be unpleasant to the patient.
  • the DBSP 18 may operate in one of at least two modes.
  • the first mode is referred to as a raster mode in which the DBSP utilizes a map or window which is moved in a predetermined pattern across an image.
  • the pattern corresponds to a raster pattern.
  • a threshold process is used in which the window scans the entire image while threshold values are applied to pixels within the image in a predetermined sequence. The threshold process assesses whether absolute threshold values have been crossed and the rate at which they have been crossed.
  • the raster scan threshold process is used to identify pixels having values which represent low density regions (e.g. air) sometimes referred to as "air pixels" which are proximate (including adjacent to) pixels having values which represent matter or substance (e.g. bowel contents).
  • The processor examines each of the pixels to locate native un-enhanced soft tissue. As a boundary between soft tissue (e.g. bowel wall) and bowel contents is established, pixels are reset to predetermined values depending upon the side of the boundary on which they appear.
  • native un-enhanced soft tissue e.g. bowel wall
  • the second mode of operation for the DBSP 18 is the so-called gradient processor mode.
  • a soft tissue threshold (ST) value, an air threshold (AT) value and a bowel threshold (BT) value are selected.
  • a first mask is applied to the image and all pixels having values greater than the bowel threshold value are marked.
  • a gradient is applied to the pixels in the images to identify pixels in the image which should have air values and bowel values.
  • the gradient function identifies regions having rapidly changing pixel values. From experience, one can select bowel/air and soft tissue/air transition regions in an image by appropriate selection of the gradient threshold.
  • the gradient process uses a second mask to capture a first shoulder region in a transition region after each of the pixels having values greater than the BT value have been marked.
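A rough sketch of the first two steps of the gradient-processor mode: marking pixels above the bowel threshold (BT), then selecting rapidly-changing transition regions by gradient magnitude. The threshold values are illustrative, and the soft tissue (ST) and air (AT) thresholds are omitted for brevity:

```python
import numpy as np

def gradient_mode_masks(image, bt=150.0, grad_thresh=40.0):
    """Sketch of the gradient-processor mode.

    First mask: pixels above the bowel threshold BT (tagged contents).
    Second step: gradient magnitude picks out rapidly changing pixel values,
    i.e. the bowel/air and soft tissue/air transition (shoulder) regions.
    Both thresholds would be tuned empirically on real scans."""
    tagged = image > bt                      # first mask: high-density contents
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    transition = grad_mag > grad_thresh      # rapidly changing pixel values
    return tagged, transition

img = np.zeros((8, 8))
img[4:, :] = 200.0                           # air above, tagged contents below
tagged, transition = gradient_mode_masks(img)
```

On this toy image, the `tagged` mask covers the contents and the `transition` mask straddles the air/contents interface where the gradient is large.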
  • a mucosa insertion processor 19a is used to further process the sharp boundary to lessen the impact of or remove the visually distracting regions.
  • the sharp edges are located by applying a gradient operator to the image from which the bowel contents have been extracted.
  • the gradient operator may be similar to the gradient operator used to find the boundary regions in the gradient subtracter approach described herein.
  • the gradient threshold used in this case typically differs from that used to establish a boundary between bowel contents and a bowel wall.
  • The particular gradient threshold to use can be empirically determined. Such empirical selection may be accomplished, for example, by visually inspecting the results of gradient selection on a set of images detected under similar scanning and bowel preparation techniques and adjusting gradient thresholds manually to obtain the appropriate gradient (tissue transition selector) result.
  • the sharp edges end up having the highest gradients in the subtracted image.
  • a filter is then applied to these boundary (edge) pixels in order to "smooth" the edge.
  • the filter is provided having a constrained Gaussian filter characteristic. The constraint is that the smoothing is allowed to take place only over a predetermined width along the boundary.
  • the predetermined width should be selected such that the smoothing process does not obscure any polyp or other bowel structures of possible interest.
  • the predetermined width corresponds to a width of less than ten pixels.
  • the predetermined width corresponds to a width in the range of two to five pixels and in a most preferred embodiment, the width corresponds to a width of three pixels. The result looks substantially similar and in some cases indistinguishable from the natural mucosa seen in untouched bowel wall, and permits an endoluminal evaluation of the subtracted images.
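The constrained smoothing can be sketched as a Gaussian blur blended into the image only within a narrow band (here three pixels) around the detected edge pixels. The kernel size, sigma, and band-growing step are assumptions for illustration, not values from the patent:

```python
import numpy as np

def smooth_edges(image, edge_mask, band=3, sigma=1.0):
    """Smooth only within a `band`-pixel-wide region around the edge pixels,
    approximating the constrained Gaussian filter described above."""
    # Grow the edge mask into a band of the requested width.
    bmask = edge_mask.copy()
    for _ in range(band // 2):
        grown = bmask.copy()
        grown[1:, :] |= bmask[:-1, :]; grown[:-1, :] |= bmask[1:, :]
        grown[:, 1:] |= bmask[:, :-1]; grown[:, :-1] |= bmask[:, 1:]
        bmask = grown
    # Separable Gaussian blur of the whole image, then blend inside the band
    # only, so regions away from the boundary are left untouched.
    k = np.exp(-0.5 * (np.arange(-2, 3) / sigma) ** 2)
    k /= k.sum()
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0,
                                  image.astype(float))
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1,
                                  blurred)
    out = image.astype(float).copy()
    out[bmask] = blurred[bmask]
    return out

img = np.zeros((9, 9)); img[:, 5:] = 100.0          # a sharp vertical edge
edges = np.zeros((9, 9), dtype=bool); edges[:, 4:6] = True
sm = smooth_edges(img, edges)
```

Because the blend is restricted to the band, the sharp step is softened into a mucosa-like transition while pixels outside the band retain their original values.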
  • the digital bowel subtraction processor 18 also includes a fold processor 19b. It has been recognized that during the process to subtract from the image regions which have been identified or “tagged” as bowel contents (hereinafter "tagged regions"), it is possible to subtract regions of the colon which correspond to a fold. This occurs because during the subtraction process, the system may inadvertently subtract soft tissue elements (represented by low pixel values) that are bounded by high density tagged material (represented by high pixel values).
  • the fold processor 19b helps prevent fold regions from being removed from the image.
  • the fold processor includes a boundary processor and a symmetry processor which identify a boundary and utilizes a symmetry characteristic of the fold region of the image to identify the fold and thus prevent the fold from being subtracted from the image.
  • the system 10 also includes an automated polyp detection processor (APDP) 20.
  • the APDP 20 receives image data from the image database and processes the image data to detect and /or identify polyps, tumors, inflammatory processes, or other irregularities in the anatomy of the colon.
  • the APDP 20 can thus pre-screen each image in the database 14 such that an examiner (e.g. a doctor) need not examine every image but rather can focus attention on a subset of the images possibly having polyps or other irregularities.
  • Since the CT system generates a relatively large number of images for each patient undergoing the virtual colonoscopy, this pre-screening allows the examiner more time to focus on those images in which the examiner is most likely to detect a polyp or other irregularity in the colon.
  • the particular manner in which the APDP 20 processes the images to detect and /or identify polyps in the images will be described in detail below in conjunction with Figs. 7-9. Suffice it here to say that the APDP 20 can be used to process two-dimensional or three-dimensional images of the colon. It should also be noted that APDP 20 can process images which have been generated using either conventional virtual colonoscopy techniques (e.g. techniques in which the patient purges the bowel prior to the CT scan) or the APDP 20 can process images in which the bowel contents have been digitally subtracted (e.g. images which have been generated by DBSP 18).
  • polyp detection system 20 can provide results generated thereby to an indicator system which can be used to annotate (e.g. by addition of a marker, icon or other means) or otherwise identify regions of interest in an image (e.g. by drawing a line around the region in the image, or changing the color of the region in the image) which has been processed by the detection system 20.
  • an image of a bowel cross section 100 includes a bowel wall 101 which defines a perimeter of the bowel.
  • the bowel includes a fold 102.
  • Portions of the bowel image 104a, 104b correspond to an air region of the bowel 100 and portions of the bowel image 106a, 106b correspond to those portions of the bowel having contents therein.
  • the air regions 104a, 104b appear as low density material (and thus can be represented by pixels having a relatively low value, for example) while the bowel contents are represented as a high density material (and thus can be represented, for example, by pixels having a value which is relatively high compared with the value of the pixels representing the low density regions).
  • a boundary 108 exists between the low density regions 104a, 104b and the high density regions 106a, 106b.
  • the fold region 102 is relatively long and narrow and is immersed in the high density material 106. Due to the averaging of pixel values which occurs during the imaging process, the fold region 102 (or some portions of the fold region 102) can be artificially designated as a region of high density material. Thus, if left with such a designation, the fold region would be subtracted as part of the high density region 106 which represents bowel contents.
  • a point that lies along the boundary 108 in the residue 106 should have both air (low pixel values) and fecal matter (high pixel values) around it.
  • Away from the fold, there is no symmetry about the boundary line 108, in that the pixel values in the region 106 below the boundary line 108 have a value which is substantially different (e.g. relatively high) compared with the pixel values in the region 104a or 104b above the boundary line 108.
  • a selected point which is not in the fold region 102 should have a relatively high variance of pixel values around it.
  • the pixels having a low variance can be identified by forming a window 110a and correlating pixel values in the window 110a with a kernel.
  • the boundary 108 has a width and thus the window 110a must be provided having a size which is large enough to span the width of the boundary 108.
  • the boundary 108 is typically a minimum of 3 to 5 pixels wide.
  • the size of the window 110 may be empirically determined. It should be appreciated of course that the window should fit within the expected width of a fold (which is typically in the range of about five to ten pixels but which could be larger than ten or smaller than five in some cases).
  • the window 110a is generated and slides or moves across the image along the boundary 108.
  • When the window 110a is located so that it includes a portion of both regions 104a and 106a, the pixels below the boundary 108 have a relatively high value compared with the value of the pixels above the boundary 108. Thus, the correlation of the pixels above and below the boundary 108 results in a relatively high correlation value which indicates that there is no symmetry about the boundary 108.
  • the correlation of the pixels above and below the boundary results in a relatively low correlation value since the pixels above and below the boundary both correspond to low density pixels.
  • the correlation of the pixels above and below the boundary again results in a relatively high correlation value since the pixel values above the boundary 108 correspond to low density material pixel values while the pixel values below the boundary 108 correspond to high density material pixel values.
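The comparison of pixel values on either side of the boundary described above can be sketched as a simple mean-difference test; the window half-sizes and tolerance are illustrative, and a full implementation would correlate the window with a kernel as the text describes:

```python
import numpy as np

def symmetric_about_boundary(image, row, col, half_h=2, half_w=2, tol=30.0):
    """Compare mean pixel values just above and just below the boundary pixel
    at (row, col).

    Similar means on both sides indicate symmetry about the boundary, i.e. a
    likely fold crossing it; a large difference indicates the ordinary
    air/contents transition (air above, high-density contents below)."""
    above = image[row - half_h:row, col - half_w:col + half_w + 1]
    below = image[row + 1:row + 1 + half_h, col - half_w:col + half_w + 1]
    return abs(above.mean() - below.mean()) < tol

# Toy slice: air (0) above the boundary at row 4/5, contents (200) below,
# and a five-pixel-wide soft-tissue fold (90) crossing the boundary.
img = np.zeros((9, 9))
img[5:, :] = 200.0
img[:, 2:7] = 90.0
```

At a fold pixel the window sees soft tissue on both sides of the boundary and reports symmetry; away from the fold it sees air above and contents below and reports none.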
  • the returned pixels can be dilated. This results in more of each desired fold being present in the image while at the same time providing a relatively thin structure having a normal or natural appearance in the final subtracted image.
  • FIG. 3 is a flow diagram showing the processing performed by a processing apparatus which may, for example, be provided as part of a virtual colonoscopy system 10 such as that described above in conjunction with FIG. 1 to perform digital bowel subtraction including detection of fold regions and automated polyp detection.
  • the rectangular elements (typified by element 120 in FIG. 3), are herein denoted “processing blocks" and represent computer software instructions or groups of instructions.
  • the processing blocks can represent functions performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC).
  • ASIC application specific integrated circuit
  • the flow diagrams do not depict the syntax of any particular programming language. Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention.
  • Referring to FIG. 3, a process for detecting and processing fold regions in an image of a bowel begins by identifying an air boundary as shown in block 100.
  • While reference is sometimes made herein to an air-water boundary, it should be appreciated that reference to any specific boundary (e.g. a boundary between air and water) is intended to be exemplary and is made for reasons of promoting clarity in the description and is not intended to be and should not be construed as limiting. The boundary may be between air and some other matter or substance (not necessarily water), or more generally between any two substances having different densities.
  • a correlation matrix is applied to the air-water boundary of an original image.
  • The correlation matrix allows identification of those regions of the image having symmetry about the air-water boundary. Such regions correspond to low variance regions of the image and these regions are identified as shown in block 104. Once the low variance regions of the image are identified, a regional morphologic dilation of these regions is performed as shown in block 106. By recognizing that the low variance regions correspond to fold regions, the fold region is in essence being dilated. This dilation process results in more of each desired fold being present in the image while at the same time providing an image having a fold structure with a normal or natural appearance when viewed by a person examining the image.
  • a subtraction mask is used to remove the bowel contents from the image.
  • regions covered by the subtraction mask are removed.
  • the dilation operation performed in block 106 results in the removal of pixels (representing the fold region) from the subtraction mask.
  • the fold regions are left in the image after the bowel contents are subtracted from the image.
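The interaction between the dilated fold pixels and the subtraction mask can be sketched as follows; the helper names are hypothetical, and the simple 4-connected dilation stands in for the morphologic dilation of block 106:

```python
import numpy as np

def dilate(mask, it=1):
    """Simple 4-connected binary dilation (a stand-in for the regional
    morphologic dilation performed in block 106)."""
    m = mask.copy()
    for _ in range(it):
        g = m.copy()
        g[1:, :] |= m[:-1, :]; g[:-1, :] |= m[1:, :]
        g[:, 1:] |= m[:, :-1]; g[:, :-1] |= m[:, 1:]
        m = g
    return m

def protect_folds(subtraction_mask, fold_seed_mask, it=1):
    """Dilate the detected low-variance (fold) pixels and remove them from the
    subtraction mask, so the fold survives the digital subtraction."""
    return subtraction_mask & ~dilate(fold_seed_mask, it)

sub = np.zeros((7, 7), dtype=bool); sub[3:, :] = True   # tagged bowel contents
fold = np.zeros_like(sub); fold[4, 3] = True            # one detected fold pixel
out = protect_folds(sub, fold)
```

The fold pixel and its dilated neighborhood are carved out of the subtraction mask, while the rest of the tagged contents remain scheduled for removal.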
  • a process for identifying a colon centerline when the colon is not cleansed begins with block 130 in which a first image is selected.
  • the first selected image is also sometimes referred to herein as the index image.
  • Processing then flows to block 132 in which a first point (referred to herein as a seed point) known to be within the colon is identified.
  • the seed point may be identified manually (e.g. by a user) or in some embodiments a centerline processor may automatically determine the seed point. Automatic detection of the seed point may be accomplished, for example, by having the system first process an image corresponding to the most inferior aspect of the colon (i.e. the image which contains the rectum/anus portion of the colon).
  • the system can automatically select a seed point. Once the seed point is identified in the index image, the entire colon can then be identified in the image (including bowel contents, air and some soft tissue).
  • in processing block 134, a simple subtraction is then performed.
  • the subtraction can be accomplished using a threshold and dilation technique.
  • the regions are then labeled (e.g. identified as containing either high density or low density material) as shown in block 138.
  • the colon region is identified by finding the seed point (i.e. the point known to be in the colon) and determining what label is assigned to the seed point.
  • the first image is now processed.
  • processing then flows to decision block 141 in which a determination is made as to whether there are any more images to process. If a decision is made that all images have been processed, then centerline processing ends. If on the other hand, a decision is made that all images have not yet been processed, then processing flows to processing block 142 in which the next image is selected. Also, the image which was last processed is identified as the "current image."
  • Processing then proceeds to block 144 in which a simple subtraction is then performed on the next image (i.e. the image currently being processed).
  • the subtraction can be accomplished using a threshold and dilation technique.
  • the image is subject to a threshold operation to assign air and non-air values (or to simply assign values which indicate regions of different densities) and then the regions are labeled (e.g. as either high density or low density material) as shown in blocks 146 and 148.
  • the colon region is identified in the next image by using the colon location information in the current image. In this manner, the center of the colon in each subsequent image can be found thereby allowing a colon centerline to be established.
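The slice-to-slice tracking described in these blocks can be sketched as follows. The threshold value, the overlap rule, and the function names are assumptions made for illustration, not the patented implementation.

```python
import numpy as np
from scipy import ndimage

def track_colon_through_slices(slices, seed, air_thresh=-500):
    # Threshold each slice into low-density regions, label them, and keep
    # the region that contains the seed (in the index image) or that best
    # overlaps the colon region found in the previous slice.
    colon_masks = []
    prev_mask = None
    for img in slices:
        labels, _ = ndimage.label(img < air_thresh)
        if prev_mask is None:
            colon_label = labels[seed]                   # label assigned to seed point
        else:
            overlap = labels[prev_mask]                  # labels under previous colon mask
            overlap = overlap[overlap > 0]
            if overlap.size == 0:
                break                                    # colon lost; stop (or reverse direction)
            colon_label = np.bincount(overlap).argmax()  # most-overlapping region
        mask = labels == colon_label
        colon_masks.append(mask)
        prev_mask = mask
    # the centroid of each mask approximates a centerline point for that slice
    centerline = [tuple(np.argwhere(m).mean(axis=0)) for m in colon_masks]
    return colon_masks, centerline
```

Joining the per-slice centroids yields the colon centerline without any prior cleansing of the bowel.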
  • a colon 160 having bowel contents 162 therein is processed in the manner described above in conjunction with Figs. 4 and 4A to provide a centerline 164.
  • the seed point is manually provided (e.g. by a user)
  • any of the points 168a - 168d could be used as the seed point.
  • the process begins with the inferior-most image 170 (also referred to as the index image), and a point proximate the center of the image 170 corresponding to the rectum/anus 171 is selected as the seed point (i.e. a point in the image known to be part of the colon).
  • with image 170 selected as the first image, the process proceeds sequentially through each image moving in the direction of image 172.
  • image 172 is shown to include a first object 174 and a second object 176. It should be appreciated that both objects 174, 176 include high density regions 178, 180. Since the bowel centerline is being found without cleansing the bowel, it is possible that either object 174, 176 corresponds to the bowel and the other object corresponds to some other structure such as bone, for example. Structure 176 also includes a region 182 having a density lower than other objects or regions in the image (e.g. an air region 182) and a boundary 184.
  • the air region 182 has a boundary 190 and to help distinguish the colon from other material, the air region boundary 190 is dilated as indicated by boundary 190a.
  • the content region 180 has a boundary 192 and to help distinguish the colon from other material, the content region boundary 192 is dilated as indicated by boundary 192a.
  • the region 178 also has a boundary 194 which is dilated as indicated by boundary 194a.
  • as shown in FIG. 7A, the dilation of the air and high density regions results in the generation of an overlap region 196.
  • a union of each point in each set is then made to identify the entire colon. It should be appreciated that the points must be contiguous to be included as points in the union of sets. In this manner, structure 176 is distinguished as a structure which is separate from structure 178.
  • a pair of images 200, 202 in which structures 176', 178' have been identified are processed to find a centerline.
  • the current image 200 corresponds to an image in which a point 204 in the colon has already been identified.
  • the point 204 could have been identified either manually or automatically as described above.
  • the alignment should be considered conventional in that the images are obtained in the same scanning procedure and will have assigned to them a fixed position within the frame of reference established by the CT scanner.
  • the point 204 lies within structure 176". Since point 204 is known to be within the colon and the distance D which separates the two images 200, 202 along the colon is known to be small, structure 176" in image 202 is identified as a colon region.
  • the images are processed sequentially in a given direction as shown in Fig. 9. It should be appreciated that in some instances, e.g. when a turn in the colon is reached, it is necessary to reverse the direction in which the images are processed. This is important because sometimes the colon makes 180 degree turns (also referred to as bends, or flexures) and in order to correctly map the anatomy with reference to the colon, the system must change direction to follow the anatomic (as opposed to spatial) direction of the colon. Upon reversing, the system needs to continue in the 'reversed' direction at least the distance equivalent to the maximum diameter of the colon before determining that it has reached its terminus.
  • as shown in Figs. 10 and 10A, by finding a centerline 222 of a colon 220, the colon can be "unfolded" as shown by reference number 220a in Fig. 10A.
  • polyp detection techniques can be applied to the unfolded colon 220a.
  • the centerline has two primary uses: 1) improving subtraction (by allowing the system to clearly follow fold anatomy in three dimensions); and 2) improving polyp detection. The latter is improved because the detection system can operate within a frame of reference which is related to the colon anatomy. If all of the colon anatomy is laid out on a 'plane', as is possible with a centerline, then potential lesions can be normalized with respect to scale, and simultaneously, orientation (rotation) of target lesions can be minimized.
  • the locations of polyps (e.g. polyps 226a, 226b) are identified in the three dimensional image of the colon. An approach to polyp detection using a map of an unfolded colon is next described.
  • the first step in detection is to separate the colon surface from the rest of the image dataset.
  • a useful method to map the colon surface is to calculate a colon centerline (i.e. a three-dimensional curve that runs the length of the colon along the center of its lumen).
  • the centerline is a useful construct for evaluation of a wide range of image processing problems, and can be calculated by use of a morphologic thinning algorithm and the so-called medial axis transform (MAT).
  • in a morphologic thinning algorithm, the air within the colon lumen is taken as the object of interest and this column of air is iteratively eroded until a single line segment remains along the central axis of the colon.
  • the medial axis transform is a complementary algorithm wherein the distance between a set of regularly spaced points within the column and the outer boundary of the air column is tabulated. Points associated with greater distance to the boundary are assigned higher values, and the medial axis is taken as the set of points with the maximum distance to the outer boundary. While the MAT is more computationally expensive, it is generally a robust approach.
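A minimal sketch of the medial axis transform idea described above, using a Euclidean distance transform on a binary lumen mask; the function name and the 3x3 local-maximum test are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def medial_axis_points(lumen_mask):
    # Tabulate the distance from every in-lumen point to the outer boundary
    # of the air column; the medial axis is the ridge of locally maximal
    # distances (points farthest from the boundary get the highest values).
    dist = ndimage.distance_transform_edt(lumen_mask)
    local_max = dist == ndimage.maximum_filter(dist, size=3)
    return np.argwhere(local_max & (dist > 0))
```

For a straight tubular mask, the returned points fall along the central axis, consistent with the description above.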
  • a radial distance signature is calculated along the length of the centerline.
  • the radial signature is a graph representing the distance from center of an object to its boundary as a radial line segment is swept through 360°.
  • the distance from the centerline to the colon mucosal surface is measured at each fixed angular interval around the centerline, and this process is repeated along the length of the centerline curve.
  • the result of this procedure is a planar map of the colon where the value of each pixel represents its radial distance from the central axis.
  • the process is akin to straightening the colon, and slicing it open longitudinally, as for a pathology specimen. This approach has been employed to map the colon in both phantom and clinical cases for evaluation for CTC.
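The radial distance signature described above can be sketched as follows. This brute-force ray-marching version is illustrative only; the function name and sampling scheme are assumptions, not the patented implementation.

```python
import numpy as np

def radial_signature(mask, center, n_angles=36):
    # Sweep a radial line segment through 360 degrees and record, for each
    # angle, the distance from the center to the object's boundary.
    h, w = mask.shape
    max_r = int(np.hypot(h, w))
    sig = np.zeros(n_angles)
    for k, theta in enumerate(np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)):
        for r in range(1, max_r):
            y = int(round(center[0] + r * np.sin(theta)))
            x = int(round(center[1] + r * np.cos(theta)))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                sig[k] = r - 1          # last radius still inside the object
                break
    return sig
```

Repeating this measurement at each stop along the centerline, and stacking the signatures, yields the planar colon map in which each pixel value is a radial distance from the central axis.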
  • this method can be viewed akin to mathematically flooding the contour of the colon map with water, and evaluating the contour lines of features that remain above the water surface.
  • the water is allowed to rise until all objects are inundated, and the algorithm tabulates the position of the contour lines just before the waters from different regions of the map are allowed to admix.
  • the result of this process is a set of continuous boundaries surrounding the separate objects on the map.
  • watershed segmentation is usually applied to the gradient transform of an image.
  • the gradient transform is a rendering of the image where edges of shapes are accentuated.
  • the edge accentuation is calculated by convolving the image map with an operator matrix, such as the Sobel operator.
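A gradient transform of this kind might be sketched with the Sobel operator as follows; this is a simplified stand-in (function name assumed) for the convolution described above.

```python
import numpy as np
from scipy import ndimage

def gradient_transform(height_map):
    # Convolve the planar colon map with the Sobel operator along each axis
    # and combine the responses into an edge-accentuating gradient magnitude.
    gx = ndimage.sobel(height_map, axis=1)
    gy = ndimage.sobel(height_map, axis=0)
    return np.hypot(gx, gy)
```

Flat regions of the map produce zero response, while transitions in radial height (the edges of features along the mucosa) are accentuated, which is what the watershed step then segments.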
  • the edges to be accentuated are the transitions of radial height of each feature along the colon mucosa.
  • the result of these operations is identification of a set of objects situated along the mapped internal surface of the colon.
  • the boundary of each object is composed of the points at which the local colon surface diverges sharply inward. What follows are two methods to describe and classify these objects identified by this process of segmentation.
  • the first method is based on statistical description (i.e. it is a statistical approach to polyp detection).
  • the centroid is the weighted average of the object, corresponding to its center of mass.
  • the mean and variance are respectively, the standard statistical average and second moment about this average.
  • the internal texture of an object also contains useful classifying information. For example, the standard deviation of pixels within an object and the average entropy of pixels that make up each object have been shown to be useful for object discrimination.
  • the entropy of a set of pixels is defined as: −Σᵢ [ p(i) log(p(i)) ], where p(i) represents a histogram of the texture (CT density) values. The summation is performed over each i comprising the range of possible texture values in the image.
  • a frequent mimicker of the colon polyp is retained fecal material. Like polyps, retained fecal material can demonstrate a nearly spherical contour.
  • Polyps however, demonstrate a uniformly soft tissue density, whereas retained feces generally contain small bubbles of air. As a result of this heterogeneity, one would expect the texture variance and entropy of polyps and fecal pseudolesions to be distinct.
  • pattern recognition can be facilitated by the combination of boundary and texture descriptors into a single multi-dimensional feature vector. It is believed that there are no published studies combining both contour and textural analyses for the purpose of polyp detection; however, there is an extensive literature describing their utility for pattern recognition.
  • feature vectors can be compared for the purpose of pattern classification by calculating the Euclidean distance between them.
  • Objects that are similar in feature will demonstrate a small Euclidean distance, and by empirically setting a threshold, or discriminator function, one can classify an unknown object based on the distance of its feature vector to the vector of a known object.
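The threshold-based Euclidean classification described here can be sketched as follows; the function name, labels, and threshold value are illustrative assumptions.

```python
import numpy as np

def classify_by_distance(x, class_vectors, threshold):
    # Assign x to the class of the nearest known feature vector, or
    # report 'unknown' if no vector lies within the empirical threshold.
    best_label, best_d = "unknown", threshold
    for label, m in class_vectors.items():
        d = float(np.linalg.norm(np.asarray(x) - np.asarray(m)))  # Euclidean distance
        if d < best_d:
            best_label, best_d = label, d
    return best_label
```

Similar objects yield a small distance and inherit the known object's class; objects beyond the discriminator threshold remain unclassified.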
  • each class of objects, designated ω_j, is best described by the statistical parameters: mean feature vector m_j, covariance matrix C_j, and probability of occurrence P(ω_j). These parameters are analogous to the two-dimensional evaluation of populations according to mean, variance, and probability density. If these parameters are known for each class of objects to be encountered, it is in theory possible to formulate an explicit discriminator function to separate them.
  • this function, called the Bayesian discriminator d_j() for a group of classes j, takes the form: d_j(x) = ln P(ω_j) − ½ ln|C_j| − ½ (x − m_j)ᵀ C_j⁻¹ (x − m_j), where x is a feature vector, xᵀ is the transpose of x, C_j and C_j⁻¹ are the covariance matrix and its inverse, m_j and m_jᵀ are the mean feature vector and its transpose, and P(ω_j) is the probability of class ω_j occurring.
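A small numeric sketch of a Gaussian Bayesian discriminant of this kind, following the standard form found in pattern-recognition texts such as Gonzalez; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def bayes_discriminant(x, m_j, C_j, prior_j):
    # d_j(x) = ln P(w_j) - 0.5*ln|C_j| - 0.5*(x - m_j)^T C_j^-1 (x - m_j)
    # Evaluated for each class j; the class with the largest value is chosen.
    diff = x - m_j
    return (np.log(prior_j)
            - 0.5 * np.log(np.linalg.det(C_j))
            - 0.5 * diff @ np.linalg.inv(C_j) @ diff)
```

A feature vector near a class mean produces a larger discriminant for that class than for more distant classes, which is the basis for separating polyps, folds, and pseudolesions.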
  • the different kinds of soft tissue density structures to be encountered in the colon are relatively few in number, and include polyps, haustral folds, and retained feces.
  • One approach to polyp detection is to construct a library of known colon structures and derive the mean, co-variance, and probability parameters necessary to form an explicit discriminator. While it is possible that the feature vectors of these structures will cluster sufficiently to permit construction of an explicit discriminator function, it is believed that these parameters have not yet been catalogued for the human colon.
  • the feature vector of an object is evaluated in a set of weighted nodes, analogous to neurons.
  • Each node has the property that it generates a non-linear output in response to a set of input values and associated weights.
  • the input nodes typically correspond in number to the dimension of the input feature vector, and similarly, the output nodes correspond in number to the different object classes to be identified. Determining the node weights is performed by exposing the network to a set of training cases.
  • vectors of known class are fed forward through the network and the error of the resulting output classification is stepwise propagated backward through the network.
  • Each weight is adjusted in order to minimize the local error associated at a given node with the inputs of the preceding layer.
  • unknown feature vectors are fed forward through the network without backward feedback and class membership is determined by the final state of the output nodes. It has been shown that a three-layer network, comprised of an input layer, a hidden layer, and an output layer is in theory capable of separating arbitrarily complex groups of object classes. Hence, another means exists to analyze the feature vectors of colon objects if the cataloguing method proves unfeasible.
  • a second approach taken for polyp classification combines a planar colon map with template matching.
  • the segmented representations of objects in the colon map are further modified to normalize their size and to incorporate a description of their internal texture. Normalization for size can be accomplished by analysis of the boundary points of each object around the object centroid.
  • the boundary points of an object are repositioned along their respective radii toward the centroid by the minimum radius observed in the set of boundary points for that object. This process retains the basic morphology of the boundary contour, and reduces the variations in boundary points for larger objects.
  • a description of the internal composition of an object can be represented by the standard deviation of pixels contained within the object boundary.
  • the pixels within the normalized representation of the object are set to the value of this standard deviation.
  • the result of these steps is a planar object having a contour normalized for size, and having internal pixel values set to the standard deviation of the object's internal texture.
  • a similar mapping is performed for the template polyp — its contour and internal texture are respectively normalized for size and modified to reflect internal homogeneity.
  • the modified map is then combined with the template in the process of correlation, described previously.
  • the pixel values in the correlation image reflect the similarity of each region of the modified map with the modified template. Sharp peaks of pixel value correspond to regions of high similarity and are taken to represent the location of polyps. Correlation peaks of this kind can be selected by means of a high pass filter and threshold.
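The correlation, high-pass filter, and threshold steps might be sketched as follows; this is a simplified two-dimensional stand-in, and the filter size and relative threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def find_correlation_peaks(colon_map, template, rel_thresh=0.9):
    # Correlate the normalized map with the normalized template, then
    # subtract a local average (a crude high pass filter) so that only
    # sharp peaks of correlation survive the threshold.
    corr = ndimage.correlate(colon_map, template, mode='constant')
    high_pass = corr - ndimage.uniform_filter(corr, size=5)
    return np.argwhere(high_pass > rel_thresh * high_pass.max())
```

In the scheme above, the map would be the normalized colon representation and the template a normalized polyp representation; surviving peaks mark candidate polyp locations.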
  • FIG. 1 Another approach to further develop a polyp detection system can be provided by implementing colon mapping and segmentation as follows.
  • the centerline of the colon can be found using prior art techniques such as the algorithm described by Ferreira and Russ, with the modifications described by Zhang for handling colon flexures. With the colon centerline identified, a colon mapping process can be performed.
  • the smallest size polyp of clinical interest is approximately seven (7) mm in size, which represents ten (10) isometric voxels from thirty-five (35) cm field-of-view CTC data.
  • the software will sample the centerline at five (5) voxel intervals.
  • the angular sampling interval at each stop along the centerline derives from a similar calculation: the maximum circumference of normal, air distended colon in CTC is approximately 180 mm, leading to a seven degree (7°) interval in order to achieve fifty percent (50%) sampling overlap.
  • the technique utilizes the radial distance from the centerline to an inner surface of the colon using a three-dimensional Euclidean distance formula.
  • the mucosal surface is segmented out of the CT data using a global threshold set to exclude pixels above -50 Hounsfield units. Previous experience has shown that this threshold value clearly outlines the inner aspect of the mucosa.
  • the software will tabulate the maximal radius observed, r_max.
  • the radial height of each point, i, along the signature will be calculated as: r_max − r_i.
  • the data construct for the colon map comprises a three dimensional matrix, with two axes describing position along the mucosal (inner) surface of the colon, and the third dimension representing the radial height of features from the surface.
  • the internal voxels of structures protruding into the colon lumen will be incorporated into the matrix in two steps, based on the calculation of the radial signature of the segmented colon.
  • the maximum of the radial signature, r_max, will be taken to represent the most distended region of colon wall at each position for which it will be calculated along the colon centerline.
  • voxels located along the radial segment stretching from the inner mucosal surface to r_max will be included in the map matrix.
  • the map will comprise strata representing the features protruding into the colon lumen.
  • Three additional levels along the height axis of the map will be used. The first of these, designated z_0, will hold the radial height data. The second, z_1, will hold the gradient transform of the map, and the third, z_2, will hold a normalized form of each object for use in correlation matching.
  • the gradient transform of the colon map, located in level z_1, will be calculated using the Sobel operator. The gradient will be calculated by convolution of the Sobel operator with the z_0 level data, and the result will be taken to represent the edges along the height-axis of structures protruding into the colon lumen.
  • a watershed segmentation on the z_1 transform data is then performed.
  • This watershed segmentation can be accomplished using any prior art technique such as the technique of Bieniek. While the watershed technique has several known advantages in terms of its function and output, it is also known that the algorithm can generate unwanted boundaries due to minor variations in the surface being analyzed. This problem, known as over-segmentation, can be addressed in the following manner. It has been shown that preprocessing the surface to be analyzed in order to bring additional information to the segmentation step can improve the output of the segmentation. For example, application of a smoothing filter can diminish the presence of minor surface variations, removing them as possible false targets of segmentation.
  • the choice of a sufficiently high gradient threshold in the gradient transform may permit selection of only the larger features of interest along the colon surface.
  • the boundary variance, boundary compactness, textural variance, and textural entropy are determined.
  • the centroid of the boundary points of each object represented in the z_1 level of the data structure is determined. This may be accomplished using conventional techniques.
  • the variance of the distance of boundary points to the centroid as well as the compactness can then be determined. This may also be accomplished using conventional techniques.
  • the composite voxels for each three-dimensional object arising from the colon mucosa are gathered. In one approach, this can be accomplished in two steps.
  • the z_1 boundary points can be used to define, within the plane of z_1, the set of internal points of each object, designated z_1,internal,j.
  • the data structure is designed so that, for each point, p_i, in the set z_1,internal,j, the composite elements of each three-dimensional object are held in the column of strata above p_i.
  • the vertical delimiter of these elements is the radial signature stored in each corresponding position of the z_0 layer.
  • a normalized version of each segmented object is determined and these representations are stored in the z_2 layer of the data structure.
  • the normalized version of each object will be centered on the object centroid calculated previously and stored in the z_1 layer. Normalization for size for each object will proceed as follows. The radial distance separating the set of boundary points and the centroid for each object is determined. The minimum radial distance observed for each set of boundary points, designated r_min, will be used to displace each boundary point in a radial direction toward the centroid by dividing each boundary point radius by r_min. The result will be a set of boundary points that incorporate the shape of each object, but reduced in size.
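The r_min size normalization described above can be sketched as follows, assuming two-dimensional boundary points around a known centroid; the function name is an illustrative assumption.

```python
import numpy as np

def normalize_boundary(boundary_pts, centroid):
    # Displace each boundary point radially toward the centroid by dividing
    # its radius by r_min, retaining the boundary's shape while reducing size.
    vecs = boundary_pts - centroid
    radii = np.linalg.norm(vecs, axis=1)
    r_min = radii.min()
    new_radii = radii / r_min                 # minimum radius maps to 1
    units = vecs / radii[:, None]             # unit vectors along each radius
    return centroid + units * new_radii[:, None]
```

Because every radius is divided by the same r_min, the ratios between radii (and hence the contour morphology) are preserved while large objects are brought to a common scale.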
  • each object in z_2 will be incorporated into this representation by calculating the standard deviation of voxel density.
  • the internal elements comprising each three-dimensional object represented in z_1 and z_2 will be gathered together as described above for the statistical approach.
  • the result in the z_2 level will be a set of objects whose boundaries correspond in morphology to the objects in z_1, but normalized for size, and whose internal elements incorporate the object texture.
  • Lesions may be catalogued using either a statistical approach or a correlation approach.
  • an estimate is made of the boundary variance, boundary compactness, textural variance and textural entropy of a group of colonoscopy confirmed polyps, haustral folds, and fecal pseudo lesions.
  • the polyps and other objects will be extracted from the large group of research CTC cases performed previously. Initially, objects that have been identified on the traditionally cleansed CTC exams can be used, in order to exclude noise introduced by the DSBC processing.
  • fifty lesions already present in a library include a spectrum of polyps ranging in size from 5 to 35 mm.
  • the four statistical boundary and textural descriptors can be calculated from the CTC datasets, following the segmentation steps described above. These data will be used to estimate a mean feature vector, m, covariance matrix C, and probability of occurrence for polyps, haustral folds, and fecal pseudolesions.
  • the mean vector and covariance matrix can serve as the basis for constructing a Bayesian pattern discriminator function, utilizing prior art techniques such as the method described by Gonzalez.
  • the software will flag objects assigned to the polyp class by resetting the centroid of the z_1 layer of each object to an arbitrarily high value.
  • the group of marked centroids will be taken to represent the group of polyp candidates and these data will be passed to a polyp mark-up routine for identification to the evaluating radiologist.
  • the Bayesian discriminator may be unable to adequately distinguish between object classes; it is conceivable that the variety of polyp morphologies, from sessile to pedunculated, will prove too complex for the above-described approach. If the Bayesian analysis of the feature vectors outlined above proves too limited, the feature vectors may be analyzed by means of a three layer artificial neural network. Determination of inadequate performance of the Bayesian technique will be made based upon evaluation of polyps within the colon phantom, a series of steps described below. It is believed that the neural network approach may be necessary if the Bayesian approach yields a sensitivity for polyps in the colon phantom of less than 70%.
  • the three layer neural network will be composed of four input nodes, four hidden nodes, and three output nodes.
  • Initial training of the network will take place with a set of polyps, folds, and pseudo lesions obtained from traditionally cleaned CTC.
  • Network node weights will initially be set to small random values with zero mean. These will be adjusted according to the method of back propagation, and the output nodes will be monitored during training.
  • the process continues by exposing the network to a set of known phantom structures obtained using the DSBC technique, as it has been shown that the performance of an artificial neural network can increase with graded exposure to noise during training. Training will be deemed complete for the network when, upon presentation with an object of class i, output for node o_i is > 0.95 and output for all other nodes o_j, j ≠ i, is < 0.05.
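The 4-4-3 network architecture and the stated training-completion criterion can be sketched as follows. The sigmoid activation is an assumption (the description requires only that each node generate a non-linear output), and the function names are illustrative.

```python
import numpy as np

def sigmoid(z):
    # a common non-linear node activation (assumed; not specified in the text)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # 4 input values -> 4 hidden nodes -> 3 output nodes (one per class)
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

def training_complete(outputs, true_class):
    # o_i > 0.95 for the presented class i, o_j < 0.05 for all j != i
    others = np.delete(outputs, true_class)
    return bool(outputs[true_class] > 0.95 and np.all(others < 0.05))
```

During training, node weights would start as small zero-mean random values and be adjusted by back propagation until the completion criterion holds for the training set.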
  • a polyp template can be generated from a library of previous data by applying the processing steps outlined above to generate a family of normalized polyp representations. These polyp templates are convolved with the normalized representation of the colon stored in the z 2 level of the imaging data structure.
  • the resulting correlation image is filtered using a two-dimensional high pass filter, in order to emphasize sharp peaks of correlation, as these are most likely to represent regions of match.

Abstract

Described are, for digital bowel subtraction, a method and system for processing a fold region of an image, comprising: identifying in the image a boundary between a first substance having a first density and a second substance having a second, different density; processing one or more portions of the image about the boundary to determine whether symmetry exists about the boundary; and identifying regions in the image having symmetry about the boundary. Also described are a method and system for identifying a colon centerline in an image of an uncleansed colon, comprising: identifying a seed point known to be within the colon.

Description

METHOD AND APPARATUS FOR PROCESSING IMAGES IN A BOWEL SUBTRACTION SYSTEM
FIELD OF THE INVENTION
[0001] The present invention relates generally to colonoscopy techniques and more particularly to a system and method for processing an image of a bowel and for detecting polyps in the image.
BACKGROUND OF THE INVENTION
[0002] As is known in the art, a colonoscopy refers to a medical procedure for examining a colon to detect abnormalities such as polyps, tumors or inflammatory processes in the anatomy of the colon. The colonoscopy is a procedure which includes a direct endoscopic examination of the colon with a flexible tubular structure known as a colonoscope which typically has imaging (e.g. fiber optic) or video recording capabilities at one end thereof. The colonoscope is inserted through the patient's anus and directed along the length of the colon, thereby permitting direct endoscopic visualization of colon polyps and tumors and in some cases, providing a capability for endoscopic biopsy and polyp removal. Although colonoscopy provides a precise means of colon examination, it is time-consuming, expensive to perform, and requires great care and skill by the examiner. The procedure also requires thorough patient preparation including ingestion of purgatives and enemas, and usually a moderate anesthesia. Also, since colonoscopy is an invasive procedure, there is a significant risk of injury to the colon and the possibility of colon perforation and peritonitis, which can be fatal.
[0003] To overcome these drawbacks, the virtual colonoscopy was conceived. A virtual colonoscopy makes use of images generated by computed tomography (CT) imaging systems (also referred to as computer assisted tomography (CAT) imaging systems). In a CT (or CAT) imaging system, a computer is used to produce an image of cross-sections of regions of the human body by measuring the attenuation of X-rays through a cross-section of the body. In a virtual colonoscopy, the CT imaging system generates two-dimensional images of the inside of an intestine. A series of such two-dimensional images can be combined to provide a three-dimensional image of the colon. While this approach does not require insertion of an endoscope into a patient and thus avoids the risk of injury to the colon and the possibility of colon perforation and peritonitis, the approach still requires thorough patient preparation including purgatives and enemas. Generally, the patient must stop eating and purge the bowel by ingesting (typically by drinking) a relatively large amount of a purgative. Another problem with the virtual colonoscopy approach is that the accuracy of examinations and diagnoses made using virtual colonoscopy techniques is not as high as desired. This is due, at least in part, to the relatively large number of images the examiner (e.g. a doctor) must examine to determine if a polyp, tumor or an abnormality exists in the colon.
[0004] Moreover, colons tend to have folded regions (or more simply, "folds"). In an image of the colon, the folds are sometimes difficult to distinguish from the bowel contents and thus are sometimes inadvertently labeled or "tagged" as bowel contents. When the bowel contents are digitally subtracted from the image, the fold region is also subtracted. This results in the processed image (i.e. the image of the colon from which contents have been digitally removed) having gaps or other artifacts due to the unintentional subtraction of a fold. Such gaps or artifacts are distracting to a person (e.g. a doctor or other medical practitioner) examining the image.
[0005] It would, therefore, be desirable to provide a technique for processing fold regions which appear in images of a bowel.
SUMMARY OF THE INVENTION
[0006] In accordance with the present invention, a fold processing system includes a fold processor which detects folds in a bowel and identifies the fold as a portion of the bowel in an image. With this particular arrangement, a system and technique which reduces the unnecessary subtraction of bowel folds is provided. In a preferred embodiment, the fold processing system identifies a boundary in the digital image (e.g. an air-water boundary) and uses symmetry to determine whether a fold exists in the image. If a fold is found, the fold is identified (or labeled or tagged) as being a portion of the bowel rather than bowel contents. Thus, when the bowel contents are digitally subtracted from the image, the fold region is left in the image.
[0007] In accordance with a further aspect of the present invention, a system for processing a fold region in a digital image of a colon includes a boundary processor which receives a first digital image and identifies in the image a boundary between a first substance having a first density and a second substance having a second different density and a symmetry processor which processes one or more portions of the image about the boundary to determine whether symmetry exists about the boundary and identifies regions in the image having symmetry about the boundary.
[0008] In accordance with a still further aspect of the present invention, a system and technique for identifying a colon centerline in an image of an uncleansed colon includes identifying a first point (or seed point) known to be within the colon. The regions around the seed point are then labeled (e.g. identified as containing either high density or low density material). Once the image regions are labeled, the colon region is identified by finding the seed point (i.e. the point known to be in the colon) and determining what label is assigned to the seed point. Regions around the seed point with the same label are then identified as being within the colon. The next colon image is processed by labeling regions in the image (e.g. as high density and low density regions) and using the colon information from the previously processed image to identify the colon in the image currently being processed. Once the colon is identified in each image, the center of the colon in each image can be found, thereby allowing a colon centerline to be established. The seed point may be manually identified or automatically identified. Automatic identification may be accomplished by first processing an image corresponding to an inferior aspect of the colon and using information concerning anatomical structure which should appear in the image.
[0009] In accordance with a still further aspect of the present invention, a system and technique for detecting objects (including polyps) in a colon image dataset (e.g. a CT image dataset of the colon) which has been cleaned electronically includes separating the colon surface from the rest of the image dataset. The portion of the image data set which corresponds to the separated colon surface is then processed to generate a planar map of the colon in which the value of each pixel in the planar map corresponds to its radial distance from a central axis in the planar map. A segregation of features in the planar map of the colon (a process referred to as segmentation) is then performed. Objects, including polyps, identified by the segmentation process can then be described and classified. Statistical or correlation techniques can be used to identify and/or classify objects (including polyps) within the image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following description of the drawings in which:
[0011] Fig. 1 is a block diagram of a system for digital bowel subtraction and automatic polyp detection; and
[0012] Figs. 2 and 2A are a series of diagrams illustrating a technique for detecting and processing a bowel fold;
[0013] Fig. 3 is a flow diagram showing a process for processing a bowel fold region in an image;
[0014] Figs. 4 and 4A are a series of flow diagrams showing a process for finding a centerline in a colon;
[0015] Fig. 5 is a diagram of a colon having a centerline;
[0016] Fig. 6 is a cross sectional view of a colon taken across lines 6-6 on Fig. 5;
[0017] Figs. 7 and 7A are cross sectional views of a colon taken across lines 6-6 on Fig. 5 and having dilated region boundaries;
[0018] Fig. 8 is a pair of images aligned to extend the colon identification from a first image to a second image;
[0019] Fig. 9 is a diagram of a colon showing the direction of processing used to define colon regions in sequential images of the colon;
[0020] Fig. 10 is a diagram of a colon having a centerline;
[0021] Fig. 10A is a colon map generated after a colon centerline has been identified and which can be used for polyp detection; and
[0022] Figs. 11-11C are a series of diagrams which illustrate the mapping between the colon centerline and 3D datasets.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Before describing a virtual colonoscopy system which includes a digital bowel subtraction processor (DBSP) and / or automated polyp detection processor (APDP) and the operations performed to digitally cleanse a bowel and automatically detect a polyp, some introductory concepts and terminology are explained.
[0024] A computed tomography (CT) system generates signals which can be stored as a matrix of digital values in a storage device of a computer or other digital processing device. As described herein, the CT image is divided into a two-dimensional array of pixels, each represented by a digital word. One of ordinary skill in the art will recognize that the techniques described herein are applicable to various sizes and shapes of arrays. The two-dimensional array of pixels can be combined to form a three-dimensional array of pixels. The value of each digital word corresponds to the intensity of the image at that pixel. Techniques for displaying images represented in such a fashion, as well as techniques for passing such images from one processor to another, are known.
[0025] As also described herein, the array of digital data values (or numbers) is generally referred to as a "digital image" or more simply an "image" and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of density values in a scene. As used herein, the phrase "original image" refers to an image provided from the representational matrices that are output from a CT or other type of scanner machine.
[0026] Each of the numbers in the array can be expressed as a digital word typically referred to as a "picture element" or a "pixel" or as "image data." The image may be divided into a two-dimensional array of pixels with each of the pixels represented by a digital word. Thus, a pixel represents a single instantaneous value which is located at specific spatial coordinates in the image.
[0027] It should be appreciated that the digital word is comprised of a certain number of bits and that the techniques of the present invention can be used on digital words having any number of bits. For example, the digital word may be provided as an eight-bit binary value, a twelve bit binary value, a sixteen bit binary value, a thirty-two bit binary value, a sixty-four bit binary value or as a binary value having any other number of bits (e.g. one hundred twenty-eight or more bits). More or less than each of the above-specified number of bits may also be used. [0028] It should also be noted that the techniques described herein may be applied equally well to either gray scale images or color images. In the case of a gray scale image, the value of each digital word corresponds to the intensity of the pixel and thus the image at that particular pixel location. In the case of a color image, reference is sometimes made herein to each pixel being represented by a predetermined number of bits (e.g. eight bits) which represent the color red (R bits), a predetermined number of bits (e.g. eight bits) which represent the color green (G bits) and a predetermined number of bits (e.g. eight bits) which represent the color blue (B bits) using the so-called RGB color scheme in which a color and luminance value for each pixel can be computed from the RGB values. Thus, in an eight bit color RGB representation, a pixel may be represented by a twenty-four bit digital word.
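As an illustration of the pixel representation described above, an eight-bit-per-channel RGB pixel can be packed into a single twenty-four bit digital word. The function names below are chosen for illustration only and are not part of the described system:

```python
# Pack three 8-bit RGB channel values into a single 24-bit pixel word,
# and unpack them again. Illustrative sketch of the representation
# described above; not code from the described system.

def pack_rgb(r: int, g: int, b: int) -> int:
    """Combine 8-bit R, G, B values into one 24-bit word."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(word: int) -> tuple:
    """Recover the 8-bit R, G, B values from a 24-bit word."""
    return (word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF

pixel = pack_rgb(200, 100, 50)
assert unpack_rgb(pixel) == (200, 100, 50)
```

The same packing idea extends to other bit depths (e.g. twelve or sixteen bits per channel) by widening the shifts and masks accordingly.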
[0029] It is of course possible to use greater or fewer than eight bits for each of the RGB values. It is also possible to represent color pixels using other color schemes such as a hue, saturation, brightness (HSB) scheme or a cyan, magenta, yellow, black (CMYK) scheme. It should thus be noted that the techniques described herein are applicable to a plurality of color schemes including but not limited to the above mentioned RGB, HSB and CMYK schemes as well as the Luminosity and color axes a & b (Lab) color coordinate system, the YUV color difference color coordinate system, the Karhunen-Loeve color coordinate system, the retinal cone color coordinate system and the X, Y, Z scheme.
[0030] Reference is also sometimes made herein to an image as a two-dimensional pixel array. An example of an array size is 512 x 512. One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of pixel arrays including irregularly shaped pixel arrays.
[0031] An "image region" or more simply a "region" is a portion of an image. For example, if an image is provided as a 32 X 32 pixel array, a region may correspond to a 4 X 4 portion of the 32 X 32 pixel array.
[0032] In many instances, groups of pixels in an image are selected for simultaneous consideration. One such selection technique is called a "map" or a "local window." For example, if a 3 X 3 subarray of pixels is to be considered, that group is said to be in a 3 X 3 local window. One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of local windows including irregularly shaped windows.
[0033] It is often necessary to process every such group of pixels which can be formed from an image. In those instances, the local window is thought of as "sliding" across the image because the local window is placed above one pixel, then moves and is placed above another pixel, and then another, and so on. Sometimes the "sliding" is done in a raster pattern. It should be noted, though, that other patterns can also be used.
[0034] It should also be appreciated that although the detection techniques described herein are described in the context of detecting polyps in a colon, those of ordinary skill in the art should appreciate that the detection techniques can also be used to search for and detect structures other than polyps and that the techniques may find application in regions of the body other than the bowel or colon.
[0035] One approach to subtracting contents from the colon is to first identify the intersection of a morphologic dilation of an air region, a dilated fecal matter region and a dilated edge region (which can be found by using a gradient finding function and morphologic dilation). The system then approximates the intersection of these three regions to be the residue that preferably would be removed. While this technique provides satisfactory results in terms of digitally subtracting the contents of the bowel from an image, it can lead to a problem of over-subtraction because folds that should remain in the image sometimes pass through the residue area that is removed, causing the folds to be removed as well.
[0036] This has the undesirable result that the fold in the image is removed from the image along with the bowel contents. It has thus, in accordance with the present invention, been recognized that in some systems, it is possible to correctly subtract tagged regions of the colon and perform a mucosal reconstruction but still over-subtract portions of the image. When over-subtraction of folds occurs, the system may leave behind artifacts in the fold regions. [0037] This misidentification of fold regions occurs, at least in part, because of volume averaging techniques in the imaging process (which is unavoidable in CT processing). In particular, when identified (or "tagged") fecal material bounds a relatively narrow soft tissue region, volume averaging can cause the soft tissue region (which should be expressed as image regions having low pixel values) to be assigned pixel values which approximate the pixel values which represent bowel contents (i.e. image regions having high pixel values). Thus, in these fold regions, pixels which should be assigned low pixel values (and thus normally retained, following subtraction of bowel contents) are instead assigned values in the soft tissue density range. Consequently, portions of the image which should be retained (e.g. the fold regions) are instead subtracted out of the image along with the bowel contents.
[0038] One approach which allows subtraction of the bowel contents while allowing the fold regions to remain in the image is referred to as the fold symmetry processing approach. In this approach, the symmetry of the objects in the image is used to identify fold regions in the image. In the symmetry approach, it is recognized that if a point lies in the intersection of the residue (also referred to as bowel contents) and a fold, then it should be surrounded by soft-tissue-like pixels on all sides. Thus, the variance between this pixel and the pixels around it should be relatively low. However, when a point that lies in the residue but not in the fold is examined, it should have air (low pixel values) and fecal matter (high pixel values) around it. Thus, the variance of the pixel and the surrounding pixels should be relatively high. Thus, after identifying the residue regions, the process involves searching for pixels having a relatively low variance. These pixels are then put back into the image (rather than subtracted).
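A minimal sketch of the variance test described above, using NumPy. The window size and variance threshold here are illustrative assumptions rather than values prescribed by the method, and the function name is chosen for illustration:

```python
import numpy as np

def low_variance_pixels(image, residue_mask, window=3, var_threshold=100.0):
    """Return a boolean mask of residue pixels whose local neighborhood has
    low variance -- candidate fold pixels to be put back into the image.
    window: side length of the square neighborhood (assumed odd).
    var_threshold: illustrative cutoff separating fold-like (low variance)
    neighborhoods from residue-like (high variance) neighborhoods."""
    half = window // 2
    fold_mask = np.zeros_like(residue_mask, dtype=bool)
    rows, cols = image.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            if not residue_mask[r, c]:
                continue  # only examine pixels already tagged as residue
            patch = image[r - half:r + half + 1, c - half:c + half + 1]
            if patch.var() < var_threshold:
                fold_mask[r, c] = True  # surrounded by similar values: likely fold
    return fold_mask
```

Pixels flagged in `fold_mask` would then be retained (and optionally dilated) rather than subtracted with the residue.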
[0039] Once the fold region is identified, a further refinement to this process can include dilating the pixels which define the fold region. This results in retaining more of each desired fold and providing the fold having a relatively thin, normal structure in the final image from which bowel contents have been subtracted. [0040] Referring now to FIG. 1 , a system for performing virtual colonoscopy 10 includes a computed tomography (CT) imaging system 12 having a database 14 coupled thereto. As is known, the CT system 10 produces two-dimensional images of cross- sections of regions of the human body by measuring attenuation of X-rays through a cross- section of the body. The images are stored as digital images or image data in the image database 14. A series of such two-dimensional images can be combined using known techniques to provide a three-dimensional image of a colon. A user interface 16 allows a user to operate the CT system and also allows the user to access and view the images stored in the image database.
[0041] A digital bowel subtraction processor (DBSP) 18 is coupled to the image database 14 and the user interface 16. The DBSP receives image data from the image database and processes the image data to digitally remove the contents of the bowel from the digital image. The DBSP can then store the image back into the image database 14. The particular manner in which the DBSP processes the images to subtract or remove the bowel contents from the image will be described in detail below in conjunction with Figs. 2-6. Suffice it here to say that since the DBSP digitally subtracts or otherwise removes the contents of the bowel from the image provided to the DBSP, the patient undergoing the virtual colonoscopy need not purge the bowel in the conventional manner which is known to be unpleasant to the patient.
[0042] The DBSP 18 may operate in one of at least two modes. The first mode is referred to as a raster mode in which the DBSP utilizes a map or window which is moved in a predetermined pattern across an image. In a preferred embodiment, the pattern corresponds to a raster pattern. A threshold process is used in which the window scans the entire image while threshold values are applied to pixels within the image in a predetermined sequence. The threshold process assesses whether absolute threshold values have been crossed and the rate at which they have been crossed. The raster scan threshold process is used to identify pixels having values which represent low density regions (e.g. air) sometimes referred to as "air pixels" which are proximate (including adjacent to) pixels having values which represent matter or substance (e.g. water or other matter). The processor examines each of the pixels to locate native un-enhanced soft tissue. As a boundary between soft tissue (e.g. bowel wall) and bowel contents is established, pixels are reset to predetermined values depending upon the side of the boundary on which they appear.
[0043] The second mode of operation for the DBSP 18 is the so-called gradient processor mode. In the gradient processor mode, a soft tissue threshold (ST) value, an air threshold (AT) value and a bowel threshold (BT) value are selected. A first mask is applied to the image and all pixels having values greater than the bowel threshold value are marked. Next, a gradient is applied to the pixels in the images to identify pixels in the image which should have air values and bowel values. The gradient function identifies regions having rapidly changing pixel values. From experience, one can select bowel/air and soft tissue/air transition regions in an image by appropriate selection of the gradient threshold. The gradient process uses a second mask to capture a first shoulder region in a transition region after each of the pixels having values greater than the BT value have been marked.
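The gradient-mode masking described above might be sketched as follows. The threshold values are illustrative assumptions for a CT-like image, not values taken from the system, and the function name is hypothetical:

```python
import numpy as np

def gradient_mode_masks(image, bowel_threshold=800, gradient_threshold=150):
    """Sketch of the gradient-mode processing described above.
    Returns (bowel_mask, transition_mask). Threshold values are
    illustrative assumptions, to be tuned empirically per protocol."""
    # First mask: pixels whose values exceed the bowel threshold (BT),
    # i.e. candidate tagged bowel contents.
    bowel_mask = image > bowel_threshold
    # Gradient magnitude highlights rapidly changing pixel values, i.e.
    # the bowel/air and soft tissue/air transition regions.
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    transition_mask = grad_mag > gradient_threshold
    return bowel_mask, transition_mask
```

As the text notes, appropriate gradient thresholds are selected from experience for a given scanning and preparation protocol.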
[0044] Once the DBSP 18 removes the bowel contents from the image, there exists a relatively sharp boundary and gradient when moving from the edge of the bowel wall to the "air" of the bowel lumen. This is because the subtraction process results in all of the subtracted bowel contents having the same air pixel values. In this context, "air" refers to the value of the image pixels which have been reset to a value corresponding to air density. If left as is, this sharp boundary (and gradient) ends up inhibiting the 3D endoluminal evaluation of the colon model since sharp edges appear as bright reflectors in the model and thus are visually distracting.
[0045] A mucosa insertion processor 19a is used to further process the sharp boundary to lessen the impact of, or remove, the visually distracting regions. The sharp edges are located by applying a gradient operator to the image from which the bowel contents have been extracted. The gradient operator may be similar to the gradient operator used to find the boundary regions in the gradient subtracter approach described herein. The gradient threshold used in this case, however, typically differs from that used to establish a boundary between bowel contents and a bowel wall. [0046] The particular gradient threshold to use can be empirically determined. Such empirical selection may be accomplished, for example, by visually inspecting the results of gradient selection on a set of images detected under similar scanning and bowel preparation techniques and adjusting gradient thresholds manually to obtain the appropriate gradient (tissue transition selector) result.
[0047] The sharp edges end up having the highest gradients in the subtracted image. A filter is then applied to these boundary (edge) pixels in order to "smooth" the edge. In one embodiment, the filter is provided having a constrained Gaussian filter characteristic. The constraint is that the smoothing is allowed to take place only over a predetermined width along the boundary. The predetermined width should be selected such that the smoothing process does not obscure any polyp or other bowel structures of possible interest. In one embodiment the predetermined width corresponds to a width of less than ten pixels. In a preferred embodiment, the predetermined width corresponds to a width in the range of two to five pixels and in a most preferred embodiment, the width corresponds to a width of three pixels. The result looks substantially similar to, and in some cases indistinguishable from, the natural mucosa seen in untouched bowel wall, and permits an endoluminal evaluation of the subtracted images.
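The constrained smoothing described above might be sketched as follows. The edge mask is assumed to have been produced by the gradient operator of the preceding paragraph; the sigma value and function name are illustrative assumptions:

```python
import numpy as np

def smooth_boundary(image, edge_mask, width=3, sigma=1.0):
    """Apply Gaussian smoothing only at pixels flagged as sharp edges,
    constrained to a width x width neighborhood so that polyps and other
    structures of interest are not obscured. width=3 follows the
    preferred three-pixel constraint; sigma is an illustrative choice."""
    half = width // 2
    # Build a normalized 2-D Gaussian kernel of the constrained width.
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    kernel /= kernel.sum()
    out = image.astype(float).copy()
    rows, cols = image.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            if edge_mask[r, c]:  # smooth only the flagged boundary pixels
                patch = image[r - half:r + half + 1, c - half:c + half + 1]
                out[r, c] = (patch * kernel).sum()
    return out
```

Because only flagged pixels are modified, the rest of the subtracted image is left untouched.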
[0048] The digital bowel subtraction processor 18 also includes a fold processor 19b. It has been recognized that during the process to subtract from the image regions which have been identified or "tagged" as bowel contents (hereinafter "tagged regions"), it is possible to subtract regions of the colon which correspond to a fold. This occurs because during the subtraction process, the system may inadvertently subtract soft tissue elements (represented by low pixel values) that are bounded by high density tagged material (represented by high pixel values). Because of volume averaging artifacts which arise during the imaging process (which is unavoidable in CT image processing), pixels in the fold regions which should have been assigned low pixel values (and thus would normally be retained following subtraction of the bowel contents from the image) are instead assigned values in the soft tissue density range (and consequently end up being subtracted from the image). Thus, portions of the image (i.e. the fold) which should have been retained are instead subtracted out of the image. [0049] The fold processor 19b helps prevent fold regions from being removed from the image. As will become apparent from the description provided below in conjunction with Figs. 2 and 2A, in a preferred embodiment, the fold processor includes a boundary processor and a symmetry processor which identify a boundary and utilize a symmetry characteristic of the fold region of the image to identify the fold and thus prevent the fold from being subtracted from the image.
[0050] Also coupled between the image database 14 and the user interface 16 is an automated polyp detection processor (APDP) 20. The APDP 20 receives image data from the image database and processes the image data to detect and /or identify polyps, tumors, inflammatory processes, or other irregularities in the anatomy of the colon. The APDP 20 can thus pre-screen each image in the database 14 such that an examiner (e.g. a doctor) need not examine every image but rather can focus attention on a subset of the images possibly having polyps or other irregularities. Since the CT system 10 generates a relatively large number of images for each patient undergoing the virtual colonoscopy, the examiner is allowed more time to focus on those images in which the examiner is most likely to detect a polyp or other irregularity in the colon. The particular manner in which the APDP 20 processes the images to detect and /or identify polyps in the images will be described in detail below in conjunction with Figs. 7-9. Suffice it here to say that the APDP 20 can be used to process two-dimensional or three-dimensional images of the colon. It should also be noted that APDP 20 can process images which have been generated using either conventional virtual colonoscopy techniques (e.g. techniques in which the patient purges the bowel prior to the CT scan) or the APDP 20 can process images in which the bowel contents have been digitally subtracted (e.g. images which have been generated by DBSP 18).
[0051] It should also be appreciated that polyp detection system 20 can provide results generated thereby to an indicator system which can be used to annotate (e.g. by addition of a marker, icon or other means) or otherwise identify regions of interest in an image (e.g. by drawing a line around the region in the image, or changing the color of the region in the image) which has been processed by the detection system 20. [0052] Referring now to Figs. 2 and 2A in which like elements are provided having like reference designations, an image of a bowel cross section 100 includes a bowel wall 101 which defines a perimeter of the bowel. The bowel includes a fold 102. Portions of the bowel image 104a, 104b correspond to an air region of the bowel 100 and portions of the bowel image 106a, 106b correspond to those portions of the bowel having contents therein. It should be appreciated that in a digital image of the bowel, the air regions 104a, 104b appear as low density material (and thus can be represented by pixels having a relatively low value, for example) while the bowel contents are represented as a high density material (and thus can be represented, for example, by pixels having a value which is relatively high compared with the value of the pixels representing the low density regions). A boundary 108 exists between the low density regions 104a, 104b and the high density regions 106a, 106b.
[0053] As can be most clearly seen in Fig. 2, the fold region 102 is relatively long and narrow and is immersed in the high density material 106. Due to the averaging of pixel values which occurs during the imaging process, the fold region 102 (or some portions of the fold region 102) can be artificially designated as a region of high density material. Thus, if left with such a designation, the fold region would be subtracted as part of the high density region 106 which represents bowel contents.
[0054] In accordance with the present invention, however, it has been recognized that if a point lies along the boundary 108 region and in the fold 102, then it should be surrounded by pixels having low density values on all sides. Thus, within the fold region, there is symmetry about the boundary line 108 in that the pixel values in the fold region 102 below the boundary line 108 are substantially equal to the pixel values in the fold region 102 above the boundary line 108. Stated differently, by examining the variance of the pixels around a selected point in the fold region 102, the variance of that pixel with respect to surrounding pixels should be relatively low.
[0055] On the other hand, a point that lies along the boundary 108 in the residue 106 (but not in the fold 102), should have both air (low pixel values) and fecal matter (high pixel values) around it. Thus, there is no symmetry about the boundary line 108 in that the pixel values in the region 106 below the boundary line 108 have a value which is substantially different (e.g. relatively high) compared with the pixel values in the region 104a or 104b above the boundary line 108. Stated differently, a selected point which is not in the fold region 102 should have a relatively high variance of pixel values around it.
[0056] In this manner, it is possible to identify a fold region (e.g. fold region 102) in an image. Once those pixels having a low variance are identified, those pixels are returned to the image rather than subtracted from the image.
[0057] In one embodiment, the pixels having a low variance can be identified by forming a window 110a and correlating pixel values in the window 110a with a kernel. It should be appreciated that the boundary 108 has a width and thus the window 110a must be provided having a size which is large enough to span the width of the boundary 108. In practical applications, the boundary 108 is typically a minimum of 3 to 5 pixels wide. The size of the window 110 may be empirically determined. It should be appreciated, of course, that the window should fit within the expected width of a fold (which is typically in the range of about five to ten pixels but which could be larger than ten or smaller than five in some cases).
[0058] As can be most clearly seen in FIG. 2 A, the window 110a is generated and slides or moves across the image along the boundary 108.
[0059] When the window 110a is located so that it includes a portion of both regions 104a and 106a, the pixels below the boundary 108 have a relatively high value compared with the value of the pixels above the boundary 108. Thus, the correlation of the pixels above and below the boundary 108 results in a relatively high correlation value which indicates that there is no symmetry about the boundary 108.
[0060] When the window slides to position 110b, the correlation of the pixels above and below the boundary results in a relatively low correlation value since the pixels above and below the boundary both correspond to low density pixels. When the window 110 slides to position 110c, the correlation of the pixels above and below the boundary again results in a relatively high correlation value since the pixel values above the boundary 108 correspond to low density material pixel values while the pixel values below the boundary 108 correspond to high density material pixel values.
[0061] Thus, by sliding the window 110 across the boundary 108, it is possible to identify fold regions in the image.
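The sliding-window test described above can be sketched for the simple case of a straight horizontal boundary. The straight boundary, window size, and difference threshold are simplifying assumptions, and the function name is hypothetical:

```python
import numpy as np

def find_fold_columns(image, boundary_row, window=3, diff_threshold=50.0):
    """Slide a window along a (horizontal) boundary row and compare the
    mean pixel value above the boundary with the mean below it.
    Columns where the two sides are similar (i.e. symmetric about the
    boundary) are flagged as fold candidates."""
    half = window // 2
    fold_cols = []
    for c in range(half, image.shape[1] - half):
        above = image[boundary_row - half:boundary_row, c - half:c + half + 1]
        below = image[boundary_row + 1:boundary_row + 1 + half, c - half:c + half + 1]
        if abs(above.mean() - below.mean()) < diff_threshold:
            fold_cols.append(c)  # symmetric about the boundary: likely a fold
    return fold_cols
```

A curved boundary would be handled the same way by following the boundary pixels rather than a fixed row, and the three-dimensional case by comparing values on either side of a surface.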
[0062] It should be appreciated that although a straight boundary 108 is shown in FIGS. 2 and 2A, this technique can also be used for a curved boundary or any shaped boundary.
[0063] It should also be appreciated that while this technique is described in two dimensions, the technique applies equally well in three dimensions. That is, the window would be provided having a width, length and depth and would slide along a surface rather than a two-dimensional boundary.
[0064] Once the pixels which make up the fold region (e.g. fold region 102 in FIG. 2) are identified and returned to the image, the returned pixels can be dilated. This results in more of each desired fold being present in the image while at the same time providing a relatively thin structure having a normal or natural appearance in the final subtracted image.
[0065] Fig. 3 is a flow diagram showing the processing performed by a processing apparatus which may, for example, be provided as part of a virtual colonoscopy system 10 such as that described above in conjunction with FIG. 1 to perform digital bowel subtraction including detection of fold regions and automated polyp detection. The rectangular elements (typified by element 120 in FIG. 3), are herein denoted "processing blocks" and represent computer software instructions or groups of instructions.
[0066] Alternatively, the processing blocks can represent functions performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC). The flow diagrams do not depict the syntax of any particular programming language. Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention.
[0067] Turning now to FIG. 3, a process for detecting and processing fold regions in an image of a bowel begins by identifying an air boundary as shown in block 100. Although reference is made herein to an air-water boundary, it should be appreciated that reference to any specific boundary (e.g. a boundary between air and water) is intended to be exemplary and is made for reasons of promoting clarity in the description and is not intended to be and should not be construed as limiting. It should be appreciated that the boundary may be between air and some other matter or substance and not necessarily water, or between two substances having different densities. Next, as shown in block 102, a correlation matrix is applied to the air-water boundary of an original image. Application of the correlation matrix allows identification of those regions of the image having symmetry about the air-water boundary. Such regions correspond to low variance regions of the image and these regions are identified as shown in block 104. Once the low variance regions of the image are identified, a regional morphologic dilation of these regions is performed as shown in block 106. By recognizing that the low variance regions correspond to fold regions, the fold region is in essence being dilated. This dilation process results in more of each desired fold being present in the image while at the same time providing the image having a fold structure with a normal or natural appearance when viewed by a person examining the image.
[0068] In one embodiment, a subtraction mask is used to remove the bowel contents from the image. In particular, regions covered by the subtraction mask are removed. In such an embodiment, the dilation operation performed in block 106 results in the removal of pixels (representing the fold region) from the subtraction mask. By removing pixels corresponding to the fold from the subtraction mask, the fold regions are left in the image after the bowel contents are subtracted from the image. [0069] It should be appreciated that while the description in Fig. 4 below refers to finding a centerline, the technique perhaps more appropriately can be described as an operation performed by a colon segmentation processor which extracts those segments of the images that are of interest, namely the colon. The important point is the need to perform this step in a so-called minimal preparation (or tagged) dataset. It should be appreciated that once the colon has been appropriately segmented out or identified in the image, calculating the actual centerline can be accomplished using conventional techniques. Two exemplary techniques are morphologic thinning and medial axis transform calculations, both of which are well-known.
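As a minimal sketch of the final step, once the colon has been segmented in each slice, a crude centerline can be approximated by the per-slice centroid of the colon mask. The morphologic thinning and medial axis transform techniques mentioned above are the more robust conventional choices; the centroid stand-in and the function name below are illustrative assumptions:

```python
import numpy as np

def centerline_from_masks(colon_masks):
    """Approximate a colon centerline as the per-slice centroid of the
    segmented colon region. colon_masks is a 3-D boolean array
    (slices x rows x cols); returns a list of (slice, row, col) points.
    A centroid is a crude stand-in for morphologic thinning or a
    medial axis transform."""
    centerline = []
    for z, mask in enumerate(colon_masks):
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            continue  # colon not present in this slice
        centerline.append((z, rows.mean(), cols.mean()))
    return centerline
```

Note that a centroid can fall outside a strongly curved (e.g. C-shaped) cross section, which is one reason the medial axis methods are preferred in practice.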
[0070] Referring now to Figs. 4 and 4A, a process for identifying a colon centerline when the colon is not cleansed begins with block 130 in which a first image is selected. The first selected image is also sometimes referred to herein as the index image. Processing then flows to block 132 in which a first point (referred to herein as a seed point) known to be within the colon is identified. The seed point may be identified manually (e.g. by a user) or in some embodiments a centerline processor may automatically determine the seed point. Automatic detection of the seed point may be accomplished, for example, by having the system first process an image corresponding to the most inferior aspect of the colon (i.e. the image which contains the rectum/anus portion of the colon). If the system is given a priori knowledge that the most inferior image is being processed and what anatomical structure should appear in the image, then the system can automatically select a seed point. Once the seed point is identified in the index image, the entire colon can then be identified in the image (including bowel contents, air and some soft tissue).
[0071] As shown in processing block 134, a simple subtraction is then performed. The subtraction can be accomplished using a threshold and dilation technique. The regions are then labeled (e.g. identified as containing either high density or low density material) as shown in block 138. Once the image regions are labeled, as shown in block 140, the colon region is identified by finding the seed point (i.e. the point known to be in the colon) and determining what label is assigned to the seed point. The first image is now processed. [0072] Processing then flows to decision block 141 in which a determination is made as to whether there are any more images to process. If a decision is made that all images have been processed, then centerline processing ends. If on the other hand, a decision is made that all images have not yet been processed, then processing flows to processing block 142 in which the next image is selected. Also, the image which was last processed is identified as the "current image."
[0073] Processing then proceeds to block 144 in which a simple subtraction is then performed on the next image (i.e. the image currently being processed). The subtraction can be accomplished using a threshold and dilation technique. The image is subject to a threshold operation to assign air and non-air values (or to simply assign values which indicate regions of different densities) and then the regions are labeled (e.g. as either high density or low density material) as shown in blocks 146 and 148. Once the image regions are labeled in both the current and next images, as shown in block 148, the colon region is identified in the next image by using the colon location information in the current image. In this manner, the center of the colon in each subsequent image can be found thereby allowing a colon centerline to be established.
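The threshold, label, and seed-propagation sequence of blocks 144-148 can be sketched as follows. The density values, the slice contents, and the use of scipy's connected-component labeling for the labeling step are assumptions made for illustration:

```python
import numpy as np
from scipy.ndimage import label

def label_low_density(image, low_thresh):
    """Threshold an image into low-density (e.g. air) regions and label
    each connected region."""
    regions, _ = label(image < low_thresh)
    return regions

# Two hypothetical adjacent slices: 0 = air, 1000 = soft tissue/bone.
current = np.full((8, 8), 1000)
current[2:6, 2:6] = 0            # colon lumen in the current slice
nxt = np.full((8, 8), 1000)
nxt[3:7, 3:7] = 0                # lumen shifted slightly in the next slice

cur_labels = label_low_density(current, 500)
nxt_labels = label_low_density(nxt, 500)

# The seed point is known to lie in the colon of the current slice...
seed = (3, 3)
colon_id = cur_labels[seed]

# ...and because adjacent slices overlap, the same location falls inside
# a labeled region of the next slice, carrying the colon identity forward.
carried = nxt_labels[seed]
```

In this way the colon location in each processed slice serves as the "seed" for identifying the colon region in the slice that follows.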
[0074] Referring now to Fig. 5, a colon 160 having bowel contents 162 therein is processed in the manner described above in conjunction with Figs. 4 and 4A to provide a centerline 164. In the case where the seed point is manually provided (e.g. by a user), any of the points 168a - 168d could be used as the seed point. In the case where a centerline processor automatically detects the seed point, the process begins with the most inferior image 170 (also referred to as the index image) and a point proximate the center of the image 170 corresponding to the rectum/anus 171 is selected as the seed point (i.e. a point in the image known to be part of the colon). With image 170 selected as the first image, the process proceeds sequentially through each image moving in the direction of image 172.
[0075] Referring now to Fig. 6, image 172 is shown to include a first object 174 and a second object 176. It should be appreciated that both objects 174, 176 include high density regions 178, 180. Since the bowel centerline is being found without cleansing the bowel, it is possible that either object 174, 176 corresponds to the bowel and the other object corresponds to some other structure such as bone, for example. Structure 176 also includes a region 182 having a density lower than other objects or regions in the image (e.g. an air region 182) and a boundary 184.
[0076] Referring now to Fig. 7 in which like elements of Fig. 6 are provided having like reference designations, the air region 182 has a boundary 190 and to help distinguish the colon from other material, the air region boundary 190 is dilated as indicated by boundary 190a. Similarly, the content region 180 has a boundary 192 and to help distinguish the colon from other material, the content region boundary 192 is dilated as indicated by boundary 192a. The region 178 also has a boundary 194 which is dilated as indicated by boundary 194a.
[0077] Referring now to Fig. 7A, the dilation of the air and high density regions results in the generation of an overlap region 196. A union of each point in each set is then made to identify the entire colon. It should be appreciated that the points must be contiguous to be included as points in the union of sets. In this manner, structure 176 is distinguished as a structure which is separate from structure 178.
[0078] Referring now to Fig. 8, a pair of images 200, 202 in which structures 176', 178' have been identified are processed to find a centerline. The current image 200 corresponds to an image in which a point 204 in the colon has already been identified. The point 204 could have been identified either manually or automatically as described above. Once the two images are aligned, a determination is made as to what structure in image 202 the point 204 lies within. The alignment should be considered conventional in that the images are obtained in the same scanning procedure and will have assigned to them a fixed position within the frame of reference established by the CT scanner. In this particular example, the point 204 lies within structure 176'. Since point 204 is known to be within the colon and the distance D which separates the two images 200, 202 along the colon is known to be small, structure 176' in image 202 is identified as a colon region.
[0079] Referring now to Fig. 9, as mentioned above, the images are processed sequentially in a given direction as shown in Fig. 9. It should be appreciated that in some instances, e.g. when a turn in the colon is reached, it is necessary to reverse the direction in which the images are processed. This is important because sometimes the colon makes 180 degree turns (also referred to as bends, or flexures) and in order to correctly map the anatomy with reference to the colon, the system must change direction to follow the anatomic (as opposed to spatial) direction of the colon. Upon reversing, the system needs to continue in the 'reversed' direction at least the distance equivalent to the maximum diameter of the colon before determining that it has reached its terminus. This is because, if one imagines a hairpin turn in a tube (e.g. the colon), if the system has processed images through one loop to the bend, and reversed direction, it will begin to encounter new, contiguous, but unmarked colon only after it has descended roughly one diameter's worth of distance. The one diameter distance is a practical, empirical distance to employ, however other distances may of course also be used to accomplish proper processing.
[0080] Referring now to Figs. 10 and 10A, by finding a centerline 222 of a colon 220, the colon can be "unfolded" as shown by reference number 220a in Fig. 10A. Once the colon is unfolded, polyp detection techniques can be applied to the unfolded colon 220a. The centerline has two primary uses: 1) improving subtraction, by allowing the system to clearly follow fold anatomy in three dimensions; and 2) improving polyp detection. The latter is improved because the detection system can operate within a frame of reference which is related to the colon anatomy. If all of the colon anatomy is laid out on a 'plane', as is possible with a centerline, then potential lesions can be normalized with respect to scale, and simultaneously, orientation (rotation) of target lesions can be minimized.
[0081] Once polyps (e.g. polyps 226a, 226b) are identified in the unfolded colon 220a, the polyp locations are identified in the three dimensional image of the colon. An approach to polyp detection using a map of an unfolded colon is next described.
[0082] For the following discussion, it will be assumed that CTC datasets have been cleansed electronically in the manner described above. Most image processing methods, such as that intended for DSBC, will introduce artifacts into the image datasets; the specific intent is to optimize and implement a discriminator system for minimal preparation, electronically cleansed CTC datasets, to appropriately handle the expected artifacts. [0083] The first step in detection is to separate the colon surface from the rest of the image dataset. A useful method to map the colon surface is to calculate a colon centerline (i.e. a three-dimensional curve that runs the length of the colon along the center of its lumen). The centerline is a useful construct for evaluation of a wide range of image processing problems, and can be calculated by use of a morphologic thinning algorithm and the so-called medial axis transform (MAT). In morphologic thinning, the air within the colon lumen is taken as the object of interest and this column of air is iteratively eroded until a single line segment remains along the central axis of the colon. The medial axis transform is a complementary algorithm wherein the distance between a set of regularly spaced points within the column and the outer boundary of the air column is tabulated. Points associated with greater distance to the boundary are assigned higher values, and the medial axis is taken as the set of points with the maximum distance to the outer boundary. While the MAT is more computationally expensive, it is generally a robust approach.
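The medial axis idea can be illustrated with a distance transform. The toy "air column" below and the use of scipy's Euclidean distance transform are assumptions of the sketch, not the patent's implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy lumen: a horizontal air column 3 pixels tall on a 2-D slice.
lumen = np.zeros((5, 11), dtype=bool)
lumen[1:4, 1:10] = True

# For every lumen pixel, tabulate the distance to the outer boundary.
dist = distance_transform_edt(lumen)

# Points farthest from the boundary form the medial axis; in this toy
# cross-section (column 5), the middle row wins.
center_row = int(np.argmax(dist[:, 5]))
```

Repeating the maximal-distance selection across every cross-section traces out the centerline of the air column.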
[0084] Following the calculation of a colon centerline, a radial distance signature is calculated along the length of the centerline. The radial signature is a graph representing the distance from the center of an object to its boundary as a radial line segment is swept through 360°. In this case, the distance from the centerline to the colon mucosal surface is measured at each fixed angular interval around the centerline, and this process is repeated along the length of the centerline curve. The result of this procedure is a planar map of the colon where the value of each pixel represents its radial distance from the central axis. The process is akin to straightening the colon, and slicing it open longitudinally, as for a pathology specimen. This approach has been employed to map the colon in both phantom and clinical cases for evaluation for CTC.
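A radial distance signature can be sketched as follows. The boundary is given here as a list of 2-D points and the angular sampling is simplified to a nearest-angle lookup; both are assumptions of this sketch:

```python
import numpy as np

def radial_signature(boundary_pts, center, n_angles=8):
    """Distance from a center point to a closed boundary, sampled at
    fixed angular intervals (here by nearest-angle lookup)."""
    pts = np.asarray(boundary_pts, dtype=float)
    cx, cy = center
    angles = np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx)
    radii = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    signature = []
    for k in range(n_angles):
        theta = -np.pi + 2.0 * np.pi * k / n_angles
        diff = np.angle(np.exp(1j * (angles - theta)))  # wrapped difference
        signature.append(radii[np.argmin(np.abs(diff))])
    return np.array(signature)

# A circular "lumen" of radius 5 yields a flat signature of 5 at every
# angle; a polyp protruding into the lumen would appear as a local dip
# in the raw distance (and hence a peak in radial height).
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([5 * np.cos(t), 5 * np.sin(t)])
sig = radial_signature(circle, (0.0, 0.0))
```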
[0085] Transforming the features of the colon mucosa from the standard Euclidean axes of a CT image to a set of coordinates situated along the length of the colon has the potential to normalize the orientation of polyp lesions with respect to the colonic mucosa. This is because the majority of polyps are semi-circular in elevation. Hence, polyps can be viewed as nearly hemi-spherical objects protruding from the plane of the colon inner surface. Viewed from above, these polyps would be expected to have a nearly circular contour. [0086] Machine identification of objects of interest in the map requires a preliminary segregation of features in the map, a step called segmentation. This process can be accomplished by a method known as the watershed segmentation. Briefly, this method can be viewed akin to mathematically flooding the contour of the colon map with water, and evaluating the contour lines of features that remain above the water surface. In this analogy, the water is allowed to rise until all objects are inundated, and the algorithm tabulates the position of the contour lines just before the waters from different regions of the map are allowed to admix. The result of this process is a set of continuous boundaries surrounding the separate objects on the map. To optimize the procedure, watershed segmentation is usually applied to the gradient transform of an image. The gradient transform is a rendering of the image where edges of shapes are accentuated. The edge accentuation is calculated by convolving the image map with an operator matrix, such as the Sobel operator. In the case of the colon map, the edges to be accentuated are the transitions of radial height of each feature along the colon mucosa. The result of these operations is identification of a set of objects situated along the mapped internal surface of the colon. The boundary of each object is composed of the points at which the local colon surface diverges sharply inward. 
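The gradient-plus-watershed sequence of paragraph [0086] can be sketched with scipy. The toy height map, the marker placement, and the use of watershed_ift in place of a full watershed implementation are assumptions of the sketch:

```python
import numpy as np
from scipy.ndimage import sobel, watershed_ift

# One raised feature on an otherwise flat colon map.
height = np.zeros((9, 9))
height[2:7, 2:7] = 10.0

# Edge accentuation: Sobel gradient magnitude of the height map.
grad = np.hypot(sobel(height, axis=0), sobel(height, axis=1))
grad_u8 = (255 * grad / grad.max()).astype(np.uint8)

# "Flood" the gradient image from markers; each pixel is assigned to
# the marker whose flood reaches it first, yielding closed boundaries.
markers = np.zeros((9, 9), dtype=np.int16)
markers[4, 4] = 1                # inside the feature
markers[0, 0] = 2                # flat background
labels = watershed_ift(grad_u8, markers)
```

The markers here play the role of the pre-filtering tags discussed later for controlling over-segmentation.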
What follows are two methods to describe and classify these objects identified by this process of segmentation.
[0087] The first method is based on statistical description (i.e. it is a statistical approach to polyp detection). There is a well-developed body of literature on the subject of statistical methods for feature discrimination. In the case of polyp detection, the boundary and internal texture of the objects identified by segmentation are described, respectively, in terms of their variation about a central point (called the centroid), their mean, and their variance about the mean. The centroid is the weighted average of the object, corresponding to its center of mass. The mean and variance are, respectively, the standard statistical average and second moment about this average. One advantage to this approach to feature description is that these statistical descriptions are relatively invariant to the orientation and scaling of the object in question. In the case of polyp detection, where the target objects are nearly circular in shape, the variance of the object boundary about the object centroid and the compactness of each boundary yield potentially unique values. The compactness of an object is defined by the equation: (perimeter)^2 / area. For nearly circular representations of objects, both the boundary variance and the boundary compactness demonstrate a minimum. In contrast, other structures of the colon are typically linear or curvilinear in shape. Hence, one would expect the compactness and boundary variance of these structures to be distinguishable from those of a polyp.
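The compactness measure can be checked numerically. The circle radius and the 1 x 20 rectangle standing in for a curvilinear fold are arbitrary illustrative values:

```python
import math

def compactness(perimeter, area):
    """Boundary compactness, (perimeter)^2 / area; a circle attains the
    minimum possible value, 4*pi."""
    return perimeter ** 2 / area

r = 3.0
circle = compactness(2 * math.pi * r, math.pi * r ** 2)   # = 4*pi
fold_like = compactness(2 * (1 + 20), 1 * 20)             # 1 x 20 rectangle

# The near-circular polyp candidate scores far lower than the elongated,
# fold-like structure, which is the basis for discrimination.
```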
[0088] The internal texture of an object also contains useful classifying information. For example, the standard deviation of pixels within an object and the average entropy of pixels that make up each object have been shown to be useful for object discrimination. In this case, the entropy of a set of pixels is defined as: -Σi [ p(i) log(p(i)) ], where p(i) represents a histogram of the texture (CT density) values. Summation is performed for each i comprising the range of possible texture values in the image. In the setting of the colon, a frequent mimicker of the colon polyp is retained fecal material. Like polyps, retained fecal material can demonstrate a nearly spherical contour. Polyps, however, demonstrate a uniformly soft tissue density, whereas retained feces generally contain small bubbles of air. As a result of this heterogeneity, one would expect the texture variance and entropy of polyps and fecal pseudolesions to be distinct.
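The entropy descriptor can be sketched directly from its definition. The density values (40 HU soft tissue, -900 HU air) and the 8-bin histogram are illustrative assumptions:

```python
import numpy as np

def texture_entropy(values, bins=8):
    """Entropy -sum_i p(i)*log(p(i)) of a pixel-value histogram."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                     # zero-count bins contribute nothing
    return float(-np.sum(p * np.log(p)))

# Hypothetical CT densities: a homogeneous soft-tissue polyp versus
# retained feces containing air bubbles.
polyp = np.full(500, 40.0)
feces = np.concatenate([np.full(400, 40.0), np.full(100, -900.0)])

e_polyp = texture_entropy(polyp)     # homogeneous: entropy 0
e_feces = texture_entropy(feces)     # heterogeneous: entropy > 0
```

The heterogeneous pseudolesion scores measurably higher entropy than the uniform polyp, which is the expected basis for separating the two classes.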
[0089] In general, pattern recognition can be facilitated by the combination of boundary and texture descriptors into a single multi-dimensional feature vector. It is believed that there are no published studies combining both contour and textural analyses for the purpose of polyp detection; however, there is an extensive literature describing their utility for pattern recognition.
[0090] In the most straightforward approach, feature vectors can be compared for the purpose of pattern classification by calculating the Euclidean distance between them. The Euclidean distance is the multi-dimensional analogy to the planar formula, D = √( (x0 - x1)^2 + (y0 - y1)^2 ), where (x0, y0) and (x1, y1) refer to the coordinates of two points on a plane. Objects that are similar in feature will demonstrate a small Euclidean distance, and by empirically setting a threshold, or discriminator function, one can classify an unknown object based on the distance of its feature vector to the vector of a known object. In practice, the set of feature vectors of a class of objects are not identical. Instead, each class of objects, designated ωj, is best described by the statistical parameters: mean feature vector, mj, covariance matrix, Cj, and probability of occurrence, P(ωj). These parameters are analogous to the two-dimensional evaluation of populations according to mean, variance, and probability density. If these parameters are known for each class of objects to be encountered, it is in theory possible to formulate an explicit discriminator function to separate them. This function, called the Bayesian discriminator dj( ) for a group of classes j, takes the form:
dj(x) = ln P(ωj) - (1/2) ln |Cj| - (1/2) [ (x - mj)^T Cj^-1 (x - mj) ]
where x is a feature vector, x^T is the transpose of x, Cj and Cj^-1 are the covariance matrix and its inverse, mj and mj^T are the mean feature vector and its transpose, and P(ωj) is the probability of class ωj occurring.
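The discriminator can be sketched numerically in the Gaussian form described above. The two-dimensional feature values, the equal priors, and the identity covariance are illustrative assumptions, not catalogued colon data:

```python
import numpy as np

def bayes_discriminant(x, m, C, prior):
    """Gaussian Bayesian discriminant for one class:
    ln P(w) - 0.5*ln|C| - 0.5*(x - m)^T C^-1 (x - m)."""
    diff = x - m
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(C))
            - 0.5 * diff @ np.linalg.inv(C) @ diff)

# Hypothetical 2-feature classes (boundary variance, compactness):
m_polyp = np.array([0.1, 13.0])
m_fold = np.array([0.9, 80.0])
C = np.eye(2)                        # identity covariance for the sketch
x = np.array([0.2, 14.0])            # unknown object

d_polyp = bayes_discriminant(x, m_polyp, C, 0.5)
d_fold = bayes_discriminant(x, m_fold, C, 0.5)
# The class with the larger discriminant value is assigned: here, polyp.
```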
[0091] The different kinds of soft tissue density structures to be encountered in the colon are relatively few in number, and include polyps, haustral folds, and retained feces. One approach to polyp detection is to construct a library of known colon structures and derive the mean, co-variance, and probability parameters necessary to form an explicit discriminator. While it is possible that the feature vectors of these structures will cluster sufficiently to permit construction of an explicit discriminator function, it is believed that these parameters have not yet been catalogued for the human colon.
[0092] In cases where the mean feature vector, covariance matrix, and probability descriptors of object classes cannot be explicitly calculated a priori, it has been shown that effective pattern discrimination can be achieved by use of an artificial neural network. In this method, the feature vector of an object is evaluated in a set of weighted nodes, analogous to neurons. Each node has the property that it generates a non-linear output in response to a set of input values and associated weights. The input nodes typically correspond in number to the dimension of the input feature vector, and similarly, the output nodes correspond in number to the different object classes to be identified. Determining the node weights is performed by exposing the network to a set of training cases. In this process, vectors of known class are fed forward through the network and the error of the resulting output classification is stepwise propagated backward through the network. Each weight is adjusted in order to minimize the local error associated at a given node with the inputs of the preceding layer. Following training, unknown feature vectors are fed forward through the network without backward feedback and class membership is determined by the final state of the output nodes. It has been shown that a three-layer network, comprised of an input layer, a hidden layer, and an output layer is in theory capable of separating arbitrarily complex groups of object classes. Hence, another means exists to analyze the feature vectors of colon objects if the cataloguing method proves unfeasible.
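A minimal version of such a network, using the 4-4-3 dimensions suggested in paragraph [0107], can be sketched as follows. The feature values, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

# 4 input nodes, 4 hidden nodes, 3 output nodes.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (4, 4))   # small random weights with zero mean
W2 = rng.normal(0.0, 0.1, (4, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)             # hidden layer
    return h, sigmoid(h @ W2)       # output layer

# One hypothetical training vector of known class (class 0 of 3):
x = np.array([0.1, 13.0, 0.2, 1.0]) / 13.0   # scaled feature vector
target = np.array([1.0, 0.0, 0.0])

for _ in range(10000):              # backward error propagation
    h, y = forward(x)
    delta_out = (y - target) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    W2 -= 0.5 * np.outer(h, delta_out)
    W1 -= 0.5 * np.outer(x, delta_hid)

_, y = forward(x)                   # after training, output tracks target
```

Each weight update minimizes the local error at a node given the outputs of the preceding layer, which is the back-propagation step described in the text.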
[0093] A second approach taken for polyp classification (referred to as a correlation method for polyp detection) combines a planar colon map with template matching. In this approach, the segmented representations of objects in the colon map are further modified to normalize their size and to incorporate a description of their internal texture. Normalization for size can be accomplished by analysis of the boundary points of each object around the object centroid. In this representation, the boundary points of an object are repositioned along their respective radii toward the centroid by the minimum radius observed in the set of boundary points for that object. This process retains the basic morphology of the boundary contour, and reduces the variations in boundary points for larger objects. A description of the internal composition of an object can be represented by the standard deviation of pixels contained within the object boundary. The pixels within the normalized representation of the object are set to the value of this standard deviation. The result of these steps is a planar object having a contour normalized for size, and having internal pixel values set to the standard deviation of the object's internal texture. A similar mapping is performed for the template polyp: its contour and internal texture are respectively normalized for size and modified to reflect internal homogeneity. The modified map is then combined with the template in the process of correlation, described previously. The pixel values in the correlation image reflect the similarity of each region of the modified map with the modified template. Sharp peaks of pixel value correspond to regions of high similarity and are taken to represent the location of polyps. Correlation peaks of this kind can be selected by means of a high pass filter and threshold.
[0094] Another approach to further develop a polyp detection system can be provided by implementing colon mapping and segmentation as follows. The centerline of the colon can be found using prior art techniques such as the algorithm described by Ferreira and Russ, with the modifications described by Zhang for handling colon flexures. With the colon centerline identified, a colon mapping process can be performed. The smallest size polyp of clinical interest is approximately seven (7) mm in size, which represents ten (10) isometric voxels from thirty-five (35) cm field-of-view CTC data. Hence, in order to balance computational efficiency with minimized sampling error, the software will sample the centerline at five (5) voxel intervals. The angular sampling interval at each stop along the centerline derives from a similar calculation: the maximum circumference of normal, air distended colon in CTC is approximately 180 mm, leading to a seven degree (7°) interval in order to achieve fifty percent (50%) sampling overlap.
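The sampling arithmetic of paragraph [0094] can be checked directly. The 512-pixel reconstruction matrix is an assumption (a typical CT matrix size, not stated in the text):

```python
# Assuming a 512-pixel reconstruction matrix for the 35 cm field of view:
voxel_mm = 350 / 512                 # ~0.68 mm per isometric voxel
polyp_voxels = 7 / voxel_mm          # a 7 mm polyp spans ~10 voxels

# For 50% sampling overlap on a 7 mm target, samples must fall every
# 3.5 mm; around a 180 mm maximum circumference this is:
interval_deg = 360 * 3.5 / 180       # = 7 degrees
```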
[0095] Graphical references for the methods described in this section are shown in Figures 11A-D.
[0096] The technique utilizes the radial distance from the centerline to an inner surface of the colon using a three-dimensional Euclidean distance formula. The mucosal surface is segmented out of the CT data using a global threshold set to exclude pixels above -50 Hounsfield units. Previous experience has shown that this threshold value clearly outlines the inner aspect of the mucosa. For each 360° sweep about the centerline, the software will tabulate the maximal radius observed, rmax. The radial height of each point, i, along the signature will be calculated as: rmax - ri.
[0097] The data construct for the colon map comprises a three dimensional matrix, with two axes describing position along the mucosal (inner) surface of the colon, and the third dimension representing the radial height of features from the surface. The internal voxels of structures protruding into the colon lumen will be incorporated into the matrix in two steps, based on the calculation of the radial signature of the segmented colon. The maximum of the radial signature, rmax, will be taken to represent the most distended region of colon wall at each position for which it will be calculated along the colon centerline. In the recording of the angular signature of the colon, voxels located along the radial segment stretching from the inner mucosal surface to rmax will be included in the map matrix. In this way, the map will comprise strata representing the features protruding into the colon lumen. Three additional levels along the height axis of the map will be used. The first of these, designated z0, will hold the radial height data. The second, z1, will hold the gradient transform of the map, and the third, z2, will hold a normalized form of each object for use in correlation matching. [0098] The gradient transform of the colon map, located in level z1, will be calculated using the Sobel operator. The gradient will be calculated by convolution of the Sobel operator with the z0 level data, and the result will be taken to represent the edges along the height-axis of structures protruding into the colon lumen.
[0099] A watershed segmentation on the z1 transform data is then performed. This watershed segmentation can be accomplished using any prior art technique such as the technique of Bieniek. While the watershed technique has several known advantages in terms of its function and output, it is also known that the algorithm can generate unwanted boundaries due to minor variations in the surface being analyzed. This problem, known as over-segmentation, can be addressed in the following manner. It has been shown that preprocessing the surface to be analyzed in order to bring additional information to the segmentation step can improve the output of the segmentation. For example, application of a smoothing filter can diminish the presence of minor surface variations, removing them as possible false targets of segmentation. In addition, in the case of the colon map, the choice of a sufficiently high gradient threshold in the gradient transform may permit selection of only the larger features of interest along the colon surface. These preprocessing steps lead to the generation of markers, i.e. tags that help to pre-filter the image in order to optimize the segmentation.
[0100] The process of feature extraction and analysis can be continued along two approaches, one based on a statistical description of image features, and the second based on correlation. Described below are approaches for each method.
[0101] In one statistical approach, for each object segmented from the colon map, the boundary variance, boundary compactness, textural variance, and textural entropy are determined. First, the centroid of the boundary points of each object represented in the z1 level of the data structure is determined. This may be accomplished using conventional techniques. The variance of the distance of boundary points to the centroid as well as the compactness can then be determined. This may also be accomplished using conventional techniques. [0102] For the textural descriptors, the composite voxels for each three-dimensional object arising from the colon mucosa are gathered. In one approach, this can be accomplished in two steps. First, for each segmented object, j, the z1 boundary points can be used to define within the plane of z1 the set of internal points of each object, designated z1,internal,j. As the data structure is designed, for each point, pi, in the set z1,internal,j, the composite elements of each three-dimensional object are held in the column of strata above pi. The vertical delimiter of these elements is the radial signature stored in each corresponding position of the z0 layer. Hence, for each planar object defined in z1, one can gather all the elements that compose the corresponding three-dimensional structure arising from the colon mucosa, to calculate the mean, variance, and entropy of these elements. The variance and the entropy of the texture values can be determined using well-known techniques such as the technique described by Gonzalez.
[0103] In one correlation approach, for evaluation of the correlation based matching technique, a normalized version of each segmented object is determined and these representations are stored in the z2 layer of the data structure. The normalized version of each object will be centered on the object centroid calculated previously, and stored in the z1 layer. Normalization for size for each object will proceed as follows. The radial distances separating the set of boundary points from the centroid of each object are determined. The minimum radial distance observed for each set of boundary points, designated rmin, will be used to displace each boundary point in a radial direction toward the centroid by dividing each boundary point radius by rmin. The result will be a set of boundary points that incorporate the shape of each object, but reduced in size.
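The size normalization can be sketched as follows. The circular test boundaries are illustrative, and the scaling is implemented as division of each radius by rmin as the text describes:

```python
import numpy as np

def normalize_boundary(pts, centroid):
    """Scale each boundary point's radius about the centroid by the
    minimum observed radius r_min, preserving contour morphology while
    factoring out object size."""
    pts = np.asarray(pts, dtype=float)
    c = np.asarray(centroid, dtype=float)
    radii = np.linalg.norm(pts - c, axis=1)
    return c + (pts - c) / radii.min()

# Two circular boundaries of different radius normalize to the same
# unit circle, so subsequent template matching no longer depends on size.
t = np.linspace(0, 2 * np.pi, 36, endpoint=False)
big = np.column_stack([9 * np.cos(t), 9 * np.sin(t)])
small = np.column_stack([4 * np.cos(t), 4 * np.sin(t)])
nb = normalize_boundary(big, (0.0, 0.0))
ns = normalize_boundary(small, (0.0, 0.0))
```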
[0104] The texture of each object in z2 will be incorporated into this representation by calculating the standard deviation of voxel density. The internal elements comprising each three-dimensional object represented in z1 and z2 will be gathered together as described above for the statistical approach. The result in the z2 level will be a set of objects whose boundaries correspond in morphology to the objects in z1, but normalized for size, and whose internal elements incorporate the object texture.
[0105] Lesions may be catalogued using either a statistical approach or a correlation approach. In one statistical approach, an estimate is made of the boundary variance, boundary compactness, textural variance and textural entropy of a group of colonoscopy confirmed polyps, haustral folds, and fecal pseudolesions. The polyps and other objects will be extracted from the large group of research CTC cases performed previously. Initially, objects that have been identified on the traditionally cleansed CTC exams can be used, in order to exclude noise introduced by the DSBC processing. In one prototype system, fifty lesions already present in a library include a spectrum of polyps ranging in size from 5 to 35 mm. Similarly, as haustral folds are normal, common structures of the colon, a large number of these are available for analysis. Finally, fecal pseudolesions in prior cases have been catalogued and it is possible to identify these objects in experimental embodiments operating with the above-described techniques. The four statistical boundary and textural descriptors can be calculated from the CTC datasets, following the segmentation steps described above. These data will be used to estimate a mean feature vector, m, covariance matrix, C, and probability of occurrence for polyps, haustral folds, and fecal pseudolesions. The mean vector and covariance matrix can serve as the basis for constructing a Bayesian pattern discriminator function, utilizing prior art techniques such as the method described by Gonzalez. After analysis by this approach, the software will flag objects assigned to the polyp class by resetting the centroid of the z1 layer of each object to an arbitrarily high value. The group of marked centroids will be taken to represent the group of polyp candidates and these data will be passed to a polyp mark-up routine for identification to the evaluating radiologist.
[0106] If the Bayesian discriminator is unable to adequately distinguish between object classes, it is conceivable that the variety of polyp morphologies, from sessile to pedunculated, will prove too complex for the above-described approach. If the Bayesian analysis of the feature vectors outlined above proves too limited, the feature vectors may be analyzed by means of a three layer artificial neural network. Determination of inadequate performance of the Bayesian technique will be made based upon evaluation of polyps within the colon phantom, a series of steps described below. It is believed that the neural network approach may be necessary if the Bayesian approach yields a sensitivity for polyps in the colon phantom of < 70%. A brief description of a neural network design is provided herein below. [0107] If constructed, the three layer neural network will be composed of four input nodes, four hidden nodes, and three output nodes. Initial training of the network will take place with a set of polyps, folds, and pseudolesions obtained from traditionally cleansed CTC. Network node weights will initially be set to small random values with zero mean. These will be adjusted according to the method of back propagation, and the output nodes will be monitored during training. The process continues by exposing the network to a set of known phantom structures obtained using the DSBC technique, as it has been shown that the performance of an artificial neural network can increase with graded exposure to noise during training. Training will be deemed complete for the network when, upon presentation with an object of class i, the output for node Oi is > 0.95 and the output for all other nodes Oj, j ≠ i, is < 0.05.
[0108] In one correlation approach, a polyp template can be generated from a library of previous data by applying the processing steps outlined above to generate a family of normalized polyp representations. These polyp templates are convolved with the normalized representation of the colon stored in the z2 level of the imaging data structure. The resulting correlation image is filtered using a two-dimensional high-pass filter in order to emphasize sharp peaks of correlation, as these are most likely to represent regions of match. In addition, a global threshold is expected to be employed to further isolate the correlation peaks from the rest of the correlation image. The value of this threshold can be determined empirically. The locations of the filtered, thresholded correlation peaks represent the polyp candidates for this technique. These data can then be passed to a polyp mark-up routine.
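The correlate, high-pass filter, and threshold pipeline can be sketched in two dimensions as follows. The direct-summation correlation and the Laplacian high-pass kernel are illustrative choices, and the threshold value is, as the text notes, empirical.

```python
import numpy as np

# a common discrete Laplacian kernel, used here as a simple 2-D high-pass
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def correlate_same(img, kernel):
    """Plain 'same'-size 2-D correlation with zero padding (no SciPy)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def polyp_candidates(colon_img, template, threshold):
    """Correlate the normalized colon representation with a polyp
    template, high-pass filter the correlation image to emphasize
    sharp peaks, then apply a global threshold.  Returns the
    (row, col) locations of the surviving peaks."""
    corr = correlate_same(colon_img, template)
    peaks = correlate_same(corr, LAPLACIAN)
    return np.argwhere(peaks > threshold)
```

In a full implementation each template in the family of normalized polyp representations would be applied in turn, and the peak locations pooled before the mark-up stage.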
[0109] A polyp mark-up and output of detection system is next described. Those objects assigned to the polyp class will be flagged by setting the centroid element in the z1 layer to an arbitrarily high value. Subsequently, the group of flagged centroid elements will be processed by a polyp mark-up system. This routine will take each centroid location and map this value back to the original CT dataset. Finally, the mark-up routine will etch in white the mucosal surface of the colon local to the centroid position. In this way, output of the detection system will be displayed to the reviewing radiologist. [0110] Having described preferred embodiments of the invention, it will now become apparent to those of ordinary skill in the art that other embodiments incorporating these concepts may be used. Accordingly, it is submitted that the invention should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the appended claims.
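The flagging and mark-up steps can be sketched as follows for a single slice. The specific flag value, the 2-D slice indexing, the etch radius, and the white value of 255 (appropriate for an 8-bit display image) are illustrative assumptions; the patent itself marks the mucosal surface local to each centroid in the full CT dataset.

```python
import numpy as np

FLAG = 10_000  # an arbitrarily high flag value, as in the text

def flag_polyps(z1, centroids):
    """Flag each polyp-class centroid element in the z1 layer by
    resetting it to the arbitrarily high flag value."""
    for r, c in centroids:
        z1[r, c] = FLAG
    return z1

def mark_up(ct_slice, z1, radius=2, etch_value=255):
    """Map each flagged centroid back to the CT slice and etch the
    region local to the centroid in white, producing the image shown
    to the reviewing radiologist."""
    marked = ct_slice.copy()
    for r, c in np.argwhere(z1 == FLAG):
        r0, r1 = max(r - radius, 0), min(r + radius + 1, marked.shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius + 1, marked.shape[1])
        marked[r0:r1, c0:c1] = etch_value
    return marked
```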

Claims

What is claimed is:
1. In a digital bowel subtraction process, a method for processing a fold region of an image comprising: identifying in the image a boundary between a first substance having a first density and a second substance having a second different density; processing one or more portions of the image about the boundary to determine whether symmetry exists about the boundary; and identifying regions in the image having symmetry about the boundary.
2. The method of Claim 1 wherein identifying a boundary comprises identifying a boundary using a threshold operation and a dilation operation.
3. The method of claim 2 wherein the threshold operation comprises comparing pixel values to a predetermined threshold value and setting pixel values above the predetermined threshold to a first value and pixel values below the predetermined threshold to a second value.
4. The method of claim 1 wherein processing one or more portions of the image about the boundary to determine whether symmetry exists about the boundary comprises at least one of: (a) comparing the value of pixels above the boundary to the value of pixels below the boundary; (b) comparing the value of pixels on the boundary to the value of pixels below the boundary; and (c) comparing the value of pixels on the boundary to the value of pixels above the boundary.
5. The method of claim 1 wherein the image corresponds to an original image.
6. The method of claim 5 wherein the processing to determine symmetry about the boundary comprises generating a window; sliding the window across the boundary; and computing a value indicative of the symmetry between the pixel values above the boundary in the window and the pixel values below the boundary in the window.
7. In a digital bowel subtraction process, a method for processing a fold region of an image comprising: identifying an air-water boundary in the digital image; processing one or more portions of the image about the air-water boundary to determine whether symmetry exists about the air-water boundary; and identifying regions in the image having symmetry about the air-water boundary.
8. The method of claim 7 wherein identifying an air-water boundary comprises identifying an air-water boundary using a threshold operation and a dilation operation.
9. The method of claim 8 wherein the threshold operation comprises comparing pixel values to a predetermined threshold value and setting pixel values above the predetermined threshold to a first value and pixel values below the predetermined threshold to a second value.
10. The method of claim 7 wherein processing one or more portions of the image about the air-water boundary to determine whether symmetry exists about the air-water boundary comprises at least one of: (a) comparing the value of pixels above the boundary to the value of pixels below the boundary; (b) comparing the value of pixels on the boundary to the value of pixels below the boundary; and (c) comparing the value of pixels on the boundary to the value of pixels above the boundary.
11. The method of claim 7 wherein the image corresponds to an original image.
12. The method of claim 11 wherein the processing to determine symmetry about the air-water boundary comprises generating a window; sliding the window across the air-water boundary; and computing a value indicative of the symmetry between the pixel values above the boundary in the window and the pixel values below the boundary in the window.
13. A system for processing a fold region in a digital image of a colon, the system comprising: a boundary processor which receives a first digital image and identifies in the image a boundary between a first substance having a first density and a second substance having a second different density; and a symmetry processor which processes one or more portions of the image about the boundary to determine whether symmetry exists about the boundary and identifies regions in the image having symmetry about the boundary.
14. The system of Claim 13 wherein said boundary processor comprises means for identifying a boundary using a threshold means and a dilation means.
15. The system of claim 14 wherein the threshold means comprises means for comparing pixel values to a predetermined threshold value and setting pixel values above the predetermined threshold to a first value and pixel values below the predetermined threshold to a second value.
16. The system of claim 13 wherein said symmetry processor comprises means for processing one or more portions of the image about the boundary to determine whether symmetry exists about the boundary by at least one of: (a) comparing the value of pixels above the boundary to the value of pixels below the boundary; (b) comparing the value of pixels on the boundary to the value of pixels below the boundary; and (c) comparing the value of pixels on the boundary to the value of pixels above the boundary.
17. The system of claim 16 wherein the image being processed corresponds to an original image.
18. The system of claim 17 wherein the means to determine symmetry about the boundary comprises means for generating a window, sliding the window across the boundary, and computing a value indicative of the symmetry between the pixel values above the boundary in the window and the pixel values below the boundary in the window.
19. The system of claim 13 wherein the density of the first substance is lower than the density of the second substance.
20. The system of claim 13 wherein the density of the first substance is higher than the density of the second substance.
21. A system for identifying a colon centerline in an image of an uncleansed colon comprising: means for identifying a seed point known to be within the colon; means for labeling the seed point and regions around the seed point as having one of at least two values based upon a characteristic of the region in the image; means for identifying image regions having the same label as the seed point; and means for identifying a colon region within the image by identifying all regions having the same label as the seed point.
22. A method for identifying a colon centerline in an image of an uncleansed colon comprising: identifying a seed point known to be within the colon; labeling the seed point and regions around the seed point as having one of at least two values based upon a characteristic of the region in the image; identifying image regions having the same label as the seed point; and identifying a colon region within the image by identifying all regions having the same label as the seed point.
PCT/US2005/012325 2004-04-12 2005-04-12 Method and apparatus for processing images in a bowel subtraction system WO2005101314A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2007508461A JP2007532251A (en) 2004-04-12 2005-04-12 Method and apparatus for image processing in an intestinal deduction system
EP05736176A EP1735750A2 (en) 2004-04-12 2005-04-12 Method and apparatus for processing images in a bowel subtraction system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US56146504P 2004-04-12 2004-04-12
US60/561,465 2004-04-12

Publications (2)

Publication Number Publication Date
WO2005101314A2 true WO2005101314A2 (en) 2005-10-27
WO2005101314A3 WO2005101314A3 (en) 2006-06-01

Family

ID=34966077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/012325 WO2005101314A2 (en) 2004-04-12 2005-04-12 Method and apparatus for processing images in a bowel subtraction system

Country Status (3)

Country Link
EP (1) EP1735750A2 (en)
JP (1) JP2007532251A (en)
WO (1) WO2005101314A2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007064769A1 (en) * 2005-11-30 2007-06-07 The General Hospital Corporation Adaptive density mapping in computed tomographic images
JP2007209538A (en) * 2006-02-09 2007-08-23 Ziosoft Inc Image processing method and program
EP1884894A1 (en) * 2006-07-31 2008-02-06 iCad, Inc. Electronic subtraction of colonic fluid and rectal tube in computed colonography
JP2008093172A (en) * 2006-10-11 2008-04-24 Olympus Corp Image processing device, image processing method, and image processing program
JP2009022411A (en) * 2007-07-18 2009-02-05 Hitachi Medical Corp Medical image processor
EP2033567A1 (en) * 2006-05-26 2009-03-11 Olympus Corporation Image processing device and image processing program
DE102007058687A1 (en) * 2007-12-06 2009-06-10 Siemens Ag Colon representing method for patient, involves producing processed data by segmenting and removing image information of colon contents marked with contrast medium during recording of image data detected by colon folds
JP2010504794A (en) * 2006-09-29 2010-02-18 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Protrusion detection method, system, and computer program
DE102009033452A1 (en) * 2009-07-16 2011-01-20 Siemens Aktiengesellschaft Method and device for providing a segmented volume dataset for a virtual colonoscopy and computer program product
US8031921B2 (en) 2005-02-14 2011-10-04 Mayo Foundation For Medical Education And Research Electronic stool subtraction in CT colonography
EP2472473A1 (en) * 2006-03-14 2012-07-04 Olympus Medical Systems Corp. Image analysis device
WO2015063192A1 (en) * 2013-10-30 2015-05-07 Koninklijke Philips N.V. Registration of tissue slice image
CN109313803A (en) * 2016-06-16 2019-02-05 皇家飞利浦有限公司 A kind of at least part of method and apparatus of structure in at least part of image of body for mapping object
CN112116694A (en) * 2020-09-22 2020-12-22 青岛海信医疗设备股份有限公司 Method and device for drawing three-dimensional model in virtual bronchoscope auxiliary system

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4845566B2 (en) * 2006-04-03 2011-12-28 株式会社日立メディコ Image display device
US10089729B2 (en) * 2014-04-23 2018-10-02 Toshiba Medical Systems Corporation Merging magnetic resonance (MR) magnitude and phase images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8031921B2 (en) 2005-02-14 2011-10-04 Mayo Foundation For Medical Education And Research Electronic stool subtraction in CT colonography
US7809177B2 (en) 2005-11-30 2010-10-05 The General Hospital Corporation Lumen tracking in computed tomographic images
US7961967B2 (en) 2005-11-30 2011-06-14 The General Hospital Corporation Adaptive density mapping in computed tomographic images
US8000550B2 (en) 2005-11-30 2011-08-16 The General Hospital Corporation Adaptive density correction in computed tomographic images
US7965880B2 (en) 2005-11-30 2011-06-21 The General Hospital Corporation Lumen tracking in computed tomographic images
WO2007064769A1 (en) * 2005-11-30 2007-06-07 The General Hospital Corporation Adaptive density mapping in computed tomographic images
JP2007209538A (en) * 2006-02-09 2007-08-23 Ziosoft Inc Image processing method and program
US7860284B2 (en) 2006-02-09 2010-12-28 Ziosoft, Inc. Image processing method and computer readable medium for image processing
US8244009B2 (en) 2006-03-14 2012-08-14 Olympus Medical Systems Corp. Image analysis device
EP2472473A1 (en) * 2006-03-14 2012-07-04 Olympus Medical Systems Corp. Image analysis device
US8116531B2 (en) 2006-05-26 2012-02-14 Olympus Corporation Image processing apparatus, image processing method, and image processing program product
EP2033567A4 (en) * 2006-05-26 2010-03-10 Olympus Corp Image processing device and image processing program
EP2033567A1 (en) * 2006-05-26 2009-03-11 Olympus Corporation Image processing device and image processing program
EP1884894A1 (en) * 2006-07-31 2008-02-06 iCad, Inc. Electronic subtraction of colonic fluid and rectal tube in computed colonography
JP2010504794A (en) * 2006-09-29 2010-02-18 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Protrusion detection method, system, and computer program
US8594396B2 (en) 2006-10-11 2013-11-26 Olympus Corporation Image processing apparatus, image processing method, and computer program product
US8917920B2 (en) 2006-10-11 2014-12-23 Olympus Corporation Image processing apparatus, image processing method, and computer program product
EP2085019A1 (en) * 2006-10-11 2009-08-05 Olympus Corporation Image processing device, image processing method, and image processing program
EP2085019A4 (en) * 2006-10-11 2011-11-30 Olympus Corp Image processing device, image processing method, and image processing program
JP2008093172A (en) * 2006-10-11 2008-04-24 Olympus Corp Image processing device, image processing method, and image processing program
JP2009022411A (en) * 2007-07-18 2009-02-05 Hitachi Medical Corp Medical image processor
DE102007058687A1 (en) * 2007-12-06 2009-06-10 Siemens Ag Colon representing method for patient, involves producing processed data by segmenting and removing image information of colon contents marked with contrast medium during recording of image data detected by colon folds
US8908938B2 (en) 2009-07-16 2014-12-09 Siemens Aktiengesellschaft Method and device for providing a segmented volume data record for a virtual colonoscopy, and computer program product
DE102009033452B4 (en) * 2009-07-16 2011-06-30 Siemens Aktiengesellschaft, 80333 Method for providing a segmented volume dataset for a virtual colonoscopy and related items
DE102009033452A1 (en) * 2009-07-16 2011-01-20 Siemens Aktiengesellschaft Method and device for providing a segmented volume dataset for a virtual colonoscopy and computer program product
WO2015063192A1 (en) * 2013-10-30 2015-05-07 Koninklijke Philips N.V. Registration of tissue slice image
US10043273B2 (en) 2013-10-30 2018-08-07 Koninklijke Philips N.V. Registration of tissue slice image
US10699423B2 (en) 2013-10-30 2020-06-30 Koninklijke Philips N.V. Registration of tissue slice image
CN109313803A (en) * 2016-06-16 2019-02-05 皇家飞利浦有限公司 A kind of at least part of method and apparatus of structure in at least part of image of body for mapping object
CN109313803B (en) * 2016-06-16 2023-05-09 皇家飞利浦有限公司 Method and apparatus for mapping at least part of a structure in an image of at least part of a body of a subject
CN112116694A (en) * 2020-09-22 2020-12-22 青岛海信医疗设备股份有限公司 Method and device for drawing three-dimensional model in virtual bronchoscope auxiliary system
CN112116694B (en) * 2020-09-22 2024-03-05 青岛海信医疗设备股份有限公司 Method and device for drawing three-dimensional model in virtual bronchoscope auxiliary system

Also Published As

Publication number Publication date
WO2005101314A3 (en) 2006-06-01
EP1735750A2 (en) 2006-12-27
JP2007532251A (en) 2007-11-15

Similar Documents

Publication Publication Date Title
EP1735750A2 (en) Method and apparatus for processing images in a bowel subtraction system
US7630529B2 (en) Methods for digital bowel subtraction and polyp detection
US7876947B2 (en) System and method for detecting tagged material using alpha matting
Ghosh et al. Incorporating priors for medical image segmentation using a genetic algorithm
US8170642B2 (en) Method and system for lymph node detection using multiple MR sequences
Yoshida et al. CAD in CT colonography without and with oral contrast agents: progress and challenges
US11896407B2 (en) Medical imaging based on calibrated post contrast timing
US8515200B2 (en) System, software arrangement and method for segmenting an image
Casiraghi et al. Automatic abdominal organ segmentation from CT images
Vukadinovic et al. Segmentation of the outer vessel wall of the common carotid artery in CTA
Ratheesh et al. Advanced algorithm for polyp detection using depth segmentation in colon endoscopy
EP4118617A1 (en) Automated detection of tumors based on image processing
Delmoral et al. Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study
WO2010034968A1 (en) Computer-implemented lesion detection method and apparatus
Afifi et al. Unsupervised detection of liver lesions in CT images
Nagy et al. On classical and fuzzy Hough transform in colonoscopy image processing
Wei et al. Segmentation of lung lobes in volumetric CT images for surgical planning of treating lung cancer
Tamiselvi Effective segmentation approaches for renal calculi segmentation
Prakash Medical image processing methodology for liver tumour diagnosis
You et al. Extraction of samples from airway and vessel trees in 3D lung CT based on a multi-scale principal curve tracing algorithm
Naeppi et al. Computer-aided detection of polyps and masses for CT colonography
Hiraman Liver segmentation using 3D CT scans.
Ko et al. Interactive polyp biopsy based on automatic segmentation of virtual colonoscopy
Gomalavalli et al. Feature Extraction of kidney Tumor implemented with Fuzzy Inference System
Yoshida et al. Computer-aided diagnosis in CT colonography: detection of polyps based on geometric and texture features

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REEP Request for entry into the european phase

Ref document number: 2005736176

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2005736176

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2007508461

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Ref document number: DE

WWP Wipo information: published in national office

Ref document number: 2005736176

Country of ref document: EP