US20090003699A1 - User guided object segmentation recognition - Google Patents


Info

Publication number
US20090003699A1
Authority
US
United States
Prior art keywords
image
atomic number
segmentation
interest
radiographic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/129,410
Inventor
Peter Dugan
Michael Riess
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lockheed Martin Corp
Original Assignee
Lockheed Martin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lockheed Martin Corp filed Critical Lockheed Martin Corp
Priority to US12/129,410
Assigned to LOCKHEED MARTIN CORPORATION reassignment LOCKHEED MARTIN CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RIESS, MICHAEL, DUGAN, PETER
Publication of US20090003699A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/06 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2178 Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor

Definitions

  • Embodiments of the present invention relate generally to image segmentation and, more particularly, to computer systems and methods for user guided image segmentation of radiographic images.
  • Image segmentation, the process of separating objects of interest from the background (or from other objects) in an image, is typically a difficult task for a computer to perform. If an image scene is simple and the contrast between objects in the scene and the background is high, then the task may be somewhat easier. However, if an image scene is cluttered and the contrast between objects in the scene and the background (or other objects) is low, image segmentation can be a particularly difficult problem. For example, in a radiographic image of a three-dimensional object such as a cargo container there can be numerous layers of objects and contrast may be low between the objects and the background.
  • Radiographic images of objects having layers may also present a need to segment the image in two ways: in the x-y plane (i.e., the plane the image was produced on) and by layer of depth, in order to correct for layer effects such as overlapping.
  • User guided object segmentation recognition can be considered a type of semi-automatic image segmentation. Due to the present difficulty of image segmentation discussed above and the potentially safety critical nature of image processing, such as for security screening, it may be desirable to use a human operator to aid a computer in segmentation of radiographic images.
  • an image is presented to an operator. This image can be a raw image or a processed image.
  • the image can be a radiographic image on which an automatic segmentation process has been performed.
  • Embodiments of the present invention can be used in an imaging system, such as a nuclear material detection system, that includes a capability of producing images using a plurality of different energy levels. Each energy level provides a different imaging characteristic such as energy penetration of the object being scanned. Different images produced using different energy levels can be used in conjunction with each other to better identify layers within the object being scanned.
  • an operator (or user) of a system can view an image and outline or select a region of interest (ROI) using an input device such as a mouse or graphics tablet.
  • an operator can outline a region of interest using mouse clicks to define a path around the ROI.
  • the operator can select a region of the image using a click and drag approach common in many Windows applications.
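  • The click-and-drag selection described above can be sketched in a few lines of code. This is an illustrative fragment, not taken from the patent; the function name and the (x, y) coordinate convention are assumptions.

```python
def roi_from_drag(press, release):
    """Normalize a click-and-drag gesture into ROI bounds.

    `press` and `release` are (x, y) pixel coordinates of the mouse-down
    and mouse-up events; the returned tuple is (x_min, y_min, x_max, y_max)
    regardless of the direction in which the operator dragged.
    """
    (x0, y0), (x1, y1) = press, release
    return (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
```

Normalizing the corners this way lets the operator drag from any corner of the intended region to the opposite one.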
  • an automatic recognition process, e.g., a Z effective estimation process, can then be applied to the selected region
  • the automatic image segmentation algorithms may be able to achieve an improved Z effective result, or a different result. It is also possible that even with user guidance the automatic segmentation routines or modules produce the same result as was initially determined. In any case, the result of the automatic segmentation on the user selected region can be displayed for the operator to view.
  • One exemplary embodiment can include a system for user guided segmentation of radiographic images of a cargo container.
  • the system can include an object segmentation recognition module and an operator terminal including a display device and an input device, the operator terminal coupled to the object segmentation module.
  • the object segmentation recognition module can have instructions stored in a memory that when executed cause the object segmentation recognition module to perform a series of functions.
  • the functions can include segmenting a plurality of radiographic images of a cargo conveyance and outputting region of interest coordinates and a corrected atomic number image as output.
  • the functions can also include providing a first image to the operator terminal for display, the first image being based on the corrected atomic number image.
  • the functions can include receiving input from the operator terminal, the input including an indication of a selected region of the first image; and performing an additional segmentation process on only the selected region of the first image.
  • the functions can include providing a second image to the operator terminal for display, the second image based on the selected region of the first image and results of the additional segmentation process.
  • Another embodiment includes a method for user guided segmentation of radiographic images.
  • the method includes segmenting a plurality of radiographic images in order to determine regions of interest and corrected atomic number values, and correcting an atomic number image to generate a corrected atomic number image.
  • the method can also include providing a first image to an operator terminal for display, the first image being based on the corrected atomic number image, and receiving input from the operator terminal, the input including an indication of a selected region of the first image.
  • the method can also include performing an additional segmentation process on the selected region of the first image and providing a second image to the operator terminal for display, the second image based on the selected region of the first image and results of the additional segmentation process.
  • a goal of user guided segmentation can be to get a better characterization of the objects which can result in better layer extraction and a better Z estimation for the material being studied.
  • Providing a human user or operator in the loop can allow interaction with areas of an image that might be potential threats and that may not get resolved by the automatic ROI processing.
  • the method can include providing updated regions of interest and corrected atomic number values based on the additional segmentation process as output.
  • the apparatus can include means for segmenting a radiographic image in order to determine regions of interest and corrected atomic number values, and means for providing a first image to an operator terminal for display.
  • the apparatus can also include means for receiving input from the operator terminal, the input including an indication of a selected region of the first image and means for performing an additional segmentation process on the selected region of the first image.
  • the apparatus can also include means for providing a second image to the operator terminal for display, the second image being based on the selected region of the first image and results of the additional segmentation process; and means for providing updated regions of interest and corrected atomic number values based on the additional segmentation process as output.
  • FIG. 1 is a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs;
  • FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail;
  • FIG. 3 is a flowchart showing an exemplary method for image segmentation;
  • FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules;
  • FIG. 5 is a block diagram of an exemplary system for user guided segmentation;
  • FIG. 6 is a flowchart showing an exemplary method for user guided image segmentation recognition;
  • FIG. 7 is a diagram of an exemplary radiographic image showing an initial automatic segmentation result;
  • FIG. 8 is a diagram of the exemplary radiographic image of FIG. 7 with a user guided input shown.
  • FIG. 9 is a diagram of the exemplary radiographic image of FIG. 7 that has been subjected to an additional segmentation process based on user guided input and shows a different result from that of FIG. 7 .
  • FIG. 1 shows a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs.
  • the images 104 provided or obtained as input to the OSR processor 102 can include radiographic images or other images.
  • the images 104 can include radiographic images of a cargo conveyance such as a cargo container.
  • the images 104 can include one or more images, for example four images can be provided with each image being generated using a different radiographic energy level.
  • the images 104 can include radiographic images or other images derived from radiographic images, such as, for example, an atomic number image representing estimated atomic numbers associated with radiographic images.
  • the OSR processor 102 can obtain, request or receive the images 104 via a wired or wireless connection, such as a network (e.g., LAN, WAN, wireless network, Internet or the like) or direct connection within a system.
  • the OSR processor 102 can also receive the images 104 via a software connection (e.g., procedure call, standard object access protocol, remote procedure call, or the like).
  • any known or later developed wired, wireless or software connection suitable for transmitting data can be used to supply the images 104 to the OSR processor 102 .
  • the OSR processor 102 can be requested to segment images by another process or system, or can request images for segmenting from another process or system. If the images 104 include more than one image, the images can be registered prior to being sent for segmentation.
  • the OSR processor 102 processes the images 104 to segment the images 104 and identify objects within the images 104 .
  • the OSR processor 102 can also extract or identify layers within the images in order to help segment the images more accurately.
  • the layer information can also be used to correct or adjust estimated atomic numbers in an atomic number image or map.
  • the atomic number image or map can include a representation of estimated atomic numbers determined from the images 104 .
  • the ROIs can be determined based on an image characteristic such as estimated atomic number of the ROI (or object), shape of the ROI, position or location of the ROI, or the like.
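  • A rough sketch of such characteristic-based ROI screening follows. It is illustrative only; the record field names, the atomic-number threshold, and the minimum-area value are assumptions for the example, not values from the patent.

```python
def flag_rois(rois, z_threshold=72, min_area=50):
    """Keep only ROIs whose image characteristics warrant attention.

    Each ROI is a dict with an estimated atomic number ("z_eff") and a
    pixel area ("area"). ROIs with a high estimated atomic number (e.g.,
    dense shielding materials) and a non-trivial area are retained.
    """
    return [r for r in rois
            if r["z_eff"] >= z_threshold and r["area"] >= min_area]
```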
  • the OSR processor 102 can provide ROI/object coordinates 106 as output.
  • the ROI/object coordinates 106 can be associated with the input images 104 or an atomic number image.
  • the output ROI/object coordinates 106 can be outputted via a wired or wireless connection, such as a network (e.g., LAN, WAN, Internet or the like) or direct connection within a system.
  • the output ROI/object coordinates 106 can be outputted via a software connection (e.g., response to a procedure call, standard object access protocol, remote procedure call, or the like).
  • FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail.
  • the OSR processor 102 includes a segment processing section 202 having a connected region analysis module 204 , an edge analysis module 206 , a ratio layer analysis module 208 and a blind source separation (BSS) layer analysis module 210 .
  • the OSR processor 102 also includes an object ROI section 212 having a layer analysis and segment association module 214 and an object ROI determination module 216 .
  • the segment processing section receives the images 104 .
  • the images 104 can be processed using one or more image segmentation modules (e.g., the connected region analysis module 204 , the edge analysis module 206 , or a combination of the above).
  • the segmentation modules shown are for illustration purposes; any known or later developed image segmentation processes can be used.
  • the selection of the number and type of image segmentation modules employed in the OSR processor 102 may depend on a contemplated use of an embodiment and the selection may be guided by a number of factors including, but not limited to, type of materials being scanned, configuration of the scanning system and objects being scanned, desired performance characteristics, time available for processing, or the like.
  • the images 104 can also be processed by one or more layer analysis modules (e.g., the ratio layer analysis module 208 , the BSS layer analysis module 210 , or a combination of the above).
  • the resulting image segment data can be provided to the object ROI section 212 .
  • the layers and segments of the image segment data are analyzed and combined or associated to produce segment-layer data that contains information about objects and layers within the images 104 .
  • the segment-layer data can be in the form of an atomic number image that represents a composite of the images 104 and has been adjusted or corrected based on layers and segments to provide an image suitable for identification of ROIs.
  • the segment-layer data can also be represented in any form suitable for transmitting the information that may be needed to analyze the images 104 .
  • the segment-layer data is then provided to the object ROI determination module 216 for analysis and identification of ROIs.
  • the object ROI determination module 216 can use one or more image characteristics to identify ROIs within the images 104 or the segment-layer data.
  • Image characteristics can include an estimated atomic number for a portion of the image (e.g., a pixel, segment, object, region or the like), a shape of a segment or object within the image, or a position or location of an object or segment. In general, any image characteristic that is suitable for identifying an ROI can be used.
  • coordinate data ( 106 ) representing each ROI can be provided as output.
  • the output can be provided as described above in connection with reference number 106 of FIG. 1 .
  • segment-layer data or an adjusted or corrected atomic number image can be provided in addition to, or as a substitute for, the ROI coordinates.
  • FIG. 3 is a flowchart showing an exemplary computer implemented method for image segmentation. Processing begins at step 302 and continues to step 304 .
  • one or more radiographic images are obtained. These images can be provided by an imaging system (e.g., an x-ray, magnetic resonance imaging device, computerized tomography device, or the like). In general, any imaging device suitable for generating images that may require segmenting can be used. Processing continues to step 306 .
  • the radiographic images are segmented.
  • the segmentation can be performed using one or more image segmentation processes. Examples of segmentation methods include modules or processes for segmentation based on clustering, histograms, edge detection, region growing, level set, graph partitioning, watershed, model based, and multi-scale. Processing continues to step 308 .
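  • A minimal example of one listed family, thresholding followed by region growing over connected pixels, can be sketched as follows. This is an illustration of the general technique, not the patent's algorithm.

```python
from collections import deque

def label_regions(image, threshold):
    """Threshold a 2-D gray-level image (list of lists), then label
    4-connected foreground regions by breadth-first region growing.

    Returns (labels, count) where labels[y][x] is 0 for background or a
    region number starting at 1, and count is the number of regions.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] >= threshold and labels[sy][sx] == 0:
                next_label += 1
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```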
  • any layers present in the images are determined.
  • the layers can be determined using one or more layer extraction or identification processes. For example, a ratio layer analysis process and a BSS layer analysis process can be used together to identify layers in the images.
  • a goal of layer identification and extraction is to remove overlapping effects which may be present. By removing overlapping effects, the true gray level of a material can be determined. Using the true gray level, a material's effective atomic number (and, optionally, material density) can be determined. Using the effective atomic number, the composition of the material can be determined and illicit materials, such as special nuclear materials, can be detected automatically.
  • the ratio method of layer identification and overlap effect removal is known in the art as applied to dual energy and is described in “The Utility of X-ray Dual-energy Transmission and Scatter Technologies for Illicit Material Detection,” a published Ph.D. Dissertation by Lu Qiang, Virginia Polytechnic Institute and State University, Blacksburg, Va., 1999, which is incorporated herein by reference in its entirety.
  • the ratio method provides a process whereby a computer can solve for one image layer and remove any overlapping effects of another layer. Thus, regions that overlap can be separated into their constituent layers and a true gray level can be determined for each layer.
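  • The thickness cancellation at the heart of the ratio method can be illustrated for the simplest single-material, dual-energy case (a simplified sketch; the patent and the cited dissertation address the general overlapping-layer case). For a transmission image, I = I0 * exp(-mu(E) * t), so ln(I0/I) = mu(E) * t and the ratio of log-attenuations at two energies cancels the thickness t, leaving a quantity that depends only on the material.

```python
import math

def attenuation_ratio(i_low, i_high, i0=1.0):
    """Log-attenuation ratio for a dual-energy pixel pair.

    For a single-material path, ln(I0/I) = mu(E) * t, so this ratio
    equals mu_low / mu_high: it is independent of the thickness t and
    characterizes the material itself.
    """
    return math.log(i0 / i_low) / math.log(i0 / i_high)
```

Because the ratio is thickness-invariant, pixels of the same material map to the same value even where the object is thicker or thinner.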
  • Blind source separation is a technique known in the art, and refers generally to the separation of a set of signals from a set of mixed signals, without the aid of information (or with very little information) about the source signals or the mixing process. However, if information about how the signals were mixed (e.g., a mixing matrix) can be estimated, it can then be used to determine an un-mixing matrix which can be applied to separate the components within a mixed signal.
  • the BSS method may be limited by the amount of independence between materials within the mixture.
  • Several techniques exist for estimating the mixing matrix, including unsupervised learning processes. The process can include incrementally changing and weighting coefficients of the mixing matrix and evaluating the mixing matrix until optimal conditions are met. Once the mixing matrix is estimated, un-mixing coefficients can be computed. Examples of some BSS techniques include projection pursuit gradient ascent, projection pursuit stepwise separation, ICA gradient ascent, and complexity pursuit gradient ascent. In general, an iterative hill climbing or other type of optimization process can be used to estimate the mixing matrix and determine an optimal matrix. Also, contemplated or desired performance levels may require development of custom algorithms that can be tuned to a specific empirical terrain provided by the mixing and un-mixing matrices. Once the layers are identified and overlapping effects are removed, processing continues to step 310.
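  • The un-mixing step can be illustrated for two signals and a known 2x2 mixing matrix. Estimating the matrix itself (e.g., by the gradient-ascent techniques listed above) is the hard part and is omitted here; this sketch only shows that once a mixing matrix is in hand, inverting it separates the mixed signals.

```python
def unmix2(mixed_a, mixed_b, mixing):
    """Recover two source signals from two mixed signals.

    `mixing` is ((m00, m01), (m10, m11)) such that
    mixed = mixing @ sources; the function applies the 2x2 inverse
    (the un-mixing coefficients) elementwise.
    """
    (a, b), (c, d) = mixing
    det = a * d - b * c  # must be nonzero for the mixture to be separable
    src1 = [(d * x - b * y) / det for x, y in zip(mixed_a, mixed_b)]
    src2 = [(-c * x + a * y) / det for x, y in zip(mixed_a, mixed_b)]
    return src1, src2
```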
  • At step 310, segments that have been identified are associated with any layers that were determined or identified in step 308. Associating segments with layers can help to remove any overlapping effects and also can improve the ability to determine a true gray value for a segment. Processing continues to step 312.
  • ROIs are determined.
  • the ROIs can be determined based on an image characteristic as described above. Processing continues to step 314 .
  • a gray level atomic number image is optionally adjusted to reflect the corrections or adjustments provided by the layer determination.
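  • The optional correction of step 314 might look like the following sketch, in which a per-pixel overlap contribution and a layer mask are hypothetical outputs of the layer analysis; the function name and scalar-offset model are assumptions for illustration.

```python
def correct_gray_image(image, overlap_offset, layer_mask):
    """Subtract a (hypothetical) overlap contribution from the gray-level
    atomic number image wherever the layer mask marks overlapping material,
    leaving non-overlapped pixels unchanged."""
    return [[v - overlap_offset if m else v
             for v, m in zip(row, mask_row)]
            for row, mask_row in zip(image, layer_mask)]
```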
  • the adjustments or corrections can include changes related to removal of overlap effects or other changes. Processing continues to step 316 .
  • At step 316, the ROI coordinates and, optionally, the adjusted or corrected gray level image are provided as output to an operator or another system.
  • the output can be in a standard format or in a proprietary format.
  • steps 304 - 316 can be repeated in whole or in part to perform a contemplated image segmentation process.
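  • The overall flow of steps 304 - 316 can be sketched as a pipeline of interchangeable stage functions. All names here are hypothetical; the point is only that each stage (segmentation, layer determination, association, ROI determination, correction) is a pluggable module, consistent with the interchangeable algorithms described above.

```python
def segment_pipeline(images, segment, find_layers, associate, find_rois, correct):
    """Chain the FIG. 3 stages; each argument after `images` is a stage
    function so that alternative algorithms can be swapped in."""
    segments = segment(images)                # step 306: segment images
    layers = find_layers(images)              # step 308: determine layers
    seg_layers = associate(segments, layers)  # step 310: associate segments with layers
    rois = find_rois(seg_layers)              # step 312: determine ROIs
    corrected = correct(seg_layers)           # step 314: optional gray-level correction
    return rois, corrected                    # step 316: provide output
```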
  • FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules.
  • four gray scale radiographic images ( 402 - 408 ), each generated using a different energy level, are provided to an effective Z-value determination module 410 .
  • the effective Z-value determination module determines a pixel-level Z-value gray scale image 412 .
  • the pixel-level Z-value gray scale image 412 can be provided to an image segmentation and layer analysis module 414 .
  • the segmentation and layer analysis module 414 segments the image and analyzes layers, as described above, to generate a layer corrected image representing true gray values, ROI coordinates, or both.
  • FIG. 5 shows an exemplary nuclear detection system that provides for user guided input for object segmentation recognition.
  • an object screening system 500 can be used to screen an object to be scanned 502 in order to detect contraband such as nuclear material.
  • the object 502 is subjected to one or more electromagnetic energies (with two, 504 a and 504 b , being shown for illustration purposes) produced by the scanner 506 .
  • the scanner 506 receives returned or radiated energy and produces scanned images 508 that are sent to a threat detection system 510 .
  • the threat detection system 510 processes the scanned images 508 to detect (either automatically or with some manual input) nuclear material or other possible threats revealed by the radiographic imaging. Part of the detection process includes segmenting one or more of the images using OSR module 511 . The results of the segmentation can be provided to an operator station 514 via link 512 . The results can be in the form of a graphical image suitable for display to an operator or user of the object screening system 500 .
  • the operator can provide input to the operator station 514 .
  • the input (or an encoded form thereof) can be transmitted via link 512 to the threat detection system 510 and/or the OSR module 511 .
  • the OSR module 511 can then perform an additional segmentation process on a selected portion of the image to produce another segmentation result.
  • a second image, based on the result of the additional segmentation process can be provided to the operator station 514 for viewing by the operator.
  • the segmentation results can be released by the operator for additional processing by the threat detection system 510 or additional segmentation process can be requested.
  • the operator can manually indicate object segmentations using an input device associated with the operator station 514 .
  • the manually generated segmentations, the automatic segmentation results, or both, can be sent to the threat detection system 510 .
  • Link 512 can be a wired or wireless link such as a LAN, WAN, wireless network connection, radio link, optical link, or the like.
  • the energies 504 a and 504 b can include, for example, two or more different energy levels of x-ray energy. It will be appreciated that other types of electromagnetic energy can be used to scan the object 502 . It will also be appreciated that although two energies ( 504 a and 504 b ) are shown, more or fewer energies can be used with an embodiment. Any type of scanner suitable for detecting contraband such as nuclear material and capable of producing an image (or array of values) may be used.
  • the object being screened (or scanned) can include a cargo container, a truck, a tractor trailer, baggage, cargo, luggage, a vehicle, an air cargo container, and/or any object being transported that could potentially contain nuclear material or a portion of a threat or weapon system, or any object for which threat or contraband screening is contemplated or desired.
  • FIG. 6 is a flowchart showing an exemplary method for user guided image segmentation recognition. Processing for the method begins at step 602 and continues to step 604 .
  • radiographic images are obtained.
  • the images can be in the form of raw radiographic image data, gray level data representing effective atomic number, or a hybrid image containing both. Processing continues to step 606 .
  • At step 606, the radiographic images are automatically segmented by computer using one or more segmentation and/or layer analysis algorithms as discussed above. Processing continues to step 608.
  • ROI coordinates and, optionally, adjusted gray level image(s) are output as a first image to an operator station.
  • the output can be displayed by any suitable means, such as on a display screen or in printed form. Processing continues to step 610.
  • At step 610, input is received from an operator using the operator station indicating a selected region of the first image. Processing continues to step 612.
  • an additional segmentation process is performed on the selected region.
  • the additional segmentation process can be performed using the same algorithms as used for the initial automatic segmentation or may be performed using different algorithms. Also, the same algorithms may be used with different parameters for the segmentation of the selected region. Processing continues to step 614 .
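  • Restricting the second pass to the selected region might look like the following sketch, with a simple threshold segmenter standing in for whichever algorithm is actually used; the function name, the crop convention, and the threshold parameter are assumptions for illustration.

```python
def resegment_region(image, roi, threshold):
    """Re-run a (stand-in) threshold segmentation on only the
    operator-selected region, possibly with a different parameter than
    the initial automatic pass used.

    `roi` is (x_min, y_min, x_max, y_max); returns a binary mask the
    size of the cropped region.
    """
    x0, y0, x1, y1 = roi
    crop = [row[x0:x1] for row in image[y0:y1]]
    return [[1 if v >= threshold else 0 for v in row] for row in crop]
```

Because only the cropped region is reprocessed, a parameter tuned to that region (e.g., a lower threshold for a faint object) does not disturb the segmentation of the rest of the image.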
  • a second image is provided to the operator station.
  • the second image is based on results of the additional segmentation process of step 612 . Processing continues to step 616 .
  • the second image could be a smaller region of the original image or sub-image.
  • ROI coordinates and, optionally, adjusted gray level images based on the additional segmentation processing are output.
  • the ROI coordinates and adjusted gray level image can be provided to another module in a threat detection system, such as a material context analysis processor.
  • the output can be provided to another system or another operator.
  • the decision to release the results of the additional segmentation processing can be made by the operator or can be automatic.
  • the final output can be a combination of the segmentation results of the initial processing and the segmentation results of the additional processing.
  • steps 604 - 616 can be repeated in whole or in part to perform a contemplated user guided image segmentation process.
  • FIGS. 7-9 show a diagrammatic representation of a sequence of radiographic images on an operator display screen.
  • FIG. 7 shows an initial automatic segmentation result.
  • a first object 702 has been recognized by the automatic routines of the OSR processor.
  • a second object 704 has been recognized.
  • a portion 706 (shown by dashed line) of object 704 is obscured behind object 702 and was not recognized as being associated with object 704 , but rather was included in object 702 .
  • An operator viewing the image of FIG. 7 may recognize that a portion of object 704 appears to be obscured and was not correctly identified using automatic techniques. The operator can then elect to have additional user guided segmentation performed.
  • FIG. 8 shows the image of FIG. 7 with a graphical user interface (GUI) element ( 802 ) drawn around a selected portion of the image.
  • the GUI element can be indicated using any suitable input means such as mouse, keyboard, stylus and tablet, touch screen, or the like. In general, any input means suitable for indicating a selected region of the image can be used.
  • the GUI element can be drawn using a “click and drag” technique that is commonly used in many personal computer software applications. Also, other techniques can be used such as Livewire (based on the lowest cost path algorithm by Dijkstra) or Intelligent Scissors.
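  • The Livewire technique mentioned above rests on Dijkstra's lowest-cost path search over a per-pixel cost grid (low cost along strong edges, high cost elsewhere). A compact sketch follows; the hand-made cost grid stands in for a real edge-strength map, and the 4-connectivity choice is an assumption.

```python
import heapq

def lowest_cost_path(cost, start, goal):
    """Dijkstra search over a 2-D per-pixel cost grid, the core of
    Livewire/Intelligent Scissors: the returned (y, x) path hugs
    low-cost pixels between two operator clicks."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        y, x = node
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = node
                    heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

In an interactive setting the path is recomputed as the cursor moves, so the proposed boundary "snaps" to the nearest edge.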
  • the region can be sent to the segmentation module for an additional segmentation processing.
  • the additional segmentation processing can include the same or different algorithms or parameters as used for the initial automatic segmentation.
  • a user guided layer extraction can be performed in which the operator can view each image individually or view the group as a composite and can select regions of interest from each of the images for additional layer extraction or analysis processing.
  • a second image can be provided to the operator station. This image can be the same or different from the initial segmentation image.
  • FIG. 9 shows a second image that is different from the first image shown in FIG. 7 .
  • the object 904 now includes the portion ( 706 ) that was previously obscured and included in object 702 of FIG. 7 .
  • modules, processes, systems, and sections described above can be implemented in hardware, software, or both. Also, the modules, processes, systems, and sections can be implemented as a single processor or as a distributed processor. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor. Also, the processes, modules, and sub-modules described in the various figures of the embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system. Exemplary structural embodiment alternatives suitable for implementing the modules, sections, systems, means, or processes described herein are provided below.
  • the modules, processors or systems described above can be implemented as a programmed general purpose computer, an electronic device programmed with microcode, a hard-wired analog logic circuit, software stored on a computer-readable medium or signal, a programmed kiosk, an optical computing device, a GUI on a display, a networked system of electronic and/or optical devices, a special purpose computing device, an integrated circuit device, a semiconductor chip, and a software module or object stored on a computer-readable medium or signal, for example.
  • Embodiments of the method and system may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like.
  • any process capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or a computer program product (software program).
  • embodiments of the disclosed method, system, and computer program product may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms.
  • embodiments of the disclosed method, system, and computer program product can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design.
  • Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized.
  • Embodiments of the method, system, and computer program product can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer, image processing, radiographic, and/or threat detection arts.
  • embodiments of the disclosed method, system, and computer program product can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like.
  • the method of this invention can be implemented as a program embedded on a personal computer such as a JAVA® or CGI script, as a resource residing on a server or image processing workstation, as a routine embedded in a dedicated processing system, or the like.
  • the method and system can also be implemented by physically incorporating the method into a software and/or hardware system, such as the hardware and software systems of multi-energy radiographic cargo inspection systems.

Abstract

A system for segmenting radiographic images of a cargo container can include an object segmentation recognition module adapted to perform a series of functions. The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments. The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers. The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image and identifying regions of interest within the adjusted atomic number image based on an image characteristic. The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output.

Description

  • The present application claims the benefit of U.S. Provisional Patent Application No. 60/940,632, entitled “Threat Detection System”, filed May 29, 2007, which is incorporated herein by reference in its entirety.
  • Embodiments of the present invention relate generally to image segmentation and, more particularly, to computer systems and methods for user guided image segmentation of radiographic images.
  • Image segmentation, the process of separating objects of interest from the background (or from other objects) in an image, is typically a difficult task for a computer to perform. If an image scene is simple and the contrast between objects in the scene and the background is high, then the task may be somewhat easier. However, if an image scene is cluttered and the contrast between objects in the scene and the background (or other objects) is low, image segmentation can be a particularly difficult problem. For example, in a radiographic image of a three-dimensional object such as a cargo container there can be numerous layers of objects and contrast may be low between the objects and the background. In addition to the difficulties often associated with low contrast and cluttered scenes, radiographic images of objects having layers may also present a need to segment the image in two ways: in the x-y plane (i.e., the plane the image was produced on) and by layer of depth in order to correct for layer effects such as overlapping.
  • User guided object segmentation recognition can be considered a type of semi-automatic image segmentation. Due to the difficulty of image segmentation discussed above and the potentially safety critical nature of image processing, such as for security screening, it may be desirable to use a human operator to aid a computer in segmentation of radiographic images. In user guided segmentation, an image is presented to an operator. This image can be a raw image or a processed image. For example, the image can be a radiographic image on which an automatic segmentation process has been performed.
  • Embodiments of the present invention can be used in an imaging system, such as a nuclear material detection system, that includes a capability of producing images using a plurality of different energy levels. Each energy level provides a different imaging characteristic such as energy penetration of the object being scanned. Different images produced using different energy levels can be used in conjunction with each other to better identify layers within the object being scanned.
  • In an exemplary embodiment, an operator (or user) of a system can view an image and outline or select a region of interest (ROI) using an input device such as a mouse or graphics tablet. For example, an operator can outline a region of interest using mouse clicks to define a path around the ROI. Alternatively, the operator can select a region of the image using a click and drag approach common in many Windows applications. Once the region of interest has been selected by the operator, an automatic recognition process (e.g., a Z effective estimation process) can be performed on the region. By narrowing the processing region, the automatic image segmentation algorithms may be able to achieve an improved Z effective result, or a different result. It is also possible that, even with user guidance, the automatic segmentation routines or modules produce the same result as was initially determined. In any case, the result of the automatic segmentation on the user selected region can be displayed for the operator to view.
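The region-narrowing step described above can be sketched in a few lines. This is a hedged illustration only: the 2D-list image, the rectangle coordinates, and the stand-in `mean_gray` estimator are hypothetical placeholders, not part of the disclosed system.

```python
# Hypothetical sketch: restrict automatic estimation to a user-selected
# region of interest. The image is a 2D list of gray-level values; the
# ROI is the rectangle the operator dragged out with the mouse.
def crop_roi(image, top, left, bottom, right):
    """Return the sub-image covered by the operator's selection."""
    return [row[left:right] for row in image[top:bottom]]

def mean_gray(region):
    """A stand-in for the automatic estimation step (e.g. Z effective),
    run only on the narrowed region."""
    values = [v for row in region for v in row]
    return sum(values) / len(values)

image = [[10, 10, 80, 80],
         [10, 10, 80, 80],
         [10, 10, 10, 10]]
roi = crop_roi(image, 0, 2, 2, 4)   # operator selected the bright object
print(mean_gray(roi))               # 80.0 on the ROI vs ~33 over the full image
```

Narrowing the region in this way is what allows the estimation to run on pixels belonging (mostly) to one object, which is why the result can improve over the whole-image pass.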
  • One exemplary embodiment can include a system for user guided segmentation of radiographic images of a cargo container. The system can include an object segmentation recognition module and an operator terminal including a display device and an input device, the operator terminal coupled to the object segmentation module. The object segmentation recognition module can have instructions stored in a memory that when executed cause the object segmentation recognition module to perform a series of functions. The functions can include segmenting a plurality of radiographic images of a cargo conveyance and outputting region of interest coordinates and a corrected atomic number image as output. The functions can also include providing a first image to the operator terminal for display, the first image being based on the corrected atomic number image. The functions can include receiving input from the operator terminal, the input including an indication of a selected region of the first image; and performing an additional segmentation process on only the selected region of the first image. The functions can include providing a second image to the operator terminal for display, the second image based on the selected region of the first image and results of the additional segmentation process.
  • Another embodiment includes a method for user guided segmentation of radiographic images. The method includes segmenting a plurality of radiographic images in order to determine regions of interest and corrected atomic number values, and correcting an atomic number image to generate a corrected atomic number image. The method can also include providing a first image to an operator terminal for display, the first image being based on the corrected atomic number image, and receiving input from the operator terminal, the input including an indication of a selected region of the first image. The method can also include performing an additional segmentation process on the selected region of the first image and providing a second image to the operator terminal for display, the second image based on the selected region of the first image and results of the additional segmentation process. A goal of user guided segmentation can be to obtain a better characterization of the objects, which can result in better layer extraction and a better Z estimation for the material being studied. Providing a human user or operator in the loop can allow interaction with areas of an image that might be potential threats and that may not get resolved by the automatic ROI processing. The method can include providing updated regions of interest and corrected atomic number values based on the additional segmentation process as output.
  • Another embodiment includes a radiographic image segmentation apparatus. The apparatus can include means for segmenting a radiographic image in order to determine regions of interest and corrected atomic number values, and means for providing a first image to an operator terminal for display. The apparatus can also include means for receiving input from the operator terminal, the input including an indication of a selected region of the first image and means for performing an additional segmentation process on the selected region of the first image. The apparatus can also include means for providing a second image to the operator terminal for display, the second image being based on the selected region of the first image and results of the additional segmentation process; and means for providing updated regions of interest and corrected atomic number values based on the additional segmentation process as output.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs;
  • FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail;
  • FIG. 3 is a flowchart showing an exemplary method for image segmentation;
  • FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules;
  • FIG. 5 is a block diagram of an exemplary system for user guided segmentation;
  • FIG. 6 is a flowchart showing an exemplary method for user guided image segmentation recognition;
  • FIG. 7 is a diagram of an exemplary radiographic image showing an initial automatic segmentation result;
  • FIG. 8 is a diagram of the exemplary radiographic image of FIG. 7 with a user guided input shown; and
  • FIG. 9 is a diagram of the exemplary radiographic image of FIG. 7 that has been subjected to an additional segmentation process based on user guided input and shows a different result from that of FIG. 7.
  • DETAILED DESCRIPTION
  • FIG. 1 shows a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs. In particular, an object segmentation recognition (OSR) processor 102 is shown receiving one or more images 104 as input and providing region of interest (ROI) or object coordinates 106 as output.
  • In operation, the images 104 provided or obtained as input to the OSR processor 102 can include radiographic images or other images. For example, the images 104 can include radiographic images of a cargo conveyance such as a cargo container. The images 104 can include one or more images, for example four images can be provided with each image being generated using a different radiographic energy level. Also, the images 104 can include radiographic images or other images derived from radiographic images, such as, for example, an atomic number image representing estimated atomic numbers associated with radiographic images.
  • The OSR processor 102 can obtain, request or receive the images 104 via a wired or wireless connection, such as a network (e.g., LAN, WAN, wireless network, Internet or the like) or direct connection within a system. The OSR processor 102 can also receive the images 104 via a software connection (e.g., procedure call, standard object access protocol, remote procedure call, or the like). In general, any known or later developed wired, wireless or software connection suitable for transmitting data can be used to supply the images 104 to the OSR processor 102. The OSR processor 102 can be requested to segment images by another process or system, or can request images for segmenting from another process or system. If the images 104 include more than one image, the images can be registered prior to being sent for segmentation.
  • The OSR processor 102 processes the images 104 to segment the images 104 and identify objects within the images 104. The OSR processor 102 can also extract or identify layers within the images in order to help segment the images more accurately. The layer information can also be used to correct or adjust estimated atomic numbers in an atomic number image or map. The atomic number image or map can include a representation of estimated atomic numbers determined from the images 104.
  • Once the images 104 have been segmented and the layer information has been determined, regions of interest (ROIs) within the images 104 can be located or determined. The ROIs can be determined based on an image characteristic such as estimated atomic number of the ROI (or object), shape of the ROI, position or location of the ROI, or the like. The OSR processor 102 can provide ROI/object coordinates 106 as output. The ROI/object coordinates 106 can be associated with the input images 104 or an atomic number image. The output ROI/object coordinates 106 can be output via a wired or wireless connection, such as a network (e.g., LAN, WAN, Internet or the like) or direct connection within a system. The output ROI/object coordinates 106 can also be output via a software connection (e.g., response to a procedure call, standard object access protocol, remote procedure call, or the like).
  • FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail. In addition to the components already described above, the OSR processor 102 includes a segment processing section 202 having a connected region analysis module 204, an edge analysis module 206, a ratio layer analysis module 208 and a blind source separation (BSS) layer analysis module 210. The OSR processor 102 also includes an object ROI section 212 having a layer analysis and segment association module 214 and an object ROI determination module 216.
  • In operation, the segment processing section receives the images 104. Once received, the images 104 can be processed using one or more image segmentation modules (e.g., the connected region analysis module 204, the edge analysis module 206, or a combination of the above). It will be appreciated that the segmentation modules shown are for illustration purposes and that any known or later developed image segmentation processes can be used. Also, the selection of the number and type of image segmentation modules employed in the OSR processor 102 may depend on a contemplated use of an embodiment and the selection may be guided by a number of factors including, but not limited to, type of materials being scanned, configuration of the scanning system and objects being scanned, desired performance characteristics, time available for processing, or the like. The images 104 can also be processed by one or more layer analysis modules (e.g., the ratio layer analysis module 208, the BSS layer analysis module 210, or a combination of the above).
  • Once the segmentation processing (object segmentation, layer analysis, or both) has been completed, the resulting image segment data can be provided to the object ROI section 212. In the object ROI section 212, the layers and segments of the image segment data are analyzed and combined or associated to produce segment-layer data that contains information about objects and layers within the images 104. The segment-layer data can be in the form of an atomic number image that represents a composite of the images 104 and has been adjusted or corrected based on layers and segments to provide an image suitable for identification of ROIs. The segment-layer data can also be represented in any form suitable for transmitting the information that may be needed to analyze the images 104. The segment-layer data is then provided to the object ROI determination module 216 for analysis and identification of ROIs.
  • The object ROI determination module 216 can use one or more image characteristics to identify ROIs within the images 104 or the segment-layer data. Image characteristics can include an estimated atomic number for a portion of the image (e.g., a pixel, segment, object, region or the like), a shape of a segment or object within the image, or a position or location of an object or segment. In general, any image characteristic that is suitable for identifying an ROI can be used.
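The characteristic-based ROI selection performed by the object ROI determination module might be sketched as follows. The segment records, their fields, and the Z threshold below are illustrative assumptions only, not values drawn from the specification.

```python
# Hedged sketch: pick regions of interest from segment-layer data using
# one image characteristic -- here, the segment's estimated atomic number.
def find_rois(segments, z_threshold):
    """Return bounding-box coordinates of segments whose estimated
    atomic number meets or exceeds the threshold."""
    return [s["bbox"] for s in segments if s["z_estimate"] >= z_threshold]

segments = [
    {"bbox": (0, 0, 40, 40),   "z_estimate": 13},  # e.g. aluminium-like
    {"bbox": (50, 10, 90, 60), "z_estimate": 82},  # e.g. lead-like shielding
]
print(find_rois(segments, z_threshold=72))  # [(50, 10, 90, 60)]
```

The same pattern would apply to any other suitable characteristic (shape, position), with the predicate in the list comprehension swapped accordingly.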
  • Once the ROIs have been determined, coordinate data (106) representing each ROI can be provided as output. The output can be provided as described above in connection with reference number 106 of FIG. 1. Also, segment-layer data or an adjusted or corrected atomic number image can be provided in addition to, or as a substitute for, the ROI coordinates.
  • FIG. 3 is a flowchart showing an exemplary computer implemented method for image segmentation. Processing begins at step 302 and continues to step 304.
  • In step 304, one or more radiographic images are obtained. These images can be provided by an imaging system (e.g., an x-ray, magnetic resonance imaging device, computerized tomography device, or the like). In general, any imaging device suitable for generating images that may require segmenting can be used. Processing continues to step 306.
  • In step 306, the radiographic images are segmented. The segmentation can be performed using one or more image segmentation processes. Examples of segmentation methods include modules or processes for segmentation based on clustering, histograms, edge detection, region growing, level set, graph partitioning, watershed, model based, and multi-scale. Processing continues to step 308.
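As a hedged illustration of two of the listed approaches chained together, the following pure-Python sketch applies a histogram-style gray-level threshold followed by 4-connected region growing. A production system would use tuned library implementations; the image and threshold here are invented.

```python
from collections import deque

def segment(image, threshold):
    """Label 4-connected regions of pixels at or above the threshold.
    Returns a label image (0 = background)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] >= threshold and labels[sy][sx] == 0:
                next_label += 1
                labels[sy][sx] = next_label
                queue = deque([(sy, sx)])
                while queue:                      # breadth-first region growing
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels

img = [[0, 9, 9, 0, 0],
       [0, 9, 0, 0, 9],
       [0, 0, 0, 9, 9]]
print(segment(img, threshold=5))
# the two separate bright objects receive labels 1 and 2
```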
  • In step 308, any layers present in the images are determined. The layers can be determined using one or more layer extraction or identification processes. For example, a ratio layer analysis process and a BSS layer analysis process can be used together to identify layers in the images. A goal of layer identification and extraction is to remove overlapping effects which may be present. By removing overlapping effects, the true gray level of a material can be determined. Using the true gray level, a material's effective atomic number (and, optionally, material density) can be determined. Using the effective atomic number, the composition of the material can be determined, and illicit materials, such as special nuclear materials, can be detected automatically.
  • The ratio method of layer identification and overlap effect removal is known in the art as applied to dual energy and is described in “The Utility of X-ray Dual-energy Transmission and Scatter Technologies for Illicit Material Detection,” a published Ph.D. Dissertation by Lu Qiang, Virginia Polytechnic Institute and State University, Blacksburg, Va., 1999, which is incorporated herein by reference in its entirety. Generally, the ratio method provides a process whereby a computer can solve for one image layer and remove any overlapping effects of another layer. Thus, regions that overlap can be separated into their constituent layers and a true gray level can be determined for each layer.
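The intuition behind the ratio method can be shown numerically: log attenuation is additive across stacked layers, so a layer whose contribution is known from a neighboring, non-overlapped region can be subtracted out, and the remaining high/low-energy ratio characterizes the hidden material independently of its thickness. The attenuation coefficients and intensities below are invented for illustration, not taken from the cited dissertation.

```python
import math

def log_attenuation(i0, i):
    """Total attenuation in the log domain: additive over layers."""
    return math.log(i0 / i)

I0 = 1000.0
# (mu_low, mu_high) attenuation coefficients -- made-up numbers.
mu = {"A": (0.50, 0.30), "B": (0.80, 0.60)}
tA, tB = 2.0, 1.5  # layer thicknesses

# Simulated detector readings: layer A alone (a non-overlapped region),
# and layers A+B stacked (the overlap region), at two beam energies.
a_low   = I0 * math.exp(-mu["A"][0] * tA)
a_high  = I0 * math.exp(-mu["A"][1] * tA)
ab_low  = I0 * math.exp(-(mu["A"][0] * tA + mu["B"][0] * tB))
ab_high = I0 * math.exp(-(mu["A"][1] * tA + mu["B"][1] * tB))

# Remove layer A's known contribution from the overlap region.
b_low  = log_attenuation(I0, ab_low)  - log_attenuation(I0, a_low)
b_high = log_attenuation(I0, ab_high) - log_attenuation(I0, a_high)

ratio = b_high / b_low      # thickness cancels: mu_high / mu_low of layer B
print(round(ratio, 3))      # 0.75 == 0.60 / 0.80
```

Because the thickness term cancels in the ratio, the recovered signature depends only on the hidden material, which is what makes the separated layer usable for true-gray-level and effective-Z estimation.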
  • Blind source separation (or blind signal separation) is a technique known in the art, and refers generally to the separation of a set of signals from a set of mixed signals, without the aid of information (or with very little information) about the source signals or the mixing process. However, if information about how the signals were mixed (e.g., a mixing matrix) can be estimated, it can then be used to determine an un-mixing matrix which can be applied to separate the components within a mixed signal.
  • The BSS method may be limited by the amount of independence between materials within the mixture. Several techniques exist for estimating the mixing matrix; some include using an unsupervised learning process. The process can include incrementally changing and weighting coefficients of the mixing matrix and evaluating the mixing matrix until optimal conditions are met. Once the mixing matrix is estimated, un-mixing coefficients can be computed. Examples of some BSS techniques include projection pursuit gradient ascent, projection pursuit stepwise separation, ICA gradient ascent, and complexity pursuit gradient ascent. In general, an iterative hill climbing or other type of optimization process can be used to estimate the mixing matrix and determine an optimal matrix. Also, contemplated or desired performance levels may require development of custom algorithms that can be tuned to a specific empirical terrain provided by the mixing and un-mixing matrices. Once the layers are identified and overlapping effects are removed, processing continues to step 310.
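A toy example of the un-mixing step follows. It assumes the 2x2 mixing matrix has already been estimated by one of the optimization techniques above (real BSS must estimate it blindly); the source signals and matrix values are invented for illustration.

```python
def invert_2x2(m):
    """Closed-form inverse of a 2x2 matrix -- the un-mixing matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mix(m, signals):
    """Apply a 2x2 matrix to two signals, sample by sample."""
    return [[m[r][0] * s0 + m[r][1] * s1
             for s0, s1 in zip(*signals)] for r in range(2)]

sources = [[1.0, 2.0, 3.0], [5.0, 1.0, 0.0]]  # e.g. two material layers
mixing  = [[0.8, 0.4], [0.3, 0.9]]            # assumed already estimated
mixed   = mix(mixing, sources)                # what the detector observes
recovered = mix(invert_2x2(mixing), mixed)
print([[round(v, 6) for v in row] for row in recovered])
# the original source signals are recovered
```

The hard part of BSS, as noted above, is estimating `mixing` without access to `sources`; once a good estimate exists, separation itself is just this matrix inversion and multiplication.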
  • In step 310, segments that have been identified are associated with any layers that have been determined or identified in step 308. Associating segments with layers can help to remove any overlapping effects and also can improve the ability to determine a true gray value for a segment. Processing continues to step 312.
  • In step 312, ROIs are determined. The ROIs can be determined based on an image characteristic as described above. Processing continues to step 314.
  • In step 314, a gray level atomic number image is optionally adjusted to reflect the corrections or adjustments provided by the layer determination. The adjustments or corrections can include changes related to removal of overlap effects or other changes. Processing continues to step 316.
  • In step 316, the ROI coordinates and, optionally, the adjusted or corrected gray level image are provided as output to an operator or another system. The output can be in a standard format or in a proprietary format. Processing continues to step 318 where the method ends. It will be appreciated that steps 304-316 can be repeated in whole or in part to perform a contemplated image segmentation process.
  • FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules. In particular, four gray scale radiographic images (402-408), each generated using a different energy level, are provided to an effective Z-value determination module 410. The effective Z-value determination module determines a pixel-level Z-value gray scale image 412.
  • The pixel-level Z-value gray scale image 412 can be provided to an image segmentation and layer analysis module 414. The segmentation and layer analysis module 414 segments the image and analyzes layers, as described above, to generate a layer corrected image representing true gray values, ROI coordinates, or both.
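The per-pixel effective Z-value determination can be illustrated roughly as follows: the ratio of high- to low-energy attenuation varies with atomic number, so a measured ratio can be matched against a calibration table. The table entries and ratio values below are invented placeholders, since any real calibration is system-specific.

```python
# (attenuation ratio, effective Z) pairs -- illustrative only.
CALIBRATION = [
    (0.55, 6),    # organic-like
    (0.70, 13),   # aluminium-like
    (0.85, 26),   # steel-like
    (0.95, 82),   # lead-like
]

def effective_z(ratio):
    """Return the effective Z of the calibration entry nearest the ratio."""
    return min(CALIBRATION, key=lambda entry: abs(entry[0] - ratio))[1]

ratio_image = [[0.56, 0.57], [0.93, 0.96]]
z_image = [[effective_z(r) for r in row] for row in ratio_image]
print(z_image)   # [[6, 6], [82, 82]]
```

The resulting gray-scale Z-value image 412 is what the downstream segmentation and layer analysis module operates on.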
  • As mentioned above, it may be desirable to use a human operator to aid in segmentation. FIG. 5 shows an exemplary nuclear detection system that provides for user guided input for object segmentation recognition. In particular, an object screening system 500 can be used to screen an object to be scanned 502 in order to detect contraband such as nuclear material. The object 502 is subjected to one or more electromagnetic energies (with two, 504 a and 504 b, being shown for illustration purposes) produced by the scanner 506. The scanner 506 receives returned or radiated energy and produces scanned images 508 that are sent to a threat detection system 510.
  • The threat detection system 510 processes the scanned images 508 to detect (either automatically or with some manual input) nuclear material or other possible threats revealed by the radiographic imaging. Part of the detection process includes segmenting one or more of the images using OSR module 511. The results of the segmentation can be provided to an operator station 514 via link 512. The results can be in the form of a graphical image suitable for display to an operator or user of the object screening system 500.
  • Once the operator has viewed the image displayed on the operator station 514, the operator can provide input to the operator station 514. The input (or an encoded form thereof) can be transmitted via link 512 to the threat detection system 510 and/or the OSR module 511. The OSR module 511 can then perform an additional segmentation process on a selected portion of the image to produce another segmentation result. A second image, based on the result of the additional segmentation process, can be provided to the operator station 514 for viewing by the operator. The segmentation results can be released by the operator for additional processing by the threat detection system 510, or an additional segmentation process can be requested. Also, the operator can manually indicate object segmentations using an input device associated with the operator station 514. The manually generated segmentations, the automatic segmentation results, or both, can be sent to the threat detection system 510.
  • The threat detection system can be a stand alone system or form part of a larger security system. Link 512 can be a wired or wireless link such as a LAN, WAN, wireless network connection, radio link, optical link, or the like.
  • The energies 504 a and 504 b can include, for example, two or more different energy levels of x-ray energy. It will be appreciated that other types of electromagnetic energy can be used to scan the object 502. It will also be appreciated that although two energies (504 a and 504 b) are shown, more or fewer energies can be used with an embodiment. Any type of scanner suitable for detecting contraband such as nuclear material and capable of producing an image (or array of values) may be used. The object being screened (or scanned) can include a cargo container, a truck, a tractor trailer, baggage, cargo, luggage, a vehicle, an air cargo container, and/or any object being transported that could potentially contain nuclear material or a portion of a threat or weapon system, or any object for which threat or contraband screening is contemplated or desired.
  • FIG. 6 is a flowchart showing an exemplary method for user guided image segmentation recognition. Processing for the method begins at step 602 and continues to step 604.
  • In step 604, radiographic images are obtained. The images can be in the form of raw radiographic image data, gray level data representing effective atomic number, or a hybrid image containing both. Processing continues to step 606.
  • In step 606, the radiographic images are automatically segmented by computer using one or more segmentation and/or layer analysis algorithms as discussed above. Processing continues to step 608.
  • In step 608, ROI coordinates and, optionally, adjusted gray level image(s) are output as a first image to an operator station. At the operator station, the output can be displayed by any suitable means, such as on a display screen, or printed. Processing continues to step 610.
  • In step 610, input is received from an operator using the operator station indicating a selected region of the first image. Processing continues to step 612.
  • In step 612, an additional segmentation process is performed on the selected region. The additional segmentation process can be performed using the same algorithms as used for the initial automatic segmentation or may be performed using different algorithms. Also, the same algorithms may be used with different parameters for the segmentation of the selected region. Processing continues to step 614.
  • In step 614, a second image is provided to the operator station. The second image is based on results of the additional segmentation process of step 612 and can be a smaller region of the original image (a sub-image). Processing continues to step 616.
  • In step 616, ROI coordinates and, optionally, adjusted gray level images based on the additional segmentation processing are output. The ROI coordinates and adjusted gray level image can be provided to another module in a threat detection system, such as material context analysis processor. The output can be provided to another system or another operator. The decision to release the results of the additional segmentation processing can be made by the operator or can be automatic. The final output can be a combination of the segmentation results of the initial processing and the segmentation results of the additional processing. Processing continues to step 618 where processing ends. It will be appreciated that steps 604-616 can be repeated in whole or in part to perform a contemplated user guided image segmentation process.
  • FIGS. 7-9 show a diagrammatic representation of a sequence of radiographic images on an operator display screen. FIG. 7 shows an initial automatic segmentation result. In particular, a first object 702 has been recognized by the automatic routines of the OSR processor. Also, a second object 704 has been recognized. However, a portion 706 (shown by a dashed line) of object 704 is obscured behind object 702 and was not recognized as being associated with object 704, but rather was included in object 702. An operator viewing the image of FIG. 7 may recognize that a portion of object 704 appears to be obscured and was not correctly identified using automatic techniques. The operator can then elect to have additional user guided segmentation performed.
  • FIG. 8 shows the image of FIG. 7 with a graphical user interface (GUI) element (802) drawn around a selected portion of the image. The GUI element can be indicated using any suitable input means such as mouse, keyboard, stylus and tablet, touch screen, or the like. In general, any input means suitable for indicating a selected region of the image can be used. The GUI element can be drawn using a “click and drag” technique that is commonly used in many personal computer software applications. Also, other techniques can be used such as Livewire (based on Dijkstra's lowest-cost-path algorithm) or Intelligent Scissors.
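The “click and drag” selection reduces to a small coordinate normalization before the region is sent for additional segmentation: the press and release points of the drag can arrive in any order, so they are folded into a canonical rectangle. The function and coordinate names here are illustrative, not part of the disclosed system.

```python
# Hypothetical sketch: turn a drag gesture's press and release points
# (x, y) into a normalized (top, left, bottom, right) rectangle.
def drag_to_rect(press, release):
    (x0, y0), (x1, y1) = press, release
    return (min(y0, y1), min(x0, x1), max(y0, y1), max(x0, x1))

# The operator dragged up and to the left; the rectangle is still valid.
print(drag_to_rect(press=(120, 80), release=(40, 200)))  # (80, 40, 200, 120)
```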
  • Once the selected region or portion of the image has been indicated by the operator, the region can be sent to the segmentation module for an additional segmentation processing. The additional segmentation processing can include the same or different algorithms or parameters as used for the initial automatic segmentation. Also, if multiple images are available, a user guided layer extraction can be performed in which the operator can view each image individually or view the group as a composite and can select regions of interest from each of the images for additional layer extraction or analysis processing. Once the additional segmentation and/or layer correction has been performed, a second image can be provided to the operator station. This image can be the same or different from the initial segmentation image.
  • For example, FIG. 9 shows a second image that is different from the first image shown in FIG. 7. In the image in FIG. 9 there are two objects (now labeled 902 and 904); however, object 904 now includes the portion (706) that was previously obscured and included in object 702 of FIG. 7. By performing user guided segmentation, an operator can better analyze objects of interest and possibly determine a more accurate gray value and shape or size of the object.
  • It will be appreciated that the modules, processes, systems, and sections described above can be implemented in hardware, software, or both. Also, the modules, processes, systems, and sections can be implemented as a single processor or as a distributed processor. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor. Also, the processes, modules, and sub-modules described in the various figures of the embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system. Exemplary structural embodiment alternatives suitable for implementing the modules, sections, systems, means, or processes described herein are provided below.
  • The modules, processors or systems described above can be implemented as a programmed general purpose computer, an electronic device programmed with microcode, a hard-wired analog logic circuit, software stored on a computer-readable medium or signal, a programmed kiosk, an optical computing device, a GUI on a display, a networked system of electronic and/or optical devices, a special purpose computing device, an integrated circuit device, a semiconductor chip, and a software module or object stored on a computer-readable medium or signal, for example.
  • Embodiments of the method and system (or their sub-components or modules), may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like. In general, any process capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or a computer program product (software program).
  • Furthermore, embodiments of the disclosed method, system, and computer program product may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms. Alternatively, embodiments of the disclosed method, system, and computer program product can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design. Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized. Embodiments of the method, system, and computer program product can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer, image processing, radiographic, and/or threat detection arts.
  • Moreover, embodiments of the disclosed method, system, and computer program product can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like. Also, the method of this invention can be implemented as a program embedded on a personal computer such as a JAVA® or CGI script, as a resource residing on a server or image processing workstation, as a routine embedded in a dedicated processing system, or the like. The method and system can also be implemented by physically incorporating the method into a software and/or hardware system, such as the hardware and software systems of multi-energy radiographic cargo inspection systems.
  • It is, therefore, apparent that there is provided, in accordance with the present invention, a method, computer system, and computer software program for user guided image segmentation. While this invention has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be or are apparent to those of ordinary skill in the applicable arts. Accordingly, Applicant intends to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of this invention.

Claims (20)

1. A system for user guided segmentation of radiographic images of a cargo container, the system comprising:
an object segmentation recognition module; and
an operator terminal including a display device and an input device, the operator terminal coupled to the object segmentation recognition module,
the object segmentation recognition module having instructions stored in a memory that when executed cause the object segmentation recognition module to perform a series of functions including:
segmenting a plurality of radiographic images of a cargo conveyance;
outputting region of interest coordinates and a corrected atomic number image as output;
providing a first image to the operator terminal for display, the first image being based on the corrected atomic number image;
receiving input from the operator terminal, the input including an indication of a selected region of the first image;
performing an additional segmentation process on only the selected region of the first image; and
providing a second image to the operator terminal for display, the second image based on the selected region of the first image and results of the additional segmentation process.
2. The system of claim 1, wherein the plurality of radiographic images are generated using four energy levels.
3. The system of claim 1, wherein the region of interest coordinates are determined by comparing an estimated atomic value of each image segment to a threshold value.
4. The system of claim 1, wherein segmenting the plurality of radiographic images includes:
segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data;
identifying regions of interest and image layers containing regions of interest;
analyzing the image layers using the plurality of radiographic images and the segmentation data in order to determine corrected atomic number values for the regions of interest; and
correcting an atomic number image based on the corrected atomic number values for the regions of interest.
5. The system of claim 4, wherein regions of interest are identified by an image characteristic.
6. The system of claim 1, wherein the radiographic images include only radiographic image data.
7. The system of claim 1, wherein the radiographic images include effective atomic number data and radiographic image data.
8. The system of claim 1, wherein the function of providing the first image includes providing coordinates of each region of interest and the corrected atomic number image to an operator station.
9. The system of claim 1, wherein the function of providing the first image includes providing the coordinates of each region of interest and the corrected atomic number image to another system.
10. A method for user guided segmentation of radiographic images, the method comprising:
segmenting a plurality of radiographic images in order to determine regions of interest and corrected atomic number values;
correcting an atomic number image to generate a corrected atomic number image;
providing a first image to an operator terminal for display, the first image being based on the corrected atomic number image;
receiving input from the operator terminal, the input including an indication of a selected region of the first image;
performing an additional segmentation process on the selected region of the first image;
providing a second image to the operator terminal for display, the second image based on the selected region of the first image and results of the additional segmentation process; and
providing updated regions of interest and corrected atomic number values based on the additional segmentation process as output.
11. The method of claim 10, wherein the plurality of radiographic images are images of a cargo container.
12. The method of claim 10, wherein the plurality of radiographic images includes four images, each image being generated using a different energy level.
13. The method of claim 10, wherein identifying regions of interest includes comparing an estimated atomic value of each image object to a threshold value.
14. The method of claim 10, wherein the radiographic images include only radiographic image data.
15. The method of claim 10, wherein the radiographic images include effective atomic number data and radiographic image data.
16. A radiographic image segmentation apparatus comprising:
means for segmenting a radiographic image in order to determine regions of interest and corrected atomic number values;
means for providing a first image to an operator terminal for display;
means for receiving input from the operator terminal, the input including an indication of a selected region of the first image;
means for performing an additional segmentation process on the selected region of the first image;
means for providing a second image to the operator terminal for display, the second image being based on the selected region of the first image and results of the additional segmentation process; and
means for providing updated regions of interest and corrected atomic number values based on the additional segmentation process as output.
17. The apparatus of claim 16, wherein the radiographic image is an image of a cargo container.
18. The apparatus of claim 16, further comprising means for correcting an atomic number image to generate a corrected atomic number image, wherein the first image is based on the corrected atomic number image.
19. The apparatus of claim 16, wherein the means for receiving a radiographic image further includes means for receiving a plurality of radiographic images.
20. The apparatus of claim 19, wherein each radiographic image is generated using a different energy level.
US12/129,410 2007-05-29 2008-05-29 User guided object segmentation recognition Abandoned US20090003699A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/129,410 US20090003699A1 (en) 2007-05-29 2008-05-29 User guided object segmentation recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US94063207P 2007-05-29 2007-05-29
US12/129,410 US20090003699A1 (en) 2007-05-29 2008-05-29 User guided object segmentation recognition

Publications (1)

Publication Number Publication Date
US20090003699A1 true US20090003699A1 (en) 2009-01-01

Family

ID=40088192

Family Applications (7)

Application Number Title Priority Date Filing Date
US12/129,439 Abandoned US20080298544A1 (en) 2007-05-29 2008-05-29 Genetic tuning of coefficients in a threat detection system
US12/129,055 Abandoned US20090052622A1 (en) 2007-05-29 2008-05-29 Nuclear material detection system
US12/129,036 Abandoned US20090003651A1 (en) 2007-05-29 2008-05-29 Object segmentation recognition
US12/129,410 Abandoned US20090003699A1 (en) 2007-05-29 2008-05-29 User guided object segmentation recognition
US12/129,371 Abandoned US20090052762A1 (en) 2007-05-29 2008-05-29 Multi-energy radiographic system for estimating effective atomic number using multiple ratios
US12/129,383 Expired - Fee Related US8094874B2 (en) 2007-05-29 2008-05-29 Material context analysis
US12/129,393 Abandoned US20090055344A1 (en) 2007-05-29 2008-05-29 System and method for arbitrating outputs from a plurality of threat analysis systems

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US12/129,439 Abandoned US20080298544A1 (en) 2007-05-29 2008-05-29 Genetic tuning of coefficients in a threat detection system
US12/129,055 Abandoned US20090052622A1 (en) 2007-05-29 2008-05-29 Nuclear material detection system
US12/129,036 Abandoned US20090003651A1 (en) 2007-05-29 2008-05-29 Object segmentation recognition

Family Applications After (3)

Application Number Title Priority Date Filing Date
US12/129,371 Abandoned US20090052762A1 (en) 2007-05-29 2008-05-29 Multi-energy radiographic system for estimating effective atomic number using multiple ratios
US12/129,383 Expired - Fee Related US8094874B2 (en) 2007-05-29 2008-05-29 Material context analysis
US12/129,393 Abandoned US20090055344A1 (en) 2007-05-29 2008-05-29 System and method for arbitrating outputs from a plurality of threat analysis systems

Country Status (1)

Country Link
US (7) US20080298544A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080317351A1 (en) * 2007-06-22 2008-12-25 Matthias Fenchel Method for interactively segmenting structures in image data records and image processing unit for carrying out the method
US20080317342A1 (en) * 2007-06-22 2008-12-25 Matthias Fenchel Method for segmenting structures in image data records and image processing unit for carrying out the method
US20090052762A1 (en) * 2007-05-29 2009-02-26 Peter Dugan Multi-energy radiographic system for estimating effective atomic number using multiple ratios
US20100085331A1 (en) * 2008-10-02 2010-04-08 Semiconductor Energy Laboratory Co., Ltd. Touch panel and method for driving the same
US20120113146A1 (en) * 2010-11-10 2012-05-10 Patrick Michael Virtue Methods, apparatus and articles of manufacture to combine segmentations of medical diagnostic images
US20140133718A1 (en) * 2012-11-14 2014-05-15 Varian Medical Systems, Inc. Method and Apparatus Pertaining to Identifying Objects of Interest in a High-Energy Image
US8988405B2 (en) 2009-10-26 2015-03-24 Semiconductor Energy Laboratory Co., Ltd. Display device and semiconductor device
US9476923B2 (en) 2011-06-30 2016-10-25 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method and device for identifying a material by the spectral analysis of electromagnetic radiation passing through said material
WO2016183225A3 (en) * 2015-05-12 2017-01-19 Lawrence Livermore National Security, Llc Image analysis of images of containers
EP2639744B1 (en) * 2012-03-14 2019-04-10 Omron Corporation Image processor, image processing method, control program, and recording medium
US10303971B2 (en) * 2015-06-03 2019-05-28 Innereye Ltd. Image classification by brain computer interface
US10939044B1 (en) * 2019-08-27 2021-03-02 Adobe Inc. Automatically setting zoom level for image capture
US11120297B2 (en) * 2018-11-30 2021-09-14 International Business Machines Corporation Segmentation of target areas in images

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110032047A (en) * 2009-09-22 2011-03-30 삼성전자주식회사 Multi-energy x-ray system, multi-energy x-ray material discriminated image processing unit, and method for processing material discriminated images of the multi-energy x-ray system
US9036782B2 (en) * 2010-08-06 2015-05-19 Telesecurity Sciences, Inc. Dual energy backscatter X-ray shoe scanning device
US8924325B1 (en) * 2011-02-08 2014-12-30 Lockheed Martin Corporation Computerized target hostility determination and countermeasure
US10216866B2 (en) * 2011-02-25 2019-02-26 Smiths Heimann Gmbh Image reconstruction based on parametric models
WO2013002805A1 (en) * 2011-06-30 2013-01-03 Analogic Corporation Iterative image reconstruction
WO2013052549A1 (en) * 2011-10-03 2013-04-11 Cornell University System and methods of acoustic monitoring
GB2508841A (en) * 2012-12-12 2014-06-18 Ibm Computing prioritised general arbitration rules for conflicting rules
US9697467B2 (en) 2014-05-21 2017-07-04 International Business Machines Corporation Goal-driven composition with preferences method and system
US9785755B2 (en) 2014-05-21 2017-10-10 International Business Machines Corporation Predictive hypothesis exploration using planning
US9118714B1 (en) 2014-07-23 2015-08-25 Lookingglass Cyber Solutions, Inc. Apparatuses, methods and systems for a cyber threat visualization and editing user interface
GB2530252B (en) * 2014-09-10 2020-04-01 Smiths Heimann Sas Determination of a degree of homogeneity in images
CN104482996B (en) * 2014-12-24 2019-03-15 胡桂标 The material kind of passive nuclear level sensing device corrects measuring system
CN104778444B (en) * 2015-03-10 2018-01-16 公安部交通管理科学研究所 The appearance features analysis method of vehicle image under road scene
US9687207B2 (en) * 2015-04-01 2017-06-27 Toshiba Medical Systems Corporation Pre-reconstruction calibration, data correction, and material decomposition method and apparatus for photon-counting spectrally-resolving X-ray detectors and X-ray imaging
US10078150B2 (en) 2015-04-14 2018-09-18 Board Of Regents, The University Of Texas System Detecting and quantifying materials in containers utilizing an inverse algorithm with adaptive regularization
CN106353828B (en) 2015-07-22 2018-09-21 清华大学 The method and apparatus that checked property body weight is estimated in safe examination system
US11841789B2 (en) 2016-01-27 2023-12-12 Microsoft Technology Licensing, Llc Visual aids for debugging
US10733532B2 (en) 2016-01-27 2020-08-04 Bonsai AI, Inc. Multiple user interfaces of an artificial intelligence system to accommodate different types of users solving different types of problems with artificial intelligence
US20180357543A1 (en) * 2016-01-27 2018-12-13 Bonsai AI, Inc. Artificial intelligence system configured to measure performance of artificial intelligence over time
US11775850B2 (en) 2016-01-27 2023-10-03 Microsoft Technology Licensing, Llc Artificial intelligence engine having various algorithms to build different concepts contained within a same AI model
US11836650B2 (en) 2016-01-27 2023-12-05 Microsoft Technology Licensing, Llc Artificial intelligence engine for mixing and enhancing features from one or more trained pre-existing machine-learning models
US11868896B2 (en) 2016-01-27 2024-01-09 Microsoft Technology Licensing, Llc Interface for working with simulations on premises
US10204226B2 (en) 2016-12-07 2019-02-12 General Electric Company Feature and boundary tuning for threat detection in industrial asset control system

Citations (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5319547A (en) * 1990-08-10 1994-06-07 Vivid Technologies, Inc. Device and method for inspection of baggage and other objects
US5600303A (en) * 1993-01-15 1997-02-04 Technology International Incorporated Detection of concealed explosives and contraband
US5600700A (en) * 1995-09-25 1997-02-04 Vivid Technologies, Inc. Detecting explosives or other contraband by employing transmitted and scattered X-rays
US5642393A (en) * 1995-09-26 1997-06-24 Vivid Technologies, Inc. Detecting contraband by employing interactive multiprobe tomography
US6018562A (en) * 1995-11-13 2000-01-25 The United States Of America As Represented By The Secretary Of The Army Apparatus and method for automatic recognition of concealed objects using multiple energy computed tomography
US6026171A (en) * 1998-02-11 2000-02-15 Analogic Corporation Apparatus and method for detection of liquids in computed tomography data
US6236709B1 (en) * 1998-05-04 2001-05-22 Ensco, Inc. Continuous high speed tomographic imaging system and method
US20010033636A1 (en) * 1999-11-13 2001-10-25 Martin Hartick Method and apparatus for determining a material of a detected item
US6556653B2 (en) * 2000-05-25 2003-04-29 University Of New Brunswick Non-rotating X-ray system for three-dimensional, three-parameter imaging
US6567496B1 (en) * 1999-10-14 2003-05-20 Sychev Boris S Cargo inspection apparatus and process
US20040247075A1 (en) * 2003-06-06 2004-12-09 Johnson James H. Vehicle mounted inspection systems and methods
US20050002550A1 (en) * 2003-07-03 2005-01-06 Ge Medical Systems Global Technology Company, Llc Imaging chain for digital tomosynthesis on a flat panel detector
US20050025280A1 (en) * 2002-12-10 2005-02-03 Robert Schulte Volumetric 3D x-ray imaging system for baggage inspection including the detection of explosives
US20050031075A1 (en) * 2003-08-07 2005-02-10 Hopkins Forrest Frank System and method for detecting an object
US20050058242A1 (en) * 2003-09-15 2005-03-17 Peschmann Kristian R. Methods and systems for the rapid detection of concealed objects
US20050111619A1 (en) * 2002-02-06 2005-05-26 L-3 Communications Security And Detection Systems Corporation Delaware Method and apparatus for target transmitting information about a target object between a prescanner and a CT scanner
US20050180542A1 (en) * 2004-02-17 2005-08-18 General Electric Company CT-Guided system and method for analyzing regions of interest for contraband detection
US20050256820A1 (en) * 2004-05-14 2005-11-17 Lockheed Martin Corporation Cognitive arbitration system
US20060098773A1 (en) * 2003-09-15 2006-05-11 Peschmann Kristian R Methods and systems for rapid detection of concealed objects using fluorescence
US7092485B2 (en) * 2003-05-27 2006-08-15 Control Screening, Llc X-ray inspection system for detecting explosives and other contraband
US7103137B2 (en) * 2002-07-24 2006-09-05 Varian Medical Systems Technology, Inc. Radiation scanning of objects for contraband
US20060204107A1 (en) * 2005-03-04 2006-09-14 Lockheed Martin Corporation Object recognition system using dynamic length genetic training
US20060233302A1 (en) * 2004-10-22 2006-10-19 Might Matthew B Angled-beam detection system for container inspection
US7130371B2 (en) * 2002-09-27 2006-10-31 Scantech Holdings, Llc System for alternately pulsing energy of accelerated electrons bombarding a conversion target
US20060256914A1 (en) * 2004-11-12 2006-11-16 Might Matthew B Non-intrusive container inspection system using forward-scattered radiation
US20060269114A1 (en) * 2003-07-03 2006-11-30 General Electric Company Methods and systems for prescribing parameters for tomosynthesis
US7162007B2 (en) * 2004-02-06 2007-01-09 Elyan Vladimir V Non-intrusive inspection systems for large container screening and inspection
US7162005B2 (en) * 2002-07-19 2007-01-09 Varian Medical Systems Technologies, Inc. Radiation sources and compact radiation scanning systems
US20070009084A1 (en) * 2005-06-01 2007-01-11 Endicott Interconnect Technologies, Inc. Imaging inspection apparatus with directional cooling
US7190757B2 (en) * 2004-05-21 2007-03-13 Analogic Corporation Method of and system for computing effective atomic number images in multi-energy computed tomography
US20070242797A1 (en) * 2005-11-09 2007-10-18 Dexela Limited Methods and apparatus for obtaining low-dose imaging
US20070248212A1 (en) * 2004-10-22 2007-10-25 Might Matthew B Cryptographic container security system
US7336767B1 (en) * 2005-03-08 2008-02-26 Khai Minh Le Back-scattered X-ray radiation attenuation method and apparatus
US7356115B2 (en) * 2002-12-04 2008-04-08 Varian Medical Systems Technology, Inc. Radiation scanning units including a movable platform
US20090003651A1 (en) * 2007-05-29 2009-01-01 Peter Dugan Object segmentation recognition
US7491958B2 (en) * 2003-08-27 2009-02-17 Scantech Holdings, Llc Radiographic inspection system for inspecting the contents of a container having dual injector and dual accelerating section
US20100019165A1 (en) * 2006-10-25 2010-01-28 Mark Goldberg Method & system for detecting nitrogenous materials via gamma-resonance absorption (gra)
US7706502B2 (en) * 2007-05-31 2010-04-27 Morpho Detection, Inc. Cargo container inspection system and apparatus
US7856081B2 (en) * 2003-09-15 2010-12-21 Rapiscan Systems, Inc. Methods and systems for rapid detection of concealed objects using fluorescence

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US73518A (en) * 1868-01-21 Luke fitzpatrick and jacob schinneller
US538758A (en) * 1895-05-07 Richard watkins
DE3467692D1 (en) * 1984-05-14 1988-01-07 Matsushita Electric Ind Co Ltd Quantum-counting radiography method and apparatus
US5132998A (en) * 1989-03-03 1992-07-21 Matsushita Electric Industrial Co., Ltd. Radiographic image processing method and photographic imaging apparatus therefor
US7394363B1 (en) * 1998-05-12 2008-07-01 Bahador Ghahramani Intelligent multi purpose early warning system for shipping containers, components therefor and methods of making the same
US6282305B1 (en) * 1998-06-05 2001-08-28 Arch Development Corporation Method and system for the computerized assessment of breast cancer risk
US20020186875A1 (en) * 2001-04-09 2002-12-12 Burmer Glenna C. Computer methods for image pattern recognition in organic material
US6969861B2 (en) * 2001-10-02 2005-11-29 Konica Corporation Cassette for radiographic imaging, radiographic image reading apparatus and radiographic image reading method
US7444309B2 (en) * 2001-10-31 2008-10-28 Icosystem Corporation Method and system for implementing evolutionary algorithms
US7123762B2 (en) * 2002-02-08 2006-10-17 University Of Chicago Method and system for risk-modulated diagnosis of disease
US7277521B2 (en) * 2003-04-08 2007-10-02 The Regents Of The University Of California Detecting special nuclear materials in containers using high-energy gamma rays emitted by fission products
JP5037328B2 (en) * 2004-03-01 2012-09-26 バリアン・メディカル・システムズ・インコーポレイテッド Two-energy radiation scanning of objects
US20060269140A1 (en) * 2005-03-15 2006-11-30 Ramsay Thomas E System and method for identifying feature of interest in hyperspectral data
US7847260B2 (en) * 2005-02-04 2010-12-07 Dan Inbar Nuclear threat detection
US20090174554A1 (en) * 2005-05-11 2009-07-09 Eric Bergeron Method and system for screening luggage items, cargo containers or persons
CN100582758C (en) * 2005-11-03 2010-01-20 清华大学 Method and apparatus for recognizing materials by using fast neutrons and continuous energy spectrum X rays
US7536365B2 (en) * 2005-12-08 2009-05-19 Northrop Grumman Corporation Hybrid architecture for acquisition, recognition, and fusion
US20070211248A1 (en) * 2006-01-17 2007-09-13 Innovative American Technology, Inc. Advanced pattern recognition systems for spectral analysis
US7483511B2 (en) * 2006-06-06 2009-01-27 Ge Homeland Protection, Inc. Inspection system and method
US8015127B2 (en) * 2006-09-12 2011-09-06 New York University System, method, and computer-accessible medium for providing a multi-objective evolutionary optimization of agent-based models
US7492862B2 (en) * 2007-01-17 2009-02-17 Ge Homeland Protection, Inc. Computed tomography cargo inspection system and method

Patent Citations (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5490218A (en) * 1990-08-10 1996-02-06 Vivid Technologies, Inc. Device and method for inspection of baggage and other objects
US5838758A (en) * 1990-08-10 1998-11-17 Vivid Technologies Device and method for inspection of baggage and other objects
US5319547A (en) * 1990-08-10 1994-06-07 Vivid Technologies, Inc. Device and method for inspection of baggage and other objects
US5600303A (en) * 1993-01-15 1997-02-04 Technology International Incorporated Detection of concealed explosives and contraband
US5600700A (en) * 1995-09-25 1997-02-04 Vivid Technologies, Inc. Detecting explosives or other contraband by employing transmitted and scattered X-rays
US5642393A (en) * 1995-09-26 1997-06-24 Vivid Technologies, Inc. Detecting contraband by employing interactive multiprobe tomography
US6018562A (en) * 1995-11-13 2000-01-25 The United States Of America As Represented By The Secretary Of The Army Apparatus and method for automatic recognition of concealed objects using multiple energy computed tomography
US6026171A (en) * 1998-02-11 2000-02-15 Analogic Corporation Apparatus and method for detection of liquids in computed tomography data
US6236709B1 (en) * 1998-05-04 2001-05-22 Ensco, Inc. Continuous high speed tomographic imaging system and method
US6567496B1 (en) * 1999-10-14 2003-05-20 Sychev Boris S Cargo inspection apparatus and process
US20010033636A1 (en) * 1999-11-13 2001-10-25 Martin Hartick Method and apparatus for determining a material of a detected item
US6556653B2 (en) * 2000-05-25 2003-04-29 University Of New Brunswick Non-rotating X-ray system for three-dimensional, three-parameter imaging
US7023957B2 (en) * 2002-02-06 2006-04-04 L-3 Communications Security And Detection Systems, Inc. Method and apparatus for transmitting information about a target object between a prescanner and a CT scanner
US20050111619A1 (en) * 2002-02-06 2005-05-26 L-3 Communications Security And Detection Systems Corporation Delaware Method and apparatus for target transmitting information about a target object between a prescanner and a CT scanner
US7162005B2 (en) * 2002-07-19 2007-01-09 Varian Medical Systems Technologies, Inc. Radiation sources and compact radiation scanning systems
US7103137B2 (en) * 2002-07-24 2006-09-05 Varian Medical Systems Technology, Inc. Radiation scanning of objects for contraband
US7130371B2 (en) * 2002-09-27 2006-10-31 Scantech Holdings, Llc System for alternately pulsing energy of accelerated electrons bombarding a conversion target
US7356115B2 (en) * 2002-12-04 2008-04-08 Varian Medical Systems Technology, Inc. Radiation scanning units including a movable platform
US20050025280A1 (en) * 2002-12-10 2005-02-03 Robert Schulte Volumetric 3D x-ray imaging system for baggage inspection including the detection of explosives
US7092485B2 (en) * 2003-05-27 2006-08-15 Control Screening, Llc X-ray inspection system for detecting explosives and other contraband
US20040247075A1 (en) * 2003-06-06 2004-12-09 Johnson James H. Vehicle mounted inspection systems and methods
US6937692B2 (en) * 2003-06-06 2005-08-30 Varian Medical Systems Technologies, Inc. Vehicle mounted inspection systems and methods
US20060269114A1 (en) * 2003-07-03 2006-11-30 General Electric Company Methods and systems for prescribing parameters for tomosynthesis
US20050002550A1 (en) * 2003-07-03 2005-01-06 Ge Medical Systems Global Technology Company, Llc Imaging chain for digital tomosynthesis on a flat panel detector
US20050031075A1 (en) * 2003-08-07 2005-02-10 Hopkins Forrest Frank System and method for detecting an object
US7491958B2 (en) * 2003-08-27 2009-02-17 Scantech Holdings, Llc Radiographic inspection system for inspecting the contents of a container having dual injector and dual accelerating section
US7856081B2 (en) * 2003-09-15 2010-12-21 Rapiscan Systems, Inc. Methods and systems for rapid detection of concealed objects using fluorescence
US20050058242A1 (en) * 2003-09-15 2005-03-17 Peschmann Kristian R. Methods and systems for the rapid detection of concealed objects
US7366282B2 (en) * 2003-09-15 2008-04-29 Rapiscan Security Products, Inc. Methods and systems for rapid detection of concealed objects using fluorescence
US20060098773A1 (en) * 2003-09-15 2006-05-11 Peschmann Kristian R Methods and systems for rapid detection of concealed objects using fluorescence
US7162007B2 (en) * 2004-02-06 2007-01-09 Elyan Vladimir V Non-intrusive inspection systems for large container screening and inspection
US20050180542A1 (en) * 2004-02-17 2005-08-18 General Electric Company CT-Guided system and method for analyzing regions of interest for contraband detection
US20050256820A1 (en) * 2004-05-14 2005-11-17 Lockheed Martin Corporation Cognitive arbitration system
US7190757B2 (en) * 2004-05-21 2007-03-13 Analogic Corporation Method of and system for computing effective atomic number images in multi-energy computed tomography
US20060233302A1 (en) * 2004-10-22 2006-10-19 Might Matthew B Angled-beam detection system for container inspection
US7356118B2 (en) * 2004-10-22 2008-04-08 Scantech Holdings, Llc Angled-beam detection system for container inspection
US20070248212A1 (en) * 2004-10-22 2007-10-25 Might Matthew B Cryptographic container security system
US20060256914A1 (en) * 2004-11-12 2006-11-16 Might Matthew B Non-intrusive container inspection system using forward-scattered radiation
US20060204107A1 (en) * 2005-03-04 2006-09-14 Lockheed Martin Corporation Object recognition system using dynamic length genetic training
US7336767B1 (en) * 2005-03-08 2008-02-26 Khai Minh Le Back-scattered X-ray radiation attenuation method and apparatus
US20070009084A1 (en) * 2005-06-01 2007-01-11 Endicott Interconnect Technologies, Inc. Imaging inspection apparatus with directional cooling
US20070242797A1 (en) * 2005-11-09 2007-10-18 Dexela Limited Methods and apparatus for obtaining low-dose imaging
US7545907B2 (en) * 2005-11-09 2009-06-09 Dexela Limited Methods and apparatus for obtaining low-dose imaging
US20100019165A1 (en) * 2006-10-25 2010-01-28 Mark Goldberg Method & system for detecting nitrogenous materials via gamma-resonance absorption (GRA)
US20090003651A1 (en) * 2007-05-29 2009-01-01 Peter Dugan Object segmentation recognition
US20090052622A1 (en) * 2007-05-29 2009-02-26 Peter Dugan Nuclear material detection system
US20090055344A1 (en) * 2007-05-29 2009-02-26 Peter Dugan System and method for arbitrating outputs from a plurality of threat analysis systems
US20090052732A1 (en) * 2007-05-29 2009-02-26 Peter Dugan Material context analysis
US7706502B2 (en) * 2007-05-31 2010-04-27 Morpho Detection, Inc. Cargo container inspection system and apparatus

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090052762A1 (en) * 2007-05-29 2009-02-26 Peter Dugan Multi-energy radiographic system for estimating effective atomic number using multiple ratios
US20090052732A1 (en) * 2007-05-29 2009-02-26 Peter Dugan Material context analysis
US8094874B2 (en) * 2007-05-29 2012-01-10 Lockheed Martin Corporation Material context analysis
US20080317342A1 (en) * 2007-06-22 2008-12-25 Matthias Fenchel Method for segmenting structures in image data records and image processing unit for carrying out the method
US8180151B2 (en) * 2007-06-22 2012-05-15 Siemens Aktiengesellschaft Method for segmenting structures in image data records and image processing unit for carrying out the method
US8200015B2 (en) * 2007-06-22 2012-06-12 Siemens Aktiengesellschaft Method for interactively segmenting structures in image data records and image processing unit for carrying out the method
US20080317351A1 (en) * 2007-06-22 2008-12-25 Matthias Fenchel Method for interactively segmenting structures in image data records and image processing unit for carrying out the method
US20100085331A1 (en) * 2008-10-02 2010-04-08 Semiconductor Energy Laboratory Co., Ltd. Touch panel and method for driving the same
US8400428B2 (en) * 2008-10-02 2013-03-19 Semiconductor Energy Laboratory Co., Ltd. Touch panel and method for driving the same
US8988405B2 (en) 2009-10-26 2015-03-24 Semiconductor Energy Laboratory Co., Ltd. Display device and semiconductor device
US20120113146A1 (en) * 2010-11-10 2012-05-10 Patrick Michael Virtue Methods, apparatus and articles of manufacture to combine segmentations of medical diagnostic images
US9476923B2 (en) 2011-06-30 2016-10-25 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method and device for identifying a material by the spectral analysis of electromagnetic radiation passing through said material
EP2639744B1 (en) * 2012-03-14 2019-04-10 Omron Corporation Image processor, image processing method, control program, and recording medium
US9589188B2 (en) * 2012-11-14 2017-03-07 Varian Medical Systems, Inc. Method and apparatus pertaining to identifying objects of interest in a high-energy image
US20140133718A1 (en) * 2012-11-14 2014-05-15 Varian Medical Systems, Inc. Method and Apparatus Pertaining to Identifying Objects of Interest in a High-Energy Image
WO2016183225A3 (en) * 2015-05-12 2017-01-19 Lawrence Livermore National Security, Llc Image analysis of images of containers
US10592774B2 (en) 2015-05-12 2020-03-17 Lawrence Livermore National Security, Llc Identification of uncommon objects in containers
US10303971B2 (en) * 2015-06-03 2019-05-28 Innereye Ltd. Image classification by brain computer interface
US10948990B2 (en) * 2015-06-03 2021-03-16 Innereye Ltd. Image classification by brain computer interface
US11120297B2 (en) * 2018-11-30 2021-09-14 International Business Machines Corporation Segmentation of target areas in images
US10939044B1 (en) * 2019-08-27 2021-03-02 Adobe Inc. Automatically setting zoom level for image capture

Also Published As

Publication number Publication date
US20080298544A1 (en) 2008-12-04
US20090055344A1 (en) 2009-02-26
US8094874B2 (en) 2012-01-10
US20090052762A1 (en) 2009-02-26
US20090052732A1 (en) 2009-02-26
US20090003651A1 (en) 2009-01-01
US20090052622A1 (en) 2009-02-26

Similar Documents

Publication Publication Date Title
US20090003699A1 (en) User guided object segmentation recognition
Rogers et al. Automated x-ray image analysis for cargo security: Critical review and future promise
US8494210B2 (en) User interface for use in security screening providing image enhancement capabilities and apparatus for implementing same
Mouton et al. A review of automated image understanding within 3D baggage computed tomography security screening
US8090169B2 (en) System and method for detecting items of interest through mass estimation
US20090175411A1 (en) Methods and systems for use in security screening, with parallel processing capability
US20080152082A1 (en) Method and apparatus for use in security screening providing incremental display of threat detection information and security system incorporating same
US20080159605A1 (en) Method for characterizing an image source utilizing predetermined color spaces
KR101717768B1 (en) Component inspecting method and apparatus
CN108682004A (en) Infrared small-target detection method for complex backgrounds based on local information
US8090150B2 (en) Method and system for identifying a containment vessel
EP2233950A2 (en) Method and System for Inspection of Containers
WO2006099477A1 (en) System and method for identifying objects of interest in image data
US20130070997A1 (en) Systems, methods, and media for on-line boosting of a classifier
US20090175526A1 (en) Method of creating a divergence transform for identifying a feature of interest in hyperspectral data
EP2976746B1 (en) A method and x-ray system for computer aided detection of structures in x-ray images
US20150071532A1 (en) Image processing device, computer-readable recording medium, and image processing method
WO2008157843A1 (en) System and method for the detection, characterization, visualization and classification of objects in image data
US7839971B2 (en) System and method for inspecting containers for target material
EP2709063A1 (en) Image processing device, computer-readable recording medium, and image processing method
US6353674B1 (en) Method of segmenting a radiation image into direct exposure area and diagnostically relevant area
US20110052032A1 (en) System and method for identifying signatures for features of interest using predetermined color spaces
US6608915B2 (en) Image processing method and apparatus
WO2008019473A1 (en) Method and apparatus for use in security screening providing incremental display of threat detection information and security system incorporating same
EP0887769B1 (en) Method of segmenting a radiation image into direct exposure area and diagnostically relevant area

Legal Events

Date Code Title Description
AS Assignment

Owner name: LOCKHEED MARTIN CORPORATION, MARYLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUGAN, PETER;RIESS, MICHAEL;REEL/FRAME:021481/0944;SIGNING DATES FROM 20080627 TO 20080718

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION