US6259803B1 - Simplified image correlation method using off-the-shelf signal processors to extract edge information using only spatial data - Google Patents

Info

Publication number
US6259803B1
US6259803B1
Authority
US
United States
Prior art keywords
edge
image
value
pixel
real time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US09/337,219
Inventor
Michael M. Wirtz
William R. Ditzler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United States of America, as represented by the Secretary of the Navy
US Department of Navy
Original Assignee
US Department of Navy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by US Department of Navy
Priority to US09/337,219
Assigned to THE UNITED STATES OF AMERICA, AS REPRESENTED BY THE SECRETARY OF THE NAVY. Assignment of assignors interest (see document for details). Assignors: DITZLER, WILLIAM R.; WIRTZ, MICHAEL M.
Application granted
Publication of US6259803B1
Anticipated expiration
Legal status: Expired - Lifetime (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/7515: Shifting the patterns to accommodate for positional errors

Abstract

Provided is an approach to efficiently correlate a previously captured digitally created image to one provided in real time. The real-time digitally created image is represented by the digitally processed image of just the edges of objects within a scene. This is accomplished via digital edge extraction and subsequent digital data compression, based on comparing only the spatial differences (e.g., range values) among pixels. That is, digital data representative of signal intensity are not used. An application is the efficient correlation of real-time digitally processed 3-D images generated from laser scans, in particular, scans of laser “radars” or LADARs. The process simplifies and improves on conventional techniques by iterating three sequential steps. A “hard” edge or a corner of an object is detected via a “corner-detector” algorithm that assigns a raw edge-strength value to each pixel in the image digitally created from the LADAR return. This is accomplished via a spatial numerical second-derivative operation performed on only the range value (distance from the LADAR to a specific location on an object scanned by the LADAR) assigned to each pixel's neighboring pixels. Next, all edge-strength values greater than a pre-specified value representative of that obtained by a laser (light) reflection from a right angle (90°), i.e., a hard edge or a corner, are reset to the pre-specified edge-strength value. If this second step were omitted, the image correlation process would be dominated by large edge-strength values representative of actual discontinuities in an adjacent pixel's range value, such as those produced when a tall object (building, vehicle, or promontory) shadows the area behind it from a laser scanning it obliquely. Finally, when a discontinuity in range values between two adjacent pixels is detected, the edge-strength value of the pixel representing the larger range value is set to zero. This last step validates the perspective transformation from an oblique view to an overhead view by avoiding the designation of a strong “phantom edge” on the ground. This phantom edge results from the laser beam forming a “shadow” when obliquely radiating a tall vertical object. The data needed to accomplish this correlation are minimized for subsequent storage and manipulation by storing only the two end points of the straight lines digitally generated to depict the edges of objects within a scene. This simple method, using inexpensive commercial off-the-shelf signal processors, enables reliable real-time identification of objects by comparison to a previously obtained reconnaissance library of digital images that have been stored for the purpose of future targeting. To enable each object in this library of digital images to be efficiently stored in memory, each digital image is defined by a greatly reduced and then compressed data set. Using this method, object identification may be accomplished quickly and reliably during such time-critical events as the terminal portion of a guided weapon's trajectory.

Description

STATEMENT OF GOVERNMENT INTEREST
The invention described herein may be manufactured and used by or for the Government of the United States of America for Governmental purposes without the payment of any royalties thereon or therefor.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The field to which this invention applies is image correlation, in particular, correlation using only data that represents spatial parameters. A specific application is correlation of digitally processed data obtained from a scanning laser such as a LADAR.
2. Background
It is known to use multispectral image correlation systems in air-to-ground target detection and acquisition. Such systems include means for image-processing, which automatically register images produced by airborne platform-mounted sensors operating in different wavelengths and from different points of view.
It is further known that an effective technique for registering images from different sensor types is edge matching, that is, matching pronounced target scene edges appearing in one sensor image with similar target scene edges appearing in one or more other sensor images. However, conventional image processing techniques for edge extraction rely upon detection of discontinuities in the light intensity of adjacent pixels, which often produces unsatisfactory results because the reflected laser energy exhibits large random fluctuations from pixel to pixel, owing to atmospheric effects, surface texture, and receiver noise. Such random fluctuations can overwhelm the systematic variations caused by the reflectance of the target.
Accordingly, there is a need for a method and system for target detection, acquisition, and terminal guidance, utilizing edge extraction from laser-radar (LADAR) images that relies solely on spatial information rather than light intensity data.
SUMMARY OF THE INVENTION
It is, therefore, an object of the invention to provide a method for enhancing air-to-ground target detection, acquisition and terminal guidance, utilizing only spatial parameters of data for edge extraction from laser-radar images.
A further object of the invention is to provide an image correlation system for correlating edge information from a recorded and stored template of a target scene with current real-time images from laser-radar sensors mounted on an airborne platform, the system operating to provide edge information from the sensor images based upon a comparison of the range of a pixel with that of each of its neighboring pixels.
With the above and other objects in view, as will hereinafter appear, a feature of the invention is the provision of a method for enhancing air-to-ground target detection, acquisition, and terminal guidance, the method comprising the steps of
(1) providing a reference image of a target scene with a designated selected target therein,
(2) detecting contrast boundaries and the orientation thereof in the reference image,
(3) identifying maxima edge pixels in the reference image determined by contrast boundaries and tracing edges along the maxima edge pixels, to provide a reference image template,
(4) translating points in the reference image to points in three-dimensional space, and compressing the reference image by fitting straight line segments to the reference image edges, and storing in a computer memory only end points of the straight line segments, the compressed data represented by end points of the line segments thus constituting a reference image in a library of reference images,
(5) providing a video image of the target scene from a sensor, such as a LADAR, mounted on an airborne platform,
(6) determining a range value for each pixel in the video image and computing a gradient based edge strength value for each pixel based only upon the range value thereof,
(7) transforming the video image to the perspective and range of the reference image template and scaling it to the template,
(8) correlating the transformed and scaled edges of the video image with the above reference image, and
(9) overlaying the above reference image upon the video image, thus providing an enhanced video image of a pre-specified target as part of the video image.
In accordance with a further feature of the invention, there is provided a multispectral image correlation system comprising a library of reference images containing compressed data defining the pertinent edges within a three-dimensional space containing a pre-specified target, the target defined by a tracing along the most pronounced of the target's edges, a laser-radar (LADAR) sensor mounted on an airborne platform for providing real-time LADAR imagery of the target scene, and means for determining a range value for each pixel comprising the LADAR imagery, for computing gradient based edge-strength values for each pixel based only upon the range value thereof, and for thereby identifying the presence of a structural 90° corner. The system further includes means for resetting all edge-strength values greater than a first value, resultant from a reflection off the 90° corner, to that first value; means for resetting to zero the range value of any pixel adjacent another pixel having a shorter range value, so that a structural edge is traced by the range values of those pixels exhibiting the first value; means for correlating the structural edge identified in the LADAR imagery with the appropriate reference imagery; and means for overlaying the pre-selected target from the appropriate reference image upon the LADAR imagery, to provide enhanced LADAR imagery in which the pre-selected target may be identified.
The above and other features of the invention, including various novel details of method steps, arrangements, and combinations of parts will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular method steps and devices embodying the invention are shown by way of illustration only and not as limitations of the invention. The principles and features of this invention may be employed in various and numerous embodiments without departing from the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Reference is made to the accompanying drawings in which are shown illustrative embodiments of the invention, from which its novel features and advantages will be apparent, and wherein:
FIG. 1 depicts oblique illumination of a corner of an object by an energy source and the two 3×3 arrays (e.g., length × width and width × depth) used to digitally represent the 3-D relationship of a pixel to each of its neighboring pixels within the image reflected from the oblique illumination of the corner of the object.
FIG. 2 depicts an oblique illumination of an object in which only an edge of the object, not a corner, is illuminated, to include some additional illumination of an area behind the object.
FIG. 3 is a flow chart depicting a preferred method illustrative of an embodiment of the invention; and
FIG. 4 is a flow chart representing an alternative preferred embodiment of the invention.
DESCRIPTION OF PREFERRED EMBODIMENTS
Refer to FIGS. 1 and 2. In accordance with a preferred embodiment of the invention (FIG. 3), there is provided a digitized reference image 301 of a target scene 101 with a selected target 102 designated therein. The reference image 301 is produced in advance of a mission directed toward a real target 202 that may be digitally recreated by processing reflections from an illumination 201 in real time at an oblique angle over a flat surface of the real target 202. Assuming up-to-date reconnaissance, the digitally recreated real target 202 should correspond to the stored target 102, digitally recreated from an illumination 105 of only well-defined edges, including a corner 103, and stored as a reference image 301.
A spatial gradient operation 302 detects contrast boundaries in the reference image 301 and their orientation, with a set of convolution kernels. A Sobel operation (not separately shown) is a preferred means for effecting the gradient operation 302.
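The patent names the Sobel operation but gives no implementation. A minimal sketch of the gradient operation 302 in NumPy (the kernels are the standard Sobel pair; the valid-region filter is hand-rolled so no extra dependency is assumed) might look like:

    import numpy as np

    def sobel_gradient(image):
        """Spatial gradient via 3x3 Sobel kernels: returns the edge
        magnitude and orientation over the interior of a 2-D array."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T

        def filt3(img, k):
            # 3x3 filtering over the valid region via shifted slices; for
            # these (anti)symmetric kernels, correlation vs. convolution
            # only flips signs, which the magnitude discards.
            h, w = img.shape[0] - 2, img.shape[1] - 2
            out = np.zeros((h, w))
            for dy in range(3):
                for dx in range(3):
                    out += k[dy, dx] * img[dy:dy + h, dx:dx + w]
            return out

        gx, gy = filt3(image, kx), filt3(image, ky)
        return np.hypot(gx, gy), np.arctan2(gy, gx)

The orientation channel supports identifying the maxima edge pixels 303 along each contrast boundary in the next step.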
Maxima edge pixels 303 in the reference image 301, as defined to exist along the contrast boundaries 302, are identified. Edges along the maxima edge pixels 303 are traced.
A reference image template 401 is fabricated 304 by fitting straight line segments to the reference image 301 edges, where data exist to make this possible. The data representing the line segments thus generated are compressed by storing in memory only the end points of the straight-line segments. If three-dimensional information relative to the reference target scene image is available, such as a terrain elevation database, the 3D information is used to connect the stored end points to points in three-dimensional space. Otherwise, the end points are assigned a value of zero for the third dimension. All of the data relative to digitally recreated targets 102 known to exist within digitally recreated target scenes 101 are contained in lists of straight line segment end points (not separately shown) that constitute a library of reference image templates 401.
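As an illustrative sketch of the fabrication 304 and compression just described (the patent does not specify how traced chains are split into straight runs, so a single segment per chain is assumed here; a real implementation would subdivide wherever a straight-line fit fails):

    def compress_edges(chains, elevation=None):
        """Reduce traced edge chains to straight-segment end points.
        `chains` is assumed to be a list of (row, col) pixel chains
        produced by the edge tracing 303."""
        segments = []
        for chain in chains:
            (r0, c0), (r1, c1) = chain[0], chain[-1]
            # Third dimension from a terrain elevation database when one
            # is available; otherwise zero, as the patent specifies.
            z0 = elevation[r0, c0] if elevation is not None else 0.0
            z1 = elevation[r1, c1] if elevation is not None else 0.0
            segments.append(((r0, c0, z0), (r1, c1, z1)))
        return segments

Storing two end points per segment, rather than every edge pixel, is what keeps the library of reference image templates 401 compact enough for real-time use.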
A laser-radar (LADAR) video image 305 of the target scene 101 is provided by a system (not separately shown) mounted on an airborne platform, such as an aircraft, a missile, or other vehicle or weapon. An inherent advantage of LADAR is its provision of a three-dimensional image. Every pixel in the LADAR image contains the range to the point on the object from which the laser beam is reflected. The full Cartesian coordinates of each pixel can be reproduced digitally. Thus, the LADAR data can readily be used with the digitally recreated image from another sensor, permitting views of the scene 101 from a number of perspectives.
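The geometry by which those Cartesian coordinates are recovered is left implicit in the patent; a minimal sketch, assuming a uniform angular raster with known fields of view (fov_h and fov_v, in radians, are hypothetical sensor parameters):

    import numpy as np

    def pixel_to_xyz(row, col, rng, rows, cols, fov_v, fov_h):
        """Cartesian coordinates of a LADAR pixel from its range value,
        under an assumed pinhole-style angular model."""
        az = (col / (cols - 1) - 0.5) * fov_h   # azimuth off boresight
        el = (0.5 - row / (rows - 1)) * fov_v   # elevation off boresight
        x = rng * np.cos(el) * np.sin(az)       # right of boresight
        y = rng * np.cos(el) * np.cos(az)       # along boresight
        z = rng * np.sin(el)                    # up
        return x, y, z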
Accordingly, a range value for each pixel in the LADAR image is determined and a gradient based edge strength value 306 is computed. A spatial numerical second derivative operator is employed to assign a raw edge-strength value to each pixel as it relates to the range value of each of its neighboring pixels. This provides a digital representation of the response from any illumination 105 of a target 102, e.g., an illumination as depicted in the illumination 105 of the corner 103 in FIG. 1.
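A hedged sketch of the edge-strength computation 306, assuming a 4-neighbor discrete Laplacian of the range image is an acceptable stand-in for the patent's second-derivative operator:

    import numpy as np

    def raw_edge_strength(rng):
        """Numerical second derivative of range over each interior
        pixel's neighbors; border pixels are left at zero."""
        s = np.zeros_like(rng, dtype=float)
        s[1:-1, 1:-1] = np.abs(
            rng[:-2, 1:-1] + rng[2:, 1:-1]
            + rng[1:-1, :-2] + rng[1:-1, 2:]
            - 4.0 * rng[1:-1, 1:-1]
        )
        return s

A flat surface viewed obliquely has a linear range ramp and therefore a near-zero second derivative, while a fold such as the corner 103 produces a distinct nonzero response, which is the behavior the next step exploits.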
All computed edge-strength values greater than that value representative of a response from a corner are reset to that value. Since the second derivative operator specifically highlights differences in any response due to the illumination of the intersection of multiple edges, i.e., at a corner of the target 102, range value discontinuities at the corner establish distinct value differences among neighboring pixels; thus the operation is termed “corner detecting,” and the function operates as a “corner detector.” This spatial numerical second derivative operation on the received LADAR image data therefore provides, with minimal transformation and data manipulation, a “best” correlation of a possible target 102 within the scene 101. Without this step, the image correlation process would be dominated by the very large edge-strength values produced by actual discontinuities in adjacent range values. Such discontinuities are produced when a vertical object, such as a building or vehicle, shadows the ground behind it from an oblique laser beam.
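In code, the reset is a single element-wise minimum; CORNER_VALUE below is a hypothetical calibration constant representing the second-derivative response off a 90° corner:

    import numpy as np

    CORNER_VALUE = 2.0  # hypothetical corner-response calibration value

    def clamp_to_corner(strength, corner_value=CORNER_VALUE):
        # Responses stronger than a corner return come from range
        # discontinuities (laser shadows), not from physical edges; cap
        # them so corners and hard edges dominate the correlation.
        return np.minimum(strength, corner_value)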
When a range discontinuity is detected between two adjacent pixels, the edge strength of the pixel with the larger range value is set to zero 307. Without this step, perspective transformations from an oblique to an overhead view would result in a phantom edge on the ground where a vertical obstacle shadowed the laser beam. This “edge” would not correspond to any physical object that may be identified by another means.
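A sketch of step 307 follows; the patent does not quantify what counts as a range discontinuity, so the jump threshold here is an assumed parameter in range units:

    import numpy as np

    def suppress_phantom_edges(rng, strength, jump=5.0):
        """Zero the edge strength of the farther pixel wherever two
        adjacent pixels differ in range by more than `jump`."""
        out = strength.copy()
        for axis in (0, 1):
            d = np.diff(rng, axis=axis)      # neighbor range differences
            far = np.zeros(rng.shape, dtype=bool)
            if axis == 0:
                far[1:, :] |= d > jump       # lower neighbor is farther
                far[:-1, :] |= d < -jump     # upper neighbor is farther
            else:
                far[:, 1:] |= d > jump
                far[:, :-1] |= d < -jump
            out[far] = 0.0
        return out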
The LADAR video image is then transformed 308 to the perspective and range of the reference image template 401 and scaled 308 to it. Next, a correlation 309 of the transformed and scaled edges of the LADAR image is made to the edges of the reference image template 401.
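Once both edge maps share a perspective and scale, the correlation 309 can be as simple as an exhaustive overlap search. The toy stand-in below assumes equal-shape boolean edge maps and a translation-only search; np.roll wraps at the borders, which a real implementation would mask out:

    import numpy as np

    def best_shift(live_edges, template_edges, max_shift=16):
        """Return the (dy, dx) translation maximizing the count of
        overlapping edge pixels between two boolean edge maps."""
        best_score, best = -1, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(live_edges, dy, axis=0),
                                  dx, axis=1)
                score = int(np.count_nonzero(shifted & template_edges))
                if score > best_score:
                    best_score, best = score, (dy, dx)
        return best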
Finally, an overlay 310 of the designated target in the reference image template 401 is placed on to the LADAR video image.
The above-described process has been demonstrated to reliably extract edges from LADAR images of typical air-to-ground targets, which are then registered with target images from reconnaissance assets. The LADAR data are processed within the time available for terminal guidance in current precision-guided weapons, utilizing commercial off-the-shelf (COTS) low-cost signal processors.
Refer to FIG. 4. There is provided a multispectral image correlation system for carrying out the above-described method. The system includes storage 401, such as computer memory, for the reference template 401 that contains compressed data representing reference imagery of a three dimensional target scene with target(s) designated therein. The template 401 comprises a tracing of the most pronounced of the scene's edges.
The system further includes a laser-radar (LADAR) system 402 mounted on an airborne platform for providing LADAR imagery of the target scene.
A first algorithm 403 determines a range value for each pixel in the LADAR image 305 and computes gradient based edge-strength values, based on these range values, for each pixel. A second algorithm 404, after comparing this computed edge-strength value for each pixel to the edge-strength value computed for each of its neighboring pixels, generates a pair of 3×3 matrices 104. These matrices depict the relationship of that pixel to each of its neighboring pixels in each of two intersecting planes. A certain set of values in the matrices 104, as provided in FIG. 1, indicates the position of that pixel at a corner of an object 102, in turn helping to quickly and reliably identify specific objects based on correlation to known objects in the scene. This identification of corners of objects in a scene is facilitated by setting all edge-strength values greater than a first value, determined to be representative of a pixel's position at a corner, to that first value. Further, a third algorithm 405 sets to zero the range value of a pixel adjacent to any pixel having a range value smaller than its own. This provides the minimum data needed to trace a structural edge, using just those pixels assigned that first value. The data necessary to maintain a record of objects within a scene are compressed further by storing only the end points of a straight line used to define each edge as determined from the above.
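The exact test applied to the matrices 104 is not spelled out; one hypothetical reading, consistent with the clamping described above, flags a corner where the clamped edge strength sits at the first value and is a 3×3 local maximum:

    import numpy as np

    def corner_flags(strength, corner_value, tol=1e-6):
        """Speculative corner test: edge strength clamped at the corner
        value and dominating its 3x3 neighborhood."""
        flags = np.zeros(strength.shape, dtype=bool)
        for r in range(1, strength.shape[0] - 1):
            for c in range(1, strength.shape[1] - 1):
                patch = strength[r - 1:r + 2, c - 1:c + 2]
                flags[r, c] = (abs(strength[r, c] - corner_value) <= tol
                               and strength[r, c] >= patch.max())
        return flags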
A fourth algorithm 406 facilitates correlation of the structural edges identified in the LADAR image 305 with the stored traced edges of the reference image template 401. The correlation done by the fourth algorithm 406 requires transforming the compressed “edge data” from the LADAR image 305 in range only for each perspective taken and scaling the LADAR image 305 edge data to that of the reference template 401. A fifth algorithm 407 permits the overlay of pre-specified target(s) within the stored reference image template 401 upon the real-time LADAR image 305 to enhance real-time identification and provide precise location of the pre-specified target(s) in relation to a weapon system.
There is thus provided a method and system for enhancing air-to-ground target detection, acquisition, and terminal guidance, utilizing edge extraction from LADAR images 305 that are processed using a minimal data set comprising only spatial (range value) relationships.
It is to be understood that many changes in the details and arrangements of method steps and parts, which have been herein described and illustrated in order to explain the nature of the invention, may be made by those skilled in the art within the principles and scope of the invention as expressed in the appended claims.

Claims (9)

What is claimed is:
1. A method for designating a target comprising:
establishing a reference image template comprised of a first set of pixels that may be represented by a first data set of points in at least one plane;
capturing said first data set in a manner that defines contrast boundaries of said reference image template and orientation thereof;
identifying data obtained from only those said pixels determined to be maxima edge pixels as a limited data set of said first data set of said pixels;
tracing edges along said limited data set to establish a data set of edges;
converting said data set of edges to at least one second data set to be represented in three-dimensional space;
compressing said second data set by fitting straight line segments, each having first and second end points, to said data set of edges and storing in memory only said end points;
providing a real-time image comprising a second set of pixels;
establishing a range value for each said pixel in said real time image;
based only on said range values, computing a gradient based edge-strength value for each said pixel in said real time image;
transforming said real time image perspective and range to that of said reference image template;
scaling said real time image to that of said reference image template;
correlating said scaled real time image to said reference image template;
designating a target within said reference image template; and
overlaying said designated target within said reference image template upon said real time image,
wherein an enhanced image of said designated target is provided for use in real time applications.
2. The method in accordance with claim 1 wherein said data in said data sets are digital.
3. The method in accordance with claim 1 wherein said reference image template is produced in advance of a requirement for use in real time.
4. The method in accordance with claim 1 wherein said reference image template is produced from reconnaissance images.
5. The method in accordance with claim 1 wherein three-dimensional data representative of said reference image template is used to connect said end points in three-dimensional space.
6. The method in accordance with claim 1 wherein said end points of at least one said line segment may be assigned a third dimension of zero.
7. The method in accordance with claim 1 wherein said real time image is provided by a LADAR system.
8. The method in accordance with claim 1 wherein, a raw edge-strength value may be assigned to each said pixel to facilitate identification of a structural corner in said real time image, wherein all said edge-strength values greater than a value representative of a reflection from a corner of an object are reset to said value representative of a reflection from a corner of an object, and wherein upon detecting a range discontinuity between two adjacent said pixels, said edge-strength value of said pixel with the highest range value is set to zero.
9. An image correlation system comprising:
a memory device for storing at least one template of at least one image of at least one pre-specified target, having edges at least one of which said edges is pronounced,
wherein said at least one image of said at least one pre-specified target is captured in said at least one template by at least one tracing along said at least one pronounced edge;
an imaging device for capturing in real time video images, comprising pixels, of at least one remote object;
a computer for processing data on spatial relationships,
wherein said processing identifies said remote objects by determining a range value for each of said pixels associated with said at least one remote object and computes at least one gradient based edge-strength value for each of said pixels associated with said at least one remote object;
at least one algorithm that:
resets at least one edge-strength value that is greater than a first value, said first value representative of a reflection from a corner, to said first value,
resets to zero said range value of a first at least one said pixel that is adjacent at least one second said pixel, said second at least one pixel having a shorter range value than said first at least one pixel,
wherein at least one structural edge of said at least one remote object is identified by tracing said pixels associated with said first value to establish an outline of said at least one object described by said at least one structural edge;
correlates said outline of said at least one object with said at least one image of said at least one pre-specified target in said stored template; and
overlays said at least one pre-specified target in said stored template on said real time image of said at least one object,
wherein said overlaying process enhances recognition of said pre-specified target in said real time image.
US09/337,219 1999-06-07 1999-06-07 Simplified image correlation method using off-the-shelf signal processors to extract edge information using only spatial data Expired - Lifetime US6259803B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/337,219 US6259803B1 (en) 1999-06-07 1999-06-07 Simplified image correlation method using off-the-shelf signal processors to extract edge information using only spatial data

Publications (1)

Publication Number Publication Date
US6259803B1 (en) 2001-07-10

Family

ID=23319608

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/337,219 Expired - Lifetime US6259803B1 (en) 1999-06-07 1999-06-07 Simplified image correlation method using off-the-shelf signal processors to extract edge information using only spatial data

Country Status (1)

Country Link
US (1) US6259803B1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4835532A (en) * 1982-07-30 1989-05-30 Honeywell Inc. Nonaliasing real-time spatial transform image processing system
US5640468A (en) * 1994-04-28 1997-06-17 Hsu; Shin-Yi Method for identifying objects and features in an image

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6507660B1 (en) * 1999-05-27 2003-01-14 The United States Of America As Represented By The Secretary Of The Navy Method for enhancing air-to-ground target detection, acquisition and terminal guidance and an image correlation system
US7149326B2 (en) * 1999-10-22 2006-12-12 Lockheed Martin Corporation Method and software-implemented apparatus for detecting objects in multi-dimensional data
US20040008868A1 (en) * 1999-10-22 2004-01-15 Lockheed Martin Corporation Method and software-implemented apparatus for detecting objects in multi-dimensional data
US7376247B2 (en) * 2000-04-14 2008-05-20 Fujitsu Ten Limited Target detection system using radar and image processing
US20050271253A1 (en) * 2000-04-14 2005-12-08 Akihiro Ohta Target detection system using radar and image processing
US9007197B2 (en) 2002-05-20 2015-04-14 Intelligent Technologies International, Inc. Vehicular anticipatory sensor system
US20080181487A1 (en) * 2003-04-18 2008-07-31 Stephen Charles Hsu Method and apparatus for automatic registration and visualization of occluded targets using ladar data
NO329633B1 (en) * 2003-09-02 2010-11-22 Gasoptics Sweden Ab Localization of a point source for a visualized gas leak
US20050068326A1 (en) * 2003-09-25 2005-03-31 Teruyuki Nakahashi Image processing apparatus and method of same
US20110142285A1 (en) * 2004-05-26 2011-06-16 Bae Systems Information And Electronic Systems Integration Inc. System and method for transitioning from a missile warning system to a fine tracking system in a directional infrared countermeasures system
US20110170087A1 (en) * 2004-05-26 2011-07-14 Bae Systems Information And Electronic Systems Integration Inc. System and method for transitioning from a missile warning system to a fine tracking system in a directional infrared countermeasures system
US7733465B2 (en) 2004-05-26 2010-06-08 Bae Systems Information And Electronic Systems Integration Inc. System and method for transitioning from a missile warning system to a fine tracking system in a directional infrared countermeasures system
US8023107B2 (en) 2004-05-26 2011-09-20 Bae Systems Information And Electronic Systems Integration Inc. System and method for transitioning from a missile warning system to a fine tracking system in a directional infrared countermeasures system
US8184272B2 (en) * 2004-05-26 2012-05-22 Bae Systems Information And Electronic Systems Integration Inc. System and method for transitioning from a missile warning system to a fine tracking system in a directional infrared countermeasures system
US20050286763A1 (en) * 2004-06-24 2005-12-29 Pollard Stephen B Image processing
US7679779B2 (en) * 2004-06-24 2010-03-16 Hewlett-Packard Development Company, L.P. Image processing
US20090297049A1 (en) * 2005-07-07 2009-12-03 Rafael Advanced Defense Systems Ltd. Detection of partially occluded targets in ladar images
CN1897644B (en) * 2005-07-15 2010-05-12 摩托罗拉公司 Method and system for catching pictures
US20070071324A1 (en) * 2005-09-27 2007-03-29 Lexmark International, Inc. Method for determining corners of an object represented by image data
US7627170B2 (en) * 2005-10-11 2009-12-01 Northrop Grumman Corporation Process for the identification of objects
US20070081723A1 (en) * 2005-10-11 2007-04-12 Omar Aboutalib Process for the indentification of objects
US20080310755A1 (en) * 2007-06-14 2008-12-18 Microsoft Corporation Capturing long-range correlations in patch models
US7978906B2 (en) 2007-06-14 2011-07-12 Microsoft Corporation Capturing long-range correlations in patch models
US8116581B2 (en) 2007-06-28 2012-02-14 Microsoft Corporation Efficient image representation by edges and low-resolution signal
US20090003720A1 (en) * 2007-06-28 2009-01-01 Microsoft Corporation Efficient image representation by edges and low-resolution signal
US20090214079A1 (en) * 2008-02-27 2009-08-27 Honeywell International Inc. Systems and methods for recognizing a target from a moving platform
US8320615B2 (en) 2008-02-27 2012-11-27 Honeywell International Inc. Systems and methods for recognizing a target from a moving platform
US20100114671A1 (en) * 2008-10-31 2010-05-06 International Business Machines Corporation Creating a training tool
US20100114746A1 (en) * 2008-10-31 2010-05-06 International Business Machines Corporation Generating an alert based on absence of a given person in a transaction
US8345101B2 (en) * 2008-10-31 2013-01-01 International Business Machines Corporation Automatically calibrating regions of interest for video surveillance
US8429016B2 (en) 2008-10-31 2013-04-23 International Business Machines Corporation Generating an alert based on absence of a given person in a transaction
US20100110183A1 (en) * 2008-10-31 2010-05-06 International Business Machines Corporation Automatically calibrating regions of interest for video surveillance
US8612286B2 (en) 2008-10-31 2013-12-17 International Business Machines Corporation Creating a training tool
US8861853B2 (en) * 2010-03-19 2014-10-14 Panasonic Intellectual Property Corporation Of America Feature-amount calculation apparatus, feature-amount calculation method, and program
US20120051638A1 (en) * 2010-03-19 2012-03-01 Panasonic Corporation Feature-amount calculation apparatus, feature-amount calculation method, and program
US8897574B2 (en) 2011-11-11 2014-11-25 Pfu Limited Image processing apparatus, line detection method, and computer-readable, non-transitory medium
US20130121595A1 (en) * 2011-11-11 2013-05-16 Hirokazu Kawatani Image processing apparatus, rectangle detection method, and computer-readable, non-transitory medium
US8849042B2 (en) * 2011-11-11 2014-09-30 Pfu Limited Image processing apparatus, rectangle detection method, and computer-readable, non-transitory medium
US9160884B2 (en) 2011-11-11 2015-10-13 Pfu Limited Image processing apparatus, line detection method, and computer-readable, non-transitory medium
US20130148103A1 (en) * 2011-12-13 2013-06-13 Raytheon Company Range-resolved vibration using large time-bandwidth product ladar waveforms
US8767187B2 (en) 2011-12-13 2014-07-01 Raytheon Company Doppler compensation for a coherent LADAR
US8947647B2 (en) * 2011-12-13 2015-02-03 Raytheon Company Range-resolved vibration using large time-bandwidth product LADAR waveforms
US8947644B2 (en) 2012-01-19 2015-02-03 Raytheon Company Using multiple waveforms from a coherent LADAR for target acquisition
US9057605B2 (en) 2012-12-06 2015-06-16 Raytheon Company Bistatic synthetic aperture ladar system
US9349076B1 (en) 2013-12-20 2016-05-24 Amazon Technologies, Inc. Template-based target object detection in an image
CN104536058A (en) * 2015-01-08 2015-04-22 西安费斯达自动化工程有限公司 Image/radar/laser ranging integrated system for monitoring airfield runway foreign matters
CN104536058B (en) * 2015-01-08 2017-05-31 西安费斯达自动化工程有限公司 Image/Laser/Radar range finding airfield runway foreign matter monitoring integral system
CN109255799B (en) * 2018-07-26 2021-07-27 华中科技大学 Target tracking method and system based on spatial adaptive correlation filter
CN109255799A (en) * 2018-07-26 2019-01-22 华中科技大学 A kind of method for tracking target and system based on spatially adaptive correlation filter

Similar Documents

Publication Publication Date Title
US6259803B1 (en) Simplified image correlation method using off-the-shelf signal processors to extract edge information using only spatial data
US7010158B2 (en) Method and apparatus for three-dimensional scene modeling and reconstruction
US7242460B2 (en) Method and apparatus for automatic registration and visualization of occluded targets using ladar data
EP0523152B1 (en) Real time three dimensional sensing system
EP2602761A1 (en) Object detection device, object detection method, and program
US7245768B1 (en) Depth map compression technique
WO1998003021A1 (en) Small vision module for real-time stereo and motion analysis
Weinmann et al. Thermal 3D mapping for object detection in dynamic scenes
US20050157931A1 (en) Method and apparatus for developing synthetic three-dimensional models from imagery
JP2004030461A (en) Method and program for edge matching, and computer readable recording medium with the program recorded thereon, as well as method and program for stereo matching, and computer readable recording medium with the program recorded thereon
Parmehr et al. Automatic registration of optical imagery with 3d lidar data using local combined mutual information
CN112749610A (en) Depth image, reference structured light image generation method and device and electronic equipment
CN114332085B (en) Optical satellite remote sensing image detection method
US20230177811A1 (en) Method and system of augmenting a video footage of a surveillance space with a target three-dimensional (3d) object for training an artificial intelligence (ai) model
McCartney et al. Image registration for sequence of visual images captured by UAV
WO2021124657A1 (en) Camera system
JP3253328B2 (en) Distance video input processing method
de Lima et al. Toward a smart camera for fast high-level structure extraction
Karaca et al. Ground-based panoramic stereo hyperspectral imaging system with multiband stereo matching
Shorter et al. Autonomous registration of LiDAR data to single aerial image
Hsu et al. Automatic registration and visualization of occluded targets using ladar data
Morell-Gimenez et al. A survey of 3D rigid registration methods for RGB-D cameras
Su Vanishing points in road recognition: A review
JP2004030453A (en) Stereo matching method, stereo matching program, and computer readable recording medium with stereo matching program recorded thereon
GB2583774A (en) Stereo image processing

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE UNITED STATES OF AMERICA, AS REPRESENTED BY THE SECRETARY OF THE NAVY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WIRTZ, MICHAEL M.;DITZLER, WILLIAM R.;REEL/FRAME:010106/0762

Effective date: 19990519

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12