WO2001084828A1 - Method and system for fusing images - Google Patents

Method and system for fusing images

Info

Publication number
WO2001084828A1
WO2001084828A1 (PCT/US2001/012260)
Authority
WO
WIPO (PCT)
Prior art keywords
image
fused image
array
sampling
data
Prior art date
Application number
PCT/US2001/012260
Other languages
French (fr)
Inventor
Timothy Ostromek
Original Assignee
Litton Systems, Inc.
Priority date
Filing date
Publication date
Application filed by Litton Systems, Inc. filed Critical Litton Systems, Inc.
Priority to EP01927025A priority Critical patent/EP1287684A4/en
Priority to IL15157401A priority patent/IL151574A0/en
Priority to CA002404654A priority patent/CA2404654A1/en
Priority to AU2001253518A priority patent/AU2001253518A1/en
Publication of WO2001084828A1 publication Critical patent/WO2001084828A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths

Abstract

A system for fusing images comprises sensors (102 and 104) for generating sets of image data. An information processor (108) receives and samples the sets of image data to generate sample data and computes a fused image array from the sample data. A display (110) receives the fused image array and displays the fused image generated from it. In a four-step method for fusing images, step 1 receives the sets of image data generated by the sensors (102 and 104), step 2 samples the sets of image data to produce sample data, step 3 computes a fused image array from the sample data, and step 4 displays a fused image generated from the fused image array on the display (110). In a four-step method for computing a fused image array, step 1 samples the sets of image data generated by the sensors (102 and 104) to produce sample data, step 2 determines image fusion metrics from the sample data, step 3 calculates weighting factors from the image fusion metrics, and step 4 computes a fused image array from the weighting factors, wherein the fused image array is used to generate the fused image.

Description

METHOD AND SYSTEM FOR FUSING IMAGES
TECHNICAL FIELD OF THE INVENTION
This invention relates generally to the field of electro-optical systems and more specifically to a method and system for fusing images.
BACKGROUND OF THE INVENTION
Image fusion involves combining two or more images produced by two or more image sensors into one single image. Producing one image that mitigates the weak aspects of the individual images while retaining the strong ones is a complicated task, often requiring a mainframe computer. Known approaches to image fusion have not been able to produce a small, lightweight system that consumes minimal power.
Known approaches to fusing images require a great deal of computing power. To illustrate, suppose that two image sensors each have the same pixel arrangement. Let $N_h$ be the number of horizontal pixels, and let $N_v$ be the number of vertical pixels, such that the total number of pixels per sensor is $N_h \cdot N_v$. Let the frame rate of the display be $\Omega_d$, expressed in Hz. The time $\tau_d$ allowed for processing each frame is given by:

$$\tau_d = 1 / \Omega_d$$

All processing for each displayed image must be done within this time to keep the system operating in real time. To calculate the processing time per pixel, first compute the total number of pixels of both sensors. Given that each image sensor has the same pixel arrangement, the total number of pixels for both sensors is $2\,N_h N_v$. The processing time $\tau_p$ per pixel is given by:

$$\tau_p = \frac{2\,\tau_d}{2\,N_h N_v} = \frac{1}{\Omega_d N_h N_v}$$

The processing time $\tau_p$ per pixel is the maximum amount of time allotted per pixel to calculate a display pixel from the two corresponding sensor pixels, while allowing for real-time processing. For example, given an average system where $\Omega_d = 30$ Hz, $N_h = 640$, and $N_v = 480$, the processing time $\tau_p$ per pixel is:

$$\tau_p \approx 108.5 \text{ ns}$$
For handheld or portable applications, processor speed $\Omega_p$ is limited to about 150 MHz. A maximum of approximately two instructions per cycle is allowed in current microprocessors and digital signal processors. The time required per instruction, $\tau_i$, is given by:

$$\tau_i = \frac{1}{2\,\Omega_p} = 3.33 \text{ ns}$$
The number of instruction cycles allowed for each pixel in real-time processing is given by:

$$N_i = \frac{\tau_p}{\tau_i} \approx 32 \text{ instruction cycles}$$

Thirty-two instruction cycles per pixel is often not a sufficient number of cycles, especially considering the fact that a simple floating-point divide could easily require 10 to 100 instruction cycles to complete. Practical image fusion systems generally require over 100 instruction cycles per pixel, and sophisticated image fusion algorithms often require over 1,000 instruction cycles per pixel. Consequently, current image fusion systems are confined to mainframe computers.
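To make this budget concrete, the following minimal sketch (not part of the patent) evaluates the arithmetic above with the quoted figures; the function and parameter names are illustrative:

```python
# Minimal sketch of the real-time instruction budget derived above.
# All numeric figures are the ones quoted in the text.

def cycles_per_pixel(frame_rate_hz: float, n_h: int, n_v: int,
                     cpu_hz: float, instructions_per_cycle: int = 2) -> float:
    """Instruction cycles available per display pixel in real time."""
    tau_d = 1.0 / frame_rate_hz                       # budget per frame
    tau_p = tau_d / (n_h * n_v)                       # budget per display pixel
    tau_i = 1.0 / (instructions_per_cycle * cpu_hz)   # time per instruction
    return tau_p / tau_i

# 30 Hz display, 640x480 sensors, 150 MHz processor, 2 instructions/cycle:
print(cycles_per_pixel(30, 640, 480, 150e6))   # ~32.5, i.e. about 32 cycles
```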
While known approaches have not been applied to handheld or portable applications, the challenges in the field of image fusion have continued to increase with demands for small, lightweight systems that consume minimal power. Therefore, a need has arisen for a new method and system for fusing images.
SUMMARY OF THE INVENTION
In accordance with the present invention, a method and system for fusing images are provided that substantially eliminate or reduce disadvantages and problems associated with previously developed systems and methods.

A system for fusing images is disclosed. The system comprises two or more sensors for generating two or more sets of image data. An information processor receives and samples the sets of image data to generate sample data and computes a fused image array from the sample data. A display receives the fused image array and displays a fused image generated from the fused image array.

A four-step method for fusing images is disclosed. Step one calls for receiving sets of image data generated by sensors. Step two provides for sampling the sets of image data to produce sample data. In step three, the method provides for computing a fused image array from the sample data. The last step calls for displaying a fused image generated from the fused image array.

A four-step method for computing a fused image array is disclosed. Step one calls for sampling sets of image data generated from sensors to produce sample data. Step two provides for determining image fusion metrics from the sample data. Step three calls for calculating weighting factors from the image fusion metrics. The last step provides for computing a fused image array from the weighting factors, wherein the fused image array is used to generate the fused image.
A technical advantage of the present invention is that it computes the fused image from sampled sensor data. By sampling the sensor data, the invention reduces the number of instruction cycles required to compute a fused image. Reducing the number of instruction cycles allows for smaller, lightweight image fusion systems that consume minimal power.

BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and for further features and advantages, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
FIGURE 1 is a system block diagram of one embodiment of the present invention;
FIGURE 2 is a flowchart demonstrating one method of fusing images in accordance with the present invention;
FIGURE 3 A illustrates sampling with a fixed array pattern in accordance with the present invention;
FIGURE 3B illustrates sampling with a varied array pattern in accordance with the present invention;
FIGURE 3C illustrates sampling randomly in accordance with the present invention; and
FIGURE 4 illustrates a method of computing weighting factors in accordance with the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
FIGURE 1 is a system block diagram of one embodiment of the present invention. In this embodiment, a sensor A 102 and a sensor B 104 detect one or more physical objects 106 in order to generate image data to send to an information processor 108, which fuses the sets of image data to produce a single fused image to be displayed by a display 110. The sensor A 102 detects the physical objects 106 and generates sensor data, which is sent to an amplifier A 112. The amplifier A 112 amplifies the sensor data and then sends it to an analog-to-digital converter A 114. The analog-to-digital converter A 114 converts the analog sensor data to digital data, and sends the data to an image buffer A 116 to store the data. The sensor B 104 operates in a similar fashion. The sensor B 104 detects the physical objects 106 and sends the data to an amplifier B 118. The amplifier B 118 sends amplified data to an analog-to-digital converter B 120, which sends converted data to an image buffer B 122. A field programmable gate array 124 receives the data generated by the sensor A 102 and the sensor B 104. The information processor 108 receives the data from the field programmable gate array 124. The information processor 108 generates a fused image from the sets of sensor data, and uses an information processor buffer 126 to store data while generating the fused image. The information processor 108 sends the fused image data to a display buffer 128, which stores the data until it is to be displayed on the display 110.
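A schematic software analogue of this two-channel pipeline may help fix the data flow; the gain, bit depth, and frame shape below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Schematic analogue of Figure 1: sensor -> amplifier -> A/D converter
# -> image buffer, in two parallel channels, ahead of fusion.

def digitize(analog_frame: np.ndarray, gain: float = 2.0, bits: int = 8) -> np.ndarray:
    """Amplify the sensor signal, then quantize it as an ADC would."""
    amplified = np.clip(analog_frame * gain, 0.0, 1.0)
    return np.round(amplified * (2**bits - 1)).astype(np.uint16)

sensor_a = np.random.rand(480, 640)   # stand-in for sensor A's analog frame
sensor_b = np.random.rand(480, 640)   # stand-in for sensor B's analog frame

image_buffer_a = digitize(sensor_a)   # image buffer A (116)
image_buffer_b = digitize(sensor_b)   # image buffer B (122)
# Both buffers are then handed, via the FPGA (124), to the processor (108).
```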
FIGURE 2 is a flowchart demonstrating one method of image fusion in accordance with the present invention. The following steps may be performed automatically using an information processor 108. The method begins with step 202, where two or more image sensors generate two or more sets of image data. As above, suppose that there are two image sensors, each with the same pixel arrangement. Let $N_h$ be the number of horizontal pixels, and let $N_v$ be the number of vertical pixels, such that the total number of pixels per sensor is $N_h \cdot N_v$. The sensors may comprise, for example, visible light or infrared light image detectors. Assume that detectable variations in the proportion of the fused image computed from one set of image data and from the other set of image data occur in time $\tau_s$, where $\tau_s > 1/\Omega_d$. Hence, the proportion does not need to be recalculated at each frame. Also, assume that the information required to form a metric that adjusts the system to a given wavelength proportion can be derived from fewer than $N_h \cdot N_v$ pixels.
The method then proceeds to step 204, where the sets of image data are sampled to produce sample data. FIGURES 3A, 3B, and 3C illustrate three methods of sampling image data in accordance with the present invention. FIGURE 3A illustrates sampling with a fixed array pattern. The sampled pixels $(i, j)$ 302, 304, 306, 308, 310, 312, 314, 316, and 318 may be described by:

$$i = p\,\Delta_h, \quad \text{where } p = 1, 2, \ldots, n_h$$

and

$$j = q\,\Delta_v, \quad \text{where } q = 1, 2, \ldots, n_v$$

One possible arrangement is to have $\Delta_h = 2$ for the horizontal difference between one sampled pixel and the next, and $\Delta_v = 2$ for the vertical difference between one sampled pixel and the next. The groups of pixels 320 and 322, each with two sampled pixels (302 and 304, and 308 and 310, respectively), are sampling blocks. FIGURE 3B illustrates sampling with a varied array pattern. FIGURE 3C illustrates random sampling. A sequence of sampling patterns may also be used, repeating after any given number of sampling cycles, or never repeating, as in a random pattern for each continued sampling cycle.
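The sampling schemes of FIGURES 3A through 3C can be sketched as index generators; the spacings, sample count, and frame shape below are illustrative:

```python
import numpy as np

# Sketch of fixed-grid and random sampling (Figures 3A and 3C).

def fixed_grid(n_h: int = 640, n_v: int = 480, dh: int = 2, dv: int = 2):
    """Fixed array pattern: every dh-th column and every dv-th row."""
    rows = np.arange(0, n_v, dv)
    cols = np.arange(0, n_h, dh)
    return rows, cols

def random_pattern(n_h: int = 640, n_v: int = 480, n_samples: int = 1000,
                   seed: int = 0):
    """Random sampling: n_samples pixel coordinates drawn uniformly."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_v, n_samples), rng.integers(0, n_h, n_samples)

frame = np.zeros((480, 640))              # stand-in for one sensor frame
rows, cols = fixed_grid()
sample_data = frame[np.ix_(rows, cols)]   # the sample data for this frame
```

A varied array pattern (FIGURE 3B) would simply change `dh` and `dv`, or the grid offsets, from one sampling cycle to the next.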
Referring again to FIGURE 2, in steps 206 to 210, a fused image array is computed from the sample data. In step 206, image fusion metrics are calculated from the sample data. The image fusion metrics are values assigned to the pixels of the sample data. These values, for example, may give the relative weight of the data from each sensor, such that the data from the sensor that produces the better image is given more weight. Or, these values may be used to provide a control for the production of, for example, a false color image. All the pixels may be assigned the same metric, $\beta$, or each sampled pixel may be assigned its own metric, $\beta_{ij}$, where the subscript $ij$ designates the pixel in the $i$th row and $j$th column.
In step 208, weighting factors $\alpha_{ij}$, where the subscript $ij$ designates the pixel in the $i$th row and $j$th column, are calculated from the image fusion metrics. The weighting factors are values assigned to the pixels of the fused image. The weighting factors may be computed by, for example, linear interpolation of the image fusion metrics.
FIGURE 4 illustrates a method of computing weighting factors in accordance with the present invention. For example, suppose that the sample data was sampled using a fixed array pattern, where every fifth point 402, 404, 406, and 408 is sampled in both the horizontal and vertical directions, that is, $\Delta_h = \Delta_v = \Delta = 5$. A sampling block 410 comprises two sampled points 402 and 404. The weighting factors $\alpha_{1j}$ of the first row may be computed in the following manner. First, an incremental value for the first row in the horizontal direction, $\delta_{h1}$, is calculated using the following formula:

$$\delta_{h1} = \frac{\beta_{16} - \beta_{11}}{\Delta}$$

Then, the weighting factors between $\beta_{11}$ and $\beta_{16}$ in the horizontal direction are calculated using the following formula:

$$\alpha_{1j} = \beta_{11} + \delta_{h1}\,(j - 1)$$

The weighting factors in the vertical direction between $\beta_{11}$ and $\beta_{61}$ are calculated in a similar manner, using the following equations:

$$\delta_{v1} = \frac{\beta_{61} - \beta_{11}}{\Delta}$$

$$\alpha_{i1} = \beta_{11} + \delta_{v1}\,(i - 1)$$
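As a small numerical sketch of this interpolation, the snippet below fills in the first row of weighting factors between two sampled metrics; the metric values are made-up stand-ins:

```python
import numpy as np

# Sketch of the Figure 4 interpolation: weighting factors for the first
# row, linearly interpolated between sampled metrics beta_11 and beta_16
# with grid spacing delta = 5.

delta = 5
beta_11, beta_16 = 0.2, 0.7                  # illustrative sampled metrics

d_h1 = (beta_16 - beta_11) / delta           # incremental value for row 1
alpha_row1 = beta_11 + d_h1 * np.arange(6)   # alpha_1j for j = 1..6

print(alpha_row1)   # starts at beta_11 (0.2) and ends at beta_16 (0.7)
```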
Referring again to FIGURE 2, the method then proceeds to step 210, where a fused image array, which is used to generate a fused image, is computed from the weighting factors. An array of weighting factors $\alpha_{ij}$ generates the following fused image array:

$$V_{ij}^{(d)} = V_{ij}^{(A)}\,\alpha_{ij} + V_{ij}^{(B)}\,(1 - \alpha_{ij})$$

where $i \in \{1, \ldots, N_h\}$, $j \in \{1, \ldots, N_v\}$, the superscript $(d)$ denotes display, $(A)$ denotes sensor A, $(B)$ denotes sensor B, and $V_{ij}$ corresponds to the voltage at pixel $(i, j)$. The fused image array describes the relative weights of the data from each sensor. Weighting factor $\alpha_{ij}$ gives the relative weight of the voltage from sensor A at pixel $(i, j)$; weighting factor $(1 - \alpha_{ij})$ gives the relative weight of the voltage from sensor B at pixel $(i, j)$. This example shows a linear weight; other schemes, however, can be used. The method then proceeds to step 212, where the fused image generated from the fused image array is displayed on a display 110.
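The step-210 blend is a per-pixel convex combination of the two sensor arrays, which is a one-line operation on whole frames; the arrays below are illustrative stand-ins with the 640x480 shape used in the text:

```python
import numpy as np

# Sketch of the step-210 fused image array: a per-pixel convex
# combination of the two sensor frames, weighted by alpha.

v_a = np.random.rand(480, 640)    # sensor A pixel voltages (stand-in)
v_b = np.random.rand(480, 640)    # sensor B pixel voltages (stand-in)
alpha = np.full((480, 640), 0.5)  # weighting factors, here uniform

v_display = v_a * alpha + v_b * (1.0 - alpha)   # fused image array
```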
By sampling the image data, this embodiment allows more instruction cycles for calculating $\beta_{ij}$ for each sampled pixel. To calculate the number of instruction cycles available for each sampled pixel, first calculate the total number of instruction cycles per sampled pixel, and then subtract the number of cycles per pixel needed to sample the pixels and to compute the fused image metrics and the fused image array. For example, assume that data is sampled using fixed array sampling. The total number of instructions for each sampled pixel is given by:

$$\frac{\tau_s}{\tau_i}$$

where $\tau_s$ is the processing time per sampled pixel, which is given by:

$$\tau_s = \frac{1}{\Omega_d \cdot 2\,n_h n_v}$$

where $n_h$ and $n_v$ are the number of sampled pixels in the horizontal direction and in the vertical direction, respectively. Sampling each sampling block, without double counting borders, requires about $(\Delta + 1)[2(\Delta - 1) + 6]$ instruction cycles. Each sampling block contains two sampled pixels, so each sampled pixel loses $\tfrac{1}{2}(\Delta + 1)[2(\Delta - 1) + 6]$ instruction cycles. Computing the fused image array from the weighting factors requires approximately four instruction cycles for each calculation, that is:

$$\frac{4\,N_h N_v}{2\,n_h n_v}$$

instruction cycles per sampled pixel. Therefore, the number of instruction cycles left per sampled pixel for calculating the image fusion metrics $\beta_{ij}$ is:

$$N_i = \frac{\tau_s}{\tau_i} - \frac{1}{2}(\Delta + 1)[2(\Delta - 1) + 6] - \frac{4\,N_h N_v}{2\,n_h n_v}$$

Using the values given above, $\Omega_d = 30$ Hz, $N_h = 640$, $N_v = 480$, $\Delta_h = \Delta_v = 5$, $n_h = 128$, and $n_v = 96$, the number of instruction cycles is computed to be:

$$N_i = 269 \text{ instruction cycles}$$

This is a dramatic improvement compared with the 32 cycles allotted in conventional methods. The extra cycles may be used for more complex calculations of $\beta_{ij}$ or other features. Moreover, if $\beta_{ij}$ is assumed to be the same for all pixels, even more cycles may be available to determine $\beta_{ij}$, allowing for a more sophisticated manipulation.
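For orientation, the sketch below evaluates the budget terms as reconstructed above; the overhead constants are the text's approximate values and should be treated as rough:

```python
# Rough sketch of the sampled-pixel cycle budget, using the formulas
# above with the text's stated values.

omega_d, omega_p = 30.0, 150e6      # display frame rate, processor speed
n_h_full, n_v_full = 640, 480       # full sensor resolution
n_h, n_v, delta = 128, 96, 5        # sampled grid size and spacing

tau_i = 1.0 / (2 * omega_p)                 # time per instruction
tau_s = 1.0 / (omega_d * 2 * n_h * n_v)     # time per sampled pixel
total = tau_s / tau_i                       # ~407 cycles per sampled pixel

sampling_cost = 0.5 * (delta + 1) * (2 * (delta - 1) + 6)   # ~42 cycles
fusion_cost = 4 * n_h_full * n_v_full / (2 * n_h * n_v)     # ~50 cycles

remaining = total - sampling_cost - fusion_cost
print(remaining)   # hundreds of cycles remain, versus ~32 without sampling
```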
Although an embodiment of the invention and its advantages are described in detail, a person skilled in the art could make various alterations, additions, and omissions without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A system for fusing images, the system comprising: a. two or more sensors for generating two or more sets of image data; b. an information processor for receiving and sampling the sets of image data to generate sample data and for computing a fused image array from the sample data; and c. a display for receiving the fused image array and displaying a fused image generated from the fused image array.
2. The system of Claim 1 further comprising a field programmable gate array coupled to the sensors and the information processor.
3. The system of Claim 1 further comprising one or more converters operable to convert analog signals from the sensors to digital data for use by the information processor.
4. The system of Claim 1 wherein the fused image array assigns a value to one or more pixels of the fused image, wherein the value describes the relative weights of the sets of image data.
5. A method for fusing images, the method comprising: a. receiving two or more sets of image data generated by two or more sensors; b. sampling the sets of image data to produce sample data; c. computing a fused image array from the sample data; and d. displaying a fused image generated from the fused image array.
6. The method of Claim 5, wherein the sampling step further comprises sampling with a fixed array pattern.
7. The method of Claim 5, wherein the sampling step further comprises sampling with a varied array pattern.
8. The method of Claim 5, wherein the sampling step further comprises sampling randomly.
9. The method of Claim 5, wherein the computing step further comprises determining one or more image fusion metrics, wherein the image fusion metrics are values assigned to one or more pixels of the sample data.
10. The method of Claim 5, wherein the computing step further comprises calculating one or more weighting factors from the image fusion metrics, wherein the weighting factors are values assigned to one or more pixels of the fused image.
11. The method of Claim 10, wherein the computing step further comprises calculating the weighting factors by interpolation of the image fusion metrics.
12. The method of Claim 10, wherein the computing step further comprises calculating the weighting factors by linear interpolation of the image fusion metrics.
13. The method of Claim 5, further comprising performing the foregoing steps automatically using an information processor.
14. A method for computing a fused image array, the method comprising: a. sampling the sets of image data generated from two or more sensors to produce sample data; b. determining one or more image fusion metrics from the sample data, wherein the image fusion metrics are values assigned to one or more pixels of the sample data; c. calculating one or more weighting factors from the image fusion metrics, wherein the weighting factors are values assigned to one or more pixels of a fused image; and d. computing a fused image array from the weighting factors, wherein the fused image array is used to generate the fused image.
15. The method of Claim 14, wherein the fused image array describes the relative weights of the sets of image data at one or more pixels of the fused image.
16. The method of Claim 14, wherein the sampling step further comprises sampling with a fixed array pattern.
17. The method of Claim 14, wherein the sampling step further comprises sampling with a varied array pattern.
18. The method of Claim 14, wherein the sampling step further comprises sampling randomly.
19. The method of Claim 14, wherein the calculating step further comprises calculating the weighting factors by interpolation of the image fusion metrics.
20. The method of Claim 14, wherein the calculating step further comprises calculating the weighting factors by linear interpolation of the image fusion metrics.
21. The method of Claim 14, further comprising performing the foregoing steps automatically using an information processor.
PCT/US2001/012260 2000-04-27 2001-04-13 Method and system for fusing images WO2001084828A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP01927025A EP1287684A4 (en) 2000-04-27 2001-04-13 Method and system for fusing images
IL15157401A IL151574A0 (en) 2000-04-27 2001-04-13 Method and system for fusing images
CA002404654A CA2404654A1 (en) 2000-04-27 2001-04-13 Method and system for fusing images
AU2001253518A AU2001253518A1 (en) 2000-04-27 2001-04-13 Method and system for fusing images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US56126000A 2000-04-27 2000-04-27
US09/561,260 2000-04-27

Publications (1)

Publication Number Publication Date
WO2001084828A1 (en)

Family

ID=24241253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/012260 WO2001084828A1 (en) 2000-04-27 2001-04-13 Method and system for fusing images

Country Status (5)

Country Link
EP (1) EP1287684A4 (en)
AU (1) AU2001253518A1 (en)
CA (1) CA2404654A1 (en)
IL (1) IL151574A0 (en)
WO (1) WO2001084828A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004063878A2 (en) * 2003-01-03 2004-07-29 Litton Systems, Inc. Method and system for real-time image fusion
WO2005050566A1 (en) * 2003-11-12 2005-06-02 Litton Systems, Inc. Method and system for generating an image
US7091930B2 (en) * 2003-08-02 2006-08-15 Litton Systems, Inc. Centerline mounted sensor fusion device
CN1303432C (en) * 2003-06-05 2007-03-07 上海交通大学 Remote sensing image picture element and characteristic combination optimizing mixing method
CN100410684C (en) * 2006-02-23 2008-08-13 复旦大学 Remote sensing image fusion method based on Bayes linear estimation
US9053558B2 (en) 2013-07-26 2015-06-09 Rui Shen Method and system for fusing multiple images

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1313972C (en) * 2003-07-24 2007-05-02 上海交通大学 Image merging method based on filter group

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5159455A (en) * 1990-03-05 1992-10-27 General Imaging Corporation Multisensor high-resolution camera
US5424773A (en) * 1993-01-29 1995-06-13 Kawai Musical Inst. Mfg. Co., Ltd. Apparatus and method for generating a pseudo camera position image from a plurality of video images from different camera positions using a neural network
US5889553A (en) * 1993-11-17 1999-03-30 Canon Kabushiki Kaisha Image pickup apparatus capable of high resolution imaging
US5930405A (en) * 1994-11-28 1999-07-27 Canon Kabushiki Kaisha Image change sensing and storage apparatus and method
US5978021A (en) * 1992-08-08 1999-11-02 Samsung Electronics Co., Ltd. Apparatus producing a high definition picture and method thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5140416A (en) * 1990-09-18 1992-08-18 Texas Instruments Incorporated System and method for fusing video imagery from multiple sources in real time
US5416851A (en) * 1991-07-30 1995-05-16 Xerox Corporation Image analysis based on location sampling
US5325449A (en) * 1992-05-15 1994-06-28 David Sarnoff Research Center, Inc. Method for fusing images and apparatus therefor
US5550937A (en) * 1992-11-23 1996-08-27 Harris Corporation Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries
DE19502640C1 (en) * 1995-01-20 1996-07-11 Daimler Benz Ag Fusing images of same view from different sensors for night sight
US6007052A (en) * 1997-06-21 1999-12-28 Raytheon Company System and method for local area image processing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5159455A (en) * 1990-03-05 1992-10-27 General Imaging Corporation Multisensor high-resolution camera
US5978021A (en) * 1992-08-08 1999-11-02 Samsung Electronics Co., Ltd. Apparatus producing a high definition picture and method thereof
US5424773A (en) * 1993-01-29 1995-06-13 Kawai Musical Inst. Mfg. Co., Ltd. Apparatus and method for generating a pseudo camera position image from a plurality of video images from different camera positions using a neural network
US5889553A (en) * 1993-11-17 1999-03-30 Canon Kabushiki Kaisha Image pickup apparatus capable of high resolution imaging
US5930405A (en) * 1994-11-28 1999-07-27 Canon Kabushiki Kaisha Image change sensing and storage apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1287684A4 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004063878A2 (en) * 2003-01-03 2004-07-29 Litton Systems, Inc. Method and system for real-time image fusion
WO2004063878A3 (en) * 2003-01-03 2004-09-10 Litton Systems Inc Method and system for real-time image fusion
US7176963B2 (en) 2003-01-03 2007-02-13 Litton Systems, Inc. Method and system for real-time image fusion
CN1303432C (en) * 2003-06-05 2007-03-07 上海交通大学 Remote sensing image picture element and characteristic combination optimizing mixing method
US7091930B2 (en) * 2003-08-02 2006-08-15 Litton Systems, Inc. Centerline mounted sensor fusion device
WO2005050566A1 (en) * 2003-11-12 2005-06-02 Litton Systems, Inc. Method and system for generating an image
US7373023B2 (en) 2003-11-12 2008-05-13 Northrop Grumman Guidance And Electronics Company, Inc. Method and system for generating an image
AU2004292174B2 (en) * 2003-11-12 2010-04-08 Northrop Grumman Guidance And Electronics Company, Inc. Method and system for generating an image
CN100410684C (en) * 2006-02-23 2008-08-13 复旦大学 Remote sensing image fusion method based on Bayes linear estimation
US9053558B2 (en) 2013-07-26 2015-06-09 Rui Shen Method and system for fusing multiple images

Also Published As

Publication number Publication date
IL151574A0 (en) 2003-04-10
EP1287684A1 (en) 2003-03-05
CA2404654A1 (en) 2001-11-08
EP1287684A4 (en) 2006-07-12
AU2001253518A1 (en) 2001-11-12

Similar Documents

Publication Publication Date Title
US6584219B1 (en) 2D/3D image conversion system
EP2677500B1 (en) Event-based image processing apparatus and method
US4133004A (en) Video correlation tracker
EP0918439B1 (en) Device for converting two-dimensional video into three-dimensional video
JP3434979B2 (en) Local area image tracking device
US20020012459A1 (en) Method and apparatus for detecting stereo disparity in sequential parallel processing mode
EP1072147A2 (en) Method and apparatus for measuring similarity using matching pixel count
JPS60229594A (en) Method and device for motion interpolation of motion picture signal
JP2947360B2 (en) Method and apparatus for measuring image motion
WO2001084828A1 (en) Method and system for fusing images
US20200193583A1 (en) Spatially dynamic fusion of images of different qualities
US7940993B2 (en) Learning device, learning method, and learning program
JP3161467B2 (en) Method for temporal interpolation of images and apparatus for implementing this method
JP4563982B2 (en) Motion estimation method, apparatus, program thereof, and recording medium thereof
JPH0695012B2 (en) Device for a matrix-arranged photodiode array
KR100795974B1 (en) Apparatus for realtime-generating a depth-map by processing streaming stereo images
JP2001154646A (en) Infrared picture display device
JP3786300B2 (en) Motion vector detection apparatus and motion vector detection method
JP3267383B2 (en) Image processing method and image processor
JPH0364279A (en) Image blurring detecting device
US6873395B2 (en) Motion picture analyzing system
JP3038935B2 (en) Motion detection device
CN117409043B (en) Sub-pixel level video target tracking method, device, equipment and storage medium
Luo et al. Simultaneous monocular visual odometry and depth reconstruction with scale recover
JP3185362B2 (en) Moving object detection device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 151574

Country of ref document: IL

WWE Wipo information: entry into national phase

Ref document number: 2404654

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2001927025

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001927025

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP