WO1994018801A1 - Color wide dynamic range camera using a charge coupled device with mosaic filter - Google Patents

Info

Publication number
WO1994018801A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
dynamic range
components
wide dynamic
imaging apparatus
Application number
PCT/US1994/001358
Other languages
French (fr)
Inventor
Ran Ginosar
Tamar Genossar
Ofra Zinaty
Noam Sorek
Daniel J. Kligler
Yehoshua Y. Zeevi
Arkadi Neyshtadt
Dov Avni
Original Assignee
I Sight, Inc.
Application filed by I Sight, Inc.
Priority to EP94907434A (published as EP0739571A1)
Publication of WO1994018801A1

Classifications

    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/84: Camera processing pipelines; components thereof for processing colour signals
    • H04N 25/134: Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements
    • H04N 2209/045: Picture signal generators using a single pick-up sensor with a mosaic colour filter
    • H04N 2209/046: Colour interpolation to calculate the missing colour values

Definitions

  • in saturation color suppression factor block 72, Wht = w1 + w5*z5, where w1 and w5 are the color weighting values described in the detailed description below.
  • FIG. 7 discloses the joint operations block 64 (also see Figure 1).
  • Joint operations block 64 combines the chrominance and luminance data from the long and short exposure processing blocks 24, 26, together with data from point processing block 62, to generate a combined Y/dr/db result.
  • Block 64 then converts this result to output in standard RGB or Y/Cr/Cb (luminance, chrominance (red) and chrominance (blue)) color space.
  • a color suppression factor Z is computed and applied to the chrominance outputs in order to reduce color artifacts (by reducing chroma saturation) around edges and areas of luminance signal saturation.
  • Joint operations block 64 includes:
  • the dr, db, Y result block 74 (dr and db being the differences between successive readings in even and odd lines, respectively), which receives dr, db values from the color path outputs of long and short exposure processing blocks 24, 26, respectively; edsupp from the intensity (Y) path output of long exposure processing block 24; and edge data from the intensity (Y) path output of short exposure processing block 26.
  • Block 74 generates combined intensity Y/dr/db results to color conversion block 78 (to be discussed). Block 74 will be discussed in greater detail hereinafter.
  • the color conversion block 78 which receives Y result , dr result , db result from block 74 and Z, the color suppression factor from block 76 and generates R out , G out , and B out and Cr and Cb.
  • Block 78 will be discussed in greater detail hereinafter.
  • the dr, db, Y block 74 is shown in further detail in Figure 8.
  • Block 74 includes an intensity (Y) calculation which is performed by adders 79, 80 and edge limiting block 81.
  • Adder 79 receives ed supp (long) data from long exposure processing block 24, and ed short from short exposure processing block 26. These two inputs are added to give edge result , which is then input to the edge limiting block 81.
  • Edge limiting is implemented as a piecewise linear function with 6 inflection points (A1...A6) and 4 slopes (S1...S4), as shown in the upper right inset of Figure 8. Generally the inflection points and slopes are chosen so as to enhance the smaller edges (i.e., S2 and S3 > 1), while large edges (edge > A5 or edge < A2) are suppressed.
  • A3 and A4 may be set to 0, but it is sometimes desirable to set them to small non-zero values in order to suppress false edges due to noise. The best results appear to be obtained with
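One plausible reading of the Figure 8 inset, sketched below as a piecewise-linear limiter: a dead zone [A3, A4] suppresses noise, slopes S2 and S3 (> 1) enhance small edges, and edges beyond A2/A5 are ramped back to zero, the outer slopes S1 and S4 following from continuity. The numeric values are assumptions, not the patent's.

```python
import numpy as np

# Hypothetical inflection points and slopes; only the ordering
# A1 < A2 < A3 <= 0 <= A4 < A5 < A6 and S2, S3 > 1 come from the text.
A1, A2, A3, A4, A5, A6 = -120.0, -60.0, -2.0, 2.0, 60.0, 120.0
S2, S3 = 1.5, 1.5                       # enhancement slopes for small edges

def edge_limit(edge):
    # Knot values: zero at A1, A3, A4 and A6; enhanced peaks at A2 and A5.
    xs = np.array([A1, A2, A3, A4, A5, A6])
    ys = np.array([0.0, S2 * (A2 - A3), 0.0, 0.0, S3 * (A5 - A4), 0.0])
    return np.interp(edge, xs, ys, left=0.0, right=0.0)

print(edge_limit(np.array([-200.0, -30.0, 0.0, 30.0, 200.0])))
```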
  • Block 80 may be removed from its location in Figure 8 and placed so that the output of block 81 is not added to Yresult until just before being added into blocks 113A-C, that is, as late as possible.
  • Block 74 further includes a dr, db calculation which is performed by the remaining sections of block 74.
  • the dr, db calculation receives low-pass color components dr, db from the color paths of long and short exposure processing blocks 24, 26; w 1 /Y 1 and w 5 /Y 5 from block 62; and Y result as calculated by adder 80.
  • the dr, db calculation outputs dr result and db result .
  • dr result and db result may be generated by selection between the long and short normalized dr and db inputs (and possibly their long/short average values).
  • the color suppression factor block 76 of Figure 7 is shown in more detail in Figure 9.
  • Maximum value block 100 selects the higher of the two absolute values of ed long and ed short as calculated by absolute value blocks 98, 99.
  • the result of the calculation of block 100, ed max is input to edge chroma suppression factor block 102 to calculate Z ed .
  • Th is ordinarily set to zero, to give complete chroma suppression at very strong edges.
  • a non-zero Th is used only in replay of images stored in mosaic format (see generate mosaic block 120 described hereinafter), in which case Zed serves to suppress color anomalies resulting from the reinterpolation of the pixel values.
  • minimum value block 104 selects the minimum of the two color suppression factors, Z ed and Wht, thereby determining the edge criterion or saturation criterion that should be used to provide the required degree of chroma suppression at the given pixel.
  • color conversion block 78 receives Yresult, drresult, and dbresult from block 74 and Z, the color suppression factor, from block 76 and generates outputs in both the RGB and Y/Cr/Cb formulations.
  • block 78 takes the interim dynamic range enhancement results Y/dr/db, and converts them into conventional color components for system output.
  • Block 78 includes horizontal low-pass filter 106 which receives Y result and calculates Y result (1p) for the color matrix block 108.
  • Horizontal low-pass filter 106 is identical to the low-pass color component block 36 in the color path block 28.
  • Color matrix block 108 receives Y result (lp) from horizontal low-pass filter 106 and dr result and db result from block 74 and generates low-pass RGB color component outputs.
  • RGB white balance multipliers 109 A , 109 B , 109 C receive low-pass RGB signals from color matrix block 108 and generate normalized low-pass RGB signals.
  • Multipliers 109 A , 109 B , 109 C multiply each of the RGB low-pass values by a pre-computed white balance correction factor, adjusted by the normalization factor 0.7 required by the color matrix calculation.
  • Although conventional RGB white balancing uses only two multiplicative factors, correcting R and B while G is held constant, this "short cut" does not preserve constant Y achromatic luminance. This loss of normalization may lead to the appearance of artifacts and incorrect luminance in the output. It is necessary, therefore, to use three multiplicative factors, normalized to preserve constant luminance Y.
  • Output signal enhancement block 110 (which includes chroma suppression and RGB output functions) receives corrected low-pass RGB color component signals from color matrix block 108 via multipliers 109 A , 109 B , 109 C ; Y result from block 74; Y result (lp) from block 106; and chroma suppression factor Z from block 76.
  • RGB values output from color matrix block 108 are low-pass values.
  • High-frequency image information is "re-injected" into RGB according to an equation which survives in this copy only as an image; it is given for the R component alone, since the treatment of G and B is identical.
  • K is an arbitrary constant between 0 and 1, chosen according to the degree of high-frequency enhancement required. Values in the range 0.4 ≤ K ≤ 0.8 are typically used.
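Since the equation itself is not reproduced, the sketch below shows one form consistent with the block's stated inputs and purpose: the high-frequency luminance K*(Yresult - Yresult(lp)) is added back to the low-pass RGB, and the chroma (the deviation of each component from luminance) is scaled by Z. The function enhance and its exact formula are assumptions.

```python
import numpy as np

# rgb_lp: (H, W, 3) low-pass RGB; y, y_lp: (H, W) luminance planes;
# Z: (H, W) chroma suppression factor in [0, 1].  K per the text: 0.4-0.8.
def enhance(rgb_lp, y, y_lp, Z, K=0.6):
    hf = K * (y - y_lp)                        # re-injected high frequencies
    rgb = rgb_lp + hf[..., None]               # same treatment for R, G, B
    # scale chroma by Z: as Z -> 0 the output collapses toward luminance
    return y[..., None] + Z[..., None] * (rgb - y[..., None])
```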
  • FIG. 11 discloses generate mosaic block 120 of Figure 1 in more detail.
  • the input of generate mosaic block 120 is R out /G out /B out from color conversion block 78 of joint operations block 64.
  • the output of block 120 is the equivalent α, β, γ, δ values, in the mosaic format of alternating rows: αeq γeq αeq γeq . . . and βeq δeq βeq δeq . . .
  • the final RGB values from the processed image are used to generate equivalent, simulated mosaic values of ⁇ , ⁇ , ⁇ , and ⁇ .
  • only eight bits per pixel of information must be stored, rather than the 24 bits of full output information. These mosaic values can later be replayed to regenerate the stored image.
  • the simulated mosaic values are generated by the following matrix in matrix block 122, based on the color equivalencies given hereinabove.
  • multiplexer 124 selects which one of the four mosaic values to output for each pixel according to the pixel's position in the mosaic pattern of Figure 2: α or γ alternating in even lines, β or δ alternating in odd lines.
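A sketch of blocks 122 and 124 together, computing the four equivalent values from the color equivalencies given hereinabove (α = B + 2G, β = 2B + G + R, γ = 2R + B + G, δ = 2G + R) and keeping one per pixel according to the Figure 2 pattern; the final division by 4 to stay within 8-bit range is an assumption.

```python
import numpy as np

def generate_mosaic(rgb):                       # rgb: (H, W, 3) array
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    alpha = B + 2 * G                           # cyan + green
    beta  = 2 * B + G + R                       # cyan + magenta
    gamma = 2 * R + B + G                       # magenta + yellow
    delta = 2 * G + R                           # green + yellow
    i, j = np.indices(R.shape)
    mosaic = np.where(i % 2 == 0,
                      np.where(j % 2 == 0, alpha, gamma),  # even lines
                      np.where(j % 2 == 0, beta, delta))   # odd lines
    return mosaic / 4.0                         # assumed 8-bit normalization
```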
  • Apparatus 10 has three modes of operation: normal, adaptive sensitivity (AS), and replay.
  • 1. Normal mode emulates the performance of a mosaic color CCD camera without adaptive sensitivity. In this mode only the long exposure portion of the pipeline operates. The processing functions are limited to decoding the mosaic input into conventional color components, Y/Cr/Cb or RGB, while additionally performing filtering operations for anti-aliasing, detail (edge) enhancement and chroma suppression where required.
  • 2. Adaptive sensitivity mode uses all the resources of the processing pipeline to generate wide dynamic range images as described hereinabove.
  • 3. Replay mode is required for displaying images that have been stored in RAM or disk. Apparatus 10 stores these images in a regenerated mosaic format in order to save on storage memory requirements. Replay mode is similar to normal mode, except that most of the enhancement operations are not performed: since the stored data have already been filtered once, it is for the most part not desirable to filter them again.

Abstract

The apparatus (10) is a color wide dynamic range apparatus which includes a filter (12), having recurring color elements, interposed immediately in front of a CCD (14), so that each pixel represents a given color element of the scene. At least two exposure levels are taken of the scene and the pixel outputs are decoded to generate the video luminance and chrominance signals. The images of the at least two exposure levels are combined to form a final image.

Description

COLOR WIDE DYNAMIC RANGE CAMERA
USING A CHARGE COUPLED DEVICE WITH MOSAIC FILTER
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part of U.S. patent application Serial No. 07/795,350, filed November 20, 1991, entitled "Color Wide Dynamic Range Camera", which is, in turn, a continuation-in-part of U.S. patent application Serial No. 07/388,547, filed August 23, 1989, now U.S. Patent No. 5,144,442. Additionally, this application is related to U.S. Patent No. 4,858,014 and currently pending U.S. patent application Serial No. 07/805,512, filed December 11, 1991. The disclosures of all of the above-identified U.S. patents and patent applications are incorporated herein by reference.
BACKGROUND OF THE INVENTION
Field of the Invention
This invention pertains to video imagery and more particularly to apparatuses and techniques for providing enhancement of video color images. In particular, the present invention uses a four-color mosaic filter with a single chip CCD in conjunction with color wide dynamic range algorithms. It is also applicable, however, to other types of mosaic filters known in the art. Description of the Prior Art
Various types of video enhancement apparatuses and techniques have been proposed. Prior implementations of color wide dynamic range cameras, such as those disclosed in the above-identified parent applications hereto, have used a plurality of CCD chips to generate the image data for subsequent processing. The use of multiple CCD chips, however, adds to the complexity and cost of the instrument.
Moreover, current consumer video cameras, i.e., camcorders, almost universally use a single CCD chip. Therefore, a single CCD implementation is required to use dynamic range enhancement algorithms in a camcorder. Single CCD chip implementations are similarly preferred for endoscopic applications.
OBJECTS AND SUMMARY OF THE INVENTION
It is therefore an object of this invention to provide a color wide dynamic range camera implemented with a single CCD chip.
It is also an object of this invention to provide a color wide dynamic range camera which is adapted for use with a camcorder.
It is a further object of this invention to provide a color wide dynamic range camera which is adapted for use with a conventional endoscope.
These and other objects of the invention will be more apparent from the discussion below.
SUMMARY OF THE INVENTION
There is thus provided in accordance with the preferred embodiment of the present invention, video imaging apparatus including means for providing a plurality of video color images of a scene at different exposure levels using a single CCD chip, each color image being separated into several (e.g., four in the preferred embodiment) different components prior to sensing by the CCD chip by way of a multiple color mosaic filter in front of the CCD chip. The pixel outputs are then decoded — subjected to specific mathematical operations by the processing electronics following the CCD output — to generate the video luminance and chrominance signals.
The present invention integrates the digital processing of the mosaic color CCD data with ADAPTIVE SENSITIVITY™ dynamic range enhancement. This integration provides for a substantial savings in total system processing hardware chip count and cost. It also permits better control of the color and detail production of the camera's video output. The mosaic storage format also provides for a unique video image compression technique.
BRIEF DESCRIPTION OF THE DRAWINGS
Further objects and advantages of the invention will become apparent from the following description and claims, and from the accompanying drawings, wherein:
Figure 1 is a general block diagram of the present invention.
Figure 2 is a representative illustration of the data image elements, with the size of the data image elements exaggerated.
Figure 3 is a general block diagram of the long and short processing of the present invention.
Figure 4 is a block diagram of the color path of the present invention.
Figure 5 is a block diagram of the intensity path of the present invention.
Figure 6 is a block diagram of the look-up table processing of the present invention.
Figure 7 is a block diagram of the joint operations of the present invention.
Figure 8 is a block diagram of the differential color, intensity result block of the present invention.
Figure 9 is a block diagram of the color suppression factor block of the present invention.
Figure 10 is a block diagram of the color conversion block of the present invention.
Figure 11 is a block diagram of the mosaic generation block of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring now to the drawings in detail wherein like numerals indicate like elements throughout the several views, one sees that Figure 1 is a block diagram of the apparatus 10 of the present invention.
Apparatus 10 includes a mosaic filter 12 which is bonded to the front of CCD 14 (preferably a single chip), generally as part of the CCD integrated circuit manufacturing process.
The alternating mosaic filter elements are cyan, magenta, yellow and green (wherein C (cyan) = G (green) + B (blue); M (magenta) = R (red) + B; and Ye (yellow) = R + G). When the CCD 14 charge output is read out, the photoelectric charges from vertically adjacent sensor elements of CCD 14 are combined in the analog shift register (not shown). The on-chip addition gives rise to α, β, γ and δ elements as described below and as described in the Sony CCD 1992 Data Book (Sony part number ICX038AK), as well as earlier editions.
As shown in Figure 2, the mosaic complementary additive color image comprises alternating first and second rows of image data elements 18A-18D, wherein the first rows include alternating α and γ data elements (18A and 18B, respectively), and the second rows include alternating β and δ data elements (18C and 18D, respectively). The α image data elements 18A are an equal mixture of cyan plus green (i.e., C + G = B + 2G). The γ image data elements 18B are an equal mixture of magenta plus yellow (i.e., M + Ye = 2R + B + G). Similarly, the β image data elements 18C are an equal mixture of cyan plus magenta (i.e., C + M = 2B + G + R) and the δ image data elements 18D are an equal mixture of green plus yellow (i.e., G + Ye = 2G + R).
Those skilled in the art will recognize that
Y (i.e., intensity) = α + γ = β + δ = 2R + 3G + 2B
from which the definition of intensity (Y) in the red, green, blue (RGB) system may be derived (to within a constant factor of 2):
Y = R + 1.5G + B
Of course, those skilled in the art will realize that other color combinations are equally applicable. Each mosaic element of filter 12 covers the sum of two adjacent pixel sensors of CCD 14 so that each pixel output of CCD 14 is representative of one of the above color combinations given for the various image data elements 18. Four different monochromatic images, each representative of one color combination chosen from the colors of image data elements 18A - 18D of a given scene, are therefore generated by CCD 14.
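As a quick check of this color algebra, the two row sums agree for any RGB triple and equal twice the intensity defined above:

```python
# alpha + gamma = beta + delta = 2R + 3G + 2B = 2 * (R + 1.5G + B)
R, G, B = 0.3, 0.5, 0.2                  # arbitrary test values
alpha = B + 2 * G                        # cyan + green
beta  = 2 * B + G + R                    # cyan + magenta
gamma = 2 * R + B + G                    # magenta + yellow
delta = 2 * G + R                        # green + yellow
assert abs((alpha + gamma) - (beta + delta)) < 1e-12
assert abs((alpha + gamma) - 2 * (R + 1.5 * G + B)) < 1e-12
```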
As can be further seen from Figure 1, apparatus 10 includes four major functions as summarized below:
1. Long/short exposure processing:
The first stages of the algorithm are performed on the two exposures (long/short) separately. The processing of each exposure is divided into two paths:
a. Color path processing - evaluates color component for each pixel.
b. Intensity (Y) path processing - handles intensity information for each pixel. This includes point ("DC") intensity and edge information.
2. Point processing:
Each of the long/short exposure length processing functions (typically implemented on separate chips) outputs its point intensity information, obtained from the Y path processing, to four look-up tables (LUTs). These tables determine the point intensity result of the two exposures, the normalized color weighting or color selection function and the saturation color suppression factor. This information serves the joint operation processing stage. The four LUTs are programmable, thus enabling operation with different functions when necessary. In an alternative embodiment, these LUTs may be replaced by a programmable, piecewise linear (PWL) or other digital function generator.
3. Joint operations processing:
Joint operations processing joins results produced by the long and short processing blocks, and results obtained from the functions implemented in the table processing LUTs, and evaluates the final output of the algorithm. The processing is divided into:
a. Color components and Y result calculation — evaluates the final result of the color components and the intensity of each pixel.
b. Color suppression factor calculation - evaluates the color suppression factor for each pixel, based on both edges and saturation information.
c. Color conversion processing - converts mosaic differential color space to RGB color space and produces RGB and Y/Cr/Cb outputs for each pixel.
4. Generate mosaic processing.
Generate Mosaic processing converts RGB color space back to mosaic color space for each pixel. The mosaic information generated enables economical hardware storage of processed images. This information can be retrieved and replayed through the algorithm — in Replay Mode— to produce RGB or Y/Cr/Cb output of the stored result.
Referring now to Figure 1, similar to U.S. Patent No. 5,144,442 and parent U.S. patent application Serial No. 07/795,350 (the disclosures of which, again, along with U.S. patent application Serial No. 07/805,512 and U.S. Patent No. 4,858,014 are incorporated herein by reference), apparatus 10 includes long/short processing as implemented by mosaic long exposure field block 20 and mosaic short exposure field block 22 which obtain, respectively, a long and a short exposure from CCD 14 in order to allow subsequent processing by long exposure processing block 24 and short exposure processing block 26. The terms "long" and "short" exposures are used here generally to denote two image inputs to apparatus 10. In general, "long" is used to mean an input with a higher exposure level, and "short", a lower exposure level. The higher exposure may be generated in several ways, including longer integration time, typically obtained by controlling the "electronic shutter" of the CCD chip; higher gain in the analog amplifiers preceding digitization; or a larger mechanical iris opening or other external gating means.
These two image inputs are usually generated by a single CCD chip, but may also be generated simultaneously by two separate, boresighted CCD chips, as disclosed in the aforementioned earlier applications. For the more common case in which the two inputs are generated by a single CCD chip, they may be generated either sequentially (as in the case of the first method above— integration time control) or concurrently (by using two input channels with different gain levels). When a sequential method is used, field memories are required at the input to apparatus 10 (in blocks 20 and 22) to synchronize the data coming from the two sequential fields or frames. These memories are not needed in concurrent modes, except for purposes of "freezing" the image for electronic, digital storage. Switching logic incorporated in blocks 20 and 22 controls the data flow into and out of these field memories, depending on which mode (sequential or concurrent) is used. Of course, this implementation could be expanded to more than two exposure levels. Blocks 24 and 26 may typically be provided on separate processing chips or incorporated together in a single chip. The processing for each exposure is divided into two paths:
1. Color path processing— handles color information for each pixel (see color path block 28 in Figure 3 and, in more detail, in Figure 4); and
2. Intensity (Y) path processing— handles intensity information for each pixel (see Y path block 30 in Figure 3 and, in more detail, in Figure 5).
Additionally, as shown in more detail in Figure 3, long/short exposure processing blocks 24, 26 include mosaic white balance block 32.
Mosaic white balance block 32 receives the following field of information from long/short exposure field blocks 20, 22:
α γ α γ α γ . . . . ..
β δ β δ β δ . . . . ..
α γ α γ α γ . . . . ..
β δ β δ β δ . . . . ..
α γ α γ α γ . . . . ..
β δ β δ β δ . . . . ..
. . .
. . .
. . .
That is, the information from CCD 14 with the mosaic data order intact is received.
After processing, the mosaic white balance block 32 outputs color-corrected data values:
αwb γwb αwb γwb αwb γwb . . . .
βwb δwb βwb δwb βwb δwb . . . .
αwb γwb αwb γwb αwb γwb . . . .
βwb δwb βwb δwb βwb δwb . . . .
αwb γwb αwb γwb αwb γwb . . . .
βwb δwb βwb δwb βwb δwb . . . .
. . .
. . .
. . .
Mosaic white balance block 32 contains mosaic color balance functions. These functions may typically be implemented as eight mosaic white balance LUTs (look-up tables). That is, for each exposure there is a set of four LUTs, one for each mosaic data type: α, β , γ, and δ . Independent calculation of white balance correction factors is performed for each exposure. This enables white balancing scenes where the observable parts of the two exposures are at different color temperatures. The LUTs may contain multiplicative correction factors which are evaluated as follows:
From the definitions of α, β, γ, δ and Y it follows that for a white image (where by definition R = G = B), the following relations should hold:
α = δ = (3/7)*Y and β = γ = (4/7)*Y
Based on these relations, correction factors can be calculated by enforcing these relations on the average of a white image:
Cα*ᾱ = (3/7)*Ȳ, Cβ*β̄ = (4/7)*Ȳ, Cγ*γ̄ = (4/7)*Ȳ, Cδ*δ̄ = (3/7)*Ȳ
where Ȳ denotes a selective average over Y in the given white image and ᾱ, β̄, γ̄ and δ̄ are the respective average values of α, β, γ and δ. Saturated or cutoff pixels are excluded from this average. Since by definition Ȳ = ᾱ + γ̄ = β̄ + δ̄, the equations for the correction factors are:
Cα = 3Ȳ/(7ᾱ), Cβ = 4Ȳ/(7β̄), Cγ = 4Ȳ/(7γ̄), Cδ = 3Ȳ/(7δ̄)
The LUT values are calculated by simple multiplication of each mosaic data type by its respective correction factor:
αwb = α * Cα
βwb = β * Cβ
γwb = γ * Cγ
δwb = δ * Cδ
In an alternative embodiment, these LUTs are replaced by digital multipliers. Furthermore, the LUTs may also be loaded with correction functions other than simple linear multiplicative factors. Alternatively, the mosaic balance correction factors can be computed based on four average signals, namely, α, β, γ and δ , instead of merely two of them as above. This alternative yields improved uniformity of color balance under difficult conditions. Alternatively, the white balance function may be done on the RGB color components in the color conversion block 78 (described below).
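As an illustration, a minimal sketch of this calibration, assuming the reconstructed relations above and the Figure 2 layout (α/γ on even lines, β/δ on odd lines); the cutoff and saturation limits lo and hi are illustrative, not the patent's values:

```python
import numpy as np

def wb_factors(white, lo=10, hi=245):            # white: (H, W) mosaic frame
    valid = (white > lo) & (white < hi)          # exclude cutoff/saturated
    def sel(i0, j0):                             # selective mean of one type
        m = np.zeros_like(valid)
        m[i0::2, j0::2] = True
        return white[m & valid].mean()
    a, g = sel(0, 0), sel(0, 1)                  # even lines: alpha, gamma
    b, d = sel(1, 0), sel(1, 1)                  # odd lines:  beta, delta
    Ybar = a + g                                 # since Y = alpha + gamma
    return 3*Ybar/(7*a), 4*Ybar/(7*b), 4*Ybar/(7*g), 3*Ybar/(7*d)
```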
Referring now to Figure 4, color path block 28 is shown in more detail. As previously stated, the input to color path block 28 is the image data αwb, βwb, γwb, δwb after processing by mosaic white balance block 32.
The initial processing of color path block 28 is performed by color difference evaluation block 34 which receives data αwb, βwb, γwb, δwb from mosaic white balance block 32 and calculates color difference components dr, db for each pixel in the array:
dr dr dr . . . . = (γ-α) (γ-α) (γ-α) . . . .
db db db . . . . = (δ-β) (δ-β) (δ-β) . . . .
dr dr dr . . . . = (γ-α) (γ-α) (γ-α) . . . .
db db db . . . . = (δ-β) (δ-β) (δ-β) . . . .
wherein:
dr ≡ γ - α, the differences between successive readings in even lines.
db ≡ δ - β, the differences between successive readings in odd lines.
The correct evaluation of dr and db requires horizontal interpolation as described in the following equations:
For each pixel (i,j) in even lines i:
dr(jodd) = γwb(jodd) - [αwb(jodd-1) + αwb(jodd+1)] / 2
dr(jeven) = [γwb(jeven-1) + γwb(jeven+1)] / 2 - αwb(jeven)
where j is the pixel index along the line (here and henceforth).
For each pixel (i,j) in odd lines i:
db(jodd) = δwb(jodd) - [βwb(jodd-1) + βwb(jodd+1)] / 2
db(jeven) = [δwb(jeven-1) + δwb(jeven+1)] / 2 - βwb(jeven)
Color difference components dr, db are thereafter received by low-pass color component block 36 which calculates a low-pass color component drlp or dblp for each pixel:
drlp drlp drlp . . . . = (γ-α)lp (γ-α)lp (γ-α)lp . . . .
dblp dblp dblp . . . . = (δ-β)lp (δ-β)lp (δ-β)lp . . . .
drlp drlp drlp . . . . = (γ-α)lp (γ-α)lp (γ-α)lp . . . .
dblp dblp dblp . . . . = (δ-β)lp (δ-β)lp (δ-β)lp . . . .
Block 36 performs horizontal low-pass filtering on dr and db calculated in block 34. This reduces color artifacts caused by interpolation. In a preferred embodiment, the low-pass filter width is five pixels and its coefficients are 1/8, 1/4, 1/4, 1/4, 1/8. The equations follow:
For pixels (i,j) in even lines i:
drlp(j) = [dr(j-2) + 2*dr(j-1) + 2*dr(j) + 2*dr(j+1) + dr(j+2)] / 8
For pixels (i,j) in odd lines i:
dblp(j) = [db(j-2) + 2*db(j-1) + 2*db(j) + 2*db(j+1) + db(j+2)] / 8
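In array form this is a convolution with the kernel [1, 2, 2, 2, 1] / 8 along each line; a minimal sketch (the reflective boundary handling is an assumption, since the text does not specify it):

```python
import numpy as np

def hlp(x):                                      # x: (H, W) dr or db plane
    k = np.array([1, 2, 2, 2, 1]) / 8.0
    xp = np.pad(x, ((0, 0), (2, 2)), mode="reflect")
    return np.stack([np.convolve(row, k, mode="valid") for row in xp])
```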
Delay buffer 38 receives the output from low-pass color component block 36 and directs dblp(ieven-1), drlp(iodd-1), dblp(ieven+1) and drlp(iodd+1) to vertical interpolation block 40, and drlp(ieven) and dblp(iodd) to multiplexer 42.
Vertical interpolation block 40 receives the low-pass color components as described above and generates interpolated low-pass color components drlp in the odd numbered lines and dblp in the even numbered lines. The equations follow:
For even lines i:
dblp(i,j) = [dblp(i-1,j) + dblp(i+1,j)] / 2
For odd lines i:
drlp(i,j) = [drlp(i-1,j) + drlp(i+1,j)] / 2
The interpolated low-pass color components drlp, dblp are multiplexed with the original low-pass components drlp, dblp to give the color path output values dr and db for each pixel. This function is performed by multiplexer 42, which separates the output received from delay buffer block 38 and vertical interpolator block 40 into a first path including dblp(ieven) and dblp(iodd) and a second path including drlp(ieven) and drlp(iodd).
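A compact sketch of this interpolate-and-multiplex step: each plane is known on alternate lines only, and the missing lines are filled by averaging the lines above and below (boundary lines are copied from the nearest valid line, an assumption):

```python
import numpy as np

def fill_missing_lines(plane, have_even):
    # plane holds dr_lp (valid on even lines) or db_lp (valid on odd lines)
    out = plane.copy()
    H = plane.shape[0]
    start = 1 if have_even else 0                # lines to synthesize
    for i in range(start, H, 2):
        above = plane[i - 1] if i - 1 >= 0 else plane[i + 1]
        below = plane[i + 1] if i + 1 < H else plane[i - 1]
        out[i] = (above + below) / 2.0
    return out

# dr = fill_missing_lines(dr_lp, have_even=True)   # dr known on even lines
# db = fill_missing_lines(db_lp, have_even=False)  # db known on odd lines
```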
Referring now to Figure 5, which discloses in more detail the intensity (Y) processing block 30 shown in Figure 3, one sees that the input to intensity (Y) processing block 30 from mosaic white balance block 32 (Figure 3) is received by intensity evaluation block 44 which outputs computed intensity Y for each pixel.
Since only one of the four data types (α, β, γ, δ) is present at any given pixel, the intensity evaluation block 44 calculation is performed as follows (based on the prior definition of Y):
For pixels (i,j) in even lines i:
Y(jeven) = αwb(jeven) + [γwb(jeven-1) + γwb(jeven+1)] / 2
Y(jodd) = γwb(jodd) + [αwb(jodd-1) + αwb(jodd+1)] / 2
For pixels (i,j) in odd lines i:
Y(jeven) = βwb(jeven) + [δwb(jeven-1) + δwb(jeven+1)] / 2
Y(jodd) = δwb(jodd) + [βwb(jodd-1) + βwb(jodd+1)] / 2
The output from intensity evaluation block 44 is received by delay buffer 46, generate output intensity block 48 and limit block 50.
Delay buffer 46 is a delay line of two horizontal lines, required for the 3x3 and 1x3 matrix transformations in Y path block 30. Together with the color path delay buffer 38 and with Y path delay buffer 54, it may be implemented in a preferred embodiment in mosaic data space, operating on the input α, β, y, δ data before the intensity (Y) evaluation block 44 and color difference evaluation block 34. It is shown here schematically for clarity.
Vertical low-pass filter 52 receives intensity (Y) signals from the intensity evaluation block 44 as delayed by delay buffer 46. Block 52 generates the vertical low-pass intensity Yvlp defined as:
Yvlp(i,j) = [Y(i-1,j) + 2*Y(i,j) + Y(i+1,j)] / 4
The unfiltered intensity (Y) input will sometimes exhibit horizontal stripes, one pixel high in each field, in areas of transition to saturation. These stripes stem from the different color spectra of the α, β, γ, and δ pixels, as a result of which the α+γ value of Y(i(even),j) may, for instance, reach saturation at a lower level of optical intensity than the β+δ value of the vertically adjacent Y(i+1(odd),j). Yvlp averages these values to obtain a function that is smooth over the transition area.
Generate output intensity block 48 receives intensity (Y) information from intensity evaluation block 44 and vertical low-pass intensity (Yvlp) information from vertical low-pass filter 52. The output of block 48 is output intensity (Yout) to point processing LUT block 62 (see Figure 1).
Block 48 replaces the original luminance Y, computed by the intensity evaluation block 44, with Yvlp when Yvlp approaches saturation, in order to prevent the appearance of horizontal stripes as explained above. Block 48 implements the function:
Yout = Y if Yvlp < Ythreshold
Yout = Yvlp if Yvlp ≥ Ythreshold
The value of Ythreshold is typically equal to approximately 220 on an 8-bit scale of 0-255. As values of Y approach saturation, image detail is lost in any event, so that substituting Yvlp in the high range does not adversely affect the perceived resolution. Yvlp is used as the selecting input in order to ensure a smooth transition.
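A minimal sketch of blocks 52 and 48 together: Yvlp is the [1, 2, 1] / 4 vertical filter, and the output switches to it near saturation (the edge-replicated padding is an assumption):

```python
import numpy as np

Y_THRESHOLD = 220                                # ~220 on an 8-bit scale

def output_intensity(Y):                         # Y: (H, W) intensity plane
    Yp = np.pad(Y, ((1, 1), (0, 0)), mode="edge")
    Yvlp = (Yp[:-2] + 2 * Yp[1:-1] + Yp[2:]) / 4.0
    return np.where(Yvlp < Y_THRESHOLD, Y, Yvlp)
```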
Limit block 50 receives intensity (Y) signals from intensity evaluation block 44 and generates limited luminance Ylimit. Limit block 50 cuts off the upper range of intensity (Y) values that are to be input to edge detection block 56, in order to prevent detection of false edges or horizontal stripes that can arise in areas of transition to saturation. Limit block 50 implements the function: Ylimit = min{Y, Ylim}. The value of Ylim is typically equal to approximately 220.
The output of limit block 50 (i.e., Ylimit) is delayed by delay buffer 54 and received by edge detection block 56 which outputs edge information for each pixel.
Edge detector block 56 convolves the Ylimit value and its 8 immediate neighbors with a high-pass or edge detecting kernel.
In one embodiment, the 3x3 Laplacian operator may be used: [3x3 Laplacian kernel, reproduced in the original as an image]
Alternatively, to accommodate the geometric characteristics of the CCD raster and to give greater emphasis to the vertical edges, the following kernel may be used: [3x3 vertical-emphasis kernel, reproduced in the original as an image]
In an alternative embodiment, the edge detector block 56 could be implemented as separate horizontal and vertical convolution operations (such as a 1 x 3 or 3 x 1 matrix), with additional logic to avoid overemphasis of diagonal edges. This alternative embodiment is less hardware intensive and gives improved picture quality in some circumstances.
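For illustration, edge detection over Ylimit with a 3x3 kernel; since the patent's kernels are not reproduced here, the common 8-neighbor Laplacian stands in for them:

```python
import numpy as np

KERNEL = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])                # assumed Laplacian stand-in

def detect_edges(y_limit):                       # y_limit: (H, W) plane
    yp = np.pad(y_limit, 1, mode="edge")
    H, W = y_limit.shape
    out = np.zeros((H, W))
    for di in range(3):                          # 3x3 neighborhood sum
        for dj in range(3):
            out += KERNEL[di, dj] * yp[di:di + H, dj:dj + W]
    return out
```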
Edge suppress block 58 receives the vertical low-pass intensity (Yvlp) signals from vertical low-pass filter 52 and outputs edge suppression function fedge to edge multiplier 60.
The edge suppression function varies between 0 and 1 in the long exposure processing block 24 only. In the short exposure processing block 26, the function is set to 1, i.e., no edge suppression at this point. The function is typically implemented in block 24 in a piecewise linear fashion as follows:
fedge = 1 if Yvlp < LOWSAT
fedge = (DEEPSAT - Yvlp) / (DEEPSAT - LOWSAT) if LOWSAT ≤ Yvlp < DEEPSAT
fedge = 0 if DEEPSAT ≤ Yvlp
Typically LOWSAT is set to approximately 190 and DEEPSAT to approximately 220.
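The suppression function then reduces to a clipped ramp, sketched below on the assumption that the linear segment runs from 1 at LOWSAT down to 0 at DEEPSAT:

```python
import numpy as np

LOWSAT, DEEPSAT = 190, 220

def f_edge(Yvlp):
    t = (DEEPSAT - Yvlp) / (DEEPSAT - LOWSAT)    # 1 below LOWSAT, 0 above DEEPSAT
    return np.clip(t, 0.0, 1.0)
```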
Edge multiplier 60 receives input from blocks 56, 58 and generates suppressed edge edsupp to intensity (Y) result calculation.
Edge multiplier 60 multiplies the edge output of the edge detector block 56 by the edge suppression function fedge from block 58 to generate an output value edsupp to joint operations block 64 (see Figure 1). The purpose of this multiplication is to suppress distorted large edges that may appear in the long exposure at intensity (Y) values near saturation, at the same time as they appear in the short exposure at lower values of intensity (Y). The double appearance of such edges was found empirically to cause the resulting displayed edges to be overemphasized and sometimes smeared on account of blooming in the long exposure. The long exposure edge is suppressed so that only the short exposure edge will pass through to the output image. The edge suppress function may also be used to reduce the amplitude of edges from the long exposure which may be otherwise exaggerated due to the higher gain of the long exposure relative to the short exposure.
Additionally, as shown in phantom in Figure 5, an optional multiplier or LUT (block 57) may be added to multiply the output of block 56 times the ratio of exposure times (duration of long exposure/duration of short exposure) or the corresponding gain ratio, or some function of the exposure and/or gain ratio. This reflects the ratio of scales of these two values.
In the above manner, Y path block 30 outputs processed luminance Yout, edge, and edsupp to point processing block 62 and joint operations block 64.
Referring now to Figure 6, one sees that point processing block 62 includes four point processing functions, all of which receive output intensity (Yout) values from the long and short exposure processing blocks 24, 26 (see Figure 1). These functions may typically be implemented as LUTs in RAM or ROM memory. Point processing block 62 generates arbitrary function values for input to the joint operations block 64 (Figure 1). The four tables of block 62 are:
1. The intensity (DC result) block 66 which generates a LUT value of intensity (Ylut) for the joint operations block 64.
Block 66 controls the amount of point ("DC") luminance that is summed with the edge information in generating the output luminance, Yresult. In its most general formulation, Ylut = f(Yout(long), Yout(short)), where f is an arbitrary function. It has been found that a quasilogarithmic or fractional power dependence of Ylut on the inputs gives the best output image appearance, and the general function above can generally be reduced to a more compact LUT or piecewise linear implementation.
One simple possible computation of Ylut is as follows:
a) Yshort is multiplied by the exposure ratio, so that it is on the same scale as Ylong. That is, if a certain pixel x is acquired within the active (linear) sensitivity region of both the short and long exposures, then Ylong(x) = R*Yshort(x), where R is the exposure ratio, R=long exposure time/short exposure time (or any other ratio representing the two sensitivities).
b) Subsequently, Ylong and R*Yshort are linearly combined, so that the sum of their relative weights is always 1. That is, Ywdr=a*Ylong + b*R *Yshort, 1≥a≥0, 1≥b≥0, and a+b=1 (the wdr index stands for 'wide dynamic range'). The common practice is to set a=1 in the region where the short exposure is cut-off (too dark), b=1 in the region where the long exposure is saturated (too bright), and a>0, b>0 in the region where both exposures carry meaningful information. However, this does not cover all cases, e.g. when neither exposure carries any information (long is saturated and short is cut-off, or both saturated, or both cut-off).
c) Finally, the dynamic range of Ywdr is reduced (yielding Ylut) by either a logarithmic function, or by multiplying it by a small fraction, or by using any empirically found mapping which resembles the log function or a similar contraction, e.g., the square root.
Other possible values for Ylut comprise empirical modifications of the function described above.
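Steps a) through c) above may be summarized in the following sketch (a simplified software model rather than the LUT implementation; the particular blending weight used where both exposures carry information, and the square-root contraction, are illustrative choices consistent with the description):

    import math

    def y_lut(y_long, y_short, R, sat=220, cutoff=10):
        # a) bring the short exposure onto the scale of the long one
        y_short_scaled = R * y_short
        # b) linear combination with complementary weights a + b = 1
        if y_long >= sat:            # long exposure saturated
            a = 0.0
        elif y_short <= cutoff:      # short exposure cut off
            a = 1.0
        else:                        # both meaningful: blend (assumed ramp)
            a = (sat - y_long) / sat
        y_wdr = a * y_long + (1.0 - a) * y_short_scaled
        # c) contract the wide dynamic range (here: square root)
        return math.sqrt(y_wdr)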
2. & 3. Color weight normalize blocks 68, 70 for long and short exposures, respectively, which generate normalizing color weights w1/Y1 and w5/Y5. Blocks 68 and 70 control the proportions of mixing the color values, dr and db, from the long and short exposures, respectively, that will be used to generate the output color values, drresult and dbresult. Generally, w1 and w5 are chosen so as to give predominant weight at each pixel to the color values taken from the exposure in which the intensity (Y) luminance values are in the linear portion of the range, and to give a smooth transition over luminance gradient regions of the image. For the most part, w1 and w5 are determined on the basis of Yout(long) alone, except for cases where the long exposure is near saturation while the short is near cutoff, so that neither gives a linear reading.
The weighting values are complementary, i.e., w1 = 1 - w5, and w1, w5 ≥ 0. The outputs of blocks 68 and 70 are normalized by division by the corresponding values of Yout for the long and short exposures. Preferably, a floating point representation for the output values of blocks 68, 70 is used so as to maintain sufficient accuracy to prevent noticeable quantization in the output image.
Alternatively, instead of weighted addition of the normalized color from the two exposures, simple selection of the normalized color from one exposure or the other, with an ordered dither (alternation) of the color selection in areas of transition, may be used. In this case, when w1 = 1, the color is selected from the long exposure; when w5 = 1, it is selected from the short exposure; and when neither w factor is 1, color values are taken alternately from long and short, according to a pseudo-random probability distribution in which the long and short color value probabilities are equal to the w1 and w5 values. A normalized color value that is an average of the long and short values may also be mixed into the dither, in order to give a smoother color transition.
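A minimal model of this dithered selection might read as follows (the random source stands in for whatever pseudo-random generator the hardware provides, and only the dr component is shown; db is treated identically):

    import random

    def dithered_dr(dr_long_norm, dr_short_norm, w1):
        # Select the normalized color value from the long exposure with
        # probability w1 and from the short exposure with probability
        # w5 = 1 - w1.
        if w1 >= 1.0:
            return dr_long_norm
        if w1 <= 0.0:
            return dr_short_norm
        return dr_long_norm if random.random() < w1 else dr_short_norm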
4. Saturation color suppression factor block 72 generates the color suppression factor Wht that reduces chroma saturation (adds white to the image) in areas of luminance saturation of the input image. An additional edge color suppression factor, Zed, is computed in the joint operations block (as will be described hereinafter). The minimum of Wht and Zed, both of which vary from 1 to 0, multiplies the chroma components at the output stage of color conversion. Thus, as Wht approaches zero, so does the color saturation of the output image. The purpose of the saturation color suppression function is to reduce the appearance of color artifacts that arise due to CCD saturation. The linear relationships between the α, β, γ, and δ CCD outputs and the true RGB colors break down as the CCD 14 approaches saturation. As non-linear deviations cannot be readily corrected, suspected distorted colors are "whitewashed". Similar techniques are used in the analog domain in conventional CCD cameras.
As shown in Figure 6:
Wht = w1 + w5*z5

w1 and w5 are identical to the above color weighting values. The variable z5 is a function of Yout(short), varying between 0 and 1, as shown schematically in the lower right corner of Figure 6. It tends to zero in areas where the short exposure luminance approaches either saturation or cutoff. This function will give Wht = 0 at the saturation end (where generally w5 = 1 while w1 = 0). At the cutoff end, normally w1 ≈ 1 as long as there is adequate overlap between the long and short exposures, so that in this range the function will usually give Wht ≈ 1.
In normal mode, in which only one input channel is operative (see explanation below), Wht = 1 from Y=0 up to the low saturation threshold (typically 190). From this threshold up to the deep saturation limit of Y (typically 220), Wht drops linearly to its saturation value of 0. In replay mode (see below), there is no saturation color suppression.
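The Wht computation can thus be sketched as follows (the corner points of the trapezoidal z5 function are illustrative; the description fixes only its general shape, zero near cutoff and near saturation and one in between):

    def z5(y_short, cutoff=10, knee=30, lowsat=190, deepsat=220):
        # Trapezoidal function of Yout(short), as sketched in Figure 6
        if y_short <= cutoff or y_short >= deepsat:
            return 0.0
        if y_short < knee:
            return (y_short - cutoff) / (knee - cutoff)      # rise from cutoff
        if y_short > lowsat:
            return (deepsat - y_short) / (deepsat - lowsat)  # fall to saturation
        return 1.0

    def wht(w1, y_short):
        # Saturation color suppression factor of block 72
        w5 = 1.0 - w1
        return w1 + w5 * z5(y_short)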
Figure 7 discloses the joint operations block 64 (also see Figure 1). Joint operations block 64 combines the chrominance and luminance data from the long and short exposure processing blocks 24, 26, together with data from point processing block 62, to generate a combined Y/dr/db result. Block 64 then converts this result to output in standard RGB or Y/Cr/Cb (luminance, chrominance (red) and chrominance (blue)) color space. A color suppression factor Z is computed and applied to the chrominance outputs in order to reduce color artifacts (by reducing chroma saturation) around edges and areas of luminance signal saturation.
Joint operations block 64 includes:
1. The dr, db, Y block 74 (recalling that dr and db are the differences between successive readings in even and odd lines, respectively) which receives dr, db values from the color path outputs of long and short exposure processing blocks 24, 26 respectively; edsupp from the intensity (Y) path output of long exposure processing block 24 and edge data from the intensity (Y) path output of short exposure processing block 26; and Ylut, w1/Y1 and w5/Y5 from table processing (LUT) block 62. Block 74 generates combined Y/dr/db results to color conversion block 78 (to be discussed). Block 74 will be discussed in greater detail hereinafter.
2. The color suppression factor block 76 which receives edlong and edshort from edge detector block 56 and saturation color suppression factor (Wht) from point processing block 62 and generates chroma suppression factor Z for color conversion block 78. Block 76 will be discussed in greater detail hereinafter.

3. The color conversion block 78 which receives Yresult, drresult, dbresult from block 74 and Z, the color suppression factor, from block 76 and generates Rout, Gout, and Bout and Cr and Cb. Block 78 will be discussed in greater detail hereinafter.
The dr, db, Y block 74 is shown in further detail in Figure 8.
Block 74 includes an intensity (Y) calculation which is performed by adders 79, 80 and edge limiting block 81.
Adder 79 receives edsupp (long) data from long exposure processing block 24, and edshort from short exposure processing block 26. These two inputs are added to give edgeresult, which is then input to the edge limiting block 81. Edge limiting is implemented as a piecewise linear function with 6 inflection points (A1...A6) and 4 slopes (S1...S4), as shown in the upper right inset of Figure 8. Generally the inflection points and slopes are chosen so as to enhance the smaller edges (i.e., S2 and S3 ≥ 1), while large edges (edge > A5 or < A2) are suppressed. Since these large edges come through strongly in the Ylut contribution anyway, the output image has a more pleasing appearance if they are not additionally enhanced. A3 and A4 may be set to 0, but it is sometimes desirable to set them to small non-zero values in order to suppress false edges due to noise. The best results appear to be obtained with |A1| and |A6| values of 50 to 60. The best values of the slopes |Si| are typically in the range 0.5 to 2, but the hardware allows a greater range.
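One plausible software reading of the edge limiting function is sketched below; the assignment of the four slopes to particular segments, and the clamping outside A1 and A6, are assumptions inferred from the inset of Figure 8 rather than details stated in the text, and the default values are merely within the ranges quoted above:

    def limit_edge(e, A=(-55, -27, 0, 0, 27, 55), S=(0.5, 1.5, 1.5, 0.5)):
        # A = (A1..A6) inflection points, S = (S1..S4) slopes
        A1, A2, A3, A4, A5, A6 = A
        S1, S2, S3, S4 = S
        if e >= 0:
            if e <= A4:
                return 0.0                                   # dead zone
            if e <= A5:
                return S3 * (e - A4)                         # enhance moderate edges
            if e <= A6:
                return S3 * (A5 - A4) + S4 * (e - A5)        # damp large edges
            return S3 * (A5 - A4) + S4 * (A6 - A5)           # clamp beyond A6
        if e >= A3:
            return 0.0                                       # dead zone
        if e >= A2:
            return S2 * (e - A3)                             # enhance moderate edges
        if e >= A1:
            return S2 * (A2 - A3) + S1 * (e - A2)            # damp large edges
        return S2 * (A2 - A3) + S1 * (A1 - A2)               # clamp beyond A1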
The edgelimited output is then summed by adder 80 with the Ylut output of block 62 to obtain the output luminance value Yresult. Additionally, as shown in phantom in Figure 10, adder 80 may be removed from its location in Figure 8 and placed so that the output of block 81 is not added to Yresult until just before being added into blocks 113A-C, that is, as late as possible.
Block 74 further includes a dr, db calculation which is performed by the remaining sections of block 74. The dr, db calculation receives low-pass color components dr, db from the color paths of long and short exposure processing blocks 24, 26; w1/Y1 and w5/Y5 from block 62; and Yresult as calculated by adder 80. The dr, db calculation outputs drresult and dbresult.
The long and short values of dr and db are multiplied by the respective normalized color weights, w1/Y1 and w5/Y5 by multipliers 82, 84, 86, 88. These normalized, weighted color values from the two exposures are summed together by adders 90, 92 and then multiplied by Yresult by multipliers 94, 96 to give the scaled values:
drresult = Yresult * (w1/Y1 * dr(long) + w5/Y5 * dr(short))
dbresult = Yresult * (w1/Y1 * db(long) + w5/Y5 * db(short))
Alternatively, drresult and dbresult may be generated by selection between the long and short normalized dr and db inputs (and possibly their long/short average values).
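In software form, the weighted-addition variant of this calculation reduces to the following sketch (argument names are illustrative; Y1 and Y5 denote Yout of the long and short exposures):

    def dr_db_result(y_result, w1, y1, y5, dr_long, db_long, dr_short, db_short):
        # Multipliers 82-88, adders 90, 92 and multipliers 94, 96 of block 74
        w5 = 1.0 - w1
        dr = y_result * (w1 / y1 * dr_long + w5 / y5 * dr_short)
        db = y_result * (w1 / y1 * db_long + w5 / y5 * db_short)
        return dr, db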
The color suppression factor block 76 of Figure 7 is shown in more detail in Figure 9.
Maximum value block 100 selects the higher of the two absolute values of edlong and edshort as calculated by absolute value blocks 98, 99. The result of the calculation of block 100, edmax, is input to edge chroma suppression factor block 102 to calculate Zed. The calculation of Zed is implemented as a piecewise linear function, shown in the upper right corner of Figure 9. As can be seen in Figure 9, Zed receives a value between Th and 1, given by:

Zed = 1, if edmax < E1
Zed = 1 - (1 - Th)*(edmax - E1)/(E2 - E1), if E1 ≤ edmax < E2
Zed = Th, if E2 ≤ edmax
Typically, E1=10 and E2=27 have been found to give good results. The minimum value of Zed, Th, is ordinarily set to zero, to give complete chroma suppression at very strong edges. Th ≠ 0 is used only in replay of images stored in mosaic format (see generate mosaic block 120 described hereinafter), in which case Zed serves to suppress color anomalies resulting from the reinterpolation of the pixel values.
Thereafter, as shown in Figure 9, minimum value block 104 selects the minimum of the two color suppression factors, Zed and Wht, thereby determining the edge criterion or saturation criterion that should be used to provide the required degree of chroma suppression at the given pixel.
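Blocks 98-104 can accordingly be modeled as follows (a sketch; the E1, E2 and Th defaults are the typical values given above):

    def z_ed(ed_long, ed_short, E1=10, E2=27, Th=0.0):
        # Edge chroma suppression factor of block 102
        edmax = max(abs(ed_long), abs(ed_short))   # blocks 98, 99, 100
        if edmax < E1:
            return 1.0
        if edmax < E2:
            return 1.0 - (1.0 - Th) * (edmax - E1) / (E2 - E1)  # ramp 1 -> Th
        return Th

    def z_factor(ed_long, ed_short, wht_value):
        # Minimum value block 104: the stronger of the two suppression criteria
        return min(z_ed(ed_long, ed_short), wht_value)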
Referring now to Figure 10, which discloses in detail color conversion block 78 of Figure 7, one sees that color conversion block 78 receives Yresult, drresult, and dbresult from block 74 and Z from block 76 and generates outputs in both the RGB and Y/Cr/Cb formulations.
In other words, block 78 takes the interim dynamic range enhancement results Y/dr/db, and converts them into conventional color components for system output.
Block 78 includes horizontal low-pass filter 106 which receives Yresult and calculates Yresult (lp) for the color matrix block 108.
Horizontal low-pass filter 106 is identical to the low-pass color component block 36 in the color path block 28 (see Figures 3 and 4). Since the dr and db inputs to the color matrix 108 have already been low-pass filtered by this low-pass filter operator, it is necessary to filter the intensity (Y) value as well in order to prevent color artifacts.
Color matrix block 108 receives Yresult (lp) from horizontal low-pass filter 106 and drresult and dbresult from block 74 and generates low-pass RGB color component outputs.
If one recalls the derivation of Y, dr and db from the original α, β, γ and δ values of the mosaic CCD input:

Y ≡ [a weighted sum of α, β, γ and δ; the expression appears only as an image in the source]
dr ≡ γ - α
db ≡ δ - β

together with the RGB equivalencies of α, β, γ and δ, one obtains the following relationships between RGB and the Y/dr/db values:

[R]         [0.2  0.4  0.1]   [Y ]
[G] = 3.5 * [0.4 -0.2  0.2] * [dr]
[B]         [0.2 -0.1 -0.4]   [db]
The factor of 3.5 is required for normalization of the relation Y = R + 1.5G + B. Due to hardware implementation considerations, the color conversion matrix is calculated as follows:
[R]         [1.0  2.0  0.5]   [Y ]
[G] = 0.7 * [2.0 -1.0  1.0] * [dr]
[B]         [1.0 -0.5 -2.0]   [db]
In this way the matrix multiplication is performed by a series of shift/add operations. The multiplicative factor 0.7 is combined (by multiplication) with externally programmed RGB white balance correction factors as is described hereinafter. RGB white balance multipliers 109A, 109B, 109C receive low-pass RGB signals from color matrix block 108 and generate normalized low-pass RGB signals.
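The shift/add structure of the hardware matrix can be illustrated as follows (a sketch with integer Y/dr/db inputs; the 0.7 factor is shown folded into per-channel white balance factors, as described below):

    def y_drdb_to_rgb(y, dr, db, wb=(0.7, 0.7, 0.7)):
        # Coefficients 2.0 and 0.5 become left and right shifts on integers;
        # negative dr/db values shift arithmetically, as in the hardware.
        r = y + (dr << 1) + (db >> 1)      # 1.0*Y + 2.0*dr + 0.5*db
        g = (y << 1) - dr + db             # 2.0*Y - 1.0*dr + 1.0*db
        b = y - (dr >> 1) - (db << 1)      # 1.0*Y - 0.5*dr - 2.0*db
        return wb[0] * r, wb[1] * g, wb[2] * b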
Multipliers 109A, 109B, 109C multiply each of the RGB low-pass values by a pre-computed white balance correction factor, adjusted by the normalization factor 0.7 required by the color matrix calculation. Although conventional RGB white balancing uses only two multiplicative factors, correcting R and B while G is held constant, this "short cut" does not preserve constant Y achromatic luminance. This loss of normalization may lead to the appearance of artifacts and incorrect luminance in the output. It is necessary, therefore, to use three multiplicative factors, normalized to preserve constant luminance Y.
The calculation of the correction factors is performed off-line by capturing a white image and selectively computing average values R̄, Ḡ and B̄ (and the corresponding average luminance Ȳ), excluding pixels near saturation or cutoff. From the definition Y = R + 1.5G + B, it follows that for a corrected pixel in the white image, it should be found that:

R = G = B = Y/3.5

From this relationship one derives the correction factors to be used in multipliers 109A, 109B, 109C:

R factor = Ȳ/(3.5*R̄)
G factor = Ȳ/(3.5*Ḡ)
B factor = Ȳ/(3.5*B̄)
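The off-line calibration therefore amounts to the following sketch (the saturation and cutoff exclusion thresholds are illustrative):

    def white_balance_factors(white_pixels, cutoff=10, sat=240):
        # white_pixels: iterable of (R, G, B) samples of a captured white image
        good = [(r, g, b) for (r, g, b) in white_pixels
                if all(cutoff < v < sat for v in (r, g, b))]
        n = len(good)
        r_avg = sum(p[0] for p in good) / n
        g_avg = sum(p[1] for p in good) / n
        b_avg = sum(p[2] for p in good) / n
        y_avg = r_avg + 1.5 * g_avg + b_avg   # Y = R + 1.5G + B
        # For a corrected white pixel, R = G = B = Y/3.5
        return (y_avg / (3.5 * r_avg),
                y_avg / (3.5 * g_avg),
                y_avg / (3.5 * b_avg))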
Output signal enhancement block 110 (which includes chroma suppression and RGB output functions) receives corrected low-pass RGB color component signals from color matrix block 108 via multipliers 109A, 109B, 109C; Yresult from block 74; Yresult (lp) from block 106; and chroma suppression factor Z from block 76.
As noted above, the RGB values output from color matrix block 108 are low-pass values. High-frequency image information is "re-injected" into RGB according to the following equation (given here only for the R component, since the treatment of G and B is identical):
Rhp = Rlp + K*(Yresult - Yresult(lp))
K is an arbitrary constant between 0 and 1, chosen according to the degree of high-frequency enhancement required. Values in the range 0.4 < K < 0.8 are typically used.
The addition and subtraction of Yresult values to the RGB components can alter the original values of R/G and B/G, with the result that the correct hue of the image is not preserved. Therefore, in an alternative embodiment, R, G and B are multiplied by a high-pass enhancement function.
Since the RGB color component values contain both luminance and chrominance information, the chroma suppression factor, Z, is best applied to chrominance-only components, by adders 113A, 113B, 113C:

Cr = R - Yresult
Cg = G - Yresult
Cb = B - Yresult
Combining these equations with the previous ones for "high frequency re-injection", one obtains the following formula, which is implemented as shown in Figure 10 (including arithmetic element blocks 114, 115, 116) to obtain Cr from Rlp:

Cr = Rlp + K*(Yresult - Yresult(lp)) - Yresult

and likewise for Cg and Cb. These Cr/Cg/Cb values are multiplied by Z by multipliers 112A, 112B, 112C. At this point, Y/Cr/Cb output is available directly (using Y = Yresult), though it is preferable to add a bias of +128 to the signed digital outputs Cr and Cb in order to convert them to positive values for D/A conversion. In the alternative, Yresult can be added back into the chroma-suppressed Cr/Cg/Cb values (by adders 113A-113C) to obtain the final Rout/Gout/Bout.
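Taken together, the output stage for one channel reduces to the following sketch (K = 0.6 is merely a value within the typical 0.4 to 0.8 range; the same code applies to the G and B channels):

    def output_stage_r(r_lp, y_result, y_result_lp, z, K=0.6):
        hp = K * (y_result - y_result_lp)   # high-frequency re-injection term
        cr = r_lp + hp - y_result           # chrominance-only component
        cr *= z                             # chroma suppression (multiplier 112A)
        cr_out = cr + 128                   # biased Cr for direct Y/Cr/Cb output
        r_out = cr + y_result               # adder 113A: final Rout
        return r_out, cr_out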
Referring now to Figure 11, which discloses generate mosaic block 120 of Figure 1 in more detail, one sees that the input of generate mosaic block 120 is Rout/Gout/Bout from color conversion block 78 of joint operations block 64. The output of block 120 is the equivalent α, β, γ, δ values in the format:

αeq γeq αeq γeq . . .
βeq δeq βeq δeq . . .
αeq γeq αeq γeq . . .
βeq δeq βeq δeq . . .
In order to reduce memory requirements for image storage and to allow stored images to be replayed through the apparatus 10 for display, the final RGB values from the processed image are used to generate equivalent, simulated mosaic values of α, β, γ, and δ . In this way, only eight bits per pixel of information must be stored, rather than the 24 bits of full output information. These mosaic values can later be replayed to regenerate the stored image.
The simulated mosaic values are generated by the following matrix in matrix block 122, based on the color equivalencies given hereinabove.
[matrix shown as an image in the source: αeq, βeq, γeq and δeq are obtained from Rout/Gout/Bout by a ¼-scaled conversion matrix based on the color equivalencies]
The factor of ¼ that multiplies the matrix is used for reasons of hardware convenience, in order to ensure that α, β, γ, and δ do not overflow the range 0-255 of 8 bits. To maintain the normalization relations given hereinabove, the factor should actually be 1/3.5. Therefore, in replay mode, mosaic white balance block 32 is used to multiply the α, β, γ, and δ values back by 4/3.5 (= 8/7) before reprocessing. Finally, multiplexer 124 selects which one of the four mosaic values to output for each pixel according to the table:
Pixel i,j:
i even, j even: αeq
i even, j odd: γeq
i odd, j even: βeq
i odd, j odd: δeq
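The multiplexing rule can be sketched as follows (zero-based line and column indices are assumed, with line 0 being an α/γ line as in the format shown above; the four equivalent values are those produced by matrix block 122 for the pixel):

    def select_mosaic_value(i, j, alpha_eq, beta_eq, gamma_eq, delta_eq):
        # Multiplexer 124: one of the four simulated mosaic values per pixel
        if i % 2 == 0:
            return alpha_eq if j % 2 == 0 else gamma_eq
        return beta_eq if j % 2 == 0 else delta_eq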
Apparatus 10 has three modes of operation: normal, adaptive sensitivity (AS), and replay.
1. Normal mode emulates the performance of a mosaic color CCD camera without adaptive sensitivity. In this mode only the long exposure portion of the pipeline operates. The processing functions are limited to decoding the mosaic input into conventional color components: Y/Cr/Cb or RGB, while additionally performing filtering operations for anti-aliasing, detail (edge) enhancement and chroma suppression where required.

2. Adaptive sensitivity mode uses all the resources of the processing pipeline to generate wide dynamic range images as described hereinabove.
3. Replay mode is required for displaying images that have been stored in RAM or disk. Apparatus 10 stores these images in a regenerated mosaic format in order to save on storage memory requirements. Replay mode is similar to normal mode, except that most of the enhancement operations are not performed: since the stored data have already been filtered once, it is for the most part not desirable to filter them again.
The preceding specific embodiments are illustrative of the practice of the invention. It is to be understood, however, that other expedients known to those skilled in the art or disclosed herein, may be employed without departing from the spirit of the invention or the scope of the appended claims.

CLAIMS

What is Claimed is:
1. A color wide dynamic range video imaging apparatus comprising:
sensor means for providing a plurality of color video images of a scene at different exposure levels;
means for dividing each color video image into components; and
means for processing said components of each of said plurality of video images to produce a combined color video image including image information from said components of each of said plurality of color video images by applying neighborhood transforms to at least one of said components of each of said plurality of video images,
wherein said means for processing includes means for calculating point intensity data for said each of said plurality of said video images.
2. The color wide dynamic range video imaging apparatus of Claim 1, wherein said means for processing includes means for calculating color weighting factors.
3. The color wide dynamic range video imaging apparatus of Claim 2, wherein said means for processing includes means for calculating saturation color suppression factors.
4. The color wide dynamic range video imaging apparatus of Claim 1, wherein said means for dividing each color video image into components includes a filter means in front of said sensor means and said filter means includes filter elements of a plurality of colors, said plurality of colors corresponding to said components.
5. The color wide dynamic range video imaging apparatus of Claim 4, wherein said sensor means includes a plurality of pixel sensing elements and
wherein said filter elements are arranged in a regular repeating pattern with each filter element in front of a single pixel sensing element of said sensor means.
6. The color wide dynamic range video imaging apparatus of Claim 5, wherein said processing means includes means for evaluating an intensity of each color component;
means for evaluating color suppression factors for each pixel; and
means for converting said components into RGB color space.
7. The color wide dynamic range video imaging apparatus of Claim 6, wherein said means for evaluating an intensity of each color component communicates with a means for substituting luminance values when luminance otherwise approaches saturation.
8. The color wide dynamic range video imaging apparatus of Claim 6, wherein said means for evaluating an intensity of each color component communicates with a means for limiting luminance values.
9. The color wide dynamic range video imaging apparatus of Claim 5, wherein said processing means includes white balancing means which calculates correction factors for said components based upon the intensity of said color video images.
10. The color wide dynamic range video imaging apparatus of Claim 9, wherein said white balancing means calculates an average intensity of said color video images excluding saturated pixels and cut-off pixels.
11. The color wide dynamic range video imaging apparatus of Claim 9, including vertical low-pass filter means receiving intensity and false edge suppression means, wherein said false edge suppression means receives intensity data from said vertical low-pass filter means and calculates false edge suppression factors which are reduced when said intensity data exceeds a pre-selected saturation value.
12. The color wide dynamic range video imaging apparatus of Claim 11, wherein said false edge suppression means calculates said false edge suppression factors for an image of a longest of said different exposure levels.
13. The color wide dynamic range video imaging apparatus of Claim 12, further including edge detection means and wherein an output of said edge detection means is multiplied times said false edge suppression factors.
14. The color wide dynamic range video imaging apparatus of Claim 5, wherein color components are calculated by multiplying an intensity times a sum of products of prior color components and output of said means for calculating said color weighting factors.
15. The color wide dynamic range video imaging apparatus of Claim 6, including means for calculating white balance correction factors for said components converted into RGB color space, wherein said white balance correction factors for a given respective RGB component are calculated by dividing average overall luminance by a multiple of an average value of said respective given RGB component.
16. The color wide dynamic range video imaging apparatus of Claim 15, wherein said average value of said respective given RGB component is calculated excluding pixels which are substantially near saturation or substantially near cut-off.
17. The color wide dynamic range video imaging apparatus of Claim 6, including means for converting said RGB color space components into mosaic color components corresponding to colors of said filter elements of said filter means.
18. The color wide dynamic range video imaging apparatus of Claim 1, wherein said means for processing is implemented on a single chip.
19. An imaging apparatus comprising:
sensor means for providing a plurality of color video images of a scene at different exposure levels;
means for dividing each color video image into components; and
means for processing said components of each of said plurality of video images to produce a combined color video image including image information from said components of each of said plurality of color video images by applying neighborhood transforms to at least one of said components of each of said plurality of video images,
wherein said means for processing includes means for calculating point intensity data for said each of said plurality of said video images, and
wherein said processing means includes white balancing means which calculates correction factors for said components based upon the intensity of said color video images.
20. A color wide dynamic range video processing chip comprising:
means for processing components of each of a plurality of video images of a scene at different exposure levels to produce a combined color video image including image information from said components of each of said plurality of color video images by applying neighborhood transforms to at least one of said components of each of said plurality of video images,
wherein said means for processing includes means for calculating point intensity data for said each of said plurality of said video images.
PCT/US1994/001358 1993-02-08 1994-02-07 Color wide dynamic range camera using a charge coupled device with mosaic filter WO1994018801A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP94907434A EP0739571A1 (en) 1993-02-08 1994-02-07 Color wide dynamic range camera using a charge coupled device with mosaic filter

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1454593A 1993-02-08 1993-02-08
US08/014,545 1993-02-08

Publications (1)

Publication Number Publication Date
WO1994018801A1 true WO1994018801A1 (en) 1994-08-18

Family

ID=21766100

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/001358 WO1994018801A1 (en) 1993-02-08 1994-02-07 Color wide dynamic range camera using a charge coupled device with mosaic filter

Country Status (2)

Country Link
EP (1) EP0739571A1 (en)
WO (1) WO1994018801A1 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4614966A (en) * 1982-08-20 1986-09-30 Olympus Optical Co., Ltd. Electronic still camera for generating long time exposure by adding results of multiple short time exposures
US4584606A (en) * 1983-09-01 1986-04-22 Olympus Optical Co., Ltd. Image pickup means
US4647975A (en) * 1985-10-30 1987-03-03 Polaroid Corporation Exposure control system for an electronic imaging camera having increased dynamic range
US4858014A (en) * 1986-07-21 1989-08-15 Technion Research & Development Foundation Ltd. Random scan system
US4774564A (en) * 1986-09-09 1988-09-27 Fuji Photo Film Co., Ltd. Electronic still camera for compensating color temperature dependency of color video signals
US5144442A (en) * 1988-02-08 1992-09-01 I Sight, Inc. Wide dynamic range camera
US5247366A (en) * 1989-08-02 1993-09-21 I Sight Ltd. Color wide dynamic range camera
US5138458A (en) * 1989-12-22 1992-08-11 Olympus Optical Co., Ltd. Electronic camera apparatus capable of providing wide dynamic range image signal

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6040858A (en) * 1994-11-18 2000-03-21 Canon Kabushiki Kaisha Method and apparatus for expanding the dynamic range of sensed color images
EP0713342A3 (en) * 1994-11-18 1996-09-04 Canon Kk Color image sensing apparatus and method of expanding dynamic range
EP0823814A2 (en) * 1996-08-05 1998-02-11 Matsushita Electric Industrial Co., Ltd. Image mixing circuit
EP0823814A3 (en) * 1996-08-05 1999-03-24 Matsushita Electric Industrial Co., Ltd. Image mixing circuit
US6078357A (en) * 1996-08-05 2000-06-20 Matsushita Electric Industrial Co., Ltd. Image mixing circuit
CN1082310C (en) * 1996-08-05 2002-04-03 松下电器产业株式会社 Image mixing circuit
US6628327B1 (en) * 1997-01-08 2003-09-30 Ricoh Co., Ltd Method and a system for improving resolution in color image data generated by a color image sensor
WO1998059491A1 (en) * 1997-06-12 1998-12-30 Finnelpro Oy Digitizing apparatus
GB2341029A (en) * 1998-08-29 2000-03-01 Marconi Gec Ltd Television camera having neutral density striped filter and producing output with extended dynamic range/contrast
WO2000013421A1 (en) * 1998-08-29 2000-03-09 Marconi Electronic Systems Limited Cameras
US7064782B1 (en) 1998-08-29 2006-06-20 E2V Technologies (Uk) Limited Cameras
GB2341029B (en) * 1998-08-29 2002-12-31 Marconi Gec Ltd Cameras
EP2290950A3 (en) * 2000-02-23 2011-03-16 The Trustees of Columbia University of the City of New York Method and apparatus for obtaining high dynamic range images
US8610789B1 (en) 2000-02-23 2013-12-17 The Trustees Of Columbia University In The City Of New York Method and apparatus for obtaining high dynamic range images
US7999858B2 (en) 2000-02-23 2011-08-16 The Trustees Of Columbia University In The City Of New York Method and apparatus for obtaining high dynamic range images
WO2002005208A3 (en) * 2000-07-06 2003-06-26 Univ Columbia Method and apparatus for enhancing data resolution
US7149262B1 (en) * 2000-07-06 2006-12-12 The Trustees Of Columbia University In The City Of New York Method and apparatus for enhancing data resolution
WO2002005208A2 (en) * 2000-07-06 2002-01-17 The Trustees Of Columbia University In The City Of New York Method and apparatus for enhancing data resolution
EP1246459A2 (en) * 2001-03-27 2002-10-02 Matsushita Electric Industrial Co., Ltd. Video camera imager and imager ic capable of plural kinds of double-image processings
US7053946B2 (en) 2001-03-27 2006-05-30 Matsushita Electric Industrial Co., Ltd. Video camera imager and imager IC capable of plural kinds of double-image processings
EP1246459A3 (en) * 2001-03-27 2004-12-15 Matsushita Electric Industrial Co., Ltd. Video camera imager and imager ic capable of plural kinds of double-image processings
EP1286554A2 (en) * 2001-08-14 2003-02-26 Canon Kabushiki Kaisha Chrominance signal processing apparatus, image-sensing apparatus and control methods for same
US7113207B2 (en) 2001-08-14 2006-09-26 Canon Kabushiki Kaisha Chrominance signal processing apparatus, image-sensing apparatus and control methods for same
EP1286554A3 (en) * 2001-08-14 2005-04-20 Canon Kabushiki Kaisha Chrominance signal processing apparatus, image-sensing apparatus and control methods for same
EP1592235A1 (en) * 2003-02-05 2005-11-02 Matsushita Electric Industrial Co., Ltd. Image processing device, image processing program and program-recorded recording medium
EP1592235A4 (en) * 2003-02-05 2010-02-24 Panasonic Corp Image processing device, image processing program and program-recorded recording medium
EP1488732A1 (en) * 2003-06-17 2004-12-22 Olympus Corporation Electronic endoscope device
US7670286B2 (en) 2003-06-17 2010-03-02 Olympus Corporation Electronic endoscopic device having a color balance adjustment system
CN100384366C (en) * 2003-06-17 2008-04-30 奥林巴斯株式会社 Electronic endoscope device
WO2007016554A1 (en) * 2005-07-29 2007-02-08 Qualcomm Incorporated Compensating for improperly exposed areas in digital images
CN103489165A (en) * 2013-10-01 2014-01-01 中国人民解放军国防科学技术大学 Decimal lookup table generation method for video stitching
EP3709254A4 (en) * 2017-11-06 2020-10-21 EIZO Corporation Image processing device, image processing method, and image processing program
US11363245B2 (en) 2017-11-06 2022-06-14 Eizo Corporation Image processing device, image processing method, and image processing program

Also Published As

Publication number Publication date
EP0739571A1 (en) 1996-10-30

Similar Documents

Publication Publication Date Title
US5247366A (en) Color wide dynamic range camera
US8184181B2 (en) Image capturing system and computer readable recording medium for recording image processing program
US8736723B2 (en) Image processing system, method and program, including a correction coefficient calculation section for gradation correction
US8295595B2 (en) Generating full color images by demosaicing noise removed pixels from images
US7072509B2 (en) Electronic image color plane reconstruction
US8081239B2 (en) Image processing apparatus and image processing method
EP1930853A1 (en) Image signal processing apparatus and image signal processing
JP3548504B2 (en) Signal processing device, signal processing method, and imaging device
JP2009124552A (en) Noise reduction system, noise reduction program and imaging system
US8086032B2 (en) Image processing device, image processing method, and image pickup apparatus
WO1994018801A1 (en) Color wide dynamic range camera using a charge coupled device with mosaic filter
EP2360929B1 (en) Image processing device
JP5041886B2 (en) Image processing apparatus, image processing program, and image processing method
KR20050055011A (en) A method for interpolation and sharpening of images
JP2007041834A (en) Image processor
EP0554035B1 (en) Solid state color video camera
JP4272443B2 (en) Image processing apparatus and image processing method
JPH11313336A (en) Signal processor and photographing signal processing method
JPH11313338A (en) Signal processor and photographing signal processing method
US7012719B1 (en) Sign sensitive aperture correction system and method
JP5103580B2 (en) Image processing apparatus and digital camera
JP4086572B2 (en) Video signal processing device
JP3938715B2 (en) Color drift reduction device
JP4122082B2 (en) Signal processing apparatus and processing method thereof
JP3837881B2 (en) Image signal processing method and electronic camera

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1994907434

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1994907434

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1994907434

Country of ref document: EP