US20040246352A1 - Imaging device - Google Patents
Imaging device
- Publication number
- US20040246352A1 (US application Ser. No. 10/856,842)
- Authority
- US
- United States
- Prior art keywords
- pixel
- color component
- imaging device
- component pixel
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
Abstract
A color image sensor has first, second, and third color component pixels. The third color component pixel carries more intensity information than the first and second color component pixels. A difference estimating section estimates the difference in signal level exerted on the pixel signal of the third color component pixel, using the pixel signal of a pixel peripheral to the third color component pixel. A correcting section corrects the pixel signal of the third color component pixel on the basis of the estimated difference. Accordingly, when the third color component is green in a Bayer array, the pixel signal of a green pixel in a row of red pixels becomes equal to the pixel signal of a green pixel in a row of blue pixels. Therefore, line crawling can be suppressed without smoothing the image.
Description
- This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2003-159459, filed on Jun. 4, 2003, the entire contents of which are incorporated herein by reference.
- 1. Field of the Invention
- The present invention relates to an imaging device. To be more specific, the present invention relates to image data processing to be performed on an output signal from a color image sensor in order to obtain a high quality image.
- 2. Description of the Related Art
- When shooting with a video camera or an electronic camera, line crawling (line-shaped or grid-like noise) sometimes occurs in an image. One cause of line crawling is fixed-pattern noise in the color image sensor, such as differences in the sensitivity of each pixel, differences in the output level of each vertical transfer line, and the like. Smoothing the image is known as a conventional technique for making the line crawling inconspicuous. A low pass filter (hereinafter abbreviated as LPF) and the like are used in this smoothing processing (for example, refer to pages 539-548 of "Image Analysis Handbook" written by Mikio Takagi, supervising editor Haruhisa Shimoda, published by University of Tokyo Press on Jan. 17, 1991).
- Incidentally, in many cases, shot image data is subjected to sharpness enhancing processing, such as unsharp masking, in order to increase its sharpness and graininess. This sharpness enhancing processing also emphasizes the line crawling that is supposed to be made inconspicuous. Thus, when line crawling occurs in the original image data, the original image data must be subjected to smoothing processing such as an LPF before the sharpness enhancing processing, in order to reduce the line crawling. It is difficult, however, to apply the sharpness enhancing processing effectively to image data after it has been smoothed. Therefore, it is difficult to obtain a fine, sharp image from original image data with line crawling.
- An object of the present invention is to provide a method for reducing line crawling without image smoothing processing, in an imaging device using a color image sensor.
- According to one aspect of the present invention, an imaging device includes a color image sensor, a difference estimating section, and a correcting section, which have the following functions. The color image sensor has a first color component pixel, a second color component pixel, and a third color component pixel. The first to third color component pixels are regularly arranged in a two-dimensional matrix. Each of the first to third color component pixels generates a pixel signal in accordance with the amount of light received. The color image sensor transfers and outputs each of the pixel signals in succession. The third color component pixel includes more intensity information than the first and second color component pixels. The difference estimating section estimates difference in a signal level exerted on the pixel signal of the third color component pixel, with the use of the pixel signal of a pixel peripheral to the third color component pixel. The correcting section corrects the pixel signal of the third color component pixel on the basis of the estimated difference.
- “The pixel peripheral to the third color component pixel” that the difference estimating section uses for estimating the difference may be, for example, “at least one of two pixels adjacent to the third color component pixel in a horizontal direction”. The horizontal direction designates, for example, the elongation direction of a horizontal CCD.
- Otherwise, “the pixel peripheral to the third color component pixel” may be “at least one of two pixels adjacent to the third color component pixel in a vertical direction”. The vertical direction designates, for example, the elongation direction of a vertical CCD.
- Otherwise, “the pixel peripheral to the third color component pixel” may be “a pixel immediately preceding the third color component pixel in transfer order in the color image sensor”.
- The foregoing color image sensor refers to, for example, a CCD. The difference estimating section refers to, for example, the calculation function of a striped noise suppressing section for calculating a modified value of a signal level of a pixel signal. The correcting section refers to, for example, the function of the striped noise suppressing section for correcting a signal level of a pixel signal to the modified value.
- It is preferable that the imaging device according to this aspect may be as follows. That is, the difference estimating section estimates difference caused by ringing, on the basis of a level of the pixel signal of a pixel, which immediately precedes the third color component pixel in output order of the pixel signal outputted from the color image sensor, or on the basis of variation in levels between the third color component pixel and the immediately preceding pixel thereof.
- Otherwise, it is preferable that the imaging device according to this aspect may be as follows. That is, the difference estimating section estimates difference caused by flare on a light-receiving surface of the third color component pixel.
- Otherwise, it is preferable that the imaging device according to this aspect may satisfy the following four terms. Firstly, the first color component pixel is a red pixel selectively receiving red light. Secondly, the second color component pixel is a blue pixel selectively receiving blue light. Thirdly, the third color component pixel is a green pixel selectively receiving green light. Fourthly, the difference estimating section estimates the difference on the basis of a level of the pixel signal of the red pixel. Here, the difference is caused by a phenomenon that “red light and infrared light passing through a charge accumulating region in the color image sensor are reflected in a bulk and mixed into a peripheral pixel”.
- Otherwise, it is preferable that the imaging device according to this aspect is provided with a judging section having the following functions. The judging section judges whether or not a level of the pixel signal is equal to or more than a predetermined value A. The level of the pixel signal is transferred from a pixel, which immediately precedes the third color component pixel in transfer order in the color image sensor. The judging section causes the correcting section to operate only when the judged level is equal to or more than the predetermined value A.
- Otherwise, it is preferable that the imaging device according to this aspect is provided with a judging section having the following functions. The judging section compares a level of the pixel signal transferred from a pixel, which immediately precedes the third color component pixel in transfer order in the color image sensor, with a level of the pixel signal of the third color component pixel. The judging section causes the correcting section to operate only when difference in levels of both signals is equal to or more than a predetermined value B.
- Otherwise, it is preferable that the imaging device according to this aspect may satisfy the following three terms. Firstly, the imaging device further includes an analog-to-digital conversion section which performs analog-to-digital conversion on the pixel signal outputted from the color image sensor. Secondly, the difference estimating section estimates the difference with the use of the pixel signal after analog-to-digital conversion. Thirdly, the correcting section corrects the pixel signal after analog-to-digital conversion. The analog-to-digital conversion section refers to, for example, the function of an image-data generating section which performs analog-to-digital conversion.
- According to another aspect of the present invention, an imaging device includes a color image sensor, a difference estimating section, and a correcting section which have the following functions. The color image sensor has pixels of at least two types of color components, one of which is a first color component pixel. These pixels are regularly arranged in a two-dimensional matrix, and each of the pixels generates a pixel signal in accordance with an amount of light received. The color image sensor transfers and outputs each of the pixel signals in succession. The difference estimating section estimates difference in a signal level exerted on the pixel signal of the first color component pixel, with the use of the pixel signal of the pixel that is adjacent to and different from the first color component pixel. The correcting section corrects the pixel signal of the first color component pixel on the basis of the estimated difference.
- The nature, principle, and utility of the invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings in which like parts are designated by identical reference numbers, in which:
- FIG. 1 is a block diagram of a photographing system in which an imaging device according to the present invention is installed;
- FIG. 2 is a block diagram showing the details of a CCD in FIG. 1; and
- FIG. 3 is a flowchart showing an essential part of signal processing in the imaging device of FIG. 1.
- An embodiment of the present invention will be hereinafter described with reference to the drawings.
- <Configuration of this Embodiment>
- FIG. 1 shows a block diagram of a photographing system in which an imaging device according to the present invention is installed. Referring to FIG. 1, a photographing system 10 includes an imaging device 12 according to the present invention, a phototaking lens 14 attached to the imaging device 12, and a recording medium 16 connected to the imaging device 12. The phototaking lens 14 includes a lens group 20 and an aperture 22. In this embodiment, by way of example, the imaging device 12 composes an electronic camera.
- The imaging device 12 includes a release button 30, an EEPROM 32, a CPU 34, a memory (for example, DRAM) 36, a focal-plane shutter 40, a CCD 44, an image-data generating section 46, a defect pixel correcting section 48, a clamp processing section 50, a judging section 54, a striped noise suppressing section 60, a white balance adjusting section 66, a color interpolation processing section 68, a gamma correcting section 70, a color correcting section 74, an image-data compressing section 78, and a recording section 80.
- When the release button 30 is turned on, it commands the CPU 34 to start photographing.
- The EEPROM 32 stores parameters necessary for controlling the imaging device 12.
- The CPU 34 controls the system of the imaging device 12 by use of the EEPROM 32 and the memory 36 (it controls the section surrounded by the dashed line in FIG. 1).
- The memory 36 temporarily stores image data before it is converted into a predetermined format and processed.
- FIG. 2 is a block diagram showing the details of the CCD 44. The CCD 44 includes many pixels 84 arranged on a not-illustrated semiconductor substrate in the form of a Bayer pattern color filter array, vertical CCDs 86, a horizontal CCD 88, and a reading amplifier 90.
- Each pixel 84 includes a part of the vertical CCD 86, a sensor section 94, and a gate section for reading 96. The sensor section 94 is, for example, a buried photodiode which has a P-type surface region on the side of the light-receiving surface and an N-type buried region (charge accumulating region). The sensor section 94 generates and accumulates a pixel signal in accordance with the amount of light received. The pixel signal generated by the sensor section 94 exists as signal charge inside the pixel 84.
- The light-receiving side of each pixel 84 is covered with a microlens and an optical filter. The optical filter selectively allows light with a wavelength of any one of red, blue, and green to pass through (not illustrated).
- A plurality of vertical CCDs 86 are formed in a direction along the arrangement of the pixels 84. Each of the vertical CCDs 86 is formed per vertical line of the sensor sections 94. The signal charge is transferred from the sensor section 94 to each vertical CCD 86 through the gate section for reading 96. The vertical CCDs 86 successively transfer the signal charge to the horizontal CCD 88.
- The horizontal CCD 88 successively transfers the signal charge, which is vertically transferred from the vertical CCDs 86 in succession, to the reading amplifier 90 in the horizontal direction.
- The reading amplifier 90 amplifies the transferred signal charge with a predetermined gain and successively inputs it to the image-data generating section 46.
- <Explanation for the Operation of this Embodiment>
- FIG. 3 is a flowchart showing an essential part of the signal processing in the foregoing imaging device 12. The signal processing in the imaging device 12 will be hereinafter described by following the step numbers shown in FIG. 3.
- In the following description, the pixel 84 selectively receiving red light is abbreviated as a red pixel. The pixel 84 selectively receiving blue light is abbreviated as a blue pixel, and the pixel 84 selectively receiving green light is abbreviated as a green pixel. A pixel signal generated by a green pixel in a row of the red pixels is abbreviated as Gr, and a pixel signal generated by a green pixel in a row of the blue pixels is abbreviated as Gb.
- Moreover, a pixel signal of the blue pixel horizontally adjacent to the pixel corresponding to Gb on the side of the reading amplifier 90 is abbreviated as Bl, and a pixel signal of the blue pixel horizontally adjacent thereto on the opposite side of the reading amplifier 90 is abbreviated as Br (refer to Gb in white text on black in FIG. 2). A pixel signal of the red pixel vertically adjacent to the pixel corresponding to Gb on the side of the horizontal CCD 88 is abbreviated as Rd, and a pixel signal of the red pixel vertically adjacent thereto on the opposite side of the horizontal CCD 88 is abbreviated as Ru.
- [Step S1]
- The CCD 44 is exposed in a well-known operation. The image-data generating section 46 subjects the analog pixel signal outputted from the CCD 44 to correlated double sampling processing, analog-to-digital conversion, and the like, in order to generate image data. The image data here represents the color of each pixel as a pixel signal of a predetermined bit number. Then, the image data is subjected to defect pixel correction and clamp processing, and is inputted to the judging section 54 and the striped noise suppressing section 60.
- [Step S2]
- The judging section 54 judges whether or not Bl>Gb is satisfied with respect to every Gb. The judging section 54 inputs the judgment result into the striped noise suppressing section 60 as table data.
- [Step S3]
- The striped noise suppressing section 60 calculates Gb′, which is a modified value of Gb, with respect to each Gb. Before this calculation is explained, the basis on which the difference in the pixel signal is estimated will be described cause by cause.
- Firstly, the difference exerted on Gb by the pixel signal of the pixel which immediately precedes the pixel corresponding to Gb in output order in the output stage (reading amplifier 90) of the CCD 44 is estimated (ringing). The immediately preceding pixel is the blue pixel horizontally adjacent to the pixel of Gb on the side of the reading amplifier 90. When Gb″ represents Gb after receiving the difference, and Ka represents a coefficient of the difference, Gb″ satisfies the following equation.
- Gb″=Gb+Bl×Ka (1)
- In this embodiment, by way of example, the effect of ringing in the output stage of the CCD 44 is considered only in Gb judged to be Bl>Gb. This is because the difference in Gb caused by the ringing is negligible when Bl is sufficiently smaller than Gb.
- Secondly, the difference caused by ringing which occurs in a circuit (not illustrated) for carrying out the correlated double sampling processing in the image-data generating section 46 is estimated. The circuit for carrying out the correlated double sampling processing is disposed just behind the CCD 44.
- When Gb″ represents Gb after receiving the difference, and Kb represents a coefficient of the difference, Gb″ satisfies the following equation.
- Gb″=Gb+Bl×Kb (2)
- In this embodiment, however, by way of example, the effect of the ringing in the circuit for carrying out the correlated double sampling processing is considered only in Gb judged to be Bl>Gb, for the same reason as in the case of the equation (1). It is preferable that variation in the reference level of the circuit for carrying out the correlated double sampling processing is also taken into account by use of this correction coefficient Kb.
- Thirdly, the difference caused by the pixel signal of the red pixel which immediately precedes the pixel corresponding to Gb in transfer order during the vertical transfer of the pixel signal in the CCD 44 is estimated. When Gb″ represents Gb after receiving the difference, and Kc represents a coefficient of the difference, Gb″ satisfies the following equation.
- Gb″=Gb+Rd×Kc (3)
- Fourthly, the difference caused by flare (the leak of light) on the light-receiving surface of the green pixel is estimated. To be more specific, this is the difference arising when light obliquely incident on the microlenses over the light-receiving surfaces of the red pixel and the blue pixel, both of which are adjacent to the green pixel corresponding to Gb, is mixed into the sensor section 94 of the green pixel. When Gb″ represents Gb after receiving the difference, and Kd and Ke represent coefficients of the difference, Gb″ satisfies the following equation.
- Gb″=Gb+{(Bl+Br)/2}×Kd+{(Ru+Rd)/2}×Ke (4)
- In the above equation, both coefficients Kd and Ke are positive. Generally, a single pixel 84, including a part of the vertical CCD 86, the gate section for reading 96, and the sensor section 94, is formed in an approximately square shape. Because the vertical CCD and the reading gate occupy part of that square, the sensor section 94 is longer in the vertical direction. Therefore, the vertically adjacent red pixel has a larger flare effect on the green pixel than the horizontally adjacent blue pixel. Thus, Ke is larger than Kd.
- Fifthly, the difference caused by a phenomenon in which red light and infrared light that pass through the charge accumulating region in the red pixel are reflected in the bulk and mixed into the green pixel corresponding to Gb is estimated. When Gb″ represents Gb after receiving the difference, and Kf represents a coefficient of the difference, Gb″ satisfies the following equation.
- Gb″=Gb+{(Ru+Rd)/2}×Kf (5)
- The coefficient Kf is positive in the above equation. In most cases, the difference caused by the fifth cause is small in a CCD-type image sensor and large in a CMOS-type image sensor.
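Before the five causes are combined below, it may help to see how the pixel signals named in this section map onto a raw Bayer mosaic. The indexing in this sketch is a hypothetical assumption (a blue row laid out B, G, B, G, ... with the reading amplifier on the low-x side and the horizontal CCD on the low-y side); the document does not fix these orientations.

```python
import numpy as np

def gb_neighborhood(raw, y, x):
    """Gather the signals around a Gb pixel at (y, x) of a Bayer mosaic.

    Assumed layout (hypothetical): row y alternates B, G, B, G, ...; the
    reading amplifier is on the low-x side, so raw[y, x - 1] is the Bl that
    immediately precedes Gb in horizontal transfer order; the horizontal
    CCD is on the low-y side, so raw[y - 1, x] is the Rd that precedes Gb
    in vertical transfer order.
    """
    return {
        "Gb": raw[y, x],
        "Bl": raw[y, x - 1],  # blue neighbor, amplifier side
        "Br": raw[y, x + 1],  # blue neighbor, opposite side
        "Rd": raw[y - 1, x],  # red neighbor, horizontal-CCD side
        "Ru": raw[y + 1, x],  # red neighbor, opposite side
    }
```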
- On the basis of the foregoing five causes, the striped noise suppressing section 60 calculates the modified value Gb′ of each Gb judged to be Bl>Gb, by use of the following equation (6). Since Gb in the equation (6) is assumed to be affected by all five causes, Gb corresponds to Gb″ in the equations (1) to (5). Therefore, Gb is corrected to the modified value Gb′ by subtracting the correction terms that were added in the equations (1) to (5).
- Gb′=Gb−Bl×(Ka+Kb)−Rd×Kc−{(Bl+Br)/2}×Kd−{(Ru+Rd)/2}×(Ke+Kf) (6)
- In a like manner, the striped noise suppressing section 60 calculates the modified value Gb′ of each Gb which is not judged to be Bl>Gb, by use of the following equation.
- Gb′=Gb−Rd×Kc−{(Bl+Br)/2}×Kd−{(Ru+Rd)/2}×(Ke+Kf) (7)
- In the equation (7), the correction term for ringing in the equation (6) is eliminated for the reason given above.
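Equations (6) and (7), together with the step-S2 judgment that selects between them, can be sketched as follows. The coefficient values are placeholders; in the device they come from the calibration described later.

```python
def correct_gb(Gb, Bl, Br, Ru, Rd, Ka, Kb, Kc, Kd, Ke, Kf):
    """Return the modified value Gb' per equations (6) and (7).

    The ringing term Bl*(Ka+Kb) is subtracted only when Bl > Gb, which is
    exactly the case where equation (6) applies; otherwise equation (7),
    which omits that term, is used.
    """
    gb_prime = (Gb
                - Rd * Kc                       # vertical-transfer mixing, eq. (3)
                - ((Bl + Br) / 2) * Kd          # flare from horizontal neighbors, eq. (4)
                - ((Ru + Rd) / 2) * (Ke + Kf))  # vertical flare + bulk reflection, eqs. (4), (5)
    if Bl > Gb:                                 # judgment of step S2
        gb_prime -= Bl * (Ka + Kb)              # output-stage and CDS ringing, eqs. (1), (2)
    return gb_prime
```

With all coefficients set to zero the function returns Gb unchanged, which is a convenient sanity check when wiring in calibrated values.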
- The striped noise suppressing section 60 also calculates a modified value R′ of a pixel signal R of the red pixel with the use of the following equation. R includes both Ru and Rd without distinction.
- R′=R×{1+(Ke/2)+(Kf/2)} (8)
- Furthermore, the striped noise suppressing section 60 calculates a modified value B′ of a pixel signal B of the blue pixel with the use of the following equation. B includes both Bl and Br without distinction.
- B′=B×(1+Kd/2) (9)
- [Step S4]
- The striped noise suppressing section 60 corrects every signal level of Gb, R, and B to the modified values Gb′, R′, and B′ calculated by the equations (6) to (9).
- In this embodiment, Gr is not corrected, for the following reason.
- Since the sensor section 94 is longer in the vertical direction, Gb and Gr tend to be affected by the vertically adjacent pixels rather than by the horizontally adjacent pixels, except for the ringing in the output stage of the CCD 44 and in the correlated double sampling circuit. The intensity of red is higher than that of blue. Thus, in general, when a subject of even color is photographed, Gb, outputted from the pixel vertically adjacent to the red pixel, tends to be larger than Gr. Therefore, when Gb is corrected to be smaller without making a correction to Gr, Gr and Gb become equal to each other when shooting a subject of even color.
- [Step S5]
- Thereafter, the image data is subjected to white balance adjustment, color interpolation processing, gamma correction, color correction processing (for example, edge enhancing processing), and image compression (for example, JPEG conversion). Then, the recording section 80 records the compressed image data on the recording medium 16. That concludes the explanation of the operation of the imaging device 12 according to this embodiment.
- How to calculate the foregoing coefficients Ka to Kf will now be supplemented.
- The coefficients Ka to Kf may be obtained by test photography using the imaging device 12, for example, during manufacturing or before shipment. After photographing a test chart whose whole surface is evenly red, a test chart whose whole surface is evenly blue, and the like, the obtained image data is analyzed to calculate the coefficients Ka to Kf. The obtained coefficients are then rewritably stored in the EEPROM 32.
- In a case where striped noise appears only in the vertical direction in image data obtained by photographing the foregoing test charts, for example, the coefficients Ka, Kb, and Kd may be set to zero and only vertical correction carried out. Otherwise, when striped noise appears only in the horizontal direction, the coefficients Kc and Ke may be set to zero and only horizontal correction carried out.
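As one concrete (and deliberately simplified) realization of this flat-field calibration, a single lumped vertical coefficient can be fitted by least squares from an evenly red test shot, so that the corrected Gb matches Gr. The patent calibrates Ka to Kf cause by cause, so lumping them into one coefficient here is purely an illustrative assumption.

```python
import numpy as np

def fit_lumped_vertical_coefficient(gb, gr, ru, rd):
    """Least-squares fit of one coefficient K such that
    Gb - K * (Ru + Rd) / 2 best matches Gr over a flat red test chart.

    gb, gr, ru, rd are arrays of samples taken from the same shot.
    """
    x = (np.asarray(ru) + np.asarray(rd)) / 2.0
    d = np.asarray(gb) - np.asarray(gr)
    return float(np.sum(x * d) / np.sum(x * x))
```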
- The coefficients Ka to Kf may also be calculated one by one on a per-cause basis. For example, the ringing which occurs when a predetermined pulse wave is inputted into the reading amplifier 90 may be measured to obtain Ka. Otherwise, by photographing a test chart in which only a single pixel is a black point and all the other pixels are illuminated by red light, the signal levels outputted from the black point and its peripheral pixels may be measured to obtain Ke (in the case of a CCD-type image sensor, where Kf≈0).
- <Effect of this Embodiment>
- In this embodiment, the modified value of Gb is calculated in consideration of the foregoing reasons of the difference, on the basis of the pixel signals of four pixels with different colors which are vertically and horizontally adjacent to the green pixel corresponding to Gb (step S3). Then, Gb, which tends to be larger than Gr in general, is corrected to be smaller by use of the obtained modified value (step S4). Therefore, when a subject with even color is photographed, Gb is so corrected as to be equal to Gr. In other words, it is possible to suppress line crawling without smoothing an image.
- Accordingly, when photographed image data is subjected to processing for suppressing striped noise according to this embodiment, image data after the processing is not smoothed. Thus, it is possible to effectively subject image data to edge enhancing processing. As a result, it is possible to obtain a fine image with sharpness.
- In the step S2, the judging section 54 judges whether or not Bl>Gb is satisfied with respect to every Gb, and inputs the judgment result into the striped noise suppressing section 60 as table data. The difference caused by the ringing is corrected only in Gb judged to be Bl>Gb, on the basis that the smaller Bl is, the smaller the difference of Gb caused by the ringing will be (equation (6)). Therefore, it is possible to correct the difference of Gb caused by the ringing effectively.
- The pixel signals of the red pixel and the blue pixel are also corrected with the use of the coefficients Kd and Ke for correcting the difference of Gb caused by flare (equations (8) and (9)). This processing means that light which is obliquely incident on the microlenses over the surfaces of the red pixel and the blue pixel and is mixed into the adjacent green pixel as noise is returned to the pixel signals of the red and blue pixels. Therefore, it is possible to increase the reproducibility of the color of a photographed subject.
- In the case of the CMOS-type image sensor, in a like manner, the pixel signal of the red pixel is also corrected with the use of the coefficient Kf, which corrects the difference caused by the red light and infrared light reflected in the bulk (equation (8)). This processing means that light which passes through the charge accumulating region of the red pixel and is mixed into the peripheral pixels as noise is returned to the pixel signal of the red pixel. Therefore, it is possible to increase the reproducibility of the color of a photographed subject.
- Furthermore, the coefficients Ka to Kf are rewritably stored in the EEPROM 32. Therefore, the coefficients Ka to Kf can be recalculated and stored repeatedly, which increases user-friendliness. This is because it is assumed that, for example, a user who has used the imaging device 12 for a long term requests readjustment at a service center.
- <Supplements of this Embodiment>
- [1] In this embodiment, the color image sensor uses the Bayer pattern color filter array. The present invention, however, is not limited to such an embodiment. The color image sensor may use, for example, a honeycomb array, a complementary color array, and the like. In the case of a CCD-type image sensor with the honeycomb array, the horizontal direction described in the claims corresponds to, for example, a direction inclined approximately 45 degrees clockwise or counterclockwise with respect to the elongation direction of a horizontal transfer section in the light-receiving surface. The vertical direction is the direction orthogonal to the horizontal direction in the light-receiving surface.
- [2] In this embodiment, the imaging device 12 composes an electronic camera. The present invention, however, is not limited to such an embodiment. The method for suppressing striped noise according to the present invention is also applicable to an imaging device of a scanner and the like.
- [3] In this embodiment, whether or not Bl>Gb is satisfied is judged in step S2, and the difference caused by the ringing is corrected only in Gb satisfying the inequality, by use of the term of the coefficient Ka. The present invention, however, is not limited to such an embodiment.
- When it is known that the main cause of the difference is the ringing in the output stage of the CCD 44, for example, the calculation processing may be simplified in either of the following two ways.
- Firstly, the difference of Gb is corrected by use of the term of the coefficient Ka only when Bl, outputted from the pixel which immediately precedes the pixel corresponding to Gb in transfer order in the output stage of the CCD 44, is equal to or more than a predetermined value (corresponding to the predetermined value A described in the claims).
- Secondly, the difference of Gb is corrected by use of the term of the coefficient Ka only when the value obtained by dividing Bl from the adjacent pixel by Gb is equal to or more than a predetermined value (corresponding to the predetermined value B described in the claims). "Difference in levels of both signals" described in the claims corresponds to, for example, the ratio of the signal levels of the two signals.
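The two simplified judgments can be written as small predicates. The parameters `threshold_a` and `threshold_b` stand for the predetermined values A and B; both are device-specific calibration values, and the function names are hypothetical.

```python
def ringing_correction_needed_by_level(Bl, threshold_a):
    """First simplification: correct only when the preceding Bl itself
    is at or above the predetermined value A."""
    return Bl >= threshold_a

def ringing_correction_needed_by_ratio(Bl, Gb, threshold_b):
    """Second simplification: correct only when the ratio Bl/Gb is at or
    above the predetermined value B (the 'difference in levels' of the
    claims, interpreted as a ratio)."""
    return Gb > 0 and Bl / Gb >= threshold_b
```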
- [4] In this embodiment, Gb is corrected with the use of the pixel signals of the four pixels with different colors, which are adjacent to the green pixel corresponding to Gb in the horizontal and vertical directions. The present invention, however, is not limited to such an embodiment.
- For example, Gb may be corrected with the use of only one or both of the pixel signals from the two blue pixels horizontally adjacent to the green pixel corresponding to Gb. When Gb is corrected with the use of only pixels adjacent in the horizontal direction, in this manner, necessary line memory can be reduced to one-third, as compared with the case of correction by use of the vertically adjacent two pixels.
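A streaming sketch of the horizontal-only variant: each Gb is corrected from a blue sample in the same readout line, so no additional line memory is required. The alternating B, G ordering, the single lumped coefficient K, and the Bl > Gb gate are assumptions made for illustration.

```python
def correct_row(row, K):
    """Correct every Gb in one readout row using only the horizontally
    preceding blue sample Bl.

    Assumes the row alternates B, Gb, B, Gb, ... with index 0 nearest
    the reading amplifier, so row[i - 1] is the Bl that immediately
    precedes each Gb in horizontal transfer order.
    """
    out = list(row)
    for i in range(1, len(row), 2):  # Gb positions
        bl = row[i - 1]
        if bl > row[i]:              # ringing considered significant
            out[i] = row[i] - bl * K
    return out
```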
- Otherwise, Gb may be corrected by use of only one or both of the pixel signals from the two red pixels, which are vertically adjacent to the green pixel corresponding to Gb.
- Otherwise, Gb may be corrected by use of only Bl outputted from the blue pixel horizontally adjacent to the green pixel corresponding to Gb on the side of the reading amplifier 90. Here, the Bl used in the correction is the pixel signal immediately preceding Gb in transfer order during horizontal transfer in the CCD 44. Also in this case, the necessary line memory can be reduced to one-third, as compared with the case of correction by use of the two vertically adjacent pixels.
- Otherwise, Gb may be corrected by use of only Rd outputted from the red pixel vertically adjacent to the green pixel corresponding to Gb on the side of the horizontal CCD 88. The Rd used in the correction is the pixel signal immediately preceding Gb in transfer order during vertical transfer in the CCD 44.
- [5] In this embodiment, Gb is corrected without correcting Gr, so that Gr and Gb become equal to each other when shooting a subject of even color. The present invention, however, is not limited to such an embodiment. For example, both Gb and Gr may be corrected by the following procedure.
- As is the case with Gb′, Gr′ represents a corrected value of Gr, and Rl and Rr represent the pixel signals from the red pixels horizontally adjacent to the pixel corresponding to Gr (refer to Gr shown in white text on black in FIG. 2). Gb′ and Gr′ are calculated by the following equations.
- Gb′=Gb−{(Bl+Br)/2}×Km (10)
- Gr′=Gr−{(Rl+Rr)/2}×Kn (11)
- In the above equations, the coefficients Km and Kn, both positive, are obtained by the above-mentioned test photography and the like. As described above, since Gb tends to become larger than Gr, Km and Kn satisfy Km>Kn. Km takes into account the difference caused by the ringing of Bl, flare on the light-receiving surface, and the like. Kn takes into account the difference caused by the ringing of Rl, flare on the light-receiving surface, and the mixture of red light and infrared light reflected in the bulk. Also in this case, the necessary line memory can be reduced to one-third, as compared with the case of correction by use of the two vertically adjacent pixels.
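A numeric sketch of this paired correction follows. The correction terms are subtracted here, an assumption consistent with the direction of equation (6) and with the requirement that positive Km > Kn pull Gb down toward Gr; all variable names are illustrative.

```python
def correct_green_pair(gb, gr, bl, br, rl, rr, km, kn):
    """Correct both green signals, with the correction term subtracted:
        Gb' = Gb - ((Bl + Br) / 2) * Km
        Gr' = Gr - ((Rl + Rr) / 2) * Kn
    Km > Kn > 0 are obtained from test photography of an even-color
    subject; the larger Km reflects the larger contamination of Gb."""
    gb_prime = gb - ((bl + br) / 2) * km
    gr_prime = gr - ((rl + rr) / 2) * kn
    return gb_prime, gr_prime
```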
- [6] In this embodiment, as shown in the equations (1) and (2), the difference caused by the ringing is corrected on the basis of the signal level of the immediately preceding pixel in transfer order. The present invention, however, is not limited to such an embodiment.
- The difference caused by the ringing may be corrected on the basis of the difference between the signal level of Gb and the signal level of the pixel immediately preceding Gb in transfer order. In this case, the foregoing equation (6) is changed to the following equation (12).
- Gb′=Gb−(Bl−Gb)×(Ka′+Kb′)−Rd×Kc−{(Bl+Br)/2}×Kd−{(Ru+Rd)/2}×(Ke+Kf) (12)
- In the equation (12), Ka′ represents a correction coefficient which corresponds to Ka described above, and Kb′ represents a correction coefficient which corresponds to Kb described above.
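Equation (12) can be transcribed directly as a sketch; the function and parameter names are illustrative (Ka′ and Kb′ are written ka_p and kb_p):

```python
def correct_gb_eq12(gb, bl, br, rd, ru, ka_p, kb_p, kc, kd, ke, kf):
    """Equation (12): the ringing term is driven by the level difference
    (Bl - Gb) between Gb and the immediately preceding pixel, instead of
    by the level of Bl alone as in equation (6)."""
    return (gb
            - (bl - gb) * (ka_p + kb_p)     # term with coefficients Ka' + Kb'
            - rd * kc                       # term with coefficient Kc
            - ((bl + br) / 2) * kd          # term with coefficient Kd
            - ((ru + rd) / 2) * (ke + kf))  # term with coefficients Ke + Kf
```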
- The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (24)
1. An imaging device comprising:
a color image sensor having “a first color component pixel”, “a second color component pixel”, and “a third color component pixel including more intensity information than said first color component pixel and said second color component pixel”, said first to third color component pixels being regularly arranged in a two-dimensional matrix, each of said first to third color component pixels generating a pixel signal in accordance with an amount of light received, and said color image sensor transferring and outputting said each pixel signal in succession;
a difference estimating section estimating difference in a signal level exerted on said pixel signal of said third color component pixel, with the use of said pixel signal of a pixel peripheral to said third color component pixel; and
a correcting section correcting said pixel signal of said third color component pixel on the basis of the estimated difference.
2. The imaging device according to claim 1, wherein
said difference estimating section estimates said difference caused by ringing, on the basis of
a level of said pixel signal of a pixel immediately preceding said third color component pixel in output order of said pixel signal outputted from said color image sensor, or
variation in levels between the third color component pixel and the immediately preceding pixel thereof.
3. The imaging device according to claim 2, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a horizontal direction.
4. The imaging device according to claim 2, wherein
said pixel peripheral to said third color component pixel is a pixel immediately preceding said third color component pixel in transfer order in said color image sensor.
5. The imaging device according to claim 1, wherein
said difference estimating section estimates said difference caused by flare on a light-receiving surface of said third color component pixel.
6. The imaging device according to claim 5, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a horizontal direction.
7. The imaging device according to claim 5, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a vertical direction.
8. The imaging device according to claim 5, wherein
said pixel peripheral to said third color component pixel is a pixel immediately preceding said third color component pixel in transfer order in said color image sensor.
9. The imaging device according to claim 1, wherein:
said first color component pixel is a red pixel selectively receiving red light;
said second color component pixel is a blue pixel selectively receiving blue light;
said third color component pixel is a green pixel selectively receiving green light; and
said difference estimating section estimates said difference on the basis of a level of said pixel signal of said red pixel, said difference being caused by a phenomenon that red light and infrared light passing through a charge accumulating region in said color image sensor are reflected in a bulk and mixed into a peripheral pixel.
10. The imaging device according to claim 9, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a horizontal direction.
11. The imaging device according to claim 9, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a vertical direction.
12. The imaging device according to claim 9, wherein
said pixel peripheral to said third color component pixel is a pixel immediately preceding said third color component pixel in transfer order in said color image sensor.
13. The imaging device according to claim 1, further comprising:
a judging section judging whether or not a level of said pixel signal is equal to or more than a predetermined value A, the level of said pixel signal being transferred from a pixel immediately preceding said third color component pixel in transfer order in said color image sensor, and said judging section causing said correcting section to operate only when the judged level is equal to or more than said predetermined value A.
14. The imaging device according to claim 13, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a horizontal direction.
15. The imaging device according to claim 13, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a vertical direction.
16. The imaging device according to claim 13, wherein
said pixel peripheral to said third color component pixel is a pixel immediately preceding said third color component pixel in transfer order in said color image sensor.
17. The imaging device according to claim 1, further comprising
a judging section comparing a level of said pixel signal transferred from a pixel immediately preceding said third color component pixel in transfer order in said color image sensor, with a level of said pixel signal of said third color component pixel, and said judging section causing said correcting section to operate only when difference in levels of both signals is equal to or more than a predetermined value B.
18. The imaging device according to claim 17, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a horizontal direction.
19. The imaging device according to claim 17, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a vertical direction.
20. The imaging device according to claim 17, wherein
said pixel peripheral to said third color component pixel is a pixel immediately preceding said third color component pixel in transfer order in said color image sensor.
21. The imaging device according to claim 1, wherein
said pixel peripheral to said third color component pixel is at least one of two pixels adjacent to said third color component pixel in a horizontal direction.
22. The imaging device according to claim 1, wherein
said pixel peripheral to said third color component pixel is a pixel immediately preceding said third color component pixel in transfer order in said color image sensor.
23. The imaging device according to claim 1, further comprising
an analog-to-digital conversion section performing analog-to-digital conversion on said pixel signal outputted from said color image sensor, wherein:
said difference estimating section estimates said difference with the use of said pixel signal after analog-to-digital conversion; and
said correcting section corrects said pixel signal after analog-to-digital conversion.
24. An imaging device comprising:
a color image sensor having pixels of at least two types of color components (one of which is a first color component pixel), said pixels being regularly arranged in a two-dimensional matrix, each of said pixels generating a pixel signal in accordance with an amount of light received, and said color image sensor transferring and outputting said each pixel signal in succession;
a difference estimating section estimating difference in a signal level exerted on said pixel signal of said first color component pixel, with the use of said pixel signal of the pixel that is adjacent to and different from said first color component pixel; and
a correcting section correcting said pixel signal of said first color component pixel on the basis of the estimated difference.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/285,984 US7792358B2 (en) | 2003-06-04 | 2008-10-17 | Imaging device performing color image data processing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-159459 | 2003-06-04 | ||
JP2003159459A JP4379006B2 (en) | 2003-06-04 | 2003-06-04 | Imaging device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/285,984 Continuation US7792358B2 (en) | 2003-06-04 | 2008-10-17 | Imaging device performing color image data processing |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040246352A1 (en) | 2004-12-09 |
Family
ID=33157181
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/856,842 Abandoned US20040246352A1 (en) | 2003-06-04 | 2004-06-01 | Imaging device |
US12/285,984 Expired - Fee Related US7792358B2 (en) | 2003-06-04 | 2008-10-17 | Imaging device performing color image data processing |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/285,984 Expired - Fee Related US7792358B2 (en) | 2003-06-04 | 2008-10-17 | Imaging device performing color image data processing |
Country Status (4)
Country | Link |
---|---|
US (2) | US20040246352A1 (en) |
EP (1) | EP1484928B1 (en) |
JP (1) | JP4379006B2 (en) |
CN (4) | CN102006488B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100825172B1 (en) * | 2004-04-05 | 2008-04-24 | 미쓰비시덴키 가부시키가이샤 | Imaging device |
JP4717371B2 (en) * | 2004-05-13 | 2011-07-06 | オリンパス株式会社 | Image processing apparatus and image processing program |
JP4402161B1 (en) * | 2009-03-30 | 2010-01-20 | マスレ ホールディングス エルエルピー | Imaging apparatus, image reproducing apparatus, and imaging method |
JP5269016B2 (en) * | 2010-09-10 | 2013-08-21 | 株式会社東芝 | Image processing device |
JP5648010B2 (en) * | 2012-03-30 | 2015-01-07 | 富士フイルム株式会社 | Device operating method, photographing apparatus and electronic endoscope apparatus |
JP6136669B2 (en) * | 2013-07-08 | 2017-05-31 | 株式会社ニコン | Imaging device |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4652928A (en) * | 1983-06-15 | 1987-03-24 | Kabushiki Kaisha Toshiba | Solid state image sensor with high resolution |
US4701784A (en) * | 1984-06-01 | 1987-10-20 | Matsushita Electric Industrial Co., Ltd. | Pixel defect correction apparatus |
US4945406A (en) * | 1988-11-07 | 1990-07-31 | Eastman Kodak Company | Apparatus and accompanying methods for achieving automatic color balancing in a film to video transfer system |
US5361147A (en) * | 1990-02-06 | 1994-11-01 | Canon Kabushiki Kaisha | Method and apparatus for encoding and decoding color images |
US5754678A (en) * | 1996-01-17 | 1998-05-19 | Photon Dynamics, Inc. | Substrate inspection apparatus and method |
US5909505A (en) * | 1990-02-06 | 1999-06-01 | Canon Kabushiki Kaisha | Color image encoding method and apparatus |
US20020012053A1 (en) * | 2000-02-04 | 2002-01-31 | Olympus Optical Co., Ltd. | Imaging apparatus |
US20020015111A1 (en) * | 2000-06-30 | 2002-02-07 | Yoshihito Harada | Image processing apparatus and its processing method |
US20020039143A1 (en) * | 2000-10-02 | 2002-04-04 | Mega Chips Corporation | Image processing circuit |
US20020158977A1 (en) * | 2001-02-19 | 2002-10-31 | Eastman Kodak Company | Correcting defects in a digital image caused by a pre-existing defect in a pixel of an image sensor |
US20020163674A1 (en) * | 2001-02-19 | 2002-11-07 | Eastman Kodak Company | Correcting for defects in a digital image taken by an image sensor caused by pre-existing defects in two pixels in adjacent columns of an image sensor |
US20030025813A1 (en) * | 2000-07-31 | 2003-02-06 | Kazuhisa Yoshiwara | Method of detecting defective pixels of a solid-state image-pickup device and image-pickup apparatus using the same |
US20030030738A1 (en) * | 2001-05-22 | 2003-02-13 | Clynes Steven Derrick | On-chip 2D adjustable defective pixel filtering for CMOS imagers |
US20040169746A1 (en) * | 1999-03-15 | 2004-09-02 | Chen Zhiliang Julian | Defective pixel filtering for digital imagers |
US6788340B1 (en) * | 1999-03-15 | 2004-09-07 | Texas Instruments Incorporated | Digital imaging control with selective intensity resolution enhancement |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS56147578A (en) | 1980-04-18 | 1981-11-16 | Nec Corp | Correction system of mixed color for solidstate color image pickup device |
JP3345182B2 (en) | 1994-08-03 | 2002-11-18 | 松下電器産業株式会社 | Driving method of solid-state imaging device |
JP3466855B2 (en) * | 1997-02-07 | 2003-11-17 | 株式会社リコー | Image reading device |
JP4022962B2 (en) * | 1997-12-11 | 2007-12-19 | ソニー株式会社 | Signal processing circuit and solid-state image sensor output signal processing method |
DE10042633C2 (en) * | 2000-08-30 | 2002-06-20 | Infineon Technologies Ag | Detection of a device connection status with the USB |
CN1186921C (en) * | 2001-06-08 | 2005-01-26 | 光宝科技股份有限公司 | Digital signal compensator |
2003
- 2003-06-04 JP JP2003159459A patent/JP4379006B2/en not_active Expired - Lifetime
2004
- 2004-06-01 US US10/856,842 patent/US20040246352A1/en not_active Abandoned
- 2004-06-03 EP EP04013188.0A patent/EP1484928B1/en not_active Expired - Fee Related
- 2004-06-04 CN CN201010502819XA patent/CN102006488B/en not_active Expired - Fee Related
- 2004-06-04 CN CN2010105028185A patent/CN102006487B/en not_active Expired - Fee Related
- 2004-06-04 CN CN2004100484010A patent/CN1574902B/en not_active Expired - Fee Related
- 2004-06-04 CN CN2010105028170A patent/CN102006486B/en not_active Expired - Fee Related
2008
- 2008-10-17 US US12/285,984 patent/US7792358B2/en not_active Expired - Fee Related
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060146149A1 (en) * | 2005-01-05 | 2006-07-06 | Eastman Kodak Company | Hue correction for electronic imagers |
US7656441B2 (en) * | 2005-01-05 | 2010-02-02 | Eastman Kodak Company | Hue correction for electronic imagers |
US20060238629A1 (en) * | 2005-04-25 | 2006-10-26 | Hidehiko Sato | Pixel defect correction device |
US8035702B2 (en) * | 2005-04-25 | 2011-10-11 | Eastman Kodak Company | Pixel defect correction device for line crawl |
US20070177032A1 (en) * | 2006-01-27 | 2007-08-02 | Nethra Imaging | Automatic color calibration of an image sensor |
US7586521B2 (en) * | 2006-01-27 | 2009-09-08 | Nethra Imaging Inc. | Automatic color calibration of an image sensor |
US20090316984A1 (en) * | 2006-07-25 | 2009-12-24 | Ho-Young Lee | Color interpolation method and device considering edge direction and cross stripe noise |
US8229213B2 (en) * | 2006-07-25 | 2012-07-24 | Mtekvision Co., Ltd. | Color interpolation method and device considering edge direction and cross stripe noise |
US20130188023A1 (en) * | 2012-01-23 | 2013-07-25 | Omnivision Technologies, Inc. | Image sensor with optical filters having alternating polarization for 3d imaging |
US9177983B2 (en) * | 2012-01-23 | 2015-11-03 | Omnivision Technologies, Inc. | Image sensor with optical filters having alternating polarization for 3D imaging |
US9454693B2 (en) | 2013-03-28 | 2016-09-27 | Fujitsu Limited | Image correction apparatus, image correction method, and biometric authentication apparatus |
Also Published As
Publication number | Publication date |
---|---|
JP4379006B2 (en) | 2009-12-09 |
US20090122166A1 (en) | 2009-05-14 |
CN102006487B (en) | 2012-08-08 |
CN102006487A (en) | 2011-04-06 |
CN102006488A (en) | 2011-04-06 |
EP1484928B1 (en) | 2015-03-25 |
CN1574902A (en) | 2005-02-02 |
US7792358B2 (en) | 2010-09-07 |
CN1574902B (en) | 2013-06-12 |
CN102006486B (en) | 2012-08-08 |
EP1484928A2 (en) | 2004-12-08 |
CN102006488B (en) | 2012-08-08 |
CN102006486A (en) | 2011-04-06 |
EP1484928A3 (en) | 2007-07-11 |
JP2004363902A (en) | 2004-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7792358B2 (en) | Imaging device performing color image data processing | |
US7990447B2 (en) | Solid-state image sensor | |
US6813046B1 (en) | Method and apparatus for exposure control for a sparsely sampled extended dynamic range image sensing device | |
US8300120B2 (en) | Image processing apparatus and method of processing image for reducing noise of the image | |
US6924841B2 (en) | System and method for capturing color images that extends the dynamic range of an image sensor using first and second groups of pixels | |
US10015424B2 (en) | Method and apparatus for eliminating crosstalk amount included in an output signal | |
US6937777B2 (en) | Image sensing apparatus, shading correction method, program, and storage medium | |
JP4768448B2 (en) | Imaging device | |
EP1209903B1 (en) | Method and system of noise removal for a sparsely sampled extended dynamic range image | |
US8208038B2 (en) | Image signal processing device and image signal processing method | |
KR101639382B1 (en) | Apparatus and method for generating HDR image | |
TWI422234B (en) | An image signal correcting means, an image capturing means, an image signal correcting means, and an image signal correcting processing means | |
KR20060000715A (en) | Apparatus and method for improving image quality in a image sensor | |
US20030063185A1 (en) | Three-dimensional imaging with complementary color filter arrays | |
EP1173010A2 (en) | Method and apparatus to extend the effective dynamic range of an image sensing device | |
JP2005198319A (en) | Image sensing device and method | |
US20060017824A1 (en) | Image processing device, image processing method, electronic camera, and scanner | |
US7202895B2 (en) | Image pickup apparatus provided with image pickup element including photoelectric conversion portions in depth direction of semiconductor | |
JP2006319827A (en) | Solid-state imaging device and image correcting method | |
JP4993275B2 (en) | Image processing device | |
JP6837652B2 (en) | Imaging device and signal processing method | |
Yamashita et al. | Wide-dynamic-range camera using a novel optical beam splitting system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NIKON CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SUZUKI, MASAHIRO;REEL/FRAME:015405/0742 Effective date: 20040526 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |