US20070201760A1 - Flat-field, panel flattening, and panel connecting methods - Google Patents

Flat-field, panel flattening, and panel connecting methods

Info

Publication number
US20070201760A1
Authority
US
United States
Prior art keywords
panel
map
curvature
image
panels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/740,878
Inventor
Carl Brown
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Global Life Sciences Solutions USA LLC
Original Assignee
Applied Precision Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Applied Precision Inc filed Critical Applied Precision Inc
Priority to US11/740,878 priority Critical patent/US20070201760A1/en
Publication of US20070201760A1 publication Critical patent/US20070201760A1/en
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECURITY AGREEMENT Assignors: APPLIED PRECISION, INC.
Assigned to APPLIED PRECISION, INC. reassignment APPLIED PRECISION, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: APPLIED PRECISION, LLC
Assigned to APPLIED PRECISION, INC. reassignment APPLIED PRECISION, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: SILICON VALLEY BANK
Abandoned legal-status Critical Current

Classifications

    • G06T5/70
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/41Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10064Fluorescence image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30072Microarray; Biochip, DNA array; Well plate

Abstract

A plurality of panels are assembled into a single image. Each of the panels may have different intensities throughout the panel, as well as non-uniformities between panels. The panels are modified using flat-field calibration, panel flattening, and panel connecting techniques. These techniques correct for non-uniformities and provide a cleaner, single image.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims benefit of U.S. Provisional Application No. 60/178,476, filed Jan. 27, 2000.
  • TECHNICAL FIELD
  • This invention relates to image analysis, and more particularly to correcting for non-uniformities among several panels of a single image.
  • BACKGROUND
  • Biomedical research has made rapid progress based on sequential processing of biological samples. Sequential processing techniques have resulted in important discoveries in a variety of biologically related fields, including, among others, genetics, biochemistry, immunology and enzymology. Historically, sequential processing involved the study of one or two biologically relevant molecules at the same time. These original sequential processing methods, however, were quite slow and tedious. Study of the required number of samples (up to tens of thousands) was time consuming and costly.
  • A breakthrough in the sequential processing of biological specimens occurred with the development of techniques of parallel processing of the biological specimens, using fluorescent marking. A plurality of samples are arranged in arrays, referred to herein as microarrays, of rows and columns into a field, on a substrate slide or similar member. The specimens on the slide are then biochemically processed in parallel. The specimen molecules are fluorescently marked as a result of interaction between the specimen molecule and other biological material. Such techniques enable the processing of a large number of specimens very quickly.
  • Some applications for imaging require two apparently contradictory attributes: high resolution and high content. The resolution requirement is driven by the need to have detail in the image that exceeds by at least 2× the information content of the object being imaged (the so-called Nyquist limit). The content requirement is driven by the need to have information over a large area. One method that addresses these needs is to acquire a plurality of individual images with high spatial resolution (panels) and to collect these panels over adjacent areas so as to encompass the large desired area. The multiple panels can then be assembled into a single large image based on the relative location of the optics and the sample when each panel was collected. When assembling the plurality of panels into a single montage, a number of steps may be taken to correct for intensity non-uniformities within each panel (known herein as flat-field calibration and panel flattening) as well as non-uniformities in the panel-to-panel intensities.
  • DESCRIPTION OF DRAWINGS
  • These and other features and advantages of the invention will become more apparent upon reading the following detailed description and upon reference to the accompanying drawings.
  • FIG. 1 is a flat-field calibration map showing the overall curvature and offset maps according to one embodiment of the present invention.
  • FIG. 2 is a close-up view of a 20×20 region of the inverse gain map and offset map of FIG. 1.
  • FIG. 3 illustrates an image before and after applying curvature flattening according to one embodiment of the present invention.
  • DETAILED DESCRIPTION
  • To create a large image, a plurality of smaller images are collected by a detector and assembled into a single large image. Each of the plurality of smaller images collected by the detector may be affected by a combination of the non-uniform optics and detector response. In the case of the optics, illumination vignetting and collection vignetting introduce a substantial intensity curvature to the images collected by the detector. Non-uniform detector response comes in the form of gain and offset differences among all the detector elements.
  • To correct for these errors, a series of images are acquired that range from dark current (no exposure) to near full-well. Linear regression of each pixel in the detector yields a slope (gain) and intercept (offset). That is, for each pixel the following equation is solved for m and b:
    Measured_image=Desired_image*m+b
    Flat-field calibration is then accomplished with the following calculation (again for each pixel):
    Desired_image=(Measured_image−offset_map)/gain_map
    where m has been replaced with "gain_map" and b with "offset_map".
  • The gain and offset maps correct for the illumination optics, collection optics, and detector non-uniformity at the same time.
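  • As an illustration of the per-pixel regression described above, the following is a minimal sketch assuming NumPy; the function and array names (fit_gain_offset, flat_field, calib_stack, reference_levels) are hypothetical and do not appear in the patent. It fits a slope and an intercept at every pixel and then applies the flat-field formula:
    import numpy as np

    def fit_gain_offset(calib_stack, reference_levels):
        """calib_stack: (n_frames, ny, nx) images spanning dark current to near full-well.
        reference_levels: (n_frames,) expected intensity of each calibration frame."""
        x = np.asarray(reference_levels, dtype=np.float64)   # desired signal per frame
        y = calib_stack.astype(np.float64)                   # measured signal per pixel
        x_mean = x.mean()
        y_mean = y.mean(axis=0)
        # Least-squares fit of Measured = Desired * m + b at every pixel:
        gain_map = ((x[:, None, None] - x_mean) * (y - y_mean)).sum(axis=0) / ((x - x_mean) ** 2).sum()
        offset_map = y_mean - gain_map * x_mean              # regression intercept
        return gain_map, offset_map

    def flat_field(measured_image, gain_map, offset_map):
        # Desired_image = (Measured_image - offset_map) / gain_map
        return (measured_image - offset_map) / gain_map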
  • Flat-field calibration maps that correct the image field curvature and offset problem do so at the expense of adding noise to the image. Both maps contain measurement noise that is then passed on to the calibrated image. The gain map contains noise that is mostly photon counting noise (“shot noise”), whereas the offset map is dominated by the electronic read-noise of the CCD camera.
  • To correct for the offset map noise, the average dark current image (no exposure) may be used instead of the linear regression result. That is, the offset-map used to flat-field images is the average of many dark current images rather than the intercept calculated by the linear regression. Experience has shown that the intercept is inherently noisy (the intercept is measured at the low signal-to-noise part of the camera range). Use of the calculated offset map reduces the sensitivity of the instrument by increasing the baseline noise. The offset map shown in FIGS. 1 and 2 is the average dark current. The calculated intercept would have about double the noise of the average dark current.
  • Averaging multiple frames for each measurement improves the signal-to-noise of the data and reduces the noise in the resulting gain and offset maps (in the event that the calculated offset map is used for flat-fielding).
  • Another technique is to smooth the gain map with a low-pass filter.
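  • A minimal sketch of these two refinements, assuming NumPy and SciPy; the dark-frame stack and the Gaussian smoothing width are illustrative choices, not values taken from the patent:
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def offset_from_dark_frames(dark_stack):
        # Average many dark-current frames instead of using the noisier regression intercept.
        return dark_stack.astype(np.float64).mean(axis=0)

    def smooth_gain_map(gain_map, sigma=2.0):
        # Low-pass filter the gain map to suppress pixel-to-pixel measurement noise.
        return gaussian_filter(gain_map, sigma=sigma)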
  • Perfectly uniform flat-field calibration slides are nearly impossible to fabricate. Non-uniform fluorescence is typical even with very carefully prepared slides. However, moving the calibration slide during camera exposure averages non-uniform fluorescent response of the slide. Flat-field calibration maps can be generated from significantly lower quality calibration slides.
  • FIGS. 1 and 2 illustrate flat-field calibration maps made from uniformly fluorescent calibration slides. The gain map 105, 205 contains approximately 0.3% noise, whereas the offset map 110, 210 contributes 1.24 counts of noise (gain correction is multiplicative, offset is additive).
  • Although flat-field calibration is an effective technique, the technique introduces noise. Cleaning the flat-field calibration maps could yield substantial improvements in image quality. In particular, further reduction of offset map noise would improve low-end sensitivity. The CCD camera used to collect the maps above has about 1.77 counts of read-noise. Adding the offset map noise (in quadrature) yields about 2.2 counts of baseline noise, a 24% increase.
  • Another problem is that the intensity curvature of the panels creates a visible artifact. FIG. 3 illustrates an image 300 without any curvature correction. A combination of illumination vignetting and collection vignetting leads to more brightness or higher collection efficiency, respectively, in the center of the field-of-view. Even when flat-fielding techniques have been applied to the panels, a variety of factors contribute to a residual curvature. For instance, lamp fluctuation and camera bias instability change the general intensity level of the acquired image and affect the standard flat-fielding calculation, which is:
    flat_image=(acquired_image−offset_map)/gain_map.
    Small errors in the offset map cause the gain map (which is usually curved) to introduce a field curvature. The more curvature that exists in the acquired image, the greater the potential for residual curvature.
  • Because the intensity curvature is typically consistent from one panel to the next, averaging the intensity profile of each panel gives an average curvature map. Dividing each panel by the curvature map is then a way to flatten the intensity curvature that is consistent among all panels. Normalizing the curvature map by the average intensity, or similar value, of the curvature map allows the calculation to be performed without altering the net intensity scale of the image.
  • One example of how to average the intensity profile of each panel is to perform the following procedure for each pixel in each panel. First, if the pixel in the current panel is not signal, apply the following equations:
    Accumulator_map=accumulator_map+pixel_intensity
    Accumulation_counter_map=accumulation_counter_map+1
    Second, for all pixels within the accumulator_map, calculate the curvature map using the following technique:
      • If accumulation_counter_map is greater than 0
        Curvature_map=accumulator_map/accumulation_counter_map
      • Otherwise
        Curvature_map=average of neighboring curvature values
    This creates a curvature flattening map that is defined as:
      Curvature_flattener=1/curvature_map
  • The procedure may be refined in several manners. First, the curvature map may be smoothed to reduce the sensitivity to noise and spurious signals in the average curvature image. Second, only the pixels from each panel that are not significantly above the background intensity may be averaged. A histogram of each panel is used to distinguish background areas (desired) from image signals (undesired). A map of the number of pixels added to each point in the curvature map is then required to calculate the average since not all panels contribute information to each pixel in the curvature map. Pixels that contain no information can be synthesized from the average of neighboring pixels. Third, the curvature map may be curve-fitted using a weighting scheme that emphasizes relatively low intensity values. Curve-fitting would be useful for reducing noise. The goal of curve-fitting is to measure only the background curvature and reduce the influence of the image signal. Other refinements include averaging many small panels, which reduces sensitivity to image signal corruption, and over-scanning the desired image area to provide more panels for averaging and panels that contain only the background intensity curvature.
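  • A sketch of the panel-flattening procedure above, assuming NumPy; the background masks are assumed to come from a histogram-based background test as described, and the fill for pixels with no contribution is simplified to a global mean so the example stays short:
    import numpy as np

    def build_curvature_map(panels, background_masks):
        """panels: (n_panels, ny, nx) array of panel intensities.
        background_masks: same shape, True where a pixel is background (not signal)."""
        acc = np.zeros(panels.shape[1:], dtype=np.float64)
        count = np.zeros(panels.shape[1:], dtype=np.float64)
        for panel, mask in zip(panels, background_masks):
            acc[mask] += panel[mask]        # Accumulator_map
            count[mask] += 1                # Accumulation_counter_map
        curvature = np.where(count > 0, acc / np.maximum(count, 1), np.nan)
        # Pixels with no contribution: synthesize from surrounding values
        # (a global mean here; the text averages neighboring curvature values).
        curvature = np.where(np.isnan(curvature), np.nanmean(curvature), curvature)
        # Normalize so flattening does not change the net intensity scale.
        curvature /= curvature.mean()
        return curvature

    def flatten_panel(panel, curvature_map):
        # Dividing by the normalized curvature map (equivalently, multiplying by
        # Curvature_flattener = 1 / curvature_map) removes the shared curvature.
        return panel / curvature_map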
  • Another problem with combining a plurality of small images to form one large image is that small discontinuities between adjacent panels become visible. Intensity differences of 1-2 counts are readily detected by the human eye, even in the presence of 1-2 counts of random noise and when important information is much more intense. The remaining discontinuities create a visible stitching artifact. Examples of the discontinuities may be seen in the image 300 of FIG. 3.
  • To correct this problem, a panel edge connection technique is performed. In this technique, the border of each panel is compared with all neighbors to the left, right, top, and bottom. This comparison generates border intensity scaling values for the entire boundary of each panel. The boundary may then be scaled so that the result is halfway between the boundary of the current panel and the adjacent panel. The intensities are then connected at the halfway point between the adjacent border intensities. The boundary scaling may be applied to each pixel in the panel based on the distance from the four boundaries. A weighted combination of the scaling factors is used such that a continuous intensity ramp is applied from one boundary to the next. (In the middle of the image, the scaling factor should be the average of the left, right, top, and bottom scaling factors.) Some examples of the weighting methods include inverse square weighting and inverse weighting. These techniques may be implemented using the following formulas:
  • Inverse square weighting:
    Left_weight=1/(i+1)^2
    Right_weight=1/(nx−i+1)^2
    Bottom_weight=1/(j+1)^2
    Top_weight=1/(ny−j+1)^2
  • Inverse weighting:
    Left_weight=1/(i+1)
    Right_weight=1/(nx−i+1)
    Bottom_weight=1/(j+1)
    Top_weight=1/(ny−j+1)
    Total_weight=Left_weight+Right_weight+Top_weight+Bottom_weight
  • Scaling Factors:
    Left_scale(j)=½*[Left_border(j)+Right_border_of_left_panel(j)]/Left_border(j)
    Right_scale(j)=½*[Right_border(j)+Left_border_of_right_panel(j)]/Right_border(j)
    Top_scale(i)=½*[Top_border(i)+Bottom_border_of_upper_panel(i)]/Top_border(i)
    Bottom_scale(i)=½*[Bottom_border(i)+Top_border_of_lower_panel(i)]/Bottom_border(i)
    Pixel(i,j) intensity scaling factor=[Left_scale(j)*Left_weight+Right_scale(j)*Right_weight+Bottom_scale(i)*Bottom_weight+Top_scale(i)*Top_weight]/Total_weight
  • Definitions:
  • nx Number of pixel columns
  • ny Number of pixel rows
  • i Column number (0 based)
  • j Row number (0 based)
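  • A sketch of the connection technique above, assuming NumPy, 0-based indices with j = 0 at the bottom border (to match the weight definitions), and 1-D border profiles taken from the facing edges of the four neighboring panels; power=2 gives inverse square weighting and power=1 gives inverse weighting:
    import numpy as np

    def connection_scale(panel, right_border_of_left_panel, left_border_of_right_panel,
                         bottom_border_of_upper_panel, top_border_of_lower_panel, power=2):
        """Return the per-pixel intensity scaling factor for one panel."""
        ny, nx = panel.shape
        i = np.arange(nx)[None, :]              # column index, shape (1, nx)
        j = np.arange(ny)[:, None]              # row index, shape (ny, 1)

        # Border scaling values: move each border halfway toward its neighbor.
        left_scale = 0.5 * (panel[:, 0] + right_border_of_left_panel) / panel[:, 0]      # (ny,)
        right_scale = 0.5 * (panel[:, -1] + left_border_of_right_panel) / panel[:, -1]   # (ny,)
        bottom_scale = 0.5 * (panel[0, :] + top_border_of_lower_panel) / panel[0, :]     # (nx,)
        top_scale = 0.5 * (panel[-1, :] + bottom_border_of_upper_panel) / panel[-1, :]   # (nx,)

        # Distance-based weights toward each border.
        left_w = 1.0 / (i + 1) ** power
        right_w = 1.0 / (nx - i + 1) ** power
        bottom_w = 1.0 / (j + 1) ** power
        top_w = 1.0 / (ny - j + 1) ** power
        total_w = left_w + right_w + top_w + bottom_w

        # Weighted combination gives a continuous intensity ramp across the panel;
        # multiplying the panel by this map applies the connection.
        scale = (left_scale[:, None] * left_w + right_scale[:, None] * right_w
                 + bottom_scale[None, :] * bottom_w + top_scale[None, :] * top_w) / total_w
        return scale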
  • Both connection and curvature flattening are important for panels with significant background intensity. An image having curvature flattening is shown in FIG. 3. Further refinements include median filtering the boundary scaling values to reduce sensitivity to outliers. Misalignment of the panels causes miscalculation of the scaling factors. The miscalculation is significant when bright (or dark) spots do not overlap along the borders of adjacent panels. Additionally, smoothing of the median filtered boundary scaling values may be used to remove spikes caused by alignment problems. Finally, the boundary scaling values may be curve-fit to find the general trend and avoid noise and misalignment.
  • Numerous variations and modifications of the invention will become readily apparent to those skilled in the art. Accordingly, the invention may be embodied in other specific forms without departing from its spirit or essential characteristics.

Claims (4)

1-14. (canceled)
15. A method of reducing discontinuities between adjacent panels in an image comprising:
comparing a border of each panel on all sides to generate border intensity scaling values; and
scaling a boundary of each panel to a point approximately midway between a current panel and an adjacent panel.
16. The method of claim 15, further comprising scaling the boundary of each panel using an inverse square weighting.
17. The method of claim 15, further comprising scaling the boundary of each panel using an inverse weighting.
US11/740,878 2000-01-27 2007-04-26 Flat-field, panel flattening, and panel connecting methods Abandoned US20070201760A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/740,878 US20070201760A1 (en) 2000-01-27 2007-04-26 Flat-field, panel flattening, and panel connecting methods

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17847600P 2000-01-27 2000-01-27
US09/771,343 US20010038717A1 (en) 2000-01-27 2001-01-26 Flat-field, panel flattening, and panel connecting methods
US10/872,293 US7228003B2 (en) 2000-01-27 2004-06-18 Flat-field, panel flattening, and panel connecting methods
US11/740,878 US20070201760A1 (en) 2000-01-27 2007-04-26 Flat-field, panel flattening, and panel connecting methods

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/872,293 Division US7228003B2 (en) 2000-01-27 2004-06-18 Flat-field, panel flattening, and panel connecting methods

Publications (1)

Publication Number Publication Date
US20070201760A1 true US20070201760A1 (en) 2007-08-30

Family

ID=22652686

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/771,343 Abandoned US20010038717A1 (en) 2000-01-27 2001-01-26 Flat-field, panel flattening, and panel connecting methods
US10/872,293 Expired - Fee Related US7228003B2 (en) 2000-01-27 2004-06-18 Flat-field, panel flattening, and panel connecting methods
US11/740,878 Abandoned US20070201760A1 (en) 2000-01-27 2007-04-26 Flat-field, panel flattening, and panel connecting methods

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/771,343 Abandoned US20010038717A1 (en) 2000-01-27 2001-01-26 Flat-field, panel flattening, and panel connecting methods
US10/872,293 Expired - Fee Related US7228003B2 (en) 2000-01-27 2004-06-18 Flat-field, panel flattening, and panel connecting methods

Country Status (8)

Country Link
US (3) US20010038717A1 (en)
EP (1) EP1254430B1 (en)
JP (1) JP4278901B2 (en)
AT (1) ATE475952T1 (en)
AU (1) AU2001234588A1 (en)
CA (1) CA2397817C (en)
DE (1) DE60142678D1 (en)
WO (1) WO2001055964A2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7864369B2 (en) * 2001-03-19 2011-01-04 Dmetrix, Inc. Large-area imaging by concatenation with array microscope
US20050084175A1 (en) * 2003-10-16 2005-04-21 Olszak Artur G. Large-area imaging by stitching with array microscope
JP2005181244A (en) 2003-12-24 2005-07-07 Yokogawa Electric Corp Correction method of quantity-of-light distribution, and biochip reader
US7634152B2 (en) * 2005-03-07 2009-12-15 Hewlett-Packard Development Company, L.P. System and method for correcting image vignetting
US7733357B2 (en) * 2006-01-13 2010-06-08 Hewlett-Packard Development Company, L.P. Display system
WO2009037817A1 (en) * 2007-09-19 2009-03-26 Panasonic Corporation Contour correcting device, contour correcting method and video display device
CA2724563C (en) * 2008-05-16 2018-07-17 Huron Technologies International Inc. Imaging system with dynamic range maximization
US20220374641A1 (en) * 2021-05-21 2022-11-24 Ford Global Technologies, Llc Camera tampering detection

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3377323B2 (en) 1995-02-09 2003-02-17 株式会社モリタ製作所 Medical X-ray equipment
US6075905A (en) * 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction
JPH10145595A (en) 1996-11-11 1998-05-29 Konica Corp Method for deciding gradation adjusting quantity in image pickup unit and image pickup unit
JPH11196299A (en) 1997-12-26 1999-07-21 Minolta Co Ltd Image pickup device
US6556690B1 (en) * 1999-06-17 2003-04-29 Eastman Kodak Company Articles bearing invisible encodements on curved surfaces

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4979824A (en) * 1989-05-26 1990-12-25 Board Of Trustees Of The Leland Stanford Junior University High sensitivity fluorescent single particle and single molecule detection apparatus and method
US5875258A (en) * 1994-09-20 1999-02-23 Neopath, Inc. Biological specimen analysis system processing integrity checking apparatus
US5974113A (en) * 1995-06-16 1999-10-26 U.S. Philips Corporation Composing an image from sub-images
US6111596A (en) * 1995-12-29 2000-08-29 Lucent Technologies Inc. Gain and offset correction for efficient stereoscopic coding and improved display
US5675513A (en) * 1996-02-16 1997-10-07 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Method of calibrating an interferometer and reducing its systematic noise
US6128108A (en) * 1997-09-03 2000-10-03 Mgi Software Corporation Method and system for compositing images
US6349153B1 (en) * 1997-09-03 2002-02-19 Mgi Software Corporation Method and system for composition images
US6434265B1 (en) * 1998-09-25 2002-08-13 Apple Computers, Inc. Aligning rectilinear images in 3D through projective registration and calibration
US6101238A (en) * 1998-11-25 2000-08-08 Siemens Corporate Research, Inc. System for generating a compound x-ray image for diagnosis
US6507665B1 (en) * 1999-08-25 2003-01-14 Eastman Kodak Company Method for creating environment map containing information extracted from stereo image pairs
US6556960B1 (en) * 1999-09-01 2003-04-29 Microsoft Corporation Variational inference engine for probabilistic graphical models

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3451226A4 (en) * 2017-07-17 2019-03-06 Shenzhen Goodix Technology Co., Ltd. Method for determining optical sensing correction parameters, biometric detection apparatus and electronic terminal
US10289886B2 (en) 2017-07-17 2019-05-14 Shenzhen GOODIX Technology Co., Ltd. Method for determining optical sensing correction parameters, biological feature detection apparatus and electronic terminal

Also Published As

Publication number Publication date
CA2397817C (en) 2008-08-12
WO2001055964A3 (en) 2002-06-27
EP1254430A2 (en) 2002-11-06
JP2003521075A (en) 2003-07-08
US20050008207A1 (en) 2005-01-13
ATE475952T1 (en) 2010-08-15
AU2001234588A1 (en) 2001-08-07
US20010038717A1 (en) 2001-11-08
US7228003B2 (en) 2007-06-05
JP4278901B2 (en) 2009-06-17
WO2001055964A9 (en) 2002-10-17
WO2001055964A2 (en) 2001-08-02
DE60142678D1 (en) 2010-09-09
EP1254430B1 (en) 2010-07-28
CA2397817A1 (en) 2001-08-02

Similar Documents

Publication Publication Date Title
US20070201760A1 (en) Flat-field, panel flattening, and panel connecting methods
US6118846A (en) Bad pixel column processing in a radiation detection panel
EP3851832B1 (en) A system and method for image acquisition using supervised high quality imaging
JP4966242B2 (en) Highly sensitive optical scanning using memory integration.
US20040252874A1 (en) Radiation imaging method, radiation imaging apparatus, computer program and computer-readable recording medium
CN1518120A (en) Image sensor with double automatic exposure control
KR101637408B1 (en) Display unevenness detection method and device for display device
US10788423B2 (en) Image capture for large analyte arrays
WO2003014400A1 (en) Time-delay integration imaging of biological specimens
KR20140133882A (en) Display unevenness detection method and device for display device
US8660335B2 (en) Transient pixel defect detection and correction
WO2012069457A1 (en) Methods and systems for automatic capture of an image of a faint pattern of light emitted by a specimen
Watanabe et al. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging
US20150304653A1 (en) Mapping electrical crosstalk in pixelated sensor arrays
JP5210571B2 (en) Image processing apparatus, image processing program, and image processing method
US20070116376A1 (en) Image based correction for unwanted light signals in a specific region of interest
CN110827362A (en) Luminosity calibration method based on polynomial camera response function and vignetting effect compensation
CN112150481A (en) Powdery mildew image segmentation method
Som et al. Photometric Repeatability and Sensitivity Evolution of WFC3/IR
Moomaw et al. Overview of digital electrophoresis analysis
Wang et al. Overview of Digital Electrophoresis Analysis
Vollmerhausen et al. Range performance benefit of contrast enhancement
Inoué et al. Techniques for optimizing microscopy and analysis through digital image processing
JPH1093861A (en) Method for image input scanning by using detector including radiant sensor array and device to input and scanning of image

Legal Events

Date Code Title Description
AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:APPLIED PRECISION, INC.;REEL/FRAME:021328/0695

Effective date: 20080429

AS Assignment

Owner name: APPLIED PRECISION, INC., WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:APPLIED PRECISION, LLC;REEL/FRAME:021517/0889

Effective date: 20080429

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: APPLIED PRECISION, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:026398/0420

Effective date: 20110531