US20140300753A1 - Imaging pipeline for spectro-colorimeters - Google Patents
- Publication number: US20140300753A1 (U.S. application Ser. No. 14/099,802)
- Authority: US (United States)
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N17/002 — Diagnosis, testing or measuring for television cameras
- G01J3/50 — Measurement of colour; colour measuring devices, e.g. colorimeters, using electric radiation detectors
- G01J3/502 — Measurement of colour using electric radiation detectors with a dispersive element, e.g. grating, prism
- G01J3/524 — Calibration of colorimeters
Definitions
- the described embodiments relate generally to methods, devices, and systems for an imaging pipeline, and more particularly to optical test equipment and methods for display testing that feature a calibration configuration including spectral and colorimetric measurements with spatial resolution.
- Imaging system calibration typically addresses display artifacts such as black and yellow mura, moiré patterns, and display non-uniformity, as well as linearization and dark current correction.
- spectrometers are the typical instruments for color measurement.
- spectrometers can only measure one spot of flat, uniform color, while typical imaging systems measure extended images in at least two dimensions to detect display artifacts.
- Using digital cameras as color measurement devices overcomes this limitation, but the performance of digital cameras in terms of accuracy, resolution, and precise color rendition is lower than that of spectrometers. A compromise is therefore made between a fast but inaccurate system using a digital camera and a slow but highly precise system that alternates between a camera and a spectrometer.
- a spectro-colorimeter system for an imaging pipeline includes a camera system including a separating component and a camera.
- the separating component directs a first portion of an incident light to the camera system.
- the system may also include a spectrometer system having an optical channel, a slit, and a spectroscopic resolving element, the separating component directing a second portion of the incident light to the spectrometer system through the optical channel; a controller coupling the camera system and the spectrometer system.
- the camera system is configured to provide a color image with the first portion of the incident light.
- the spectrometer system is configured to provide a tristimulus signal from the second portion of the incident light.
- the controller is configured to correct the color image from the camera system using the tristimulus signal from the spectrometer.
- an imaging pipeline method including providing a calibration target and receiving Red, Green, and Blue (RGB) data from a camera system. Also, the method may include receiving tristimulus (XYZ) data from a spectrometer system; providing a color correction matrix; and providing an error correction to the camera system.
- a method for color selection in an imaging pipeline calibration may include selecting a training sample and including the training sample in a predictor set when the training sample is not already included.
- the method may also include obtaining a color correction matrix using the predictor set; obtaining an error value using the color correction matrix and a plurality of test samples; and forming a set of error values from a plurality of predictor sets when no more training samples are selected.
- the method may include selecting a training sample and a predictor set from a set of error values and providing the color correction matrix and the selected predictor set when the error value is less than a tolerance.
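The selection loop described above can be sketched as a greedy forward search. The following is a minimal illustration under stated assumptions: a least-squares fit of the color correction matrix and an RMS error metric stand in for the patent's CIEDE2000-based evaluation, and all function names are hypothetical.

```python
import numpy as np

def fit_ccm(rgb, xyz):
    # Least-squares fit of a 3x3 color correction matrix M such that
    # xyz ≈ rgb @ M, where rows of rgb/xyz are samples.
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M

def mean_error(M, rgb_test, xyz_test):
    # RMS error over the test samples (a stand-in for CIEDE2000).
    pred = rgb_test @ M
    return float(np.sqrt(np.mean((pred - xyz_test) ** 2)))

def select_predictors(rgb_train, xyz_train, rgb_test, xyz_test, tol):
    # Greedy forward selection: grow the predictor set with whichever
    # remaining training sample most reduces the test error; stop when
    # the error is within tolerance or no candidate improves it.
    # Assumes at least four training samples to seed the fit.
    n = len(rgb_train)
    chosen = list(range(4))
    best_M = fit_ccm(rgb_train[chosen], xyz_train[chosen])
    best_err = mean_error(best_M, rgb_test, xyz_test)
    while best_err > tol:
        trials = []
        for i in range(n):
            if i in chosen:
                continue
            M = fit_ccm(rgb_train[chosen + [i]], xyz_train[chosen + [i]])
            trials.append((mean_error(M, rgb_test, xyz_test), i, M))
        if not trials:
            break
        err, i, M = min(trials, key=lambda t: t[0])
        if err >= best_err:
            break
        chosen.append(i)
        best_err, best_M = err, M
    return best_M, chosen, best_err
```

With exact linear data the seed set already fits perfectly, so the loop terminates immediately; with noisy data the set grows until the tolerance is met.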
- FIG. 1 illustrates a spectro-colorimeter system for handling an imaging pipeline, according to some embodiments.
- FIG. 2 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
- FIG. 3 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
- FIG. 4 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
- FIG. 5 illustrates a flow chart including steps in an imaging pipeline calibration method, according to some embodiments.
- FIG. 6A illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 6B illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 7A illustrates a color distribution chart for a plurality of test samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 7B illustrates a color distribution chart for a plurality of test samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 8 illustrates a flow chart including steps in a color selection algorithm used for an imaging pipeline calibration method, according to some embodiments.
- FIG. 9A illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- FIG. 9B illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- FIG. 9C illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- FIG. 10 illustrates a color distribution chart for a plurality of test samples measured and predicted in an imaging pipeline calibration method, according to some embodiments.
- FIG. 11 illustrates a camera display for a uniformity correction step of a camera system in an imaging pipeline calibration method, according to some embodiments.
- FIG. 12 illustrates an error average chart in an imaging pipeline calibration method, according to some embodiments.
- FIG. 13A illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 13B illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 14 illustrates a block diagram of a spectro-colorimeter system for handling an imaging pipeline, according to some embodiments.
- Color measurement instruments fall into two general categories: broadband and narrowband.
- a broadband measurement instrument reports up to 3 color signals obtained by optically processing the input light through broadband filters.
- Photometers are the simplest example, providing a measurement only of the luminance of a stimulus. Photometers may be used to determine the nonlinear calibration function of displays.
- Densitometers are an example of broadband instruments that measure optical density of light filtered through red, green and blue filters.
- Colorimeters are another example of broadband instruments that directly report tristimulus (XYZ) values, and their derivatives such as CIELAB (i.e., the International Commission on Illumination (CIE) 1976 (L*, a*, b*) color space).
- Spectrophotometers and spectroradiometers are examples of narrowband instruments. These instruments typically record spectral reflectance and radiance, respectively, within the visible spectrum in increments ranging from 1 to 10 nm, resulting in 30-200 channels. They also have the ability to internally calculate and report tristimulus coordinates from the narrowband spectral data. Spectroradiometers can measure both emissive and reflective stimuli, while spectrophotometers measure reflective stimuli. Imaging colorimeters or imaging photometers are imaging devices that behave like a camera. In some embodiments, imaging colorimeters include a time-sequential configuration or a Bayer-filter configuration.
- the time-sequential configuration separates the measurement objective color in a time sequential manner by spinning a color wheel.
- the measurement objective photons with a selected color transmit through the filter and hit the embedded CCD or CMOS imager inside the colorimeter. Accordingly, the overall display color information and imaging is reconstructed after at least one cycle of the color wheel spinning.
- the imaging colorimeter separates color channels using a Bayer filter configuration.
- a Bayer filter configuration includes a color filter array composed of periodically aligned 2×2 filter elements.
- the 2×2 filter element may include two green filters, one red filter, and one blue filter.
- the time-sequential configuration may be more precise than the Bayer filter configuration.
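The periodic 2×2 filter element can be sketched as a coordinate-to-channel mapping. The RGGB layout below is an assumption for illustration; the text specifies only that the element contains two green filters, one red, and one blue.

```python
def bayer_channel(row, col):
    # Periodic 2x2 element: red top-left, blue bottom-right, and the
    # two green filters on the other diagonal (an assumed RGGB layout).
    return [["R", "G"], ["G", "B"]][row % 2][col % 2]
```

Every 2×2 tile of the sensor thus contains one R, one B, and two G samples, from which the full-color image is reconstructed by demosaicing.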
- an imaging colorimeter may include a spatial Foveon filter separating colors using a vertically stacked photodiode layer.
- a spectro-colorimeter including a camera-based display color measurement system has a master-slave structure. More specifically, in some embodiments a spectrometer is a master device, driving a camera as a slave device.
- the spectro-colorimeter includes a controller that adjusts camera accuracy to match the spectrometer accuracy, maintaining an image pipeline. Adjusting camera accuracy includes building a characterization model using a color correction matrix.
- the color correction matrix transforms the camera color space to spectrometer color space. Accordingly, the color correction matrix is a transformation between RGB values (a first 3-dimensional vector) and XYZ values (a second 3-dimensional vector). Since the spectrometer and the camera are integrated in a spectro-colorimeter system, the color correction matrix can be generated in real time. Thus, a continuous and fluid imaging pipeline is established for display testing.
- FIG. 1 illustrates a spectro-colorimeter system 100 for handling an image pipeline, according to some embodiments.
- Spectro-colorimeter system 100 includes a camera system 150 , a spectrometer system 160 , and a controller system 170 .
- Controller system 170 provides data exchange and control commands between spectrometer system 160 and camera system 150 .
- Also shown in FIG. 1 is characterization target 120 .
- Characterization target 120 provides an optical target so that camera system 150 may form a 2-dimensional (2D) image on a sensor array in the image plane of camera 155 .
- the sensor array is a 2D charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor array.
- characterization target 120 may be an emissive target, or a reflective target.
- Examples of characterization target 120 may include a liquid crystal display (LCD), a light emitting diode (LED) display, or any other type of TV or display, such as used in a TV, a computer, a cellular phone, a laptop, a tablet computer or any other portable or handheld device.
- Spectro-colorimeter system 100 is able to acquire a high resolution spectrum and form an imaging pipeline simultaneously. Accordingly, the spectral measurement and the imaging may share the measurement lighting area at approximately the same time.
- Light 110 from characterization target 120 is incident on a separating component 130 , which splits a portion of incident light 110 toward camera system 150 and a portion of incident light 110 toward spectrometer system 160 .
- separating component 130 is a beam splitter.
- separating component 130 may be a mirror having an aperture 131 on the surface.
- Optical channel 140 may include a transparent conduit, lenses and mirrors, or free-space optics.
- Lens 167 focuses the incident light through a slit 161 into spectrometer system 160 .
- Spectrometer system 160 may include a collimating mirror 162 , a spectroscopic resolving element 164 , a focusing mirror 163 , and a detector array 165 . Accordingly, in some embodiments slit 161 , mirrors 162 and 163 , spectroscopic resolving element 164 and sensor array 165 are arranged in a crossed Czerny-Turner configuration.
- spectroscopic resolving element 164 may be a diffraction grating or a prism.
- Spectrometer system 160 may include a processor circuit 168 and a memory circuit 169 .
- Memory circuit 169 may store commands that, when executed by processor 168 , cause spectrometer system 160 to perform the many different operations consistent with embodiments of the present disclosure.
- processor circuit 168 may establish communication with controller circuit 170 , and provide data and commands to camera system 150 .
- Processor circuit 168 may also be configured to execute commands provided by controller 170 .
- processor circuit 168 may provide a tristimulus vector XYZ to controller 170 .
- the tristimulus vector XYZ may include highly resolved spectral information from characterization target 120 provided by spectroscopic system 160 to controller 170 .
- Camera system 150 may include processor circuit 158 and a memory circuit 159 .
- Memory circuit 159 may store commands that, when executed by processor 158 , cause camera system 150 to perform the many different operations consistent with embodiments of the present disclosure.
- processor circuit 158 may establish communication with controller circuit 170 , and provide data and commands to spectrometer system 160 .
- Processor circuit 158 may also be configured to execute commands provided by controller 170 .
- processor circuit 158 provides RGB values measured by camera system 150 to controller 170 .
- embodiments consistent with the present disclosure substantially reduce test time of characterization target 120 using simultaneous capture of a large number of measurements in a single image.
- Embodiments as disclosed herein also provide camera system 150 (e.g., a CCD device) coupled to spectrometer system 160 in an imaging pipeline.
- the highly resolved 2-dimensional information of camera system 150 may be calibrated in real time with the highly resolved spectral information provided by spectroscopic system 160 .
- FIG. 2 illustrates a flow chart including steps in an imaging pipeline method 200 , according to some embodiments.
- Some steps in imaging pipeline method 200 may be applied in a production environment for display devices (e.g., a factory), using a ‘golden’ sample, for example once a month.
- steps in imaging pipeline method 200 may be performed more frequently, such as for every display being tested.
- Some steps in imaging pipeline method 200 may be performed for each one of a plurality of images tested on each display. Steps in method 200 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170 , camera system 150 , and spectrometer 160 , cf. FIG. 1 ).
- the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system and, a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168 , and memory circuits 159 and 169 , cf. FIG. 1 ).
- Step 210 includes providing a calibration target.
- step 210 may include selecting a plurality of screen displays having standardized characteristics.
- the plurality of screen displays may include a set of screens, each having a single, pre-determined color.
- selecting a plurality of screen displays may include selecting screen displays having spatial uniformity.
- step 210 may include selecting a plurality of screen displays having a uniform intensity.
- Step 220 includes receiving RGB data from camera system 150 .
- Step 230 includes receiving XYZ data from spectrometer system 160 .
- the XYZ data received in step 230 may include a tristimulus vector determined by a highly resolved spectral analysis of incident light 110 .
- Step 240 may include providing a color correction matrix (CCM).
- Step 250 includes providing an error correction to camera system 150 so that camera system 150 may adjust its image settings.
- steps 240 and 250 may include steps and procedures as described in detail below.
- the output responses of camera system 150 and the tristimulus values from spectrometer system 160 are related by a characterization model included in steps 240 and 250 .
- the output RGB values from camera system 150 are transformed to CIE colorimetric values, such as XYZ or CIELAB.
- the model is developed based on two sets of data: colorimetric values (e.g., tristimulus vector XYZ) provided by spectrometer system 160 and camera responses (e.g., RGB output) from camera system 150 . Accordingly, both the colorimetric response and the camera responses originate from a characterization target.
- the characterization target may be characterization target 120 .
- a calibration method of an imaging pipeline may include characterization targets that are accurate colorimetric standards.
- a calibration process as in imaging pipeline method 200 may provide a reliable camera model that may be used in a display manufacturing environment.
- the CCM in step 240 may be constructed by simultaneously measuring the RGB response of camera system 150 and the XYZ colorimetric values provided by spectrometer system 160 from a characterization target 120 .
- characterization models are built by first measuring the characterization target on the media considered, and then generating the mathematical model to transform any color in the device color space to a particular color space. It is often possible to define the relationship between two color spaces through a 3 by 3 matrix. For example,
- $$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{pmatrix} q_{1,1} & q_{1,2} & q_{1,3} \\ q_{2,1} & q_{2,2} & q_{2,3} \\ q_{3,1} & q_{3,2} & q_{3,3} \end{pmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad \text{(Eq. 1)}$$
- X, Y and Z may be the CIE tristimulus values provided by spectrometer system 160 .
- R, G and B are camera signals provided by camera system 150 .
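As an illustration, Eq. 1 is simply a 3×3 matrix-vector product. The sketch below writes it out per row; the coefficient values in the usage example are made up.

```python
def apply_ccm(q, rgb):
    # Eq. 1: [X, Y, Z]^T = Q · [R, G, B]^T, expanded row by row,
    # with q a 3x3 nested list and rgb a 3-element sequence.
    return [sum(q[i][j] * rgb[j] for j in range(3)) for i in range(3)]
```

For example, with the identity matrix as Q, the XYZ output equals the RGB input; any calibrated Q mixes the three camera channels into each tristimulus component.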
- a complex or non-linear model may be desirable.
- a polynomial model is established without any assumption about the physical features of the associated device. It includes a series of coefficients that are determined by regression from a set of known samples.
- the generic formula for the polynomial model is given in Eq.2:
- i_R, i_G, and i_B are nonnegative integer indices representing the order of the R, G, and B camera responses;
- n_P is the order of the polynomial model;
- q_{x,i_R,i_G,i_B}, q_{y,i_R,i_G,i_B}, and q_{z,i_R,i_G,i_B} are the model coefficients to be determined.
- when n_P = 1, Eq. 2 reduces to a first-order model comprising a constant term and the linear terms R, G, and B.
- Eq.1 can be expressed in matrix form as given in Eq. 5:
- Q is a 3 by 4 matrix:
- c is a column vector of tristimulus values
- Q is the polynomial mapping matrix, and the camera responses form a column vector.
- for n_P ranging from 1 to 5, the sizes of the column vectors, together with the size of the matrix Q, are tabulated in Table 1.
- T represents the transpose of vector or matrix. Since the polynomial model is established when the mapping matrix Q is defined, some training samples may be desirable.
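The polynomial expansion of the camera response can be sketched as enumerating all monomials R^{i_R}·G^{i_G}·B^{i_B} of total order at most n_P. For n_P = 1 this yields four terms, consistent with the 3 by 4 matrix Q noted above. The function name and the enumeration order below are illustrative and need not match the patent's Table 1.

```python
from itertools import product

def poly_terms(rgb, n_p):
    # All monomials R^iR * G^iG * B^iB with iR + iG + iB <= n_p,
    # including the zero-order constant term 1.
    r, g, b = rgb
    terms = []
    for i_r, i_g, i_b in product(range(n_p + 1), repeat=3):
        if i_r + i_g + i_b <= n_p:
            terms.append((r ** i_r) * (g ** i_g) * (b ** i_b))
    return terms
```

For n_P = 2 the expansion grows to ten terms (constant, three linear, six quadratic), which is why higher-order models need more training samples to determine Q.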
- the camera response vector can be obtained by imaging the sample with the camera.
- the tristimulus values vector c can also be measured by a physical instrument such as a spectrophotometer.
- Eq.5 can be expressed as:
- $\tilde{C}_j$ and $\tilde{Q}_j$ are $N_p$-element row vectors, while $c_j$ and $q_j$ are $N_p$-element column vectors.
- a minimum norm used may be the square root of the sum of squared unknowns (elements in the Q matrix).
- the pseudo or generalized inverse is defined in Eq.14.
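Solving for the mapping matrix via the generalized inverse can be sketched as follows, assuming training samples are stacked as columns so that C = Q·V; numpy's `pinv` stands in for the Moore-Penrose pseudo-inverse referenced by Eq. 14, and the function name is illustrative.

```python
import numpy as np

def fit_mapping(camera_vecs, xyz_vecs):
    # Stack N training samples as columns: C is 3 x N (tristimulus),
    # V is m x N (polynomial camera-response terms). Solve C = Q V in
    # the least-squares, minimum-norm sense with Q = C V^+ , where
    # V^+ is the Moore-Penrose pseudo-inverse of V.
    C = np.column_stack(xyz_vecs)
    V = np.column_stack(camera_vecs)
    return C @ np.linalg.pinv(V)
```

With at least as many independent training samples as polynomial terms, this recovers the mapping exactly for noise-free data and gives the least-squares fit otherwise.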
- in Table 1, for example, the first-order polynomial terms R, G, and B are associated with white, while the zero-order term 1 is associated with black.
- when more colors are included in characterization target 120 , the model can predict with better accuracy until the model performance stabilizes.
- a large number of colors may increase production costs in terms of testing time and complexity, while increasing the accuracy of the color rendition of camera system 150 .
- embodiments consistent with the present disclosure provide an optimized set of display colors to construct characterization target 120 with reduced impact in testing time and complexity, while maximizing colorimetric accuracy.
- FIG. 3 illustrates a flow chart including steps in an imaging pipeline method 300 , according to some embodiments. Steps in method 300 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170 , camera system 150 , and spectrometer 160 , cf. FIG. 1 ). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system and, a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168 , and memory circuits 159 and 169 , cf. FIG. 1 ).
- Imaging pipeline method 300 may include a calibration of camera system 150 .
- Step 310 includes forming an image from characterization target 120 .
- Step 315 includes correcting defect pixels.
- the defect pixels may be included in the 2D sensor array of camera 155 (cf. FIG. 1 ).
- Step 320 may include correcting signal linearity.
- Step 325 may include compensating for lens shading effects.
- Step 330 includes correcting for dark current and smear in the sensor array of camera system 150 .
- the CC chart is applied as a characterization target, serving as a benchmark for the system to build a 3 by 3 CCM using least-squares regression.
- An evaluation of the CCM derived from the data with and without dark current removal is shown in Tables 2(a) and (b), respectively. The differences are small, in the sub-0.1 range.
- the results with and without removing dark current are shown in Tables 3 (a) and (b), respectively.
- the CIEDE2000 color differences are used as the metric to determine training and testing performance.
- the training performance is the model trained and tested by the Color Correction (CC) chart.
- the testing performance is the model trained by the CC chart and tested by the 729 dataset. It can be seen that the average performance improved slightly, by 0.2 ΔE00 units, when the dark current was removed.
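CIEDE2000 involves several weighting and rotation terms; as a simpler illustrative stand-in (explicitly not the patent's metric), the earlier CIE76 color difference is just the Euclidean distance between two CIELAB coordinates.

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIELAB space.
    # (CIEDE2000, used in the text, refines this with lightness,
    # chroma, and hue weighting; CIE76 is shown only for intuition.)
    return math.dist(lab1, lab2)
```

Averaging such per-sample differences over a test set yields the kind of aggregate performance figure reported above.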
- Step 335 may include correcting uniformity in the 2D image provided by the sensor array in camera system 150 .
- step 335 may include correction of mura and moiré artifacts in the image.
- An optical low pass filter or a digital filter may be used to remove the artifacts.
- step 335 may include correction of artifacts resulting from a larger field of view of camera 155 relative to characterization target 120 .
- An algorithm to detect the points of interest (POI) (the portion of a sensor array including light 110 from characterization target 120 ) may crop the area from a full camera view.
- Since the display testing patterns are uniform colors, an edge detection technique is used. A measure of edge strength, such as gradient magnitude, is derived by searching for local directional maxima of the magnitude. Based on the magnitude, a threshold is applied to decide whether edges are present or not at an image point. The higher the threshold, the more edges will be removed.
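The thresholded gradient-magnitude test can be sketched as follows; the central-difference gradient and the particular threshold are illustrative choices, not values from the patent.

```python
def edge_mask(img, threshold):
    # Central-difference gradient magnitude at each interior pixel,
    # thresholded: True marks an edge pixel. Uniform regions (as in
    # the display test patterns) have zero gradient and no edges.
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                mask[y][x] = True
    return mask
```

Raising the threshold suppresses weak edges, which matches the statement above that a higher threshold removes more edges.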
- Step 340 may include balancing a white display. Accordingly, step 345 may include presenting a standard ‘white’ characterization target 120 and determining the RGB camera output. Step 345 may also include correcting the gamma value of camera system 150 .
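White balancing and gamma correction of this kind might be sketched as below. Normalizing the channel gains to green and a gamma of 2.2 are assumptions for illustration, not values stated in the patent.

```python
def white_balance_gains(white_rgb):
    # Per-channel gains that map the measured white to equal channels,
    # normalized to the green channel (an assumed convention).
    r, g, b = white_rgb
    return (g / r, 1.0, g / b)

def linearize(value, gamma=2.2):
    # Invert a display-style gamma to recover a linear-light signal;
    # gamma = 2.2 is an assumed typical value.
    return value ** gamma
```

Applying the gains to the white measurement itself yields equal R, G, and B, after which the linearized signals can feed the color correction matrix.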
- Step 350 may include providing RGB data for a color correction matrix step. Accordingly, step 350 may include providing RGB data to controller 170 after steps 310 through 345 are completed. Controller 170 may then form the CCM by executing step 240 (cf. FIG. 2 ).
- Step 355 includes receiving a CCM. For example, step 355 may include processor circuit 158 receiving CCM from controller 170 when step 240 is complete (cf. FIG. 2 ). Step 360 includes providing corrected RGB data from the received CCM.
- step 360 may include receiving tristimulus data XYZ together with CCM, so that processor circuit 158 may obtain the corrected RGB values.
- processor circuit 158 may receive corrected RGB values directly from controller 170 .
- Step 365 includes receiving an error value.
- the error value may be a difference between the RGB data provided in step 350 and the corrected RGB data provided in step 360 .
- In step 370 , processor circuit 158 determines whether the error value is below or above a tolerance value. When the error value is below the tolerance, step 375 includes obtaining a tristimulus XYZ image from the CCM and the corrected RGB data.
- the XYZ image provided in step 375 may have a high colorimetric accuracy since it uses data provided by a high resolution spectrometer system 160 and a controller 170 forming a CCM as in step 240 (cf. FIG. 2 and Eqs. 1-14 above).
- when the error value is above the tolerance, imaging pipeline method 300 is repeated from step 350 .
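The loop over steps 350 through 375 can be sketched as follows; the callback names and the iteration cap are hypothetical, added only so the sketch terminates.

```python
def run_pipeline(get_rgb, correct, error_of, tolerance, max_iters=10):
    # Repeat the correction loop of steps 350-375: fetch RGB data,
    # apply the CCM-based correction, and stop once the error between
    # raw and corrected data is within tolerance. max_iters is a
    # safety bound not stated in the patent.
    for _ in range(max_iters):
        rgb = get_rgb()
        corrected = correct(rgb)
        if error_of(rgb, corrected) < tolerance:
            return corrected
    return None
```

In the patent's terms, `get_rgb` corresponds to step 350, `correct` to steps 355-360, and the tolerance check to step 370.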
- FIG. 4 illustrates a flow chart including steps in an imaging pipeline method 400 , according to some embodiments.
- Steps in method 400 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170 , camera system 150 , and spectrometer 160 , cf. FIG. 1 ).
- the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168 , and memory circuits 159 and 169 , cf. FIG. 1 ).
- Imaging pipeline method 400 may include a calibration of spectrometer system 160 .
- Step 410 may include correcting a signal linearity.
- the signal linearity may be the linearity of sensor array 165 (cf. FIG. 1 ).
- step 410 is performed by providing a uniform light source to spectrometer system 160 .
- Step 420 may include adjusting a wavelength scale.
- Step 430 may include adjusting the spectral sensitivity.
- Step 440 may include correcting for a dark current.
- the dark level error may be caused by the imperfect glass trap and specular beam error.
- step 440 may include placing a glass wedge in the optical path of spectrometer system 160 .
- Step 450 may include receiving a characterization target light.
- step 460 may include providing XYZ data from the spectrum formed with the received characterization light.
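Providing XYZ data from a measured spectrum can be sketched as a weighted integration of the spectral power against color matching functions. The real CIE 1931 tables are omitted here; the flat placeholder weights and the function name `spectrum_to_xyz` are purely illustrative assumptions:

```python
import numpy as np

def spectrum_to_xyz(spectrum, cmfs, wavelengths):
    # Weight the spectral power distribution by the color matching
    # functions and integrate over wavelength (rectangle rule).
    dl = np.gradient(wavelengths)
    return (spectrum[:, None] * cmfs * dl[:, None]).sum(axis=0)

wavelengths = np.linspace(400.0, 700.0, 31)  # 10 nm sampling
spectrum = np.ones(31)                        # flat test spectrum
cmfs = np.ones((31, 3))                       # placeholder, not CIE data
xyz = spectrum_to_xyz(spectrum, cmfs, wavelengths)
assert xyz.shape == (3,)
```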
- FIG. 5 illustrates a flow chart including steps in an imaging pipeline calibration method 500 , according to some embodiments.
- Steps in method 500 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170 , camera system 150 , and spectrometer 160 , cf. FIG. 1 ).
- the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168 , and memory circuits 159 and 169 , cf. FIG. 1 ).
- Step 510 includes selecting a plurality of training samples.
- Step 510 may include selecting a plurality of colors from a standard, or a ‘gold’ standard.
- Step 520 includes providing a plurality of test samples from the plurality of training samples selected in step 510 . Accordingly, step 520 may include digitally processing the training samples provided in step 510 to generate a larger number of test samples.
- a plurality of training samples as selected in step 510 may be as described in detail below, with reference to FIGS. 6A and 6B .
- training samples 610 may be obtained from a well-known standard chart.
- the ColorChecker® Color Rendition Chart, introduced by the Macbeth Company in 1976, is now the ColorChecker® (CC) chart owned by X-Rite.
- the chart includes a matrix of 24 scientifically prepared color squares including three additive and three subtractive primaries, 6 greyscale tones, and natural color objects such as foliage, human skin and blue sky which exemplify the color of their counterparts. These 24 colors are reproduced on the testing display as characterization target 120 .
- FIG. 6A illustrates a color distribution chart 600 A for a plurality of training samples 610 in an imaging pipeline calibration method, according to some embodiments.
- FIG. 6A shows the color distribution of the CC on a*b* planes. Accordingly, the abscissa 601 A in chart 600 A corresponds to the a* value (red-green scale), and the ordinate 602 A in chart 600 A corresponds to the b* value (yellow-blue scale).
- the CC chart may include a set of gray scale colors 620 that are displayed at the origin of chart 600 A (neutral colors).
- the greyscale of the CC chart can be applied to correct the linearity between luminance level and camera response. Once the camera has been characterized, the greyscale is also used to check the gamma of the testing display (e.g., in step 345 , cf. FIG. 3 ).
- FIG. 6B illustrates a color distribution chart 600 B for the plurality of training samples 610 in an imaging pipeline calibration method, according to some embodiments.
- FIG. 6B shows the color distribution of the CC on the L*-C*ab plane. Accordingly, the abscissa 601 B in chart 600 B corresponds to the chroma value C*ab (√(a*² + b*²)), and the ordinate 602 B in chart 600 B corresponds to the L* value (lightness).
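The chroma coordinate plotted against lightness in FIG. 6B follows directly from the a* and b* values, C*ab = √(a*² + b*²); a one-line sketch:

```python
import math

def chroma(a_star, b_star):
    # C*ab = sqrt(a*^2 + b*^2): distance from the neutral (grey) axis.
    return math.hypot(a_star, b_star)

assert chroma(3.0, 4.0) == 5.0   # a classic 3-4-5 check
assert chroma(0.0, 0.0) == 0.0   # neutral colors sit at zero chroma
```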
- Training samples 610 may include a set 620 of gray scale colors that are displayed along the 602 B axis at regular intervals (evenly graded ‘lightness’).
- a plurality of test samples as used in method 500 may be as described in detail with respect to FIGS. 7A and 7B , below.
- the abscissa 701 A and ordinate 702 A may be as in FIG. 6A .
- the abscissa 701 B and ordinate 702 B may be as in FIG. 6B .
- FIG. 7A illustrates a color distribution chart 700 A for a plurality of test samples 710 in an imaging pipeline calibration method, according to some embodiments. Accordingly, test samples may include 729 uniform distribution colors on display color gamut.
- the test colors 710 are formed from training colors 610 using regular intervals along the red, green, and blue channels of the 16-bit range, plus a grey scale, accumulated to yield 729 colors. These colors are uniformly distributed in the display color gamut as shown in FIGS. 7A and 7B .
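One way such a 729-color test set could be generated is sketched below, assuming nine evenly spaced levels per 8-bit channel (9³ = 729); the exact quantization step used in the disclosure is an assumption here:

```python
from itertools import product

# Nine evenly spaced digital levels per channel; the top level is
# clamped to the 8-bit maximum of 255.
levels = list(range(0, 257, 32))
levels[-1] = 255
test_colors = [rgb for rgb in product(levels, repeat=3)]
assert len(test_colors) == 729

# The grey scale is the r == g == b diagonal of the sampling grid.
greys = [c for c in test_colors if c[0] == c[1] == c[2]]
assert len(greys) == 9
```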
- FIG. 7B illustrates a color distribution chart 700 B for a plurality of test samples 710 in an imaging pipeline calibration method, according to some embodiments.
- Test samples 710 may include gray scale samples 720 .
- Chart 700 B shows an L*-C*ab plane, so that gray scale points 720 are clearly distinguishable along the L* axis (ordinate).
- a color selection algorithm is applied to select colors to establish a characterization target for display measurement. This set is also applied to test the robustness of characterization targets.
- FIG. 8 illustrates a flow chart including steps in a color selection algorithm 800 used for an imaging pipeline calibration method, according to some embodiments.
- Algorithm 800 may include a color selection algorithm (CSA) to achieve high color accuracy in terms of color differences.
- CSA 800 may achieve high color resolution.
- a source dataset including XYZ and camera RGB is first provided (see vectors c and a, in reference to step 240 in method 200 , cf. FIG. 2 ). The number of samples in the source dataset, and in the training dataset (the samples selected from the source dataset), are known.
- Steps in method 800 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170 , camera system 150 , and spectrometer 160 , cf. FIG. 1 ).
- the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168 , and memory circuits 159 and 169 , cf. FIG. 1 ).
- Step 810 includes collecting a training sample. Accordingly, step 810 may include selecting a training set from a standardized set.
- the standardized set may be a set of calibration colors. If K is the total number of samples in a training set, a value ⁇ may be predefined as the number of training samples to form a predictor set. Thus, ⁇ may be a ‘dimension’ of the predictor set.
- Step 815 includes a query as to whether or not the training sample is already included in a predictor set.
- a predictor set may include matrices C and A T , including vectors c and a (cf. the detailed description of step 240 in method 200 , FIG. 2 ).
- the predictor set may include tristimulus values (XYZ, vector c) from spectrometer system 160 , and RGB values from camera system 150 (vector a, formed from RGB values according to Eq. 2).
- Step 820 includes adding the training sample to the predictor set.
- the predictor set may be empty, so that the first training sample selected in step 810 may automatically be included in the predictor set.
- Step 825 may include obtaining a CCM using the predictor set.
- the CCM may be formed as matrix Q, from matrices C and A (cf. Eq. 14).
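A closed-form least-squares construction of Q from matrices C (tristimulus columns) and A (camera RGB columns) can be sketched as follows. Eq. 14 itself is not reproduced in this excerpt, so the exact form Q = C Aᵀ (A Aᵀ)⁻¹ below is an assumption consistent with a standard least-squares fit, and the function name `fit_ccm` is illustrative:

```python
import numpy as np

def fit_ccm(rgb_samples, xyz_samples):
    # Columns of A are the camera vectors a; columns of C are the
    # spectrometer tristimulus vectors c. Solve Q A ~= C in the
    # least-squares sense: Q = C A^T (A A^T)^-1.
    A = np.asarray(rgb_samples, dtype=float).T
    C = np.asarray(xyz_samples, dtype=float).T
    return C @ A.T @ np.linalg.inv(A @ A.T)

# When the targets are generated by a known matrix, the fit recovers it.
rng = np.random.default_rng(0)
true_q = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
rgb = rng.random((10, 3))
xyz = rgb @ true_q.T
assert np.allclose(fit_ccm(rgb, xyz), true_q)
```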
- Step 830 includes obtaining an error value from a plurality of test samples.
- step 830 may include obtaining RGB values for a plurality of test samples obtained with the tristimulus values XYZ provided by spectrometer system 160 and the CCM matrix Q.
- Step 830 may further include comparing the obtained RGB values with the RGB values provided by camera system 150 for each test sample.
- the set of test samples used in step 830 may be much larger than the set of training samples used to form the predictor set.
- the set of training samples in steps 810 through 825 may be as training set 610 (cf. FIGS. 6A and 6B ).
- the set of test samples in step 830 may be as test set 710 (cf. FIGS. 7A and 7B ).
- Step 830 may include obtaining a single error value from a set of error values for each of the test samples.
- step 830 may include averaging the error values from the set of error values for each of the test samples.
- step 830 may include selecting an error value from a statistical distribution of the error values for all the test samples.
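The single error value of step 830 can be pooled from the per-sample color differences in more than one way; a sketch assuming either a mean or a percentile of the distribution (the function and parameter names are illustrative):

```python
import numpy as np

def pooled_error(color_differences, mode="mean", percentile=95):
    # Step 830: collapse the per-test-sample error values into one number,
    # either by averaging or by picking a point of the distribution.
    values = np.asarray(color_differences, dtype=float)
    if mode == "mean":
        return float(values.mean())
    return float(np.percentile(values, percentile))

assert pooled_error([1.0, 2.0, 3.0]) == 2.0
assert pooled_error([1.0, 2.0, 3.0], mode="percentile", percentile=50) == 2.0
```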
- Step 835 includes querying whether or not a new training sample is selected. For example, if a training sample remains to be selected then steps 810 through 835 are repeated until the result in step 835 is a ‘no.’ In some embodiments, step 835 may produce a ‘no’ when all training samples in the set of training samples have been selected or included in a predictor set. Accordingly, up to step 835 a plurality of predictor sets is selected, each predictor set having the same number of c vectors and a vectors ( ⁇ +1).
- each predictor set up to step 835 includes the same set of α c vectors and α a vectors, except for the c vector and a vector selected in the last iteration of steps 810 through 835 . Also, within a single predictor set, all (α+1) vectors c may be different from one another, and all (α+1) vectors a may be different from one another. Thus, up to step 835 an error value is assigned to each one of the predictor sets associated with each selected training sample.
- step 840 includes forming a set of error values from the plurality of predictor sets.
- Step 845 includes selecting a training sample and a predictor set from the set of error values. Accordingly, step 845 includes selecting the training sample that provides the lowest error in the set of error values formed in step 840 . If the error value of the selected predictor set is less than a tolerance value according to step 850 , then the predictor set is used to form the CCM matrix in step 855 . Accordingly, step 855 may include forming matrix Q using the ( ⁇ +1) c vectors and the ( ⁇ +1) a vectors from the selected predictor set as in Eq. 14. If the error value of the selected predictor set is greater than or equal to the tolerance value according to step 850 , then method 800 may be repeated from step 810 . The dimension of the predictor set is then increased by one (1).
- the second iteration of steps 810 through 845 should provide the best combination of two c vectors and two a vectors for a predictor set.
- the previously selected training sample's c vector and a vector are removed from the source dataset, and α equals one (1).
- each remaining training sample, combined with the already selected training sample, is used for training the model in steps 810 through 845 .
- the number of predictor sets formed in step 840 will then be K−1. Again, each predictor set is used to predict the full source dataset. From the predictions, the sample that, combined with the already selected training sample, yields the smallest color difference is selected.
- method 800 may be repeated until it reaches a number of training samples producing an error lower than the threshold.
- This CSA is simple and easy to implement.
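The greedy procedure of steps 810 through 855 can be sketched as below. The mean absolute difference stands in for the CIEDE2000 metric, and a NumPy least-squares fit stands in for Eq. 14; both substitutions, and all names, are assumptions:

```python
import numpy as np

def greedy_select(train_rgb, train_xyz, test_rgb, test_xyz, tol=1e-6):
    # Grow the predictor set one training sample at a time (steps 810-845),
    # keeping whichever candidate minimizes the error over the test set.
    remaining = list(range(len(train_rgb)))
    selected, q = [], None
    while remaining:
        best = None
        for k in remaining:
            idx = selected + [k]
            cand, *_ = np.linalg.lstsq(train_rgb[idx], train_xyz[idx],
                                       rcond=None)
            err = float(np.abs(test_rgb @ cand - test_xyz).mean())
            if best is None or err < best[0]:
                best = (err, k, cand)
        err, k, q = best
        selected.append(k)
        remaining.remove(k)
        if err < tol:          # step 850: stop once under tolerance
            break
    return selected, q

# With targets generated by a known matrix, three independent samples
# suffice for an exact fit, so the loop terminates early.
rng = np.random.default_rng(1)
true_q = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.05, 0.15, 0.8]])
train_rgb = rng.random((6, 3)); train_xyz = train_rgb @ true_q
test_rgb = rng.random((50, 3)); test_xyz = test_rgb @ true_q
selected, q = greedy_select(train_rgb, train_xyz, test_rgb, test_xyz)
assert np.allclose(test_rgb @ q, test_xyz, atol=1e-5)
```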
- a predictor set having a single element may include the lightest neutral color in the training set, with a mean error value ( ⁇ E00) of about 15. Thus, in some embodiments it is desirable that the lightest neutral color be included in the training set.
- FIG. 9A illustrates a camera system response chart 900 A for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- Chart 900 A may be the result of step 320 in imaging pipeline method 300 (cf. FIG. 3 ).
- Abscissa 901 in chart 900 A may be associated with a tristimulus XYZ coordinate provided by spectrometer system 160 , such as luminance L*, or a Y coordinate.
- Ordinate 902 in chart 900 A may be associated with RGB data from camera system 150 , such as the ‘Green’ count, ‘G.’
- Data points 910 may be associated with each training sample in a set of training samples (e.g., training samples 610 , cf. FIGS. 6A and 6B ).
- Data points 910 may also comprise gray scale data points 920 - 1 , 920 - 2 , 920 - 3 , 920 - 4 , 920 - 5 , and 920 - 6 .
- the exposure time of camera 155 in camera system 150 may be adjusted, as follows. Chart 900 A is associated with a fixed exposure time scenario. In particular, the exposure time may be a few milliseconds, such as less than 10 ms, or 10, 20, 24, or even more milliseconds.
- the exposure time should be controlled by signal to noise ratio (SNR) of an image.
- A fixed exposure time for all measurements preserves the linearity between camera response and color stimulus, which is desirable for CCM development. Accordingly, it may be desirable to avoid SNR fluctuations across different color patterns, especially for a dark characterization target 120 .
- Using a fixed exposure time may include ensuring that test colors are within the dynamic range of camera system 150 .
- FIG. 9B illustrates a camera system response chart 900 B for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- Abscissae 901 , ordinates 902 , and data points 910 and 920 in chart 900 B are as in chart 900 A, described above.
- Chart 900 B includes a configuration wherein the exposure time in camera 155 is set in auto-exposure mode. The auto-exposure setting ensures images with high SNR.
- chart 900 B shows that camera linearity to the color stimulus will be lower than with the fixed exposure setting (chart 900 A).
- a configuration of camera system 150 as described in chart 900 B may be desirable to increase the average camera signal. Accordingly, a signal level from about 40000 to 65535 may be obtained for some test samples, rendering a higher average SNR than in chart 900 A.
- FIG. 9C illustrates a camera system response chart 900 C for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- Abscissae 901 , ordinates 902 , and data points 910 and 920 in chart 900 C are as in charts 900 A and 900 B, described above.
- Chart 900 C includes a configuration wherein the exposure time in camera 155 is set in auto-exposure mode. Further, in the configuration illustrated in FIG. 9C the output from camera system 150 (ordinate 902 ) is normalized by the exposure time.
- Chart 900 C illustrates that in order to correct signal linearity (e.g., in step 320 , method 300 , cf. FIG. 3 ), the camera output may be normalized with the exposure time.
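The normalization of chart 900 C can be sketched in one line; the names and units (milliseconds) are assumptions:

```python
import numpy as np

def normalize_by_exposure(raw_counts, exposure_ms):
    # Divide the auto-exposure camera output by its exposure time so that
    # shots taken at different exposures fall on one linear response curve.
    return np.asarray(raw_counts, dtype=float) / exposure_ms

# The same patch captured at 20 ms and at 10 ms normalizes to the same
# counts-per-millisecond value, restoring linearity.
assert np.allclose(normalize_by_exposure([40000], 20.0),
                   normalize_by_exposure([20000], 10.0))
```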
- the camera RGB responses in FIGS. 9A-9C are measured for a set of achromatic samples, a uniform white and the dark condition.
- FIG. 10 illustrates a color distribution chart 1000 for a plurality of test samples measured 1010 , and predicted 1020 , in an imaging pipeline calibration method, according to some embodiments.
- Chart 1000 has an abscissa 601 A, an ordinate 602 A, and a depth axis 602 B, as defined above with respect to FIGS. 6A and 6B .
- a training sample of 24 colors was used (cf. FIGS. 6A and 6B ) to select a preferred predictor set according to method 800 (cf. FIG. 8 ).
- a test sample of 729 colors (cf. FIGS. 7A and 7B ) is shown in FIG. 10 . It can be seen that larger errors occur in the dark region.
- FIG. 11 illustrates a camera display 1100 for a uniformity correction step of a camera system in an imaging pipeline calibration method, according to some embodiments.
- Camera display 1100 may be a 2D sensor array, as discussed in detail above in relation to FIG. 1 .
- method 300 (cf. FIG. 3 ) may include a step for detecting display artifacts such as black mura. Black mura may negatively affect the uniformity of camera system 150 . Accordingly, a spatial correction is conducted to minimize the effect of any spatial non-uniformity of the intensity of the illumination or of the sensitivity of the camera CCD array.
- FIG. 11 shows an example of non-uniformity effect on mura detection at display edge portion 1110 . It can be seen that the middle portion 1120 of display 1100 has very similar luminance intensity to the mura area at the edge. This increases the complexity of mura detection from a uniform display.
- FIG. 12 illustrates an error average chart 1200 in an imaging pipeline calibration method, according to some embodiments.
- Chart 1200 may be the result of several iterations in method 800 , described in detail above.
- the abscissa in chart 1200 corresponds to the dimensionality of the predictor set ( ⁇ ).
- the ordinate in chart 1200 corresponds to the error obtained for the selected predictor set at the end of each iteration sequence, in step 845 .
- the predictor set is formed from colors selected from a training set including the 729 samples of FIGS. 7A and 7B .
- Characterization target 120 is applied to train the characterization model and tested by the 729 samples.
- FIG. 12 shows the performance in terms of CIEDE2000 against the number of the samples selected by method 800 .
- model performance stabilized at a mean error of one (1) ΔE00 unit with as few as four (4) training samples. Accordingly, it is desirable to determine which set of four training samples provides the optimal performance, so that this set is used for a CCM in any one of imaging pipeline methods 200 , 300 , and 400 (cf. FIGS. 2 , 3 , and 4 ).
- FIGS. 13A and 13B illustrate color distribution charts 1300 A and 1300 B for a plurality of training samples 1310 in an imaging pipeline calibration method, according to some embodiments.
- the abscissae and ordinate in chart 1300 A are 601 A and 602 A, respectively (cf. FIG. 6A ).
- the abscissae and ordinate in chart 1300 B are 601 B and 602 B, respectively (cf. FIG. 6B ).
- Chart 1300 A displays the four training samples (open squares) selected in the preferred predictor set (CCM) in method 800 in an a* b* plot.
- Chart 1300 B displays the four training samples (open squares) selected in the preferred predictor set (CCM) in method 800 in an L* Ca*b* plot.
- the four training samples in the preferred predictor set are grey, cyan, yellow and magenta as shown in the FIGS. 13A and 13B .
- the 24 relevant samples of the 729 colors from the display gamut are also plotted. As expected, the training sample points (red circles) fall approximately at the center of the predicted values (open squares).
- the test colors in set 710 cover the display color gamut and include grey scale and saturation colors.
- Embodiments consistent with the present disclosure include a complete imaging pipeline for the combined spectro-colorimeter device, including exposure time control, dark current normalization, color correction matrix derivation, and flat field calibration.
- the imaging pipeline achieves a colorimetric accuracy better than two (ΔE<2) for 729 test samples covering the full bandwidth of the color space.
- Imaging pipelines as disclosed herein enable closed-loop master-slave calibration of spectrometer system 160 and camera system 150 . Therefore, embodiments as disclosed herein integrate the two device components into a single system, providing imaging capability with spectrometer accuracy.
- Embodiments consistent with the present disclosure may include applications in the display test industry as well as the machine vision field. Other applications may be readily envisioned, since an imaging pipeline consistent with embodiments as disclosed herein integrates two different hardware components, such as a camera system 150 and a spectrometer system 160 .
- FIG. 14 illustrates a block diagram of a spectro-colorimeter system 1400 for handling an imaging pipeline, according to some embodiments.
- Spectro-colorimeter system 1400 includes a spectrometer system 1460 and a camera system 1450 used in an imaging pipeline as described above.
- Spectro-colorimeter system 1400 may include a calibration target display 1420 used in a calibration method for an imaging pipeline consistent with embodiments disclosed herein.
- Spectro-colorimeter system 1400 can include circuitry of a representative computing device.
- spectro-colorimeter system 1400 can include a processor 1402 that pertains to a microprocessor or controller for controlling the overall operation of spectro-colorimeter system 1400 .
- Spectro-colorimeter system 1400 can include instruction data pertaining to operating instructions, such as instructions for implementing and controlling user equipment, in file system 1404 .
- File system 1404 can be a storage disk or a plurality of disks. In some embodiments, file system 1404 can be flash memory, semiconductor (solid state) memory or the like. File system 1404 can provide high capacity storage capability for the spectro-colorimeter system 1400 .
- spectro-colorimeter system 1400 can also include a cache 1406 .
- Cache 1406 can include, for example, Random-Access Memory (RAM) provided by semiconductor memory, according to some embodiments.
- the relative access time for cache 1406 can be substantially shorter than for file system 1404 .
- file system 1404 may include a higher storage capacity than cache 1406 .
- Spectro-colorimeter system 1400 can also include a RAM 1405 and a Read-Only Memory (ROM) 1407 .
- ROM 1407 can store programs, utilities or processes to be executed in a non-volatile manner.
- RAM 1405 can provide volatile data storage, such as for cache 1406 .
- Spectro-colorimeter system 1400 can also include user input device 1408 allowing a user to interact with the spectro-colorimeter system 1400 .
- user input device 1408 can take a variety of forms, such as a button, a keypad, a dial, a touch screen, an audio input interface, a visual/image capture input interface, an input in the form of sensor data, and any other input device.
- spectro-colorimeter system 1400 can include a display 1410 (screen display) that can be controlled by processor 1402 to display information, such as test results and calibration test results, to the user.
- Data bus 1416 can facilitate data transfer between at least file system 1404 , cache 1406 , processor 1402 , and controller 1470 .
- Controller 1470 can be used to interface with and control different devices such as camera system 1450 , spectrometer system 1460 , and calibration target display 1420 . Controller 1470 may also control motors to position mirrors or lenses through appropriate codecs. For example, control bus 1474 can be used to control camera system 1450 .
- Spectro-colorimeter system 1400 can also include a network/bus interface 1411 that couples to data link 1412 .
- Data link 1412 allows spectro-colorimeter system 1400 to couple to a host computer or to accessory devices or to other networks such as the internet.
- data link 1412 can be provided over a wired connection or a wireless connection.
- network/bus interface 1411 can include a wireless transceiver.
- sensor 1426 includes circuitry for detecting any number of stimuli.
- sensor 1426 can include any number of sensors for monitoring environmental conditions, such as a light sensor (e.g., a photometer), a temperature sensor, and so on.
- the various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination.
- Various aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software.
- the described embodiments can also be embodied as computer readable code on a computer readable medium for controlling manufacturing operations or as computer readable code on a computer readable medium for controlling a manufacturing line.
- the computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, HDDs, DVDs, magnetic tape, and optical data storage devices.
- the computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Abstract
Description
- The present application claims the benefit under 35 U.S.C. 119(e) of U.S. Prov. Pat. Appl. No. 61/808,549, entitled “IMAGING PIPELINE FOR SPECTRO-COLORIMETERS”, by Ye YIN, et al., filed on Apr. 4, 2013, the contents of which are hereby incorporated herein by reference in their entirety, for all purposes.
- The present disclosure is related to U.S. patent application Ser. No. 13/736,058, entitled “PARALLEL SENSING CONFIGURATION COVERS SPECTRUM AND COLORIMETRIC QUANTITIES WITH SPATIAL RESOLUTION,” by Ye Yin et al., filed on Jan. 7, 2013, the contents of which are hereby incorporated by reference in their entirety, for all purposes.
- The described embodiments relate generally to methods, devices, and systems for an imaging pipeline, and more particularly to an optical test equipment/method for display testing that features a calibration configuration including spectral and colorimetric measurements with spatial resolution.
- In the field of spectro-colorimeters, calibration procedures of an imaging system for image correction and a spectroscopic system for color correction are performed regularly. Imaging system calibration typically measures display artifacts such as black and yellow mura, Moiré patterns, display non-uniformity, linearization, and dark current correction. Conventionally, spectrometers are the typical instruments for color measurement. However, spectrometers can only measure one spot of flat uniform color, while typical imaging systems measure extended images in at least two dimensions to detect display artifacts. Using digital cameras as color measurement devices overcomes this limitation, but the performance of digital cameras in terms of accuracy, resolution, and precise color rendition is lower than that of spectrometers. A compromise is therefore made between a fast but inaccurate system using a digital camera, and a slow but highly precise system that alternates between a camera and a spectrometer.
- Therefore, what is desired is a method and a system for calibration of a spectro-colorimeter that is fast and provides high color accuracy and resolution together with detailed image correction capabilities.
- In a first embodiment, a spectro-colorimeter system for imaging pipeline is provided, the system including a camera system including a separating component and a camera. The separating component directs a first portion of an incident light to the camera system. The system may also include a spectrometer system having an optical channel, a slit, and a spectroscopic resolving element, the separating component directing a second portion of the incident light to the spectrometer system through the optical channel; a controller coupling the camera system and the spectrometer system. In some embodiments the camera system is configured to provide a color image with the first portion of the incident light. Also, in some embodiments the spectrometer system is configured to provide a tristimulus signal from the second portion of the incident light. Furthermore, in some embodiments the controller is configured to correct the color image from the camera system using the tristimulus signal from the spectrometer.
- In a second embodiment, an imaging pipeline method is provided, the method including providing a calibration target and receiving Red, Green, and Blue (RGB) data from a camera system. Also, the method may include receiving tristimulus (XYZ) data from a spectrometer system; providing a color correction matrix; and providing an error correction to the camera system.
- In yet another embodiment a method for color selection in an imaging pipeline calibration is provided. The method may include selecting a training sample and including the training sample in a predictor set when the training sample is not already included. The method may also include obtaining a color correction matrix using the predictor set; obtaining an error value using the color correction matrix and a plurality of test samples; and forming a set of error values from a plurality of predictor sets when no more training samples are selected. Furthermore, the method may include selecting a training sample and a predictor set from a set of error values and providing the color correction matrix and the selected predictor set when the error value is less than a tolerance.
- Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the described embodiments.
- The described embodiments may be better understood by reference to the following description and the accompanying drawings. Additionally, advantages of the described embodiments may be better understood by reference to the following description and accompanying drawings. These drawings do not limit any changes in form and detail that may be made to the described embodiments. Any such changes do not depart from the spirit and scope of the described embodiments.
- FIG. 1 illustrates a spectro-colorimeter system for handling an imaging pipeline, according to some embodiments.
- FIG. 2 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
- FIG. 3 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
- FIG. 4 illustrates a flow chart including steps in an imaging pipeline method, according to some embodiments.
- FIG. 5 illustrates a flow chart including steps in an imaging pipeline calibration method, according to some embodiments.
- FIG. 6A illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 6B illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 7A illustrates a color distribution chart for a plurality of test samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 7B illustrates a color distribution chart for a plurality of test samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 8 illustrates a flow chart including steps in a color selection algorithm used for an imaging pipeline calibration method, according to some embodiments.
- FIG. 9A illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- FIG. 9B illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- FIG. 9C illustrates a camera system response chart for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments.
- FIG. 10 illustrates a color distribution chart for a plurality of test samples measured and predicted in an imaging pipeline calibration method, according to some embodiments.
- FIG. 11 illustrates a camera display for a uniformity correction step of a camera system in an imaging pipeline calibration method, according to some embodiments.
- FIG. 12 illustrates an error average chart in an imaging pipeline calibration method, according to some embodiments.
- FIG. 13A illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 13B illustrates a color distribution chart for a plurality of training samples in an imaging pipeline calibration method, according to some embodiments.
- FIG. 14 illustrates a block diagram of a spectro-colorimeter system for handling an imaging pipeline, according to some embodiments.
- In the figures, elements referred to with the same or similar reference numerals include the same or similar structure, use, or procedure, as described in the first instance of occurrence of the reference numeral.
- Representative applications of methods and apparatus according to the present application are described in this section. These examples are being provided solely to add context and aid in the understanding of the described embodiments. It will thus be apparent to one skilled in the art that the described embodiments may be practiced without some or all of these specific details. In other instances, well known process steps have not been described in detail in order to avoid unnecessarily obscuring the described embodiments. Other applications are possible, such that the following examples should not be taken as limiting.
- In the following detailed description, references are made to the accompanying drawings, which form a part of the description and in which are shown, by way of illustration, specific embodiments in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable one skilled in the art to practice the described embodiments, it is understood that these examples are not limiting; such that other embodiments may be used, and changes may be made without departing from the spirit and scope of the described embodiments.
- Color measurement instruments fall into two general categories: broadband and narrowband. A broadband measurement instrument reports up to 3 color signals obtained by optically processing the input light through broadband filters. Photometers are the simplest example, providing a measurement only of the luminance of a stimulus. Photometers may be used to determine the nonlinear calibration function of displays. Densitometers are an example of broadband instruments that measure the optical density of light filtered through red, green and blue filters. Colorimeters are another example of broadband instruments; they directly report tristimulus (XYZ) values and their derivatives, such as CIELAB (i.e., the CIE 1976 (L*, a*, b*) color space; CIE abbreviates the French name of the International Commission on Illumination). Under the narrowband category fall instruments that report spectral data of dimensionality significantly larger than three.
- Spectrophotometers and spectroradiometers are examples of narrowband instruments. These instruments typically record spectral reflectance and radiance, respectively, within the visible spectrum in increments ranging from 1 to 10 nm, resulting in 30-200 channels. They also have the ability to internally calculate and report tristimulus coordinates from the narrowband spectral data. Spectroradiometers can measure both emissive and reflective stimuli, while spectrophotometers measure reflective stimuli. Imaging colorimeters or imaging photometers are imaging devices that behave like a camera. In some embodiments, imaging colorimeters include a time-sequential configuration or a Bayer-filter configuration. In some embodiments the time-sequential configuration separates the measurement objective color in a time-sequential manner by spinning a color wheel. At any particular moment, the measurement objective photons with a selected color transmit through the filter and hit the embedded CCD or CMOS imager inside the colorimeter. Accordingly, the overall display color information and imaging is reconstructed after at least one cycle of the color wheel spinning. In some embodiments, the imaging colorimeter separates color channels using a Bayer filter configuration. A Bayer filter configuration includes a color filter array composed of periodically aligned 2×2 filter elements. Each 2×2 filter element may include two green filters, one red filter and one blue filter. The time-sequential configuration may be more precise than the Bayer filter configuration. On the other hand, the Bayer filter configuration may be faster than the time-sequential configuration. Further, the Bayer filter configuration has a ‘one-shot’ capability for extracting color information, albeit with limited resolution. In some embodiments, an imaging colorimeter may include a spatial Foveon filter separating colors using vertically stacked photodiode layers.
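The 2×2 Bayer tile described above (one red, two green, one blue filter) can be sketched as a mapping from sensor pixel coordinates to filter color. The tile orientation chosen below (red at the top-left) is an assumption for illustration; the disclosure only specifies the 2:1:1 green/red/blue ratio.

```python
# Sketch of a Bayer color filter array: each sensor pixel sees light through
# one filter of a periodic 2x2 tile holding one red, two green, and one blue
# element. The tile orientation (red at the top-left) is a hypothetical choice.

BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def bayer_color(x, y):
    """Filter color in front of sensor pixel (x, y)."""
    return BAYER_TILE[y % 2][x % 2]

# Every aligned 2x2 block contains two greens, one red and one blue.
counts = {"R": 0, "G": 0, "B": 0}
for y in range(4):
    for x in range(4):
        counts[bayer_color(x, y)] += 1
```

This periodicity is what gives the Bayer configuration its ‘one-shot’ capability: all three channels are sampled in a single exposure, at the cost of spatial resolution per channel.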
- In embodiments disclosed herein a spectro-colorimeter including a camera-based display color measurement system has a master-slave structure. More specifically, in some embodiments a spectrometer is a master device, driving a camera as a slave device. The spectro-colorimeter includes a controller that adjusts camera accuracy to match the spectrometer accuracy, maintaining an image pipeline. Adjusting camera accuracy includes building a characterization model using a color correction matrix. The color correction matrix transforms the camera color space to spectrometer color space. Accordingly, the color correction matrix is a transformation between RGB values (a first 3-dimensional vector) and XYZ values (a second 3-dimensional vector). Since the spectrometer and the camera are integrated in a spectro-colorimeter system, the color correction matrix can be generated in real time. Thus, a continuous and fluid imaging pipeline is established for display testing.
-
FIG. 1 illustrates a spectro-colorimeter system 100 for handling an image pipeline, according to some embodiments. Spectro-colorimeter system 100 includes a camera system 150, a spectrometer system 160, and a controller system 170. Controller system 170 provides data exchange and control commands between spectrometer system 160 and camera system 150. Also shown in FIG. 1 is characterization target 120. Characterization target 120 provides an optical target so that camera system 150 may form a 2-dimensional (2D) image on a sensor array in an image plane of a camera 155. In some embodiments the sensor array is a 2D charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor array. In some embodiments, characterization target 120 may be an emissive target or a reflective target. Examples of characterization target 120 may include a liquid crystal display (LCD), a light emitting diode (LED) display, or any other type of TV or display, such as used in a TV, a computer, a cellular phone, a laptop, a tablet computer, or any other portable or handheld device. - Spectro-colorimeter system 100, as in embodiments disclosed herein, is able to acquire a high resolution spectrum and form an imaging pipeline simultaneously. Accordingly, the spectral measurement and the imaging may share the measurement lighting area at approximately the same or similar time. Light 110 from characterization target 120 is incident on a separating component 130, which splits a portion of incident light 110 towards camera system 150 and a portion of incident light 110 towards spectrometer system 160. Accordingly, in some embodiments separating component 130 is a beam splitter. Further, according to some embodiments, separating component 130 may be a mirror having an aperture 131 on the surface. - A portion of incident light 110 separated by separating
component 130 is directed by optical channel 140 into spectrometer system 160. Optical channel 140 may include an optical channel, a transparent conduit, lenses and mirrors, and free space optics. Lens 167 focuses the incident light through a slit 161 into spectrometer system 160. Spectrometer system 160 may include a collimating mirror 162, a spectroscopic resolving element 164, a focusing mirror 163, and a detector array 165. Accordingly, in some embodiments slit 161, mirrors 162 and 163, spectroscopic resolving element 164, and sensor array 165 are arranged in a crossed Czerny-Turner configuration. In some embodiments, spectroscopic resolving element 164 may be a diffraction grating or a prism. One of ordinary skill in the art will recognize that the peculiarities of the spectrometer system configuration are not limiting to embodiments consistent with the present disclosure. Spectrometer system 160 may include a processor circuit 168 and a memory circuit 169. Memory circuit 169 may store commands that, when executed by processor circuit 168, cause spectrometer system 160 to perform the many different operations consistent with embodiments in the present disclosure. For example, processor circuit 168 may establish communication with controller circuit 170, and provide data and commands to camera system 150. Processor circuit 168 may also be configured to execute commands provided by controller 170. In some embodiments, processor circuit 168 may provide a tristimulus vector XYZ to controller 170. Accordingly, the tristimulus vector XYZ may include highly resolved spectral information from characterization target 120 provided by spectrometer system 160 to controller 170. - A portion of incident light 110 reflected from separating
component 130 is directed by optical component 135 towards imaging camera 155. Optical component 135 may include a mirror, a lens, a prism, or any combination of the above. Camera system 150 may include a processor circuit 158 and a memory circuit 159. Memory circuit 159 may store commands that, when executed by processor circuit 158, cause camera system 150 to perform the many different operations consistent with embodiments in the present disclosure. For example, processor circuit 158 may establish communication with controller circuit 170, and provide data and commands to spectrometer system 160. Processor circuit 158 may also be configured to execute commands provided by controller 170. Also, in some embodiments processor circuit 158 provides RGB values measured by camera system 150 to controller 170. - Thus, embodiments consistent with the present disclosure substantially reduce test time of
characterization target 120 using simultaneous capture of a large number of measurements in a single image. Embodiments as disclosed herein also provide camera system 150 (e.g., a CCD device) coupled to spectrometer system 160 in an imaging pipeline. Thus, the highly resolved 2-dimensional information of camera system 150 may be calibrated in real time with the highly resolved spectral information provided by spectrometer system 160. -
FIG. 2 illustrates a flow chart including steps in an imaging pipeline method 200, according to some embodiments. Some steps in imaging pipeline method 200 may be applied in a production environment for display devices (e.g., a factory), using a ‘golden’ sample, for example once a month. In some embodiments, steps in imaging pipeline method 200 may be performed more frequently, such as for every display being tested. Some steps in imaging pipeline method 200 may be performed for each one of a plurality of images tested on each display. Steps in method 200 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and by a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1). - Step 210 includes providing a calibration target. In some embodiments,
step 210 may include selecting a plurality of screen displays having standardized characteristics. For example, the plurality of screen displays may include a set of screens, each having a single, pre-determined color. In some embodiments, selecting a plurality of screen displays may include selecting screen displays having spatial uniformity. For example, step 210 may include selecting a plurality of screen displays having a uniform intensity. Step 220 includes receiving RGB data from camera system 150. Step 230 includes receiving XYZ data from spectrometer system 160. The XYZ data received in step 230 may include a tristimulus vector determined by a highly resolved spectral analysis of incident light 110. Step 240 may include providing a color correction matrix (CCM). The CCM transforms RGB values provided by camera system 150 into a device independent color space, such as the CIE tristimulus vector XYZ. Step 250 includes providing an error correction to camera system 150 so that the camera system may adjust the image settings. In some embodiments, steps - Since the spectral sensitivity functions of
camera system 150 may not be identical to the CIE color matching functions of human vision, the output responses of camera system 150 and the tristimulus values from spectrometer system 160 are related by a characterization model, through which the output responses of camera system 150 are transformed to CIE colorimetric values, such as XYZ or CIELAB. The model is developed based on two sets of data: colorimetric values (e.g., tristimulus vector XYZ) provided by spectrometer system 160 and camera responses (e.g., RGB output) from camera system 150. Accordingly, the colorimetric response and the camera responses originate from a characterization target. For example, the characterization target may be characterization target 120. In some embodiments, a calibration method of an imaging pipeline may include characterization targets that are accurate colorimetric standards. Thus, a calibration process as in imaging pipeline method 200 may provide a reliable camera model that may be used in a display manufacturing environment. - The CCM in
step 240 may be constructed by simultaneously measuring the RGB response of camera system 150 and the XYZ colorimetric values provided by spectrometer system 160 from characterization target 120. - Most characterization models are built by first measuring the characterization target on the media considered, and then generating a mathematical model to transform any color in the device color space to a particular color space. It is often possible to define the relationship between two color spaces through a 3 by 3 matrix. For example,
- X=m 1,1 R+m 1,2 G+m 1,3 B; Y=m 2,1 R+m 2,2 G+m 2,3 B; Z=m 3,1 R+m 3,2 G+m 3,3 B Eq. 1
- where X, Y and Z may be the CIE tristimulus values provided by
spectrometer system 160. R, G and B are camera signals provided by camera system 150. However, when modeling many devices the 3 by 3 matrix does not yield a sufficiently accurate model, and a more complex or non-linear model may be desirable.
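A minimal sketch of the 3 by 3 relationship of Eq. 1, mapping a camera RGB vector to tristimulus XYZ by matrix multiplication. The matrix entries and the camera reading below are hypothetical placeholders, not calibration results from the disclosure.

```python
# Sketch of Eq. 1: a 3x3 color correction matrix (CCM) maps a camera RGB
# vector to CIE tristimulus XYZ. The numeric values are hypothetical.

def apply_ccm(ccm, rgb):
    """Multiply a 3x3 CCM by a 3-element RGB vector."""
    return [sum(ccm[row][col] * rgb[col] for col in range(3))
            for row in range(3)]

# Hypothetical CCM and camera reading.
ccm = [
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
]
rgb = [0.5, 0.4, 0.3]
xyz = apply_ccm(ccm, rgb)
```

As the surrounding text notes, a single fixed 3 by 3 matrix like this is only accurate for devices whose channels mix linearly; the polynomial extension below relaxes that assumption.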
-
- X=Σ q x,iR,iG,iB R iR G iG B iB , Y=Σ q y,iR,iG,iB R iR G iG B iB , Z=Σ q z,iR,iG,iB R iR G iG B iB , where each sum runs over iR+iG+iB≤nP Eq. 2
- where iR, iG and iB are nonnegative integer indices representing the order of the R, G and B camera response; nP is the order of the polynomial model; and q x,iR,iG,iB , q y,iR,iG,iB , and q z,iR,iG,iB are the model coefficients to be determined. When all of iR, iG and iB are allowed to be zero, the constant coefficients will be included. When nP=1, Eq. 2 becomes:
X=q x,0,0,0 +q x,1,0,0 R+q x,0,1,0 G+q x,0,0,1 B -
Y=q y,0,0,0 +q y,1,0,0 R+q y,0,1,0 G+q y,0,0,1 B
Z=q z,0,0,0 +q z,1,0,0 R+q z,0,1,0 G+q z,0,0,1 B Eq. 3 - and when nP=2, Eq.2 becomes:
-
X=q x,0,0,0 +q x,1,0,0 R+q x,0,1,0 G+q x,0,0,1 B+q x,2,0,0 R 2 +q x,0,2,0 G 2 +q x,0,0,2 B 2 +q x,1,1,0 RG+q x,1,0,1 RB+q x,0,1,1 GB -
Y=q y,0,0,0 +q y,1,0,0 R+q y,0,1,0 G+q y,0,0,1 B+q y,2,0,0 R 2 +q y,0,2,0 G 2 +q y,0,0,2 B 2 +q y,1,1,0 RG+q y,1,0,1 RB+q y,0,1,1 GB
Z=q z,0,0,0 +q z,1,0,0 R+q z,0,1,0 G+q z,0,0,1 B+q z,2,0,0 R 2 +q z,0,2,0 G 2 +q z,0,0,2 B 2 +q z,1,1,0 RG+q z,1,0,1 RB+q z,0,1,1 GB Eq. 4 - Eq.1 can be expressed in matrix form as given in Eq. 5:
- Thus, for nP=1, Q is a 3 by 4 matrix:
-
- Q = [q x,0,0,0 q x,1,0,0 q x,0,1,0 q x,0,0,1 ; q y,0,0,0 q y,1,0,0 q y,0,1,0 q y,0,0,1 ; q z,0,0,0 q z,1,0,0 q z,0,1,0 q z,0,0,1 ]
- and ā=(1, R, G, B)T. The sizes of c , Q and ā for increasing polynomial orders are summarized in Table 1.
TABLE 1
Sizes of the matrices for polynomial models
nP   c   Q (3 × Np)   ā
1    3   3 × 4         4
2    3   3 × 10       10
3    3   3 × 20       20
4    3   3 × 35       35
5    3   3 × 56       56
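The term counts Np in Table 1 can be reproduced by enumerating the exponent triples of Eq. 2; a short sketch:

```python
# Sketch: enumerate the polynomial terms R^iR * G^iG * B^iB of Eq. 2 with
# iR + iG + iB <= nP. The number of terms should reproduce the Np column of
# Table 1 (4, 10, 20, 35, 56 for nP = 1..5).

from itertools import product

def polynomial_exponents(np_order):
    """All (iR, iG, iB) exponent triples with iR + iG + iB <= np_order."""
    return [(ir, ig, ib)
            for ir, ig, ib in product(range(np_order + 1), repeat=3)
            if ir + ig + ib <= np_order]

def polynomial_terms(rgb, np_order):
    """Evaluate each term R^iR * G^iG * B^iB for a camera response vector."""
    r, g, b = rgb
    return [r**ir * g**ig * b**ib
            for ir, ig, ib in polynomial_exponents(np_order)]

sizes = [len(polynomial_exponents(n)) for n in range(1, 6)]
```

The count for order n is the number of nonnegative integer triples summing to at most n, i.e. the binomial coefficient C(n+3, 3), which matches the table.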
-
- (1) a training step, in which the mapping matrix Q is determined from samples with known camera responses and tristimulus values; and
- (2) a prediction step, in which the tristimulus vector is computed as c =Q ā from the term vector ā of a new camera response.
- Suppose K samples are available. For each sample, the camera response vector ō can be obtained by imaging the sample using camera. The tristimulus values vector
c can be also measured by physical measurement such as spectrophotometers. Hence there are K tristimulus values vectors:c k, k=1, 2, . . . , K; and K camera response vectors: ōk, k=1, 2, . . . , K, form the K vectors k, k=1, 2, . . . , K. Then Eq.5 can be expressed as: -
- c k =Q ā k , k=1, 2, . . . , K - Collecting the vectors c k as the columns of a matrix C, and the vectors ā k as the columns of a matrix A,
results in matrix equation: -
C=QA Eq. 9 - where C is 3 by K matrix, Q is 3 by Np matrix and A is Np by K matrix. In the above matrix equation, matrix Q is unknown. Since both of the matrices C and Q have three rows.
- Let {tilde over (C)}j, j=1,2,3, represents the three row vectors of the matrix C, and {tilde over (Q)}j, j=1, 2, 3, are the three row vectors of the matrix Q. Thus, the matrix in Eq.9 can be split to three linear systems of equations:
-
{tilde over (C)} j ={tilde over (Q)} j A Eq. 10 -
or -
c j =A T q j , with c j=({tilde over (C)} j)T , q j=({tilde over (Q)} j)T , j=1,2,3 Eq. 11
c j and areq j and Np column vectors. - When K>Np, the linear system of equation will
c ej=ATq j have no solution. If K<Np, the equation will have many solutions. In fact when K=Np, it may have unique solution, or many solutions or no solution depending on the conditions of the vectorc j and matrix AT. In general, the least squares solution is required, which is formulated as minimizing the expression: -
∥A T q j −c j∥2 Eq. 12
c j∥2 denotes the 2-norm of the vectorc j. The above solution can be calculated by -
q j=(A T)+ c j Eq. 13
- If K=Np and Eq.13 has a unique solution, (AT)+ becomes the normal inverse (AT)−1 of the matrix AT. If the problem (Eq.13) has many solutions, the above solution will become the minimum norm solution. Note also that
q j=({tilde over (Q)}j)T in Eq.12, thus after some algebraic manipulations the mapping Q is finally given by -
Q=CA + Eq. 14 - The above K samples with known camera responses ōk and tristimulus values vectors
c k which are applied to compute the matrix Q are called the training datasets. - For example, when using an individual sample to determine matrix Q in Eq.5, there are many matrices Q satisfying Eq.5. Constraints such as the above normalization are desirable since the unknown model parameters are used as multipliers. It is desirable that these parameters be smaller in magnitude in order to reduce noise propagation and to prevent local oscillation in prediction. In some embodiments, a minimum norm used may be the square root of the sum of squared unknowns (elements in the Q matrix). In the proposed method, the pseudo or generalized inverse is defined in Eq.14. Hence, regardless of the number of samples used, the matrix Q with minimum norm is obtained, resulting in a unique solution in each case.
- Generally, a better mapping to the characterization target can be obtained by high-order polynomials which involves more terms in the matrix. However, their experimental results show that several particular terms used such as RGB (first order polynomial, white color) and 1 (zero order polynomial, black color) can provide a more accurate prediction.
- Generally, when more colors are included in
characterization target 120, the model can predict with better accuracy until the model performance stabilizes. A large number of colors may increase production costs in terms of testing time and complexity, while increasing the accuracy of the color rendition ofcamera system 150. Accordingly, embodiments consistent with the present disclosure provide an optimized set of display colors to constructcharacterization target 120 with reduced impact in testing time and complexity, while maximizing colorimetric accuracy. -
FIG. 3 illustrates a flow chart including steps in an imaging pipeline method 300, according to some embodiments. Steps in method 300 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and by a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1). - The image quality of
camera system 150 can significantly vary with the method used at each step of the image-processing pipeline. In camera system 150, the image pipeline involves exposure time determination, defective pixel correction, linearization, dark current removal, uniformity correction, spatial demosaicing, display area detection, a clipping algorithm, and binning. Since the aim is to accurately correlate the camera response to the spectrometer and to detect display artifacts, the effects of exposure time, linearization, dark current removal, uniformity correction and the clipping algorithm on image quality are fully studied. Imaging pipeline method 300 may include a calibration of camera system 150. Step 310 includes forming an image from characterization target 120. Step 315 includes correcting defective pixels. The defective pixels may be included in the 2D sensor array of camera 155 (cf. FIG. 1). Step 320 may include correcting signal linearity. Step 325 may include compensating for lens shading effects.
camera system 150. - Each image is obtained with dark current removal and uniformity correction. The camera dark current is measured with no ambient light by 10 times, we get the average RGB reading values after 10 times measurements:
-
TABLE I
Camera dark current in R, G, and B channels
              R         G          B
Dark Current  0.424042  0.4193533  0.4701065
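The dark-current removal of step 330 can be sketched as averaging repeated dark captures and subtracting the result from subsequent readings. The frame values below are hypothetical stand-ins for the ten dark captures, not the measured values in Table I.

```python
# Sketch of dark current removal: estimate per-channel dark current by
# averaging frames captured with no ambient light, then subtract it from
# later readings. Frame values are hypothetical.

def average_dark(frames):
    """Average per-channel dark readings over repeated captures."""
    n = len(frames)
    return [sum(f[ch] for f in frames) / n for ch in range(3)]

def remove_dark(rgb, dark):
    """Subtract dark current, clamping at zero."""
    return [max(0.0, v - d) for v, d in zip(rgb, dark)]

# Ten hypothetical dark captures (R, G, B counts).
dark_frames = [[0.42 + 0.01 * (i % 3), 0.41, 0.47] for i in range(10)]
dark = average_dark(dark_frames)
corrected = remove_dark([10.0, 20.0, 0.3], dark)
```

Clamping at zero mirrors the physical constraint that a corrected signal cannot be negative; readings at or below the dark level carry no usable information.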
-
TABLE 2
CCM derived from the 24 GretagMacbeth ColorChecker chart (a) with and (b) without dark current removal
(a)
  3.6417   2.3073   0.4027
  1.0473   5.9733  −0.7247
 −0.0341  −1.2831   5.9315
(b)
  3.5314   2.3792   0.3478
  0.94161  6.042   −0.7775
 −0.1289  −1.2208   5.8851
-
TABLE 3
Training and testing performance of the CCM (a) with and (b) without dark current removal
E00                       min       mean      median    max
(a) Training performance  0.222524  1.595685  0.855206  10.516312
    Testing performance   0.039123  1.158254  0.616609  17.589503
(b) Training performance  0.257385  1.746228  1.04665    9.943352
    Testing performance   0.033203  1.339187  0.766321  16.974136
camera system 150. For example, step 335 may include correct of Mura and Moire artifacts in the image. When the lines in the display happen to line up closely with some of the lines of CCD sensor, the Moire patterns will occur an interference pattern. An optical low pass filter or a digital filter may be used to remove the artifacts. Accordingly, in some embodiments step 335 may include correction of artifacts resulting from a larger field of view ofcamera 155 relative tocharacterization target 120. An algorithm to detect the points of interest (POI) (the portion of a sensor array including light 110 from characterization target 120) may crop the area from a full camera view. Since the display testing patterns are uniform colors, a technique of edge detection is used. A measure of edge strength such as gradient magnitude is derived for searching local directional maxima magnitude. Based on the magnitude, a threshold is applied to decide whether edges are present of not at an image point. The higher the threshold, the more edges will be removed. - Step 340 may include balancing a white display. Accordingly, step 345 may include presenting a standard ‘white’
characterization target 120 and determine the RGB camera output. Step 345 may include correct the gamma value ofcamera system 150. Step 350 may include providing RGB data for a color correction matrix step. Accordingly, step 350 may include providing RGB data aftersteps 310 through 345 are completed, tocontroller 170.Controller 170 may then form CCM matrix executing step 240 (cf.FIG. 2 ). Step 355 includes receiving a CCM. For example, step 355 may includeprocessor circuit 158 receiving CCM fromcontroller 170 whenstep 240 is complete (cf.FIG. 2 ). Step 360 includes providing corrected RGB data from the received CCM. Accordingly, step 360 may include receiving tristimulus data XYZ together with CCM, so thatprocessor circuit 158 may obtain the corrected RGB values. In someembodiments processor circuit 158 may receive corrected RGB values directly fromcontroller 170. Step 365 includes receiving an error value. The error value may be a difference between the RGB data provided instep 350 and the corrected RGB data provided instep 360. Instep 370processor circuit 158 determines whether or not the error value is below or above a tolerance value. When the error value is below the tolerance, then step 375 includes obtaining a tristimulus XYZ image from the CCM and the corrected RGB data. Accordingly, the XYZ image provided instep 375 may have a high colorimetric accuracy since it uses data provided by a highresolution spectrometer system 160 and acontroller 170 forming a CCM as in step 240 (cf.FIG. 2 and Eqs. 1-14 above). When the error value is above tolerance then imagingpipeline method 300 is repeated fromstep 350. -
FIG. 4 illustrates a flow chart including steps in an imaging pipeline method 400, according to some embodiments. Steps in method 400 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and by a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1). -
Imaging pipeline method 400 may include a calibration of spectrometer system 160. Step 410 may include correcting a signal linearity. For example, the signal linearity may be the linearity of sensor array 165 (cf. FIG. 1). In some embodiments, step 410 is performed by providing a uniform light source to spectrometer system 160. Step 420 may include adjusting a wavelength scale. Step 430 may include adjusting the spectral sensitivity. Step 440 may include correcting for a dark current. The dark level error may be caused by an imperfect glass trap and specular beam error. Thus, step 440 may include placing a glass wedge in the optical path of spectrometer system 160. Step 450 may include receiving a characterization target light. Step 460 may include providing XYZ data from the spectrum formed with the received characterization light.
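The spectrum-to-XYZ computation of step 460 can be sketched as a weighted sum over the sampled wavelengths. The five-sample spectrum and the three weighting curves below are toy illustration values, not the CIE standard observer tables.

```python
# Sketch of step 460: tristimulus values are obtained by weighting the
# measured spectrum with color-matching functions and summing over the
# sampled wavelengths. All numeric curves here are hypothetical.

def tristimulus(spectrum, cmf_x, cmf_y, cmf_z, delta_nm):
    """Numerically integrate spectrum * CMF over the sampled wavelengths."""
    x = sum(s * w for s, w in zip(spectrum, cmf_x)) * delta_nm
    y = sum(s * w for s, w in zip(spectrum, cmf_y)) * delta_nm
    z = sum(s * w for s, w in zip(spectrum, cmf_z)) * delta_nm
    return x, y, z

# Toy 5-sample spectrum and weighting curves (10 nm spacing).
spectrum = [0.2, 0.5, 1.0, 0.5, 0.2]
cmf_x = [0.0, 0.1, 0.3, 0.6, 0.2]
cmf_y = [0.0, 0.2, 0.7, 0.4, 0.1]
cmf_z = [0.5, 0.4, 0.1, 0.0, 0.0]
X, Y, Z = tristimulus(spectrum, cmf_x, cmf_y, cmf_z, 10.0)
```

A real spectrometer pipeline would use the CIE observer tables at the instrument's wavelength spacing; the structure of the sum is the same.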
FIG. 5 illustrates a flow chart including steps in an imaging pipeline calibration method 500, according to some embodiments. Steps in method 500 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer system 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and by a processor circuit in the spectrometer system (e.g., processor circuits 158 and 168, and memory circuits 159 and 169, cf. FIG. 1).
step 510. Accordingly, step 520 may include digitally processing the training samples provided instep 510 to generate a larger number of test samples. A plurality of training samples as selected instep 510 may be as described in detail below, with reference toFIGS. 6A and 6B . In one example,training samples 610 may be obtained from a well-known standard chart. The ColorChecker® Color Rendition Chart supplied by Macbeth Company in 1976 is now called ColorChecker® (CC) owned by X-Rite. It has been widely used as reference in the field of photography and video. The chart includes a matrix of 24 scientifically prepared color squares including three additive and three subtractive primaries, 6 greyscale tones, and natural color objects such as foliage, human skin and blue sky which exemplify the color of their counterparts. These 24 colors are reproduced on the testing display ascharacterization target 120. -
FIG. 6A illustrates a color distribution chart 600A for a plurality of training samples 610 in an imaging pipeline calibration method, according to some embodiments. FIG. 6A shows the color distribution of the CC chart on the a*b* plane. Accordingly, the abscissa 601A in chart 600A corresponds to the a* value (red-green scale), and the ordinate 602A in chart 600A corresponds to the b* value (yellow-blue scale). The CC chart may include a set of gray scale colors 620 that are displayed at the origin of chart 600A (neutral color).
step 345, cf.FIG. 3 ). -
FIG. 6B illustrates a color distribution chart 600B for the plurality of training samples 610 in an imaging pipeline calibration method, according to some embodiments. FIG. 6B shows the color distribution of the CC chart on the L*-C*ab plane. Accordingly, the abscissa 601B in chart 600B corresponds to the C*ab value (√(a*2+b*2)), and the ordinate 602B in chart 600B corresponds to the L* value (lightness). Training samples 610 may include a set 620 of gray scale colors that are displayed along the 602B axis at regular intervals (evenly graded ‘lightness’).
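The CIELAB quantities plotted in FIGS. 6A-6B follow from the standard CIE 1976 conversion from XYZ; a sketch, with the D65 white point chosen here only as a common example:

```python
import math

# Sketch: convert tristimulus XYZ to CIELAB (L*, a*, b*) and compute the
# chroma C*ab = sqrt(a*^2 + b*^2) plotted on the abscissa of FIG. 6B.

def xyz_to_lab(xyz, white):
    def f(t):
        eps = (6.0 / 29.0) ** 3
        if t > eps:
            return t ** (1.0 / 3.0)
        return t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = (f(v / n) for v, n in zip(xyz, white))
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b

def chroma(a, b):
    return math.hypot(a, b)

# A neutral sample equal to the white point maps to L* = 100, C*ab = 0,
# consistent with the grey-scale points lying on the L* axis in FIG. 6B.
white = (95.047, 100.0, 108.883)   # D65, a common choice
L, a, b = xyz_to_lab(white, white)
```

This is why the grey-scale set 620 collapses onto the ordinate: neutral colors have a* = b* = 0, so their chroma vanishes while L* grades their lightness.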
FIG. 5 above) may be as described in detail with respect toFIGS. 7A and 7B , below. InFIG. 7A the abscissa 701A and ordinate 702A may be as inFIG. 6A . And inFIG. 7B the abscissa 701B and ordinate 701B may be as inFIG. 6B . -
FIG. 7A illustrates a color distribution chart 700A for a plurality of test samples 710 in an imaging pipeline calibration method, according to some embodiments. Accordingly, the test samples may include 729 uniformly distributed colors in the display color gamut. One of ordinary skill will recognize that there is nothing limiting about the number of data points in test sample set 710.
test colors 710 are formed fromtraining colors 610 using 16 bits intervals along red, green and blue channels plus a grey scale are accumulated to have 729 colors. These colors uniformly distribute in the display color gamut as shown inFIGS. 7A and 7B . -
FIG. 7B illustrates a color distribution chart 700B for a plurality of test samples 710 in an imaging pipeline calibration method, according to some embodiments. Test samples 710 may include gray scale samples 720. Chart 700B shows an L*-C*ab plane, so that gray scale points 720 are clearly distinguishable along the L* axis (ordinates). - Based on test sample set 710, a color selection algorithm is applied to select colors to establish a characterization target for display measurement. This set is also applied to test the robustness of characterization targets.
-
FIG. 8 illustrates a flow chart including steps in a color selection algorithm 800 used for an imaging pipeline calibration method, according to some embodiments. Algorithm 800 may include a color selection algorithm (CSA) to achieve high color accuracy in terms of color differences. In other words, CSA 800 may achieve high color resolution. During the selection process, a source dataset including XYZ and camera RGB values is first provided (see vectors c and a, in reference to step 240 in method 200, cf. FIG. 2). The numbers of samples in the source dataset and in the training dataset (the samples selected from the source dataset) are known. - Steps in
method 800 may be performed by a controller using data provided by a camera system and a spectrometer system (e.g., controller 170, camera system 150, and spectrometer 160, cf. FIG. 1). Accordingly, the data provided to the controller may be stored in a memory circuit and processed by a processor circuit in the camera system, and in a memory circuit and a processor circuit in the spectrometer system (e.g., the processor circuits and memory circuits of FIG. 1). - Step 810 includes collecting a training sample. Accordingly, step 810 may include selecting a training set from a standardized set. The standardized set may be a set of calibration colors. If K is the total number of samples in a training set, a value κ may be predefined as the number of training samples to form a predictor set. Thus, κ may be a 'dimension' of the predictor set. In some embodiments,
method 800 starts with κ equal to zero. Since there are K training samples, each sample is a candidate. Each of the K samples is first used (κ=0) to obtain a predictor set; thus, K models are obtained. Step 815 includes a query as to whether or not the training sample is already included in a predictor set. If the training sample is already included in the predictor set, then method 800 starts again with a new training sample, to form a new predictor set. A predictor set may include matrices C and AT, including vectors c and a (cf. the detailed description of step 240 in method 200, FIG. 2). Thus, the predictor set may include tristimulus values (XYZ, vector c) from spectrometer system 160, and RGB values from camera system 150 (vector a, formed from RGB values according to Eq. 2). When the training sample is not included in the predictor set, step 820 adds the training sample to the predictor set. In some embodiments, the predictor set may be empty, so that the first training sample selected in step 810 may automatically be used in the predictor set. In step 825 a CCM is obtained using the predictor set. Accordingly, the CCM may be formed as matrix Q, from matrices C and A (cf. Eq. 14). Step 830 includes obtaining an error value from a plurality of test samples. For example, step 830 may include obtaining RGB values for a plurality of test samples from the tristimulus values XYZ provided by spectrometer system 160 and the CCM matrix Q. Step 830 may further include comparing the obtained RGB values with the RGB values provided by camera system 150 for each test sample. - The set of test samples used in
step 830 may be much larger than the set of training samples used to form the predictor set. For example, the set of training samples in steps 810 through 825 may be as training set 610 (cf. FIGS. 6A and 6B), and the set of test samples in step 830 may be as test set 710 (cf. FIGS. 7A and 7B). Step 830 may include obtaining a single error value from a set of error values for each of the test samples. In some embodiments, step 830 may include averaging the error values from the set of error values for each of the test samples. In some embodiments, step 830 may include selecting an error value from a statistical distribution of the error values for all the test samples. For example, the median, the mean, the maximum, or the minimum error value in a distribution of error values may be selected in step 830. Step 835 includes querying whether or not a new training sample is selected. For example, if a training sample remains to be selected, then steps 810 through 835 are repeated until the result in step 835 is a 'no.' In some embodiments, step 835 may produce a 'no' when all training samples in the set of training samples have been selected or included in a predictor set. Accordingly, up to step 835 a plurality of predictor sets is selected, each predictor set having the same number of c vectors and a vectors (κ+1). Moreover, each predictor set up to step 835 includes the same set of κ c vectors and κ a vectors, except for the c vector and a vector selected in the last iteration of steps 810 through 835. Also, within a single predictor set, all (κ+1) vectors c may be different from one another, and all (κ+1) vectors a may be different from one another. Thus, up to step 835 an error value is assigned to each of the predictor sets associated with each selected training sample. - When
step 835 results in a 'no' answer, then step 840 includes forming a set of error values from the plurality of predictor sets. Step 845 includes selecting a training sample and a predictor set from the set of error values. Accordingly, step 845 includes selecting the training sample that provides the lowest error in the set of error values formed in step 840. If the error value of the selected predictor set is less than a tolerance value according to step 850, then the predictor set is used to form the CCM matrix in step 855. Accordingly, step 855 may include forming matrix Q using the (κ+1) c vectors and the (κ+1) a vectors from the selected predictor set, as in Eq. 14. If the error value of the selected predictor set is greater than or equal to the tolerance value according to step 850, then method 800 may be repeated from step 810, with the dimension of the predictor set increased by one (1). - For example, the second iteration of
steps 810 through 845 should provide the best combination of two c vectors and a vectors for a predictor set. In order to avoid selecting two identical c vectors or two identical a vectors, the previously selected training sample's c vector and a vector are removed from the source dataset, and κ is set to one (1). Once again, each remaining training sample, combined with the already selected training sample, is used for training the model in steps 810 through 845. Thus, in the second iteration, the number of predictor sets in step 840 will be K−1 models. Again, each predictor set is used to predict the full source dataset. From the predictions, the sample that, combined with the already selected training sample, yields the smallest color difference will be selected. - Accordingly,
method 800 may be repeated until it reaches a number of training samples producing an error lower than the threshold. This CSA is simple and easy to implement. According to some embodiments, a predictor set having a single element may include the lightest neutral color in the training set, with a mean error value (ΔE00) of about 15. Thus, in some embodiments it is desirable that the lightest neutral color be included in the training set. -
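The CSA above can be sketched as a greedy forward selection. The sketch below is a simplified stand-in: the CCM is fit by ordinary least squares (Eq. 14 is not reproduced here, so this form is an assumption), a Euclidean error proxy replaces CIEDE2000, and the synthetic data and variable names are illustrative.

```python
import numpy as np

def fit_ccm(C_sel, A_sel):
    # Least-squares CCM Q so that C ~ Q @ A (assumed form of Eq. 14).
    X, *_ = np.linalg.lstsq(A_sel.T, C_sel.T, rcond=None)
    return X.T

def greedy_csa(A, C, A_test, C_test, tol):
    """Grow the predictor set one training sample at a time (steps
    810-845), always adding the sample whose CCM yields the lowest
    mean test error, until the tolerance is met (step 850)."""
    selected, remaining = [], list(range(A.shape[1]))
    while remaining:
        scored = []
        for k in remaining:  # try each remaining candidate sample
            Q = fit_ccm(C[:, selected + [k]], A[:, selected + [k]])
            err = np.mean(np.linalg.norm(Q @ A_test - C_test, axis=0))
            scored.append((err, k))
        best_err, best_k = min(scored)   # steps 840-845: keep the best
        selected.append(best_k)
        remaining.remove(best_k)
        if best_err < tol:               # step 850: tolerance reached
            break
    return selected, fit_ccm(C[:, selected], A[:, selected])

# Hypothetical linear camera: 24 training colors, 729 test colors.
rng = np.random.default_rng(1)
Q_true = rng.random((3, 3))
A_train = rng.random((3, 24)); C_train = Q_true @ A_train
A_test = rng.random((3, 729)); C_test = Q_true @ A_test
selected, Q = greedy_csa(A_train, C_train, A_test, C_test, tol=1e-8)
print(len(selected))  # only a few samples are needed on noiseless data
```

On noiseless synthetic data the error collapses once a few linearly independent samples are selected; the disclosure's observation that four training samples reach a mean ΔE00 near one follows the same pattern on real measurements.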
FIG. 9A illustrates a camera system response chart 900A for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments. Chart 900A may be the result of step 320 in imaging pipeline method 300 (cf. FIG. 3). Abscissa 901 in chart 900A may be associated with a tristimulus XYZ vector provided by spectrometer system 160, such as luminance L*, or a Y coordinate. Ordinate 902 in chart 900A may be associated with RGB data from camera system 150, such as the 'Green' count, 'G.' Data points 910 may be associated with each training sample in a set of training samples (e.g., training samples 610, cf. FIGS. 6A and 6B). Data points 910 may also comprise gray scale data points 920-1, 920-2, 920-3, 920-4, 920-5, and 920-6. To correct for signal linearity, the exposure time of camera 155 in camera system 150 may be adjusted, as follows. Chart 900A is associated with a fixed exposure time scenario. In particular, the exposure time may be a few milliseconds, such as less than 10 ms, or 10, 20, 24, or even more milliseconds. - In order to have sufficient image quality to detect display effects, the exposure time should be controlled by the signal-to-noise ratio (SNR) of an image. A fixed exposure time for all measurements keeps the linearity between camera response and colors, which is desirable for CCM development. Accordingly, it may be desirable to avoid SNR fluctuations with different color patterns, especially for a
dark characterization target 120. Using a fixed exposure time may include ensuring that test colors are within the dynamic range of camera system 150. -
FIG. 9B illustrates a camera system response chart 900B for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments. Abscissa 901, ordinate 902, and data points 910 and 920 in chart 900B are as in chart 900A, described above. Chart 900B includes a configuration wherein the exposure time in camera 155 is set in auto-exposure mode. The auto-exposure setting ensures images with high SNR. However, chart 900B shows that camera linearity to color stimulus is lower than with the fixed exposure setting (chart 900A). A configuration of camera system 150 as described in chart 900B may be desirable to increase the average camera signal. Accordingly, a signal level from about 40000 to 65535 may be obtained for some test samples, rendering a higher average SNR than in chart 900A. -
FIG. 9C illustrates a camera system response chart 900C for a signal linearity correction step in an imaging pipeline calibration method, according to some embodiments. Abscissa 901, ordinate 902, and data points 910 and 920 in chart 900C are as in charts 900A and 900B, with the exposure time in camera 155 set in auto-exposure mode. Further, in the configuration illustrated in FIG. 9C, the output from camera system 150 (ordinate 902) is normalized by the exposure time. Chart 900C illustrates that, in order to correct signal linearity (e.g., in step 320, method 300, cf. FIG. 3), the camera output may be normalized by the exposure time. The camera RGB responses in FIGS. 9A-9C are measured for a set of achromatic samples, a uniform white, and the dark condition. -
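The normalization in FIG. 9C can be sketched as dividing the dark-corrected camera output by the exposure time; the dark-offset input and the sample values below are illustrative assumptions.

```python
import numpy as np

def normalize_by_exposure(raw_rgb, exposure_ms, dark_rgb=0.0):
    """Divide dark-corrected camera counts by exposure time so that
    captures taken at different auto-exposure settings become
    directly comparable (restoring signal linearity, cf. step 320)."""
    rgb = np.asarray(raw_rgb, dtype=float) - np.asarray(dark_rgb, dtype=float)
    return rgb / float(exposure_ms)

# The same patch captured at two auto-exposure settings normalizes
# to the same response.
a = normalize_by_exposure([20000, 30000, 10000], exposure_ms=10)
b = normalize_by_exposure([40000, 60000, 20000], exposure_ms=20)
print(np.allclose(a, b))  # True
```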
FIG. 10 illustrates a color distribution chart 1000 for a plurality of test samples measured 1010, and predicted 1020, in an imaging pipeline calibration method, according to some embodiments. Chart 1000 has an abscissa 601A, an ordinate 602A, and a depth axis 602B, as defined above with respect to FIGS. 6A and 6B. A training sample of 24 colors was used (cf. FIGS. 6A and 6B) to select a preferred predictor set according to method 800 (cf. FIG. 8). A test sample of 729 colors (cf. FIGS. 7A and 7B) is shown in FIG. 10. It can be seen that larger errors occur in the dark region. -
FIG. 11 illustrates a camera display 1100 for a uniformity correction step of a camera system in an imaging pipeline calibration method, according to some embodiments. Camera display 1100 may be a 2D sensor array, as discussed in detail above in relation to FIG. 1. In some embodiments, method 300 (cf. FIG. 3) may include a step for detecting display artifacts such as black mura. Black mura may negatively affect the uniformity of camera system 150. Accordingly, a spatial correction is conducted to minimize the effect of any spatial non-uniformity of the intensity of the illumination or of the sensitivity of the camera CCD array. FIG. 11 shows an example of the non-uniformity effect on mura detection at display edge portion 1110. It can be seen that the middle portion 1120 of display 1100 has very similar luminance intensity to the mura area at the edge. This increases the complexity of mura detection from a uniform display. -
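The spatial (flat-field) correction can be sketched as dividing out a normalized gain map measured from a uniform reference; the frame names and dark-offset handling below are assumptions for illustration, not the exact procedure of the specification.

```python
import numpy as np

def flat_field_correct(image, flat, dark):
    """Cancel spatial non-uniformity of illumination and sensor
    sensitivity: subtract the dark frame, then divide by the
    mean-normalized response to a uniform reference ('flat')."""
    signal = np.asarray(image, float) - dark
    gain = np.asarray(flat, float) - dark
    gain = gain / gain.mean()
    return signal / gain

# A uniform scene seen through a vignetting-like gain map is
# restored to a flat image.
y, x = np.mgrid[0:8, 0:8]
gain_map = 1.0 - 0.004 * ((x - 3.5) ** 2 + (y - 3.5) ** 2)
dark = np.full((8, 8), 5.0)
image = 100.0 * gain_map + dark      # uniform scene, non-uniform capture
flat = 200.0 * gain_map + dark       # uniform white reference
corrected = flat_field_correct(image, flat, dark)
print(np.allclose(corrected, corrected[0, 0]))  # True: flat output
```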
FIG. 12 illustrates an error average chart 1200 in an imaging pipeline calibration method, according to some embodiments. Chart 1200 may be the result of several iterations in method 800, described in detail above. The abscissa in chart 1200 corresponds to the dimensionality of the predictor set (κ). The ordinate in chart 1200 corresponds to the error obtained for the selected predictor set at the end of each iteration sequence, in step 845. In this particular example, in chart 1200 the predictor set is formed from colors selected from a training set including the 729 samples of FIGS. 7A and 7B. Characterization target 120 is applied to train the characterization model, which is tested by the 729 samples. FIG. 12 shows the performance in terms of CIEDE2000 against the number of samples selected by method 800. It can be seen that the model performance stabilized at a mean error of one (1) ΔE00 unit with as few as four (4) training samples. Accordingly, it is desirable to determine which set of four training samples provides the optimal performance, so that this set is used for a CCM in any one of the imaging pipeline methods (cf. FIGS. 2, 3, and 4). -
FIGS. 13A and 13B illustrate color distribution charts 1300A and 1300B for a plurality of training samples 1310 in an imaging pipeline calibration method, according to some embodiments. The abscissa and ordinate in chart 1300A are 601A and 602A, respectively (cf. FIG. 6A). The abscissa and ordinate in chart 1300B are 601B and 602B, respectively (cf. FIG. 6B). Accordingly, charts 1300A and 1300B display the color chart result for the training sample points 710 using method 800 up to the fourth iteration (κ=4), as described in FIG. 12, above. Chart 1300A displays the four training samples (open squares) selected in the preferred predictor set (CCM) in method 800 in an a* b* plot. Chart 1300B displays the four training samples (open squares) selected in the preferred predictor set (CCM) in method 800 in an L* C*ab plot. The four training samples in the preferred predictor set are grey, cyan, yellow, and magenta, as shown in FIGS. 13A and 13B. The 24 relevant samples of the 729 colors from the display gamut are also plotted. As expected, the training sample points (red circles) fall approximately at the center of the predicted values (open squares). The test colors in set 710 cover the display color gamut and include grey scale and saturation colors. - Embodiments consistent with the present disclosure include a complete imaging pipeline for the new combo device, the spectro-colorimeter, including the exposure time, dark current normalization, color correction matrix derivation, and flat field calibration. In some embodiments the imaging pipeline achieves a colorimeter accuracy better than two (ΔE&lt;2) for 729 test samples covering the full bandwidth of the color space. Imaging pipelines as disclosed herein enable closed-loop master-slave calibration of
spectrometer system 160 and camera system 150. Therefore, embodiments as disclosed herein integrate two device components into a system, providing the imaging capability with spectrometer accuracy. - Embodiments consistent with the present disclosure may include applications in the display test industry as well as the machine vision field. Other applications may be readily envisioned, since an imaging pipeline consistent with embodiments as disclosed herein integrates two different hardware components, such as a camera system 150 and a spectrometer system 160. -
FIG. 14 illustrates a block diagram of a spectro-colorimeter system 1400 for handling an imaging pipeline, according to some embodiments. Spectro-colorimeter system 1400 includes a spectrometer system 1460 and a camera system 1450 used in an imaging pipeline as described above. Furthermore, spectro-colorimeter system 1400 may include a calibration target display 1420 used in a calibration method for an imaging pipeline consistent with embodiments disclosed herein. - Spectro-
colorimeter system 1400 can include circuitry of a representative computing device. For example, spectro-colorimeter system 1400 can include a processor 1402 that pertains to a microprocessor or controller for controlling the overall operation of spectro-colorimeter system 1400. Spectro-colorimeter system 1400 can include instruction data pertaining to operating instructions, such as instructions for implementing and controlling user equipment, in file system 1404. File system 1404 can be a storage disk or a plurality of disks. In some embodiments, file system 1404 can be flash memory, semiconductor (solid state) memory, or the like. File system 1404 can provide high-capacity storage capability for spectro-colorimeter system 1400. In some embodiments, to compensate for the relatively slow access time of file system 1404, spectro-colorimeter system 1400 can also include a cache 1406. Cache 1406 can include, for example, Random-Access Memory (RAM) provided by semiconductor memory, according to some embodiments. The relative access time for cache 1406 can be substantially shorter than for file system 1404. On the other hand, file system 1404 may include a higher storage capacity than cache 1406. Spectro-colorimeter system 1400 can also include a RAM 1405 and a Read-Only Memory (ROM) 1407. ROM 1407 can store programs, utilities, or processes to be executed in a non-volatile manner. RAM 1405 can provide volatile data storage, such as for cache 1406. - Spectro-
colorimeter system 1400 can also include a user input device 1408 allowing a user to interact with spectro-colorimeter system 1400. For example, user input device 1408 can take a variety of forms, such as a button, a keypad, a dial, a touch screen, an audio input interface, a visual/image capture input interface, an input in the form of sensor data, and any other input device. Still further, spectro-colorimeter system 1400 can include a display 1410 (screen display) that can be controlled by processor 1402 to display information, such as test results and calibration test results, to the user. Data bus 1416 can facilitate data transfer between at least file system 1404, cache 1406, processor 1402, and controller 1470. Controller 1470 can be used to interface with and control different devices, such as camera system 1450, spectrometer system 1460, and calibration target display 1420. Controller 1470 may also control motors to position a mirror/lens through appropriate codecs. For example, control bus 1474 can be used to control camera system 1450. - Spectro-
colorimeter system 1400 can also include a network/bus interface 1411 that couples to data link 1412. Data link 1412 allows spectro-colorimeter system 1400 to couple to a host computer, to accessory devices, or to other networks such as the Internet. In some embodiments, data link 1412 can be provided over a wired connection or a wireless connection. In the case of a wireless connection, network/bus interface 1411 can include a wireless transceiver. In some embodiments, sensor 1426 includes circuitry for detecting any number of stimuli. For example, sensor 1426 can include any number of sensors for monitoring environmental conditions, such as a light sensor (e.g., a photometer), a temperature sensor, and so on. - The various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination. Various aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software. The described embodiments can also be embodied as computer readable code on a computer readable medium for controlling manufacturing operations or as computer readable code on a computer readable medium for controlling a manufacturing line. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, HDDs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
- The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of specific embodiments are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the described embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/099,802 US20140300753A1 (en) | 2013-04-04 | 2013-12-06 | Imaging pipeline for spectro-colorimeters |
US16/160,584 US11193830B2 (en) | 2013-04-04 | 2018-10-15 | Spectrocolorimeter imaging system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361808549P | 2013-04-04 | 2013-04-04 | |
US14/099,802 US20140300753A1 (en) | 2013-04-04 | 2013-12-06 | Imaging pipeline for spectro-colorimeters |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/160,584 Continuation US11193830B2 (en) | 2013-04-04 | 2018-10-15 | Spectrocolorimeter imaging system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140300753A1 true US20140300753A1 (en) | 2014-10-09 |
Family
ID=51654159
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/099,802 Abandoned US20140300753A1 (en) | 2013-04-04 | 2013-12-06 | Imaging pipeline for spectro-colorimeters |
US16/160,584 Active US11193830B2 (en) | 2013-04-04 | 2018-10-15 | Spectrocolorimeter imaging system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/160,584 Active US11193830B2 (en) | 2013-04-04 | 2018-10-15 | Spectrocolorimeter imaging system |
Country Status (1)
Country | Link |
---|---|
US (2) | US20140300753A1 (en) |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105588642A (en) * | 2014-11-11 | 2016-05-18 | 仪器系统光学测量科技有限公司 | Colorimeter Calibration |
EP3054273A1 (en) * | 2015-02-09 | 2016-08-10 | Instrument Systems Optische Messtechnik Gmbh | Colorimetry system for display testing |
WO2017184821A1 (en) | 2016-04-20 | 2017-10-26 | Leica Biosystems Imaging, Inc. | Digital pathology color calibration and validation |
JPWO2016181750A1 (en) * | 2015-05-14 | 2018-03-08 | コニカミノルタ株式会社 | Spectral colorimetry apparatus and conversion rule setting method |
US20180103219A1 (en) * | 2016-10-12 | 2018-04-12 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
US10205896B2 (en) | 2015-07-24 | 2019-02-12 | Google Llc | Automatic lens flare detection and correction for light-field images |
US20190051022A1 (en) * | 2016-03-03 | 2019-02-14 | Sony Corporation | Medical image processing device, system, method, and program |
US20190049306A1 (en) * | 2017-08-10 | 2019-02-14 | Westco Scientific Instruments, Inc | Calibration for baking contrast units |
US20190072932A1 (en) * | 2017-09-01 | 2019-03-07 | Mueller International, Llc | Onsite mobile manufacturing platform |
US20190080052A1 (en) * | 2017-05-22 | 2019-03-14 | Gregory Edward Lewis | Perpetual bioinformatics and virtual colorimeter expert system |
EP3415883A4 (en) * | 2016-02-24 | 2019-04-03 | Konica Minolta, Inc. | Two-dimensional colorimetric device, two-dimensional colorimetric system, and two-dimensional colorimetric method |
US20190120694A1 (en) * | 2013-04-04 | 2019-04-25 | Apple Inc. | Spectrocolorimeter imaging system |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc. | Spatial random access enabled video system with a three-dimensional viewing volume |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US10412286B2 (en) * | 2017-03-31 | 2019-09-10 | Westboro Photonics Inc. | Multicamera imaging system and method for measuring illumination |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10552947B2 (en) | 2012-06-26 | 2020-02-04 | Google Llc | Depth-based image blurring |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
CN111707365A (en) * | 2020-05-25 | 2020-09-25 | 重庆冠雁科技有限公司 | Detector adjusting system and method for spectrum module |
CN111919105A (en) * | 2018-02-20 | 2020-11-10 | 派拉斯科技术公司 | Method and system for on-line monitoring and controlling color decoration specification of beverage cans |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019176556A1 (en) * | 2018-03-13 | 2019-09-19 | ソニー株式会社 | Medical instrument management system and medical instrument management method |
CN112067127B (en) * | 2020-08-26 | 2021-07-27 | 中国科学院西安光学精密机械研究所 | Real-time calibration device of slit type spectrometer |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5850472A (en) * | 1995-09-22 | 1998-12-15 | Color And Appearance Technology, Inc. | Colorimetric imaging system for measuring color and appearance |
US6856354B1 (en) * | 1998-11-13 | 2005-02-15 | Olympus Optical Co., Ltd. | Color reproducing system for reproducing a color of an object under illumination light |
US20050248786A1 (en) * | 2004-05-06 | 2005-11-10 | Tobie C D | Method and system for correcting color rendering devices |
US20060197757A1 (en) * | 1997-08-25 | 2006-09-07 | Holub Richard A | System for distributing and controlling color reproduction at multiple sites |
US20070139648A1 (en) * | 2005-12-16 | 2007-06-21 | Asml Netherlands B.V. | Lithographic apparatus and method |
US20070272844A1 (en) * | 2006-05-25 | 2007-11-29 | Photo Research, Inc. | Apparatus with multiple light detectors and methods of use and manufacture |
US20090141042A1 (en) * | 2007-11-29 | 2009-06-04 | Colman Shannon | Method and apparatus for calibrating a display-coupled color measuring device |
US20090201498A1 (en) * | 2008-02-11 | 2009-08-13 | Ramesh Raskar | Agile Spectrum Imaging Apparatus and Method |
US20100245650A1 (en) * | 2009-03-27 | 2010-09-30 | Radiant Imaging, Inc. | Imaging devices with components for reflecting optical data and associated methods of use and manufacture |
US20100302596A1 (en) * | 2009-05-29 | 2010-12-02 | Kyocera Mita Corporation | Color look up table adjusting apparatus, recording medium on which a color look up table adjusting program is recorded and color look up table adjusting system |
US20140111807A1 (en) * | 2012-10-23 | 2014-04-24 | Apple Inc. | High accuracy imaging colorimeter by special designed pattern closed-loop calibration assisted by spectrograph |
US20140192209A1 (en) * | 2013-01-07 | 2014-07-10 | Apple Inc. | Parallel sensing configuration covers spectrum and colorimetric quantities with spatial resolution |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5272518A (en) * | 1990-12-17 | 1993-12-21 | Hewlett-Packard Company | Colorimeter and calibration system |
FR2749077B1 (en) * | 1996-05-23 | 1999-08-06 | Oreal | COLOR MEASURING METHOD AND DEVICE |
JPH1196333A (en) * | 1997-09-16 | 1999-04-09 | Olympus Optical Co Ltd | Color image processor |
US6381009B1 (en) * | 1999-06-29 | 2002-04-30 | Nanometrics Incorporated | Elemental concentration measuring methods and instruments |
EP1208367A4 (en) * | 1999-08-06 | 2007-03-07 | Cambridge Res & Instrmnt Inc | Spectral imaging system |
US7714301B2 (en) * | 2000-10-27 | 2010-05-11 | Molecular Devices, Inc. | Instrument excitation source and calibration method |
US7362357B2 (en) * | 2001-08-07 | 2008-04-22 | Signature Research, Inc. | Calibration of digital color imagery |
TWI280409B (en) * | 2006-04-14 | 2007-05-01 | Asustek Comp Inc | Reflective photo device, an electronic apparatus with a built-in camera using the device for providing colorimeter and ambient light sensor functions and its method |
US20070291277A1 (en) * | 2006-06-20 | 2007-12-20 | Everett Matthew J | Spectral domain optical coherence tomography system |
WO2008075266A2 (en) * | 2006-12-19 | 2008-06-26 | Philips Intellectual Property & Standards Gmbh | Colour sequential flash for digital image acquisition |
WO2009070121A1 (en) * | 2007-11-30 | 2009-06-04 | Hamed Hamid Muhammed | Miniaturized all-reflective holographic fourier transform imaging spectrometer based on a new all-reflective interferometer |
US8807751B2 (en) * | 2008-04-22 | 2014-08-19 | Annidis Health Systems Corp. | Retinal fundus surveillance method and apparatus |
EP2473834B1 (en) * | 2009-09-03 | 2021-09-08 | National ICT Australia Limited | Illumination spectrum recovery |
US8559014B2 (en) * | 2009-09-25 | 2013-10-15 | Hwan J. Jeong | High-resolution, common-path interferometric imaging systems and methods |
US8976239B2 (en) * | 2012-08-24 | 2015-03-10 | Datacolor Holding Ag | System and apparatus for color correction in transmission-microscope slides |
US20140300753A1 (en) * | 2013-04-04 | 2014-10-09 | Apple Inc. | Imaging pipeline for spectro-colorimeters |
- 2013-12-06: US 14/099,802 filed; published as US20140300753A1 (status: Abandoned)
- 2018-10-15: US 16/160,584 filed; published as US11193830B2 (status: Active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5850472A (en) * | 1995-09-22 | 1998-12-15 | Color And Appearance Technology, Inc. | Colorimetric imaging system for measuring color and appearance |
US20060197757A1 (en) * | 1997-08-25 | 2006-09-07 | Holub Richard A | System for distributing and controlling color reproduction at multiple sites |
US6856354B1 (en) * | 1998-11-13 | 2005-02-15 | Olympus Optical Co., Ltd. | Color reproducing system for reproducing a color of an object under illumination light |
US20050248786A1 (en) * | 2004-05-06 | 2005-11-10 | Tobie C D | Method and system for correcting color rendering devices |
US20070139648A1 (en) * | 2005-12-16 | 2007-06-21 | Asml Netherlands B.V. | Lithographic apparatus and method |
US20070272844A1 (en) * | 2006-05-25 | 2007-11-29 | Photo Research, Inc. | Apparatus with multiple light detectors and methods of use and manufacture |
US20090141042A1 (en) * | 2007-11-29 | 2009-06-04 | Colman Shannon | Method and apparatus for calibrating a display-coupled color measuring device |
US20090201498A1 (en) * | 2008-02-11 | 2009-08-13 | Ramesh Raskar | Agile Spectrum Imaging Apparatus and Method |
US20100245650A1 (en) * | 2009-03-27 | 2010-09-30 | Radiant Imaging, Inc. | Imaging devices with components for reflecting optical data and associated methods of use and manufacture |
US20100302596A1 (en) * | 2009-05-29 | 2010-12-02 | Kyocera Mita Corporation | Color look up table adjusting apparatus, recording medium on which a color look up table adjusting program is recorded and color look up table adjusting system |
US20140111807A1 (en) * | 2012-10-23 | 2014-04-24 | Apple Inc. | High accuracy imaging colorimeter by special designed pattern closed-loop calibration assisted by spectrograph |
US8988682B2 (en) * | 2012-10-23 | 2015-03-24 | Apple Inc. | High accuracy imaging colorimeter by special designed pattern closed-loop calibration assisted by spectrograph |
US20140192209A1 (en) * | 2013-01-07 | 2014-07-10 | Apple Inc. | Parallel sensing configuration covers spectrum and colorimetric quantities with spatial resolution |
US9076363B2 (en) * | 2013-01-07 | 2015-07-07 | Apple Inc. | Parallel sensing configuration covers spectrum and colorimetric quantities with spatial resolution |
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10298834B2 (en) | 2006-12-01 | 2019-05-21 | Google Llc | Video refocusing |
US10552947B2 (en) | 2012-06-26 | 2020-02-04 | Google Llc | Depth-based image blurring |
US20190120694A1 (en) * | 2013-04-04 | 2019-04-25 | Apple Inc. | Spectrocolorimeter imaging system |
US11193830B2 (en) * | 2013-04-04 | 2021-12-07 | Instrument Systems Optische Messtechnik Gmbh | Spectrocolorimeter imaging system |
US10334151B2 (en) | 2013-04-22 | 2019-06-25 | Google Llc | Phase detection autofocus using subaperture images |
CN105588642A (en) * | 2014-11-11 | 2016-05-18 | 仪器系统光学测量科技有限公司 | Colorimeter Calibration |
EP3021096A1 (en) * | 2014-11-11 | 2016-05-18 | Instrument Systems Optische Messtechnik Gmbh | Colorimeter calibration |
KR20160098083A (en) * | 2015-02-09 | Instrument Systems Optische Messtechnik GmbH | Colorimetry system for display testing |
KR102069935B1 | Instrument Systems Optische Messtechnik GmbH | Colorimetry system for display testing |
CN105865630A (en) * | 2015-02-09 | 2016-08-17 | 仪器系统光学测量技术有限责任公司 | Colorimetry system for display testing |
JP2016145829A (en) * | 2015-02-09 | Instrument Systems Optische Messtechnik GmbH | Colorimetry system for display testing |
TWI626433B (en) * | 2015-02-09 | 2018-06-11 | 儀器系統光學測量技術有限公司 | Method for two-dimensional, spatially resolved measurement and imaging colorimeter system capable of said measurement |
EP3054273A1 (en) * | 2015-02-09 | 2016-08-10 | Instrument Systems Optische Messtechnik Gmbh | Colorimetry system for display testing |
US10275898B1 (en) | 2015-04-15 | 2019-04-30 | Google Llc | Wedge-based light-field video capture |
US10341632B2 (en) | 2015-04-15 | 2019-07-02 | Google Llc | Spatial random access enabled video system with a three-dimensional viewing volume |
US10567464B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video compression with adaptive view-dependent lighting removal |
US11328446B2 (en) | 2015-04-15 | 2022-05-10 | Google Llc | Combining light-field data with active depth data for depth map generation |
US10565734B2 (en) | 2015-04-15 | 2020-02-18 | Google Llc | Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline |
US10546424B2 (en) | 2015-04-15 | 2020-01-28 | Google Llc | Layered content delivery for virtual and augmented reality experiences |
US10540818B2 (en) | 2015-04-15 | 2020-01-21 | Google Llc | Stereo image generation and interactive playback |
US10469873B2 (en) | 2015-04-15 | 2019-11-05 | Google Llc | Encoding and decoding virtual reality video |
US10419737B2 (en) | 2015-04-15 | 2019-09-17 | Google Llc | Data structures and delivery methods for expediting virtual reality playback |
US10412373B2 (en) | 2015-04-15 | 2019-09-10 | Google Llc | Image capture for virtual reality displays |
JPWO2016181750A1 (en) * | 2015-05-14 | 2018-03-08 | コニカミノルタ株式会社 | Spectral colorimetry apparatus and conversion rule setting method |
US10514300B2 (en) * | 2015-05-14 | 2019-12-24 | Konica Minolta, Inc. | Spectrocolorimetric device and conversion rule setting method |
US20180136044A1 (en) * | 2015-05-14 | 2018-05-17 | Konica Minolta, Inc. | Spectroscopic Color Measurement Device and Method for Setting Conversion Rule |
US10205896B2 (en) | 2015-07-24 | 2019-02-12 | Google Llc | Automatic lens flare detection and correction for light-field images |
EP3415883A4 (en) * | 2016-02-24 | 2019-04-03 | Konica Minolta, Inc. | Two-dimensional colorimetric device, two-dimensional colorimetric system, and two-dimensional colorimetric method |
US20190051022A1 (en) * | 2016-03-03 | 2019-02-14 | Sony Corporation | Medical image processing device, system, method, and program |
US11244478B2 (en) * | 2016-03-03 | 2022-02-08 | Sony Corporation | Medical image processing device, system, method, and program |
US11614363B2 (en) | 2016-04-20 | 2023-03-28 | Leica Biosystems Imaging, Inc. | Digital pathology color calibration and validation |
US10845245B2 (en) | 2016-04-20 | 2020-11-24 | Leica Biosystems Imaging, Inc. | Digital pathology color calibration and validation |
WO2017184821A1 (en) | 2016-04-20 | 2017-10-26 | Leica Biosystems Imaging, Inc. | Digital pathology color calibration and validation |
EP3446083A4 (en) * | 2016-04-20 | 2019-12-25 | Leica Biosystems Imaging Inc. | Digital pathology color calibration and validation |
CN109073454A (en) * | 2016-04-20 | 2018-12-21 | Leica Biosystems Imaging, Inc. | Digital pathology color calibration and validation |
US10275892B2 (en) | 2016-06-09 | 2019-04-30 | Google Llc | Multi-view scene segmentation and propagation |
US11025845B2 (en) * | 2016-10-12 | 2021-06-01 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
US11689825B2 (en) * | 2016-10-12 | 2023-06-27 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
US20210274113A1 (en) * | 2016-10-12 | 2021-09-02 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
US20180103219A1 (en) * | 2016-10-12 | 2018-04-12 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
US10679361B2 (en) | 2016-12-05 | 2020-06-09 | Google Llc | Multi-view rotoscope contour propagation |
US10412286B2 (en) * | 2017-03-31 | 2019-09-10 | Westboro Photonics Inc. | Multicamera imaging system and method for measuring illumination |
US10594945B2 (en) | 2017-04-03 | 2020-03-17 | Google Llc | Generating dolly zoom effect using light field image data |
US10440407B2 (en) | 2017-05-09 | 2019-10-08 | Google Llc | Adaptive control for immersive experience delivery |
US10474227B2 (en) | 2017-05-09 | 2019-11-12 | Google Llc | Generation of virtual reality with 6 degrees of freedom from limited viewer data |
US10444931B2 (en) | 2017-05-09 | 2019-10-15 | Google Llc | Vantage generation and interactive playback |
US10878942B2 (en) * | 2017-05-22 | 2020-12-29 | Gregory Edward Lewis | Perpetual bioinformatics and virtual colorimeter expert system |
US20190080052A1 (en) * | 2017-05-22 | 2019-03-14 | Gregory Edward Lewis | Perpetual bioinformatics and virtual colorimeter expert system |
US10354399B2 (en) | 2017-05-25 | 2019-07-16 | Google Llc | Multi-view back-projection to a light-field |
US20190049306A1 (en) * | 2017-08-10 | 2019-02-14 | Westco Scientific Instruments, Inc | Calibration for baking contrast units |
US10558198B2 (en) * | 2017-09-01 | 2020-02-11 | Mueller International, Llc | Onsite mobile manufacturing platform |
US11287804B2 (en) | 2017-09-01 | 2022-03-29 | Mueller International, Llc | Onsite mobile manufacturing platform |
US20190072932A1 (en) * | 2017-09-01 | 2019-03-07 | Mueller International, Llc | Onsite mobile manufacturing platform |
US10545215B2 (en) | 2017-09-13 | 2020-01-28 | Google Llc | 4D camera tracking and optical stabilization |
US10965862B2 (en) | 2018-01-18 | 2021-03-30 | Google Llc | Multi-camera navigation interface |
CN111919105A (en) * | 2018-02-20 | 2020-11-10 | Pressco Technology Inc. | Method and system for online monitoring and control of color decoration specifications for beverage cans |
CN111707365A (en) * | 2020-05-25 | 2020-09-25 | 重庆冠雁科技有限公司 | Detector adjusting system and method for spectrum module |
Also Published As
Publication number | Publication date |
---|---|
US20190120694A1 (en) | 2019-04-25 |
US11193830B2 (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11193830B2 (en) | Spectrocolorimeter imaging system | |
US8988682B2 (en) | High accuracy imaging colorimeter by special designed pattern closed-loop calibration assisted by spectrograph | |
TWI626433B (en) | Method for two-dimensional, spatially resolved measurement and imaging colorimeter system capable of said measurement | |
JP5541644B2 (en) | Method for determining calibration parameters of a spectrometer | |
US9076363B2 (en) | Parallel sensing configuration covers spectrum and colorimetric quantities with spatial resolution | |
US7974466B2 (en) | Method for deriving consistent, repeatable color measurements from data provided by a digital imaging device | |
US7616314B2 (en) | Methods and apparatuses for determining a color calibration for different spectral light inputs in an imaging apparatus measurement | |
US20140375994A1 (en) | Measuring apparatus, measuring system, and measuring method | |
US9826226B2 (en) | Expedited display characterization using diffraction gratings | |
Bongiorno et al. | Spectral characterization of COTS RGB cameras using a linear variable edge filter | |
US7557826B2 (en) | Method for device spectral sensitivity reconstruction | |
Fiorentin et al. | Calibration of digital compact cameras for sky quality measures | |
Pointer et al. | Practical camera characterization for colour measurement | |
US20220060683A1 (en) | Methods and systems of determining quantum efficiency of a camera | |
Bae | Image-quality metric system for color filter array evaluation | |
Wang | Design and Construction of a Multispectral Camera for Spectral and Colorimetric Reproduction | |
LV15705B (en) | Method and device for determining the relative spectral sensitivity of a photo camera at selected wavelengths |
Vuorinen | Camera Characterization System Calibration and Verification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: APPLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YIN, YE;CHOU, YI-FAN;BHATNAGAR, ANUJ;SIGNING DATES FROM 20130519 TO 20130527;REEL/FRAME:031736/0198 |
|
AS | Assignment |
Owner name: INSTRUMENT SYSTEMS OPTISCHE MESSTECHNIK GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:APPLE INC.;REEL/FRAME:038679/0293 Effective date: 20160125 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: INSTRUMENT SYSTEMS OPTISCHE MESSTECHNIK GMBH, GERMANY Free format text: CHANGE OF ADDRESS;ASSIGNOR:INSTRUMENT SYSTEMS OPTISCHE MESSTECHNIK GMBH;REEL/FRAME:063410/0513 Effective date: 20181228 |
|
AS | Assignment |
Owner name: INSTRUMENT SYSTEMS GMBH, GERMANY Free format text: CHANGE OF NAME;ASSIGNOR:INSTRUMENT SYSTEMS OPTISCHE MESSTECHNIK GMBH;REEL/FRAME:063490/0306 Effective date: 20220427 |