WO2005099916A1 - Methods and system for color recognition and enhancing monochrome image recognition - Google Patents

Methods and system for color recognition and enhancing monochrome image recognition

Info

Publication number
WO2005099916A1
WO2005099916A1 (PCT/SG2005/000124)
Authority
WO
WIPO (PCT)
Prior art keywords
intensity
hue
average
image
values
Prior art date
Application number
PCT/SG2005/000124
Other languages
French (fr)
Inventor
Siaw Ling Lai
Lai Lien Beh
Original Assignee
At Engineering Sdn Bhd
Pintas Pte Ltd
Priority date
Filing date
Publication date
Application filed by At Engineering Sdn Bhd, Pintas Pte Ltd filed Critical At Engineering Sdn Bhd
Publication of WO2005099916A1 publication Critical patent/WO2005099916A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour

Definitions

  • if the first arbitrary threshold hue value Hsub(thres(1)) used cannot be accepted as the average hue Hsub(avg) for that region, then thresholding and blob analysis on the hue value are reiterated (207) at a second chosen threshold hue value Hsub(thres(2)), and so on, until every possible Hsub(thres(n)) has been tried or one of the possible Hsub(thres(n)) is accepted as the Hsub(avg) for the selected region of pixels.
  • each selected pixel may have some intensity value.
  • a second or more images of the object will be recaptured at a different shutter speed (210, 201) so that the intensity level of the second image is different from that of the first image.
  • each subsequent image obtained (202) will be subjected to the preceding steps described above.
  • the CCS is programmed to take more shots at different intensities to derive a few more average hue values Hsub(avg(m)) (m=1,2,...,i), even though a first average hue Hsub(avg(1)) may be successfully derived from the first captured image.
  • the invention includes a means of deriving shutter speed, such as using curve fitting combined with interpolation or extrapolation, or using fuzzy logic or neural network techniques, to derive the new shutter speed.
  • the average hue Hsub(avg) or Hsub(avg(n)) should be derived using the least number of shots while maintaining the imposed color recognition accuracy.
  • a representative hue value Hsub(rep) can be derived (214) from all the derived average hue values Hsub(avg(m)) or Hsub(avg(n)) after these values are successfully obtained.
  • Hsub(rep) may be derived from a weighted averaging method, with coefficients, or more specifically weightage, assigned to the different average hue values Hsub(avg(m)) or Hsub(avg(n)) obtained, dependent on their corresponding intensity values.
  • Other means of deriving Hsub(rep), such as the mean, quantiles, mode or other measures of central tendency, may be used.
  • Mean used may be an exact arithmetic mean or an approximate mean for a group distribution of the average hue values.
  • Quantiles used may be median, quartiles, deciles or percentiles each of which can be obtained after all the obtained average hue values are ranked in increasing or decreasing order.
  • the derived representative hue Hsub(rep) from the accepted images can also be converted to wavelength LAMBDAsub(rep) using a known transformation.
  • the visual display unit (107) of the inspection system may preferably display the representative LAMBDAsub(rep), the name of the corresponding color of LAMBDAsub(rep), the inspection status such as whether the color is accepted or rejected, and the captured image to a human operator.
  • CCS may also be programmed to compare LAMBDAsub(rep) with wavelength values set in a wavelength table in order to determine in what color region it belongs.
  • the advantage of using such a machine vision system with such a color recognition algorithm is that there is a low rate of over-rejects. Furthermore, colors which have close hues that can be differentiated by human eyes can all be differentiated by the invention. Therefore, the invention can recognize LEDs with the "same" color produced in different batches or by different manufacturers. Furthermore, colors can be correctly identified for light-emitting objects or light-reflective objects, whether the reflections are diffuse or specular or a mixture of both, as found in a typical recognition task. Besides recognizing color from surfaces of varying features, the invention also can recognize color under various lighting conditions. These have been proven in field tests conducted in manufacturing environments. In order to enhance color identification in different lighting environments, suitable filters may be used for filtering out stray light.
  • Objects under inspection may also include colonies of microorganisms, or a single microorganism with its organelles visible to the observer, or grains of rice, and are not just limited to large objects.
  • future applications may include making maps from aerial photos by applying this color recognition method. Based on color recognition, colonies can be counted, the physiology of microorganisms can be studied, and rice grains can be selected for packing according to their shades of white.
  • This invention may also be applied to enhance the quality of captured monochrome images, especially those captured outdoors, so that predetermined objects can be identified.
  • At least two or more monochrome images can be captured using a modified algorithm (Fig. 3) that retains the basic concept of taking multiple images at different intensities as outlined in the color recognition algorithm (Fig. 2). Modifications to the algorithm are made only to those steps that derive and make use of hue and saturation values, since monochrome images have intensity values only. This specifically means that the new modified algorithm (Fig. 3) will retain the general order of execution and execute certain similar conditional terms, except that the various hue H and saturation S values which were there before are now removed or replaced by intensity I values.
  • a first image is captured and regions of pixels are selected just as before (201, 202, 203).
  • intensity I for each pixel in the selected regions is derived (304) and, for each region, an average intensity Isub(avg) is derived (306) by thresholding at a chosen intensity value.
  • the object is identified (312) by verifying the derived average intensity Isub(avg) (307) for the captured image.
  • the average intensity Isub(avg) of each selected region is compared with the average intensity of the corresponding selected region on the image template (307).
  • if the derived average intensity Isub(avg) of the captured image is not the same as that in the image template (208), the image of the object will be retaken at a different shutter speed (210, 201). The algorithm used in step 210 will choose a higher shutter speed when the captured image is much brighter than the image template and a slower speed when the captured image is dimmer.
  • when the derived average intensity, whether Isub(avg), Isub(avg(m)) or Isub(avg(n)), matches that on the image template to a high confidence level, these values are stored (311) for further image processing (314). But basically, steps 201, 202, 203, 304, 306, 307, 208, 210, 209, 312, 313 of the algorithm enable objects in monochrome images to be correctly identified.
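As an illustrative sketch only (not from the specification), the monochrome matching of step 307 and the shutter-speed decision of steps 208 and 210 might look like this in Python; the tolerance value, function name and return convention are all assumptions:

```python
def check_regions(avg_intensities, template, tol=10):
    """Compare the average intensity Isub(avg) of each selected region
    against the corresponding region of the image template (step 307).
    Intensities are on a 0..255 scale; tol is an illustrative tolerance.
    Returns "match" when the object is identified (312), otherwise the
    direction in which the shutter speed should move for the retake (210).
    """
    if all(abs(a - t) <= tol for a, t in zip(avg_intensities, template)):
        return "match"                      # object identified (312)
    if sum(avg_intensities) > sum(template):
        return "faster"   # image much brighter than template: raise speed
    return "slower"       # image dimmer than template: lower speed

print(check_regions([100, 150], [105, 148]))   # match
print(check_regions([200, 220], [105, 148]))   # faster
```

A result other than "match" would drive the loop back to step 201 to recapture the image at the new shutter speed.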

Abstract

Methods and System for Color Recognition and Enhancing Monochrome Image Recognition. The invention is a color recognition algorithm applied through a machine vision system which takes different shots of the object inspected at different intensity levels until the needed average hue Hsub(avg), Hsub(avg(m)) or Hsub(avg(n)) is or are derived, regardless of the number of shots captured. A representative hue Hsub(rep) is derived from these average hues and used for comparison with preset hue values Hsub(preset(n)) in a hue table. When the representative hue value matches a particular preset hue value at or above a prescribed level of confidence, the color of the object is identified. Its output, that is the identified color, may be converted to wavelength for the convenience of its operator. Otherwise, it may be fed back to a process or control loop. It may also be used with other image processing and analysis methods on the machine vision system itself to perform other tasks. The algorithm can also be modified for identifying objects in monochrome images before the images are subjected to subsequent image processing and analysis.

Description

Methods and System for Color Recognition and Enhancing Monochrome Image Recognition
Field of Invention
The invention described herein is a method of color recognition applied using conventional video inspection systems and related devices as such. This method can also be modified to enhance recognition of monochrome images.
Background of the Invention
There are various applications, such as in manufacturing environments or in field work, where colors need to be recognized rather than measured. Ideally, these color inspection systems are developed to recognize colors as a human eye does. Human eyes can easily distinguish the different hues (colors) in the case where there are different color samples, each having uniform intensity but with very similar hues, such as different shades of red (red, scarlet, crimson etc.) or different shades of orange. However, human eyes can also identify the color of objects such as an apple or an orange although there is variation of hue on them. Furthermore, human eyes can still identify these colors under different lighting intensities.
In order to do this, such systems should ideally be able to recognize, not measure, "same" colors with different peak wavelengths (e.g. 680 nm, 650 nm, 610 nm) as a designated color (e.g. red). In other words, for a single object, color recognition systems should recognize different light with different peak wavelengths that fall within a specified range of values, just as a human operator would pass off a broad range of "red" on an apple with different peak wavelengths as a designated color (that is, "red").
In the manufacturing environment, an example of where this feature of the color inspection system may be applied would be the inspection of different colors of light emitted from an LED. Different color LEDs are used as indicators in electrical or electronic appliances, and the proper color LEDs need to be installed as specified. However, due to worker fatigue or sheer carelessness, a different color LED may be installed where it is not supposed to be. Therefore, a color inspection system based on color recognition would be useful to detect such errors.
Various color inspection systems that have been applied in a manufacturing environment operate by either recognizing colors or measuring the wavelengths of colors. Color inspection systems operating by the first principle use electronic video cameras to capture one or more images of the object under inspection. Then, the captured image is compared with a color template, which specifies the acceptable color for that particular inspection run. The major shortcoming of this method is that the tolerance for nonconformance is too narrow, thus resulting in high over-rejects.
Color inspection systems operating by the second principle use a photosensor coupled to a spectrometer. The disadvantages of this method are that the photosensor would be easily saturated when the intensity of light from the object inspected is high, and that the narrow tolerance (e.g. ±1 nm or less) inherent in this method would result in much higher over-rejects than the method using color template matching.
In applications where color recognition is needed, a color recognition system is superior to a color measuring system because it can accept a much broader range of light as the designated color, whereas a color measuring system can pass off only colors of a narrow range of wavelengths as the designated color, since it is a measuring system. Furthermore, the range of light accepted would not be so broad as to produce erroneous results by accepting apparently different colors. This feature of a broader color acceptance range is necessary because color or light from similar objects under inspection may have different wavelength values due to various reasons described below.
A machine vision system may detect different hue, saturation and intensity values from LEDs in the same production batch for various reasons, such as: i) The bias voltage that is used with different LEDs in that batch. Slight variation in the bias voltage may result in different brightness of the LED, since brightness depends on the current flowing through it. When the bias voltage is larger than the forward breakdown voltage of the LED, the current passing through the LED varies non-linearly with changes of voltage; ii) Different orientation of the LED. The radiation intensity profile of an LED is directed to the front, unlike a light bulb, which possesses a spherical radiation profile. Therefore, misalignment of the LED by a few degrees or more than ten degrees may cause the intensity received by the video camera to drop greatly; iii) The LED placed out of focus. The LED intensity captured by the video camera is brightest when the image of the LED is in focus; iv) Incorrect video camera exposure settings. The shutter speed and mechanical aperture setting both affect the amount of light collected over time, that is, the LED intensity received by the video camera; and v) Last but not least, the inherent performance variations within a batch of LEDs itself, which is what the machine vision system should detect, provided the previously mentioned factors i), ii), iii) and iv) are not present.
When different hue, saturation and intensity values are converted to wavelength, the light from each LED of the same production batch would apparently have a different wavelength when this is actually not so. Besides this, LEDs of a single color, e.g. red, produced by different manufacturers have different peak wavelengths. The aggregate of all these reasons results in high over-rejects when color inspection systems in the prior art are applied in color recognition situations.
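The hue-to-wavelength conversion referred to here is not fixed by the specification, which only speaks of a "known transformation". As an illustrative sketch, a simple linear map from red (hue 0°, about 700 nm) down to violet (hue 270°, about 400 nm) could be assumed; the endpoints and the function name are assumptions, not part of the patent:

```python
def hue_to_wavelength(hue_deg):
    """Approximate dominant wavelength (nm) for a hue angle in degrees.

    ASSUMPTION: a linear map from red (hue 0, ~700 nm) to violet
    (hue 270, ~400 nm). This stands in for the unspecified
    'known transformation'.
    """
    if not 0.0 <= hue_deg <= 270.0:
        raise ValueError("hue outside the range mapped to spectral colors")
    return 700.0 - (hue_deg / 270.0) * 300.0

print(hue_to_wavelength(0.0))    # 700.0 (red)
print(hue_to_wavelength(120.0))  # about 566.7 (green)
```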
However, since the LEDs are used as indicators, whether the variation in the peak wavelength of the light emitted is actual or apparent is not an issue, as long as these different batches of LEDs give out the designated color. Therefore, a visual inspection system using color recognition is more suitable for the task at hand than an inspection system based on color measurement.
In other image recognition situations or tasks, such as object recognition, shape recognition etc., a monochrome machine vision system rather than its color counterpart can be employed to save cost, since color information (hue) is not needed. With monochrome images, object recognition can be problematic when images taken by the machine vision system are saturated. This occasionally happens to images of reflective objects taken outdoors. This method for color recognition can be modified to identify objects in monochrome images while maintaining its working principle of taking multiple images at different intensities.
Therefore, it is an objective of the invention to provide a method for recognizing colors of light-emitting objects, which method at the same time may also be used to recognize the color of non-luminescent objects that reflect light, by using a machine vision system. Specifically, the invention should be capable of overcoming the difficulty of erroneous recognition. Such a capability includes the ability to recognize light or radiation having a different wavelength but belonging to the designated color, as determined by its user. The different wavelength recognized by the system may be an apparent wavelength, which may be due to factors i), ii), iii), iv) or other unmentioned factors, such as a different lighting condition for a non-luminescent object, or an actual wavelength (factor v).
Furthermore, the invention is meant to be applied in various manufacturing environments that present different color recognition situations.
Another advantage of the invention over existing systems that serve the same purpose in the prior art is the ability to recognize colors using existing video inspection systems with minimal hardware requirements, whether optical or electronic.
It is also intended that the use of the invention may be extended to recognize colors of various objects, such as objects emitting light or objects reflecting light. Furthermore, both kinds of objects may even have irregular surface features or uneven surfaces. Therefore, the method may also be applied in field work.
It is also intended that when the invention is modified and applied to monochrome images, it will enhance object recognition, especially with images of reflective objects taken outdoors. Images having such objects can be easily overexposed. However, some parts of the object may be underexposed if a higher shutter speed is used, thus posing problems in subsequent processing of the image.
Summary of the Invention
A method for recognizing the color of an object which emits light by itself or under illumination, and a machine system to carry it out, is disclosed. The method allows one to recognize a color instead of measuring the wavelength of the color. This is carried out by deriving a representative hue value of the captured object and comparing it with preset hue values in a hue table. Said representative hue of the object under inspection is obtained from at least two average hues derived from one or more images of the object. The method includes steps to capture images of the same object at different intensity levels. The extracted hue value can also be converted to wavelength for comparison with different wavelength values to identify any particular color. The method can be expanded to identify an object by its color. The method can also be modified to identify objects in monochrome images from captured images that have different intensities.
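The representative-hue and hue-table comparison summarized above can be sketched as follows, taking the intensity-weighted averaging option for Hsub(rep) and using an assumed hue tolerance in place of the "prescribed level of confidence"; the function names, tolerance and table entries are illustrative assumptions, not from the specification:

```python
def representative_hue(avg_hues, intensities):
    """Weighted average of the per-image average hues Hsub(avg(m)), with
    the weightage taken from each image's intensity (one option named
    in the description)."""
    total = sum(intensities)
    return sum(h * w for h, w in zip(avg_hues, intensities)) / total

def match_color(h_rep, hue_table, tol=15.0):
    """Compare Hsub(rep) against preset hues Hsub(preset(n)) in a hue
    table; the hue tolerance stands in for the confidence level."""
    for name, h_preset in hue_table.items():
        # circular hue distance, since hue wraps around at 360 degrees
        if min(abs(h_rep - h_preset), 360 - abs(h_rep - h_preset)) <= tol:
            return name
    return None          # no preset hue matched: color rejected

hue_table = {"red": 0.0, "green": 120.0, "blue": 240.0}   # example presets
h_rep = representative_hue([118.0, 123.0], [200, 100])    # two accepted shots
print(match_color(h_rep, hue_table))                      # prints "green"
```

The brighter first shot dominates the weighted average, so h_rep lands near 119.7° and matches the "green" preset.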
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a machine vision system in which the methods and systems of the present invention can be used.
Fig. 2 is a flow chart showing the basic implementation of the present invention to recognize color in a captured image.
Fig. 3 is a flow chart showing the basic implementation of the present invention to enhance object recognition in a captured monochrome image.
Detailed Description of the Preferred Embodiment
The invention is a new method for inspecting or recognizing color using a machine vision system (100). With reference to figure 1, the machine vision system (100) would include a color CCD camera (102) or image source (102) connected to a frame grabber (103). The frame grabber (103) is preferably controlled by a computer algorithm (200) known as color chart system (CCS) which implements said invention. The frame grabber (103) is installed in a computer or signal processor (101) of the machine vision system (100), linked to the microprocessor (104) via the system bus (105), while the computer algorithm CCS (200) is stored in and executed from the mass storage unit (106) of the computer. The computer algorithm CCS (200) may include custom controls such as minimum and maximum wavelength values for any specific color. The inspection results may be displayed on a display unit (107) to an operator or be used in a feedback loop to control a machine, a process or quality control via the input/output port (108) of the system, a controller (109) and any related machine (110).
The camera may be any image source operating in analog or digital mode, such as NTSC or PAL, or a line scan camera. Analog outputs of images from any image source used, such as the color CCD camera (102), would be sampled and digitized by the frame grabber (103). Digitized images are stored in a frame buffer having many pixels. Alternatively, a digital camera (102a) can be directly connected to the system bus (105), eliminating the use of a frame grabber (103). The system bus (105) used may be a PCI, EISA, ISA or VL system bus or any other standard bus. In a typical system such as these, the hue H, saturation S and intensity I values for each pixel can be easily derived from the RGB values of each pixel as provided by the camera or image source.
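The per-pixel derivation of H, S and I from RGB can be sketched with the standard HSV transform; using the HSV "value" component as the intensity I is one common choice assumed here, since the specification does not fix a particular formula:

```python
import colorsys

def pixel_to_hsi(r, g, b):
    """Derive hue H, saturation S and intensity I from the RGB values
    of one pixel (each channel in 0..255).

    Hue is returned in degrees (0..360); saturation and intensity are
    fractions (0..1). The HSV 'value' is used as the intensity I.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# A pure red pixel maps to hue 0 with full saturation and intensity.
print(pixel_to_hsi(255, 0, 0))   # (0.0, 1.0, 1.0)
```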
The method of color recognition (200) as shown in figure 2 can be applied to any object (111), but it will be exemplified as follows using color inspection of a single LED as an example. The object (111) is preferably a unit which can produce light; for an object which does not produce light, an illuminator (not shown) will be added to the system.
One shot (one static image) of the LED is captured (201) by the color camera (102) at a default aperture and speed setting to obtain an image of the LED at a first intensity level (202). After the image of the object is digitized, at least one region of pixels of the object (LED) is selected (203). In a manufacturing environment, the region or regions of pixels to be selected can be predetermined, since a similar image of the object can easily be recaptured. The selected region would correspond to part of the object (LED) image. For example, should the object (LED) image cover a continuous region of 30 pixels, then the entire selected region would lie within the object (LED) image (i.e. the solid region of 30 pixels) and cover a substantial portion of it, such as 20 pixels of the LED image.
Then, hue H, saturation S and intensity I for each pixel in the selected region of the object (LED) are derived (204). Next, the average hue Hsub(avg) is derived from the selected pixels (206). The average hue Hsub(avg) is a better representation of the LED's color than the hue H of any single pixel alone, because light emitted from different points on the LED does not have uniform hue values.
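One subtlety in averaging hue, which the patent leaves to the implementer, is that hue is an angle: a naive arithmetic mean of 10° and 350° gives 180° (cyan) instead of 0° (red). A hedged sketch of a region average using the circular mean avoids this wrap-around problem:

```python
import numpy as np

def average_hue(hues_deg):
    """Average hue over a region of pixels. Because hue is an angle,
    average the unit vectors (cos, sin) of each hue and take the angle
    of the resulting mean vector, rather than averaging raw numbers."""
    rad = np.radians(np.asarray(hues_deg, dtype=float))
    mean_angle = np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())
    return np.degrees(mean_angle) % 360.0
```

With this definition, hues of 10° and 350° average to 0° (red), as expected, while 100° and 120° average to 110°.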
One of the many ways to derive the average hue Hsub(avg) (206) is by applying a threshold technique followed by blob analysis on the selected pixels. After these two steps, a first manipulated image of the selected pixels, based on the chosen threshold hue Hsub(thres(n)) (n=1,2,...i), is obtained. The first manipulated image is compared with a second manipulated image of the selected pixels, based on the thresholded intensity I of each pixel, to verify the validity of the derived hue value (207). Basically, as long as the intensity of each selected pixel is not equal to 0 (i.e. shutter speed too high, no colored image in the selected region) or 255 (i.e. shutter speed too slow, overexposure in the selected region) (205), there is one chosen threshold hue Hsub(thres(n)) that can be accepted (209) as the average hue Hsub(avg) for the selected region, namely when the first manipulated image matches the second manipulated image.
Suppose that after the first round of thresholding and blob analysis on the selected pixels, the first arbitrary threshold hue value Hsub(thres(1)) cannot be accepted as the average hue Hsub(avg) for that region. Thresholding and blob analysis on the hue value are then reiterated (207) at a second chosen threshold hue value Hsub(thres(2)), and so on, until every possible Hsub(thres(n)) has been tried or one of the possible Hsub(thres(n)) is accepted as the Hsub(avg) for the selected region of pixels.
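The iterate-until-accepted loop of steps 205-209 could be sketched as below. The circular hue tolerance and the mask-overlap acceptance criterion are assumptions, since the patent does not fix how the two manipulated images are compared:

```python
import numpy as np

def find_average_hue(hue, intensity, candidates, tol=10.0, min_overlap=0.95):
    """Sketch of the iterated threshold-and-compare step.
    hue, intensity: 2-D arrays for the selected region. For each candidate
    threshold hue, the 'first manipulated image' is the mask of pixels whose
    hue lies within tol degrees of the candidate; the 'second manipulated
    image' is the mask of pixels with usable intensity (neither 0 nor 255).
    A candidate is accepted as the region's average hue when the masks agree."""
    valid = (intensity > 0) & (intensity < 255)   # second manipulated image (205)
    if not valid.any():
        return None                               # unusable exposure: retake shot (210)
    for h_thres in candidates:
        # circular distance between each pixel's hue and the candidate
        diff = np.abs((hue - h_thres + 180.0) % 360.0 - 180.0)
        mask = diff <= tol                        # first manipulated image
        if (mask & valid).sum() / valid.sum() >= min_overlap:
            return h_thres                        # accepted as Hsub(avg) (209)
    return None                                   # no candidate fits: recapture (208, 210)
```

Returning `None` corresponds to the situation in the next paragraph, where the image must be recaptured at a different intensity.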
Suppose another situation where, after the reiteration, the needed average hue Hsub(avg) could not be obtained (208). In this case, each selected pixel may still have some intensity value. A second or further image of the object is then recaptured at a different shutter speed (210, 201) so that the intensity level of the second image differs from that of the first image. Each subsequent image obtained (202) is subjected to the preceding steps described above (203, 204, 205, 206, 207) until the needed average hue Hsub(avg) is obtained (209) for that region. After that, the algorithm stores each average hue Hsub(avg) (211) that is derived.
In the preferred embodiment, more than one region may be selected for the purpose of color recognition. The subsequent steps mentioned before (203, 204, 205, 206, 207) may therefore be repeated for subsequent images that are captured. All these images will have different intensity values, and there will be an average hue Hsub(avg(n)) (n=1,2,...i) for each region. In this situation, shots with different intensity values are captured (213, 201) until the average hue Hsub(avg(n)) for each region is derived (212). Besides varying the shutter speed, other variables such as the camera aperture, illuminating source intensity, angle of view and the intensity of the light emission from the object can be varied in order to obtain subsequent images of the object at different intensity levels.
In practice, almost every color image of an object will have hue, intensity and saturation values that vary from one pixel to another. Capturing a single image of an object to identify its color is therefore possible in principle but not reliable in practice, because the confidence level of the result, i.e. the identified color, such as from 80 to 97%, is not high enough for applications in a manufacturing environment (e.g. LED color inspection) or in the real world (exemplified later). It is therefore preferable that multiple images be captured in a single inspection of an object to identify its color, so that the resulting confidence level is high, such as 99.99% or more. It is also preferable that the CCS be programmed to take more shots at different intensities to derive a few more average hue values Hsub(avg(m)) (m=1,2,...i), even though a first average hue Hsub(avg(1)) may be successfully derived from the first captured image.
It is the essence of this invention that effective color recognition is carried out by capturing images of the object at different intensities. A few average hue values Hsub(avg) should therefore be derived from the images so that high accuracy of color recognition can be achieved. While too few shots captured at different intensities would compromise the accuracy of color recognition, too many shots captured for the sake of increasing accuracy would not be cost effective, especially in a manufacturing setting.
Suppose a number of images of the same object have been captured. Out of these shots, a large number may not be usable to derive the needed average hue value Hsub(avg) or Hsub(avg(m)) or Hsub(avg(n)). Furthermore, when two setting variables of the color recognition system, such as shutter speed and aperture, are changed (210, 213) to obtain images with various intensity levels, certain setting combinations would be redundant as they would result in the same intensity level.
It is therefore preferable that the invention include a means of deriving the shutter speed, such as curve fitting combined with interpolation or extrapolation, or fuzzy logic or neural network techniques, to derive the new shutter speed. Whatever method is used, the average hue Hsub(avg) or Hsub(avg(n)) should preferably be derived using the least number of shots while maintaining the imposed color recognition accuracy.
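The simplest instance of the curve-fitting-plus-extrapolation option is a straight-line fit of observed intensity against shutter speed, solved for the speed expected to hit a target intensity. This is a hypothetical sketch of step 210 under that linear assumption; the patent equally allows fuzzy logic or neural network techniques:

```python
import numpy as np

def next_shutter_speed(speeds, intensities, target_intensity):
    """Fit intensity as a linear function of shutter speed from the shots
    already taken, then interpolate/extrapolate the speed expected to
    produce target_intensity. Avoids redundant setting combinations that
    would land on an intensity level already captured."""
    slope, offset = np.polyfit(np.asarray(speeds, dtype=float),
                               np.asarray(intensities, dtype=float), 1)
    if slope == 0:
        return None                 # intensity insensitive to speed here
    return (target_intensity - offset) / slope
```

For instance, if speeds 1, 2 and 3 yielded intensities 50, 100 and 150, the fit predicts that a target intensity of 200 needs speed 4.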
A representative hue value Hsub(rep) can be derived (214) from all the derived average hue values Hsub(avg(m)) or Hsub(avg(n)) after these values are successfully obtained. Hsub(rep) may be derived by a weighted averaging method, with a coefficient, or more specifically a weightage, assigned to each average hue value Hsub(avg(m)) or Hsub(avg(n)) depending on its corresponding intensity value. Other means of deriving Hsub(rep), such as the mean, quantiles, mode or other measures of central tendency, may be used. The mean used may be an exact arithmetic mean or an approximate mean for a grouped distribution of the average hue values. Quantiles used may be the median, quartiles, deciles or percentiles, each of which can be obtained after all the obtained average hue values are ranked in increasing or decreasing order.
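One concrete reading of the weighted-averaging option for step 214 is to weight each shot's average hue by that shot's intensity, so that well-exposed shots count for more. The weighting scheme itself is an assumption; any of the central-tendency measures listed above could be substituted:

```python
import numpy as np

def representative_hue(avg_hues, avg_intensities):
    """Derive Hsub(rep) as a weighted average of per-shot average hues,
    with each hue weighted by its shot's corresponding intensity value."""
    hues = np.asarray(avg_hues, dtype=float)
    weights = np.asarray(avg_intensities, dtype=float)
    return float((hues * weights).sum() / weights.sum())
```

For example, average hues of 100° and 120° taken at relative intensities 1 and 3 give a representative hue of 115°.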
The representative hue Hsub(rep) is then compared (215) with preset hue values Hsub(preset(n)) (n=1,2,...i) in a hue table. This is different from conventional inspection systems, which compare all three values Hsub(avg), Ssub(avg) and Isub(avg) with a color template that has hue, saturation and intensity values. When the representative hue Hsub(rep) matches a particular preset hue Hsub(preset(n)) (216) at or above the prescribed level of confidence, the color of the object is identified (217).
As it is not intuitive for a human operator to describe color in terms of hue, the derived representative hue Hsub(rep) from the accepted images can also be converted to a wavelength LAMBDAsub(rep) using a known transformation. The visual display unit (107) of the inspection system may preferably display the representative LAMBDAsub(rep), the name of the corresponding color of LAMBDAsub(rep), the inspection status, such as whether the color is accepted or rejected, and the captured image to a human operator.
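As a very crude illustration of such a hue-to-wavelength conversion, one can linearly ramp from red (hue 0°, here assumed ~700 nm) to blue (hue 240°, here assumed ~450 nm). Both endpoints and the linearity are assumptions for illustration only; a real system would use a proper colorimetric transformation:

```python
def hue_to_wavelength(hue_deg):
    """Map a hue angle to an approximate dominant wavelength in nm,
    assuming a linear ramp from red (0 deg, ~700 nm) to blue (240 deg,
    ~450 nm). Hues outside that span (purples) have no spectral equivalent."""
    if not 0.0 <= hue_deg <= 240.0:
        raise ValueError("hue outside the red-to-blue spectral range")
    return 700.0 - (hue_deg / 240.0) * (700.0 - 450.0)
```

Under these assumed endpoints, a green hue of 120° maps to 575 nm.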
In still another embodiment, the CCS may also be programmed to compare LAMBDAsub(rep) with wavelength values set in a wavelength table in order to determine to which color region it belongs.
The advantage of using such a machine vision system with such a color recognition algorithm is a low rate of over-rejects. Furthermore, colors with close hues that can be differentiated by human eyes can all be differentiated by the invention. The invention can therefore recognize LEDs of the "same" color produced in different batches or by different manufacturers. Furthermore, colors can be correctly identified for light-emitting or light-reflective objects, whether the reflections are diffuse, specular or a mixture of both, as found in a typical recognition task. Besides recognizing color from surfaces of varying features, the invention can also recognize color under various lighting conditions. These capabilities have been proven in field tests conducted in manufacturing environments. In order to enhance color identification in different lighting environments, suitable filters may be used to filter out stray light.
It is also intended that the application of the invention be extended to recognizing different colors present in a particular image frame. The basic steps of the invention as described above allow this to be done. An example application would be recognizing many LEDs of different colors in a single shot. The steps are:
1) Identifying different regions based on different colors. A region having a similar color is identified on the basis of having similar hue, but the average hue Hsub(avg(m)) for each region is not yet derived by the software at this juncture; 2) Zooming in to one of the regions, choosing a few LEDs in that region, and extracting the hue H, saturation S and intensity I values for pixels in each region as described beforehand; 3) Carrying out all the subsequent steps described beforehand (after H, S and I are extracted for each selected pixel) to identify the different color LEDs.
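Step 1 above, grouping pixels into candidate regions of similar hue before any per-region average is derived, could be sketched as a coarse hue quantization. The bin width is an assumed parameter; the patent does not specify how "similar hue" is decided at this stage:

```python
import numpy as np

def split_regions_by_hue(hue_image, bin_width=30):
    """Coarsely group pixels into candidate color regions by quantizing
    hue into bins of bin_width degrees. Returns a dict mapping bin label
    to the list of (row, col) pixel coordinates falling in that bin."""
    bins = (np.asarray(hue_image) // bin_width).astype(int)
    regions = {}
    for (y, x), label in np.ndenumerate(bins):
        regions.setdefault(int(label), []).append((y, x))
    return regions
```

Each returned region would then be zoomed into and processed with the full hue/saturation/intensity pipeline described earlier.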
Objects under inspection may also include colonies of microorganisms, a single microorganism with its organelles visible to the observer, or grains of rice, and are not limited to large objects. Future applications may include making maps from aerial photos by applying this color recognition method. Based on color recognition, colonies can be counted, the physiology of microorganisms can be studied, and rice grains can be selected for packing according to their shades of white.
This invention may also be applied to enhance the quality of captured monochrome images, especially those captured outdoors, so that predetermined objects can be identified. At least two monochrome images can be captured using a modified algorithm (Fig. 3) that retains the basic concept of taking multiple images at different intensities as outlined in the color recognition algorithm (Fig. 2). Modifications to the algorithm concern only those steps that derive and make use of hue and saturation values, since monochrome images have intensity values only. Specifically, the modified algorithm (Fig. 3) retains the general order of execution and executes similar conditional terms, except that the various hue H and saturation S values are removed or replaced by intensity I values.
In the modified algorithm (Fig. 3), steps which are similar to those in the CCS algorithm are labeled with the same numbers. The aim of using this algorithm is to correctly identify objects in monochrome images taken under different lighting conditions, especially in outdoor environments. All the monochrome images are taken from the same viewing angle. Furthermore, the system learns the shape of the object or objects beforehand; the pre-learned shapes are stored in the form of image templates.
In the execution of this modified algorithm, a first image is captured and regions of pixels are selected just as before (201, 202, 203). After that, the intensity I for each pixel in the selected regions is derived (304), and for each region an average intensity Isub(avg) is derived (306) by thresholding a chosen intensity value. The object is identified (312) by verifying the derived average intensity Isub(avg) (307) for the captured image. The average intensity Isub(avg) of each selected region is compared with the average intensity of the corresponding selected region on the image template (307). If the derived average intensity Isub(avg) of the captured image is not the same as that in the image template (208), the image of the object is retaken at a different shutter speed (210, 201). The algorithm used in step 210 chooses a higher shutter speed when the captured image is much brighter than the image template and a slower speed when the captured image is dimmer. When the derived average intensity, whether Isub(avg) or Isub(avg(m)) or Isub(avg(n)), matches that of the image template to a high confidence level, these values are stored (311) for further image processing (314). Basically, steps 201, 202, 203, 304, 306, 307, 208, 210, 209, 312 and 313 of the algorithm enable objects in monochrome images to be correctly identified.
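The region-by-region comparison of steps 304-307, plus the brighter/dimmer decision that feeds step 210, could be sketched as follows. The acceptance tolerance is an assumption; the patent only requires a match "to a high confidence level":

```python
import numpy as np

def match_template_intensity(image, template, regions, tol=10.0):
    """Compare average intensity of each selected region in a captured
    monochrome image against the same region of the pre-learned template.
    regions: list of (y0, y1, x0, x1) slices. Returns 'match' when every
    region agrees within tol; otherwise 'brighter' or 'dimmer', telling
    step 210 whether to raise or lower the shutter speed."""
    diffs = [image[y0:y1, x0:x1].mean() - template[y0:y1, x0:x1].mean()
             for (y0, y1, x0, x1) in regions]
    worst = max(diffs, key=abs)
    if abs(worst) <= tol:
        return "match"
    return "brighter" if worst > 0 else "dimmer"
```

A "brighter" verdict would prompt a higher (faster) shutter speed on the retake, and "dimmer" a slower one, mirroring the rule in the paragraph above.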
In a typical situation, different parts of an object having the same color may have very different intensity values due to uneven illumination or lighting conditions, e.g. when a shadow falls on part of the object. For a single monochrome image, when more than one region is selected, the average intensity Isub(avg(n)) (n=1,2,...i) for each region is derived (307). When more than one image at different intensities is taken, the average intensity value Isub(avg(m)) (m=1,2,...i) of the selected region is derived (307) for each m-th image taken. Steps 201 through 312 may be carried out more than once so that more than one image is captured and the average intensities of every selected region are identified. This ensures that the derived average intensities can match those in the image template to a high confidence level, thus identifying the object at a high confidence level (313).
Taking images at different intensities (210, 201) can be effected in manners similar to those described for the color images. For images captured outdoors, the only practical options are to change the shutter speed, to change the mechanical aperture, or to apply neutral density filters.
While that which has been described is considered to comprise preferred embodiments of the present invention, it will be apparent to those skilled in the art that various modifications and variations can be made, and equivalents may be substituted for elements thereof, without departing from the spirit or scope of the present invention. Thus, it is intended that the present invention not be limited to the particular embodiments disclosed as the best mode contemplated for carrying out the present invention, but that the present invention include all embodiments falling within the scope of the appended claims and their equivalents.

Claims

1. A method of color recognition of an object in a machine vision system comprising the steps of: capturing at least two images of said object, with intensity of each said image having different values; deriving two or more average hue values from selected regions on said image of object, with one average hue value for each region selected; deriving a representative hue value of said object using said average hue values; comparing said representative hue value with preset hue values in a hue table; and recognizing color of object when said representative hue value matches one of said preset hue values at or above an imposed confidence level.
2. The method of color recognition of an object as claimed in claim 1, wherein said step of deriving two or more average hue values includes the step of deriving hue values of individual pixels that makes up said selected region on said image of object.
3. The method of color recognition of an object as claimed in claim 2, wherein each said average hue values of each selected region are derived from said hue values of individual pixels that makes up that selected region.
4. The method of color recognition of an object as claimed in claim 3, wherein said step of deriving two or more average hue values includes the step of checking intensity of said selected regions.
5. The method of color recognition of an object as claimed in claim 4, wherein said step of checking intensity of said selected region is followed by recapturing the image of the object at a different intensity level if the intensity of said selected regions is 0 or 255.
6. The method of color recognition of an object as claimed in claim 5, wherein said step of deriving two or more average hue values includes the step of verifying a chosen threshold hue as the value of said average hue by comparing a first manipulated image that is based on said thresholded hue against a second manipulated image that is based on thresholded intensity that is derived for a particular said selected region.
7. The method of color recognition of an object as claimed in claim 6, wherein said step of deriving two or more average hue values is followed by recapturing of the image of the object at a different intensity level if said average hue for any particular selected region could not be derived.
8. The method of color recognition of an object as claimed in claim 7, wherein said step of deriving two or more average hue values includes recapturing the image of the object at different intensity levels until every said average hue for every said selected region is derived.
9. The method of color recognition of an object as claimed in claims 4, 7 and 8, wherein said steps of recapturing the image or images of said object include the step of altering the light intensity of the image or images to be recaptured by altering the camera shutter speed, the camera aperture, the angle of view, the intensity of the illuminating light source, or the intensity of light emitted from said object should said object be a light emitting object.
10. The method of color recognition of an object as claimed in claim 9, wherein said step of recapturing the image or images of said object includes the step of determining the subsequent machine vision setup variable, such as the camera shutter speed, to be used.
11. The method of color recognition of an object as claimed in claim 6, wherein said step of deriving a representative hue value is by deriving a weighted average of said average hue values, wherein coefficient or specifically weightage of each said average hue are assigned to each said average hue according to corresponding said thresholded intensity.
12. The method of color recognition of an object as claimed in claim 6, wherein said step of deriving a representative hue value is by deriving a measure of central tendency of said average hues, such as the mean, which can be an exact or an approximate mean, or quantiles such as the median, quartiles, deciles or percentiles, or the mode.
13. A method of color recognition of an object in a machine vision system comprising the steps of: capturing at least two images of said object, with intensity of each said image having different values; deriving two or more average hue values from selected regions on said image of object; deriving a representative hue value of said object using said average hue values; transforming said representative hue value to its corresponding representative wavelength; comparing said representative wavelength with preset wavelength values in a wavelength table; recognizing color of object when said representative wavelength value matches one of said preset wavelength values at or above an imposed confidence level.
14. A method of object recognition in a machine vision system comprising the steps of: capturing at least two images of the object, with intensity of each said image having different values; deriving two or more average intensity values from selected regions on each said image of the object, with one average intensity value for each region selected; comparing each said average intensity with the preset average intensity on corresponding regions of a pre-learned image template; recognizing the object when each said average intensity of the selected regions of the captured image matches the corresponding average intensity of said pre-learned image template at or above an imposed confidence level.
15. The method of object recognition as claimed in claim 14, wherein said step of deriving two or more average intensity values includes the step of deriving intensity values of individual pixels that makes up said selected region on said image of object.
16. The method of object recognition as claimed in claim 15, wherein each said average intensity values of each selected region are derived from said intensity values of individual pixels that makes up that selected region.
17. The method of object recognition as claimed in claim 16, wherein said step of deriving two or more average intensity values includes the step of thresholding a chosen intensity.
18. The method of object recognition as claimed in claim 17, wherein said step of deriving two or more average intensity values is followed by recapturing of image of object at different intensity level if said average intensity for any particular selected region could not be derived.
19. The method of object recognition as claimed in claim 18, wherein said step of deriving two or more average intensity values includes recapturing image of object at different intensity level until every said average intensity for every said selected region is derived.
20. The method of object recognition as claimed in claims 18 and 19, wherein said steps of recapturing the image or images of said object include the step of altering the light intensity of the image or images to be recaptured by altering the camera shutter speed, the camera aperture, or the intensity of light emitted from said object should said object be a light emitting object.
21. The method of object recognition as claimed in claim 20, wherein said step of recapturing the image or images of said object includes the step of determining the subsequent machine vision setup variable, such as the camera shutter speed, to be used.
PCT/SG2005/000124 2004-04-16 2005-04-14 Methods and system for color recognition and enhancing monochrome image recognition WO2005099916A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI20041413 2004-04-16
MYPI20041413 2004-04-16

Publications (1)

Publication Number Publication Date
WO2005099916A1 true WO2005099916A1 (en) 2005-10-27

Family

ID=35149820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2005/000124 WO2005099916A1 (en) 2004-04-16 2005-04-14 Methods and system for color recognition and enhancing monochrome image recognition

Country Status (1)

Country Link
WO (1) WO2005099916A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103521464A (en) * 2013-10-25 2014-01-22 华中农业大学 Method and device for identification and separation of yolk-dispersed eggs based on machine vision
CN104741325A (en) * 2015-04-13 2015-07-01 浙江大学 Fruit surface color grading method based on normalization hue histogram
CN106238350A (en) * 2016-09-12 2016-12-21 佛山市南海区广工大数控装备协同创新研究院 A kind of solar battery sheet method for separating based on machine vision and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339963A (en) * 1992-03-06 1994-08-23 Agri-Tech, Incorporated Method and apparatus for sorting objects by color
US5432545A (en) * 1992-01-08 1995-07-11 Connolly; Joseph W. Color detection and separation method
US5813542A (en) * 1996-04-05 1998-09-29 Allen Machinery, Inc. Color sorting method
WO1999060353A1 (en) * 1998-05-19 1999-11-25 Active Silicon Limited Method of detecting colours



Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DPEN Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase