US4897795A - Digital image analysis system - Google Patents

Digital image analysis system

Info

Publication number
US4897795A
Authority
US
United States
Prior art keywords
clump
feature value
image
scanning lines
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US07/155,807
Inventor
Haruo Yoda
Hidenori Inouchi
Hiroshi Sakou
Yozo Ohuchi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd
Assigned to HITACHI, LTD., A CORP. OF JAPAN. ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: INOUCHI, HIDENORI; OHUCHI, YOZO; SAKOU, HIROSHI; YODA, HARUO
Application granted
Publication of US4897795A
Anticipated expiration
Expired - Lifetime (current status)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20036 Morphological image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30141 Printed circuit board [PCB]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30152 Solder

Definitions

  • the present invention relates to an image processing method for measuring the area, dimensions and others of each of particle images (that is, clumps) appearing on a picture image and an apparatus for carrying out the image processing method, and more particularly to an image processing method and an image processing apparatus suitable for use in an automatic image analysis system and an automatic pattern-defect inspection system.
  • an automatic visual inspection system which can automatically inspect fine pattern defects generated in fabricating the printed circuit board or semiconductor circuit, with the aid of image processing techniques. That is, owing to the increase in the integration density, it has become very difficult to visually inspect the above pattern defects. Hence, the automatic visual inspection system which can make the above inspection in place of human eyes, is very important.
  • the basic difficulty of the real-time, one-pass processing for input images formed by a raster scan method is based upon the variations in shape of clumps appearing on a binary image.
  • in a case where all the clumps have convex surfaces, the processing for each clump is relatively simple.
  • in order to solve this difficulty, the following method has been proposed. (1) The end of each intersecting portion on a scanning line is so deformed as to reach a more extending one of the end of the intersecting portion and the deformed end of a corresponding, intersecting portion on a preceding scanning line.
  • the above problem of the conventional method is caused by the fact that the intermediate value of a feature is stored in a one-scanning-line delay circuit. That is, in order to correctly read out the feature value calculated up to the preceding scanning line from the delay circuit, the deformed end of an intersecting portion is required to reach the deformed end of a corresponding portion on the preceding scanning line or to be placed behind the latter end, and thus the end of a clump is obliged to be deformed in a great degree.
  • an ultrahigh-speed inspection system comprises means for storing the intermediate value of a feature in a memory at an address determined on the basis of the arrangement order of intersecting portions on a scanning line, means for converting an input image into a digital image, and for generating a calculation control signal necessary for image analysis, on the basis of the values of adjacent pixels on two consecutive scanning lines, and means for taking the logical product of two similar deformed images formed by raster scanning operations in opposite directions, to fill a hole in a clump or a recess at the bottom thereof with a logical value "1".
  • the feature value of a clump can be measured at a video-rate by the one-pass method, and moreover the deformation of the clump can be reduced to a minimum.
  • the probability of erroneous measurement due to the overlapping of deformed images is reduced in a great degree, and the reliability of inspection is enhanced.
  • An inspection system is applied with input images continuously by a raster scan method, and the processing for a defect image is completed each time a defect image appears on the input images. That is, the inspection system uses a real-time one-pass method. Hence, it is unnecessary to store the input image in an image memory, and thus a feature value of a clump appearing on the input image can be determined by real-time processing.
  • FIG. 1 shows the whole construction of an embodiment of a visual inspection system according to the present invention.
  • FIG. 2 is a schematic diagram showing how an input image is formed by a raster scan method.
  • FIGS. 3A to 3D are schematic diagrams for explaining how a control image is formed from an input image.
  • FIG. 4 is a schematic diagram showing various clumps on two consecutive scanning lines.
  • FIG. 5 is a diagram showing the state transition of a sequential machine which is the gist of the present invention.
  • FIG. 6 is a diagram showing the state transition of the sequential machine for a case where a feature value of an intersecting portion on a scanning line is calculated.
  • FIGS. 7A to 7H are schematic diagrams for explaining a method of calculating the length of the periphery of a clump.
  • FIG. 8 is a block diagram showing an embodiment of a digital image analysis system according to the present invention.
  • FIG. 9 is a block diagram showing an example of the sequential machine of FIG. 8.
  • FIG. 10 is a block diagram showing an example of a circuit for obtaining a control image.
  • FIG. 1 shows the whole construction of an embodiment of an inspection system for inspecting the repetition of the same pattern (for example, a die pattern) on a substrate such as a repeated circuit pattern on a semiconductor wafer or a mask pattern.
  • two dice 3 and 3' of a substrate 2 placed on a moving stage 1 are simultaneously scanned by two similar optical systems 4 and 4' each made up of a lens and an image sensor, to obtain video signals from corresponding portions of the dice 3 and 3'.
  • the video signals thus obtained are converted by analog-digital converters 5 and 6 into digital signals v 1 (t) and v 2 (t), and a signal f indicating the absolute value of the difference between the signals v 1 (t) and v 2 (t) is delivered from a subtracter 7. Since the signals v 1 (t) and v 2 (t) are digital signals from corresponding portions of the dice 3 and 3', the difference signal f is a defect video signal which emphasizes the difference in pattern between the dice 3 and 3', that is, a defect. Accordingly, when the defect video signal f is converted into a binary signal by using a threshold value and a signal portion having a logical value "1" is taken out, a defect can be detected.
  • the present embodiment includes an image analyzer 10 which is applied with the defect video signal f.
  • the image analyzer 10 analyzes the defect video signal f, and measures the position, dimensions, area and others of an image proposed for a defect. Further, the analyzer 10 discriminates between a false defect and a true defect on the basis of the result of the above measurement, and writes the information on the true defect in a result memory 18.
  • the contents of the result memory 18 are collected by a central processing unit (CPU) 8, and then displayed by display means to show the result of inspection.
  • a stage controller 9 controls the movement of the stage 1.
  • the gist of the present invention resides in the image analyzer indispensable for a reliable visual inspection system.
  • the operation of the image analyzer is very complicated. Hence, the deformation of input image will first be explained, and then the contents of a sequential machine applied with the deformed image will be explained. Finally, a method of calculating a feature value of a clump will be explained.
  • the present invention is intended to directly process a binary image from an imaging device of the raster scan type at a video-rate.
  • the binary input image by f(i, j), where
  • a control signal necessary for image processing is automatically made from the input image. Accordingly, it is necessary to convert the input image into a binary control image suitable for generating the control signal.
  • a right-direction control image h 1 (i, j) is produced by subjecting a binary input image f(i, j), which is formed by the raster scan method and shown in FIG. 3A, to recursive digital filtering given by the following equation: ##EQU1##
  • as shown in FIG. 3B, in the right-direction control image h 1 (i, j), clumps on the input image f(i, j) are extended on the lower right side, to fill up the recess at the bottom of a clump and a hole in another clump.
  • the raster scan conversion processing is carried out so that the scanning order in an i-direction is reversed, and then a left-direction control image h 2 (i, j) is produced by carrying out recursive digital filtering given by the following equation: ##EQU2##
  • as shown in FIG. 3C, in the left-direction control image h 2 (i, j), the clumps on the input image f(i, j) are extended on the lower left side, to fill up the above hole and recess. Then, the raster scan conversion processing with respect to the i-direction is carried out for the left-direction control image h 2 (i, j) so that the ordinary raster scan method is used for the image h 2 (i, j), and a logical product of the images h 1 (i, j) and h 2 (i, j) is made as follows.
  • FIG. 3D shows an example of the control image g(i, j).
  • the processing for reversing the scanning order in the i-direction can be carried out in such a manner that an input image corresponding to one scanning line is written in a memory, and the input image is read out of the memory in the order opposite to the writing order.
  • the control image has the property of eliminating a hole in a clump and a recess at the bottom thereof.
  • this property of the control image is very important for the operation of the image analyzer, and hence the deformed image g(i, j) is herein referred to as "control image".
  • the intermediate images h 1 (i, j) and h 2 (i, j) also have the above property.
  • the image h 1 (i, j) or h 2 (i, j) may be used as a control image, in place of the image g(i, j).
  • as can be seen from FIGS. 3B and 3C, the images h 1 (i, j) and h 2 (i, j) have been deformed in a great degree.
  • accordingly, when the image h 1 (i, j) or h 2 (i, j) is used, the probability that a plurality of independent clumps are united in a single clump is high. That is, the image g(i, j) is far superior to the images h 1 (i, j) and h 2 (i, j).
  • FIG. 4 shows typical shapes of clump by using images on the j-th and (j-1)th scanning lines.
  • a clump indicated by reference symbol (a) in FIG. 4 can be expressed by a numeral string 0111---10. This numeral string means that the clump (a) starts from the j-th scanning line, and hence it is required to open a feature value memory for the clump (a).
  • a clump indicated by reference symbol (b) in FIG. 4 can be expressed by a numeral string 022---20.
  • This numeral string means that the clump (b) terminates at the (j-1)th scanning line. Accordingly, it is required to deliver a feature value of the clump (b) calculated up to the (j-1)th scanning line to the outside as the final feature value of the clump (b).
  • a clump indicated by reference symbol (c) can be expressed by a numeral string 0133320.
  • in the numeral string 0133320, the repetition of numeral "3" (that is, a string of 3's) occurs only once.
  • a clump extended up to the (j-1)th scanning line is connected with another clump on the j-th scanning line so as to show one-to-one correspondence. Accordingly, it is necessary to update a feature value calculated up to the (j-1)th scanning line so that a new feature value includes a feature value due to the clump on the j-th scanning line, and to store the new feature value in a memory.
  • a clump indicated by reference symbol (d) in FIG. 4 can be expressed by a numeral string 022333111333220.
  • a string of 3's occurs twice, and a string of 1's is sandwiched between the first string of 3's and the second string of 3's.
  • the above numeral string means that first and second clumps each extended up to the (j-1)th scanning line are connected with a third clump on the j-th scanning line, and that position in the numeral string where a numeral in the numeral string is changed from "1" to "3", indicates a position where the second clump on the (j-1)th scanning line is first connected with the third clump.
  • the first and second clumps on the (j-1)th scanning line are never combined with each other at the zero-th to (j-2)th scanning lines, and hence it is unnecessary to consider the preceding combination of the first and second clumps.
  • the third clump on the j-th scanning line is never divided into a plurality of parts at the (j+1)th and following scanning lines, and hence it is unnecessary to consider how a feature value is divided and how divided feature values are stored.
  • the concept of control image is introduced to evade such problems and to facilitate the construction of the image analyzer. As can be seen from the above explanation, when a sequential machine applied with the control image for carrying out the above processing is constructed, the image analysis can be made in accordance with the raster scan method.
  • the image analyzer it is important to consider how the intermediate feature value of a clump calculated up to the (j-1)th scanning line is read out of a memory and how a new feature value of the clump calculated up to the j-th scanning line is stored in the memory.
  • This problem can be solved by allotting serial numbers to clumps (that is, intersecting portions) on each of the (j-1)th and j-th scanning lines in the order of appearance, and by using the serial numbers as the inner addresses of each of a pair of memories.
  • the clump (c) in FIG. 4 is the second intersecting portion on each of the (j-1)th and j-th scanning lines.
  • a feature value of the clump (c) calculated up to the (j-1)th scanning line is read out from the address "2" of a first memory, and an updated feature value including a feature value due to the intersecting portion on the j-th scanning line is stored in a second memory at an address "2" thereof.
  • the first and second memories act as read-out and write-in memories, respectively.
  • each of the first and second memories is used as a read-out memory for a scanning line and used as a write-in memory for the next scanning line.
  • the calculation of a feature value can be correctly carried out even when a newly generated clump and a vanishing clump such as the clumps (a) and (b) exist on the control image.
  • FIG. 5 shows the above method in the form of a state transition diagram.
  • reference symbols S 0 to S 4 designate transition states of the sequential machine, F#1 a first memory for storing feature values calculated up to the (j-1)th scanning line, F#2 a second memory for storing feature values calculated up to the j-th scanning line, n 1 an inner address of the first memory F#1, n 2 an inner address of the second memory F#2, n 1 + an operation for incrementing the address n 1 by one, n 2 + an operation for incrementing the address n 2 by one, Q a feature value calculated for one intersecting portion on the j-th scanning line, W a register for storing intermediate feature values, and ψ a function for combining two feature values.
  • the sequential machine is reset to the initial state S 0 , and the addresses n 1 and n 2 are reset to zero.
  • the feature values of all clumps on the control image are calculated in accordance with the state transition of FIG. 5, and as soon as each clump terminates in the course of the raster scan, its feature values are delivered to the outside.
  • Each of the first and second memories F#1 and F#2 is used for alternate ones of scanning lines, and stores feature values calculated up to a scanning line. Accordingly, the storage capacity of each of the memories F#1 and F#2 corresponds to the maximum number of intersecting portions on one scanning line, at most.
  • the value Q and the function ⁇ shown in FIG. 5 depend upon the kind of feature value to be determined. Methods of calculating the value Q and examples of the function ⁇ for various feature values will be explained below.
  • the calculation of the feature value of one clump can be expressed by the state transition diagram of FIG. 6, provided that a value 2^0 ·g(i,j)+2^1 ·f(i,j) made by combining the input image f(i, j) and the control image g(i, j) is used as an input value (where the 2^0 term indicates 1 or 0, and the 2^1 term indicates 2 or 0).
  • An area where the control image g(i, j) has a logical value "1" includes an area where the input image f(i, j) has a logical value "1". Accordingly, the input value is one of the numerals 0, 1 and 3.
  • the X- and Y-coordinates X m and Y m of the center of a clump, the length X p of the projection of the clump onto an X-direction and the length Y p of the projection of the clump onto a Y-direction can be calculated on the basis of the feature values of Table I in accordance with the following table.
  • the volume of a clump is defined as the summation of multi-level values at a region which contains the clump. When a grey level image (namely, a multi-level image) f'(i, j) of this region is used in place of the binary image f(i, j), the volume of the clump can be calculated.
  • the binary image f(i, j) can be readily obtained by carrying out threshold processing or appropriate preprocessing for the multi-level image f'(i, j) which depends upon the property of the clump.
  • the multilevel image f'(i, j) can be used in various application fields.
  • the total length of the projection of a clump onto an X- or Y-direction is defined as the number of pixels each having the boundary of the clump in the X- or Y-direction. Accordingly, in a case where the total length of the projection onto the X-direction is calculated, an image f'(i, j) is obtained from the image f(i, j) by the following equation:
  • the total length of the projection is calculated in the same manner as used for calculating the volume.
  • the value of the image f'(i, j) is calculated only at a position where the image f(i, j) has a logical value "1", and hence it is necessary to define the value of image f'(i, j) contributing to the calculation, at a pixel where the image f(i, j) has the logical value "1".
  • the length of periphery of a clump can be calculated in the following manner. That is, a boundary line in each pixel contributing to the length of periphery is allotted to a pixel where the image f(i, j) has a logical value "1", in the form of a density value, and the length of periphery is calculated in the same manner as used for the volume calculation. Referring to FIGS. 7A to 7H, the length of a boundary line segment lying on the boundary of or within a "1" pixel is allotted to that pixel as its density value (refer to FIGS. 7A to 7C), and the length of a boundary line segment contained in a pixel having a logical value "0", as shown in FIGS. 7D and 7G, is allotted to a "1" pixel adjacent to the "0" pixel in a clockwise direction.
  • boundary line segments allotted to the center pixel of the 3 ⁇ 3 pixels are shown in FIGS. 7A to 7D and FIG. 7G.
  • in the cases of FIGS. 7E, 7F and 7H, no boundary line exists.
  • some of the above boundary lines are allotted to the center pixel.
  • the sum of density values due to the above boundary lines is given to the center pixel.
  • the above method can be expressed by an equation mentioned below. Now, let us express a local image which contains 3 ⁇ 3 pixels and has a pixel (i, j) of the binary input image f(i, j) as the center pixel of the 3 ⁇ 3 pixels, as follows:
  • FIG. 8 shows an embodiment of an image analyzer according to the present invention.
  • reference numeral 11 designates a sequential machine which can be expressed by the state transition diagram of FIG. 5, and 12 to 18 a circuit part for carrying out the calculation necessary for image analysis.
  • the sequential machine 11 is applied with the control image g(i, j), and has a function of generating a control signal for controlling the calculation.
  • a calculation circuit 12 is applied with the input image f(i, j) and the control image g(i, j), and is used for calculating a feature value with respect to an intersecting portion on a scanning line by real-time processing.
  • the calculation is carried out in a manner shown in the 1-line calculation column of the tables I and III, and hence varies with a feature value to be determined.
  • the memories 15a and 15b are used for storing a feature value of a clump calculated up to a scanning line.
  • Each of the memories 15a and 15b is changed from one of a read-out memory and a write-in memory to the other by selectors 13 and 14 at intervals of one scanning period corresponding to one scanning line.
  • the memory 15a is used as the read-out memory for even-numbered scanning lines, and is used as the write-in memory for odd-numbered scanning lines.
  • a feature value of the clump calculated up to the (j-1)th scanning line is read out from the memory 15a, and the read-out feature value is updated by the calculation circuit 12 so that a new feature value includes a feature value due to the intersecting portion on the j-th scanning line.
  • the new feature value calculated up to the j-th scanning line is written in the memory 15b.
  • the feature value calculated up to the j-th scanning line is read out from the memory 15b, and an updated feature value is written in the memory 15a.
  • a register 16 is used for temporarily storing the result of an arithmetic operation performed by an arithmetic unit 17.
  • the arithmetic unit 17 is used for carrying out the inter-line calculation. The selection of inputs to the unit 17 and the timing of arithmetic operation are controlled by the sequential machine 11 in accordance with the state transition shown in FIG. 5.
  • FIG. 9 shows the detailed circuit configuration of an example of the sequential machine 11.
  • reference numeral 21 designates a 1-line delay circuit for delivering the value of a pixel g(i, j-1) which precedes an input pixel g(i, j) by one scanning line, 23 a register, and 22 a read only memory (namely, ROM) for storing control data.
  • the contents of the register 23 are updated by a clock pulse each time input data g(i, j) corresponding to one pixel is applied to the sequential machine.
  • the ROM 22 is applied with two bits indicative of g(i, j) and g(i, j-1) and a 3-bit signal 25 indicative of that one of transition states S 0 to S 4 of sequential machine which is held by the register 23. At this time, the contents of the ROM 22 are changed by the clock pulse so that the next one of transition states S 0 to S 4 can be delivered, and the ROM 22 delivers a control signal for performing an arithmetic operation necessary for the above state transition. Control information required includes address signals for specifying the inner addresses n1 and n2 of the memories 15a and 15b. The address signals are readily obtained by additionally providing counters 24a and 24b for counting the control signal from the ROM 22.
  • control information required to be delivered from the ROM 22 includes the write-in timing for the memory 15a or 15b, the write-in timing for the register 16, the specification of input data to the arithmetic unit 17, and the write-in timing for the result memory 18.
  • the above control information can be readily obtained by writing data in the ROM 22 so that the ROM 22 delivers a pulse having a level "1" simultaneously with the state transition of the ROM.
  • Means for resetting the counters 24a and 24b at the beginning of each scanning line and means for causing the selectors 13 and 14 to perform switching operations at the same time, are omitted from FIG. 9. However, it is easy to add these means to the circuit configurations of FIGS. 8 and 9.
  • the state transition and arithmetic operations shown in FIG. 5 can be completely carried out by the circuit configurations of FIGS. 8 and 9.
  • FIG. 10 shows a circuit configuration for producing the control image g(i, j).
  • reference symbols 31a and 31b designate scan direction converters for reversing the scanning order in an i-direction.
  • the scan direction converter 31a includes selectors 41a and 41b, 1-line memory circuits 42a and 42b, a selector 43, and address counters 44 and 45.
  • the address counter 44 counts up addresses from the zero-th address to the (k-1)th address in the ascending order (where k is equal to the number of pixels on one scanning line), and the address counter 45 counts up addresses from the (k-1)th address to the zero-th address in the descending order.
  • one of the memory circuits 42a and 42b is selected by the selector 41a, and image data corresponding to one scanning line is stored in that memory circuit in the ascending order. Meanwhile, the other memory circuit is selected by the selector 41b, and image data is read out from the other memory circuit in the descending order.
  • Each of the selectors 41a, b and 43 changes one of connecting states over to the other each time a scanning operation for one scanning line is completed. Thus, read-out and write-in operations are alternately performed for each of the memory circuits 42a and 42b at intervals of one scanning period corresponding to one scanning line. Accordingly, input data are written in and read out from one of the memory circuits 42a and 42b in opposite scanning orders viewed in the i-direction. The same operation as in the scan direction converter 31a is performed in the scan direction converter 31b.
  • Reference symbols 32a and 32b in FIG. 10 designate circuits for generating control images h 1 (i, j) and h 2 (i, j).
  • the circuit 32a for generating the left-direction control image h 2 (i, j) includes a 1-line delay circuit 46, a register 47, an OR circuit 48 and an AND circuit 49.
  • when the input image f(i, j) is applied to the circuit 32a in accordance with the raster scan method, it is converted into the left-direction control image h 2 (i, j), which is expressed by the equation (3).
  • the circuit 32b for generating the right-direction control image h 1 (i, j) has a circuit configuration similar to that of the circuit 32a, and can convert the input image f(i, j) into the right-direction control image h 1 (i, j).
  • One-line delay circuits 33a and 33b compensate for the image delay due to the scan direction converters 31a and 31b, respectively.
  • the control images h 1 (i, j) and h 2 (i, j) synchronized with each other are applied to an AND circuit 34, which delivers the logical product of the control images h 1 (i, j) and h 2 (i, j), that is, the control image g(i, j).
  • although image data is delayed little by little in the course of the above processing, the final result of measurement will not be affected by such delay.
  • the image analysis can be completely carried out by the circuit configurations shown in FIGS. 8, 9 and 10. That is, the image analysis can be carried out by one-pass processing at a video-rate.
  • the results of image analysis are stored in the result memory, and can be fetched from the result memory into an external computer, at need.
  • a plurality of kinds of feature values of a clump (that is, a particle) can be measured simultaneously, and a filter can be provided on the input side of the result memory so that only a clump having a specified combination of feature values is written in the result memory.
  • Such processing is very effective for pattern inspection, since false alarms are eliminated and only true defects are extracted.
  • the present invention has the following advantages.
  • Image processing is carried out in accordance with a raster scanning operation for obtaining an input image, and hence only a very small amount of image data is stored in a memory. As a result, it is not required to provide a memory circuit having a large capacity, and thus the manufacturing cost of an image analyzer is greatly reduced.

Abstract

A digital image analysis system is disclosed in which a digital input image formed by a raster scan method is so modified as to fill up a hole in a clump and a recess at the bottom of a clump viewed in the sub-scanning direction of the raster scan method, for the purpose of forming a control image, the state of a clump at two consecutive scanning lines of the control image (that is, the generation and termination of the clump at one of the scanning lines or the continuity of the clump at the scanning lines) is detected from the values of adjacent pixels on the two consecutive scanning lines, and a feature value of the clump is calculated on the basis of the detected state of the clump.

Description

BACKGROUND OF THE INVENTION
The present invention relates to an image processing method for measuring the area, dimensions and others of each of particle images (that is, clumps) appearing on a picture image and an apparatus for carrying out the image processing method, and more particularly to an image processing method and an image processing apparatus suitable for use in an automatic image analysis system and an automatic pattern-defect inspection system.
With the recent increase in the integration density of a printed circuit board and a semiconductor circuit, an automatic visual inspection system has been earnestly developed which can automatically inspect fine pattern defects generated in fabricating the printed circuit board or semiconductor circuit, with the aid of image processing techniques. That is, owing to the increase in the integration density, it has become very difficult to visually inspect the above pattern defects. Hence, the automatic visual inspection system which can make the above inspection in place of human eyes, is very important. In order to prevent the false alarm due to pattern noise and ensure the reliability of the automatic pattern-defect inspection, it is necessary to use image processing techniques for measuring the position, dimensions, area and others of each of defect images which are obtained by defect extraction processing and indicated by a logical value "1", and for judging whether or not each defect image indicates a true defect, on the basis of the measured values. As to techniques for analyzing clumps which appear in a binary image and have a logical value "1", and for measuring the position, dimensions, area and others of each clump, many methods have been developed in the name of digital image analysis method. In order to use these methods in the automatic pattern inspection, the methods are required to carry out real-time processing. The reason for this is as follows. For example, in a case where a body to be inspected moves at a constant speed and is imaged continuously by a one-dimensional line sensor, input images are continuously applied to an inspection system by a raster scan method for a long time, and each input image contains a vast amount of data. Hence, it is very difficult to store the input images in an image memory and to process the images read out from the image memory, as in a conventional system. Thus, it is necessary to use a real-time one-pass technique, in which input images are continuously applied to an inspection system by a raster scan method, and the processing for a defect image is completed each time the defect image appears on the input images. In conventional digital image analysis, a labeling method, a tracking method and improved methods thereof have been used. In each of these methods, however, it is required to store an input image corresponding to the whole area of a display screen or an equivalent feature data in a memory. Thus, the above methods cannot be used for high-speed inspection.
The basic difficulty of the real-time, one-pass processing for input images formed by a raster scan method is based upon the variations in shape of clumps appearing on a binary image. In a case where all the clumps have convex surfaces, the processing for each clump is relatively simple. For example, in a case where a clump has the form of a spiral, it is difficult to check the continuity between those portions of the clump which intersect with one scanning line (hereinafter referred to as "intersecting portions"), and thus the processing for the clump becomes difficult. In order to solve this difficulty, the following method has been proposed. (1) The end of each intersecting portion on a scanning line is so deformed as to reach a more extending one of the end of the intersecting portion and the deformed end of a corresponding, intersecting portion on a preceding scanning line.
(2) When the end of the intersecting portion is deformed as mentioned above, a feature value of clump calculated up to the present scanning line is applied to a one-scanning-line delay circuit. A feature value due to a corresponding, intersecting portion on the next scanning line is added to the feature value outputted from the delay circuit, and the feature value thus modified is applied to the delay circuit after the end of the corresponding, intersecting portion has been deformed.
(3) The above operation is repeated and feature values are summed up till the final intersecting portion of each clump is deformed. Then, the feature value of each clump thus obtained is delivered to the outside. This method is described in U.S. Pat. No. 3,619,494.
SUMMARY OF THE INVENTION
In the above method, however, it is required to form a large, useless, deformed portion at the end of each intersecting portion of a clump. Accordingly, in a case where a portion of another clump is contained in the deformed portion, two clumps are regarded as a single clump, and thus there arises a problem that a feature value of the false clump is calculated.
It is therefore an object of the present invention to provide an ultrahigh-speed inspection system which can reduce the probability of carrying out such erroneous measurement in a great degree and is far superior in reliability to a conventional system.
The above problem of the conventional method is caused by the fact that the intermediate value of a feature is stored in a one-scanning-line delay circuit. That is, in order to correctly read out the feature value calculated up to the preceding scanning line from the delay circuit, the deformed end of an intersecting portion is required to reach the deformed end of a corresponding portion on the preceding scanning line or to be placed behind the latter end, and thus the end of a clump is obliged to be deformed in a great degree.
In order to solve the above problem, an ultrahigh-speed inspection system according to the present invention comprises means for storing the intermediate value of a feature in a memory at an address determined on the basis of the arrangement order of intersecting portions on a scanning line, means for converting an input image into a digital image, and for generating a calculation control signal necessary for image analysis, on the basis of the values of adjacent pixels on two consecutive scanning lines, and means for taking the logical product of two similar deformed images formed by raster scanning operations in opposite directions, to fill a hole in a clump or a recess at the bottom thereof with a logical value "1".
According to the above inspection system, the feature value of a clump can be measured at a video-rate by the one-pass method, and moreover the deformation of the clump can be reduced to a minimum. As a result, the probability of erroneous measurement due to the overlapping of deformed images is reduced in a great degree, and the reliability of inspection is enhanced.
An inspection system according to the present invention is applied with input images continuously by a raster scan method, and the processing for a defect image is completed each time a defect image appears on the input images. That is, the inspection system uses a real-time one-pass method. Hence, it is unnecessary to store the input image in an image memory, and thus a feature value of a clump appearing on the input image can be determined by real-time processing.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows the whole construction of an embodiment of a visual inspection system according to the present invention.
FIG. 2 is a schematic diagram showing how an input image is formed by a raster scan method.
FIGS. 3A to 3D are schematic diagrams for explaining how a control image is formed from an input image.
FIG. 4 is a schematic diagram showing various clumps on two consecutive scanning lines.
FIG. 5 is a diagram showing the state transition of a sequential machine which is the gist of the present invention.
FIG. 6 is a diagram showing the state transition of the sequential machine for a case where a feature value of an intersecting portion on a scanning line is calculated.
FIGS. 7A to 7H are schematic diagrams for explaining a method of calculating the length of the periphery of a clump.
FIG. 8 is a block diagram showing an embodiment of a digital image analysis system according to the present invention.
FIG. 9 is a block diagram showing an example of the sequential machine of FIG. 8.
FIG. 10 is a block diagram showing an example of a circuit for obtaining a control image.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Now, explanation will be made of an embodiment of a visual inspection system according to the present invention, with reference to FIG. 1. FIG. 1 shows the whole construction of an embodiment of an inspection system for inspecting the repetition of the same pattern (for example, a die pattern) on a substrate such as a repeated circuit pattern on a semiconductor wafer or a mask pattern. Referring to FIG. 1, two dice 3 and 3' of a substrate 2 placed on a moving stage 1 are simultaneously scanned by two similar optical systems 4 and 4' each made up of a lens and an image sensor, to obtain video signals from corresponding portions of the dice 3 and 3'. The video signals thus obtained are converted by analog-digital converters 5 and 6 into digital signals v1 (t) and v2 (t), and a signal f indicating the absolute value of the difference between the signals v1 (t) and v2 (t) is delivered from a subtracter 7. Since the signals v1 (t) and v2 (t) are digital signals from corresponding portions of the dice 3 and 3', the difference signal f is a defect video signal which emphasizes the difference in pattern between the dice 3 and 3', that is, a defect. Accordingly, when the defect video signal f is converted into a binary signal by using a threshold value and a signal portion having a logical value "1" is taken out, a defect can be detected. This is the operation principle of the present embodiment. However, owing to the positional deviation between patterns on the dice 3 and 3' and the difference in surface state between the above patterns, various noise patterns are superposed on the defect video signal f. Accordingly, when the defect video signal f including the noise patterns is simply converted into a binary signal, many false alarms will be generated. In view of the above fact, the present embodiment includes an image analyzer 10 which is applied with the defect video signal f. The image analyzer 10 analyzes the defect video signal f, and measures the position, dimensions, area and others of an image proposed for a defect. Further, the analyzer 10 discriminates between a false defect and a true defect on the basis of the result of the above measurement, and writes the information on the true defect in a result memory 18. The contents of the result memory 18 are collected by a central processing unit (CPU) 8, and then displayed by display means to show the result of inspection. Incidentally, a stage controller 9 controls the movement of the stage 1.
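The defect-signal path just described (A/D converters 5 and 6, subtracter 7, thresholding) can be summarized in a few lines of code. The sketch below is only an illustration of the arithmetic; the function and array names, the NumPy dependency and the threshold value are assumptions, and the patent realizes this step in hardware rather than software.

```python
# Illustrative sketch only: the patent implements this step in hardware
# (A/D converters 5 and 6, subtracter 7); names, the NumPy dependency and
# the threshold value are assumptions.
import numpy as np

def defect_binary_image(v1: np.ndarray, v2: np.ndarray, threshold: int) -> np.ndarray:
    """Form the defect video signal f = |v1 - v2| and convert it to a binary image."""
    f = np.abs(v1.astype(np.int32) - v2.astype(np.int32))  # difference of corresponding die images
    return (f > threshold).astype(np.uint8)                # "1" marks a candidate defect pixel
```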
By adding such an image analyzer to an inspection system, a practical, reliable inspection system can be obtained, in which inspection is not disturbed by false alarms, and a result memory does not overflow with false alarms. The gist of the present invention resides in the image analyzer indispensable for a reliable visual inspection system. The operation of the image analyzer is very complicated. Hence, the deformation of input image will first be explained, and then the contents of a sequential machine applied with the deformed image will be explained. Finally, a method of calculating a feature value of a clump will be explained.
(1) Formation of Input Image
The present invention is intended to directly process a binary image from an imaging device of the raster scan type at a video-rate. In order to clarify the following explanation, let us express the binary input image by f(i, j), where
i=0, 1, 2, ---, (I-1)
j=0, 1, 2, ---, (J-1)                                      . . . (1)
Further, let us suppose that the raster scan is carried out in a manner shown in FIG. 2.
(2) Formation of Control Image
In the image analyzer, a control signal necessary for image processing is automatically made from the input image. Accordingly, it is necessary to convert the input image into a binary control image suitable for generating the control signal. Now, explanation will be made of a method for producing a binary control image g (i, j) from the binary input image f(i, j). A right-direction control image h1 (i, j) is produced by subjecting a binary input image f(i, j) which is formed by the raster scan method and shown in FIG. 3A, to recursive digital filtering given by the following equation: ##EQU1## As shown in FIG. 3B, in the right-direction control image h1 (i, j), clumps on the input image f(i, j) are extended on the lower right side, to fill up the recess at the bottom of a clump and a hole in another clump. Further, the raster scan conversion processing is carried out so that the scanning order in an i-direction is reversed, and then a left-direction control image h2 (i, j) is produced by carrying out recursive digital filtering given by the following equation: ##EQU2## As shown in FIG. 3C, in the left-direction control image h2 (i, j), the clumps on the input image f(i, j) are extended on the lower left side, to fill up the above hole and recess. Then, the raster scan conversion processing with respect to the i-direction is carried out for the left-direction control image h2 (i, j) so that the ordinary raster scan method is used for the image h2 (i, j), and a logical product of the images h1 (i, j) and h2 (i, j) is made as follows.
g(i, j)=h1 (i, j)∧h2 (i, j)                              . . . (4)
Thus, a desired control image g(i, j) is obtained. FIG. 3D shows an example of the control image g(i, j). As shown in FIG. 3D, useless portions appearing in FIGS. 3B and 3C are eliminated, and moreover the input image is subjected to minimum deformation so that the recess and hole are filled up. In the above operation, the processing for reversing the scanning order in the i-direction can be carried out in such a manner that an input image corresponding to one scanning line is written in a memory, and the input image is read out of the memory in the order opposite to the writing order. As mentioned above, the control image has the property of eliminating a hole in a clump and a recess at the bottom thereof. This property of the control image is very important for the operation of the image analyzer, and hence the deformed image g(i, j) is herein referred to as "control image". However, the intermediate images h1 (i, j) and h2 (i, j) also have the above property. Hence, the image h1 (i, j) or h2 (i, j) may be used as a control image, in place of the image g(i, j). However, as can be seen from FIGS. 3B and 3C, the images h1 (i, j) and h2 (i, j) have been deformed in a great degree. Accordingly, when the image h1 (i, j) or h2 (i, j) is used, the probability that a plurality of independent clumps are united in a single clump is high. That is, the image g(i, j) is far superior to the images h1 (i, j) and h2 (i, j).
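Equations (2) and (3) appear above only as placeholders (##EQU1##, ##EQU2##), so the following sketch should be read as a guess at their content rather than a reproduction of them. It assumes a recursion of the form h(i, j) = f(i, j) OR (h(i-1, j) AND h(i, j-1)), which is consistent with the OR/AND/1-line-delay circuit of FIG. 10 and with the described filling of holes and bottom recesses; the recursion is applied in the ordinary scanning order for h1 (i, j), with the i-direction reversed for h2 (i, j), and the two results are combined by the logical product of equation (4).

```python
# Sketch under stated assumptions: the exact recursions of equations (2) and (3)
# are not reproduced in this text, so the form below
#     h(i, j) = f(i, j) OR (h(i-1, j) AND h(i, j-1))
# is a guess consistent with the circuit of FIG. 10 (1-line delay, register,
# OR gate, AND gate).
import numpy as np

def control_image(f: np.ndarray) -> np.ndarray:
    J, I = f.shape                       # J scanning lines, I pixels per line
    h1 = np.zeros_like(f)
    h2 = np.zeros_like(f)
    for j in range(J):
        for i in range(I):               # ordinary raster scan: right-direction image h1
            prev_i = h1[j, i - 1] if i > 0 else 0
            prev_j = h1[j - 1, i] if j > 0 else 0
            h1[j, i] = f[j, i] | (prev_i & prev_j)
        for i in range(I - 1, -1, -1):   # i-direction reversed: left-direction image h2
            prev_i = h2[j, i + 1] if i < I - 1 else 0
            prev_j = h2[j - 1, i] if j > 0 else 0
            h2[j, i] = f[j, i] | (prev_i & prev_j)
    return h1 & h2                       # equation (4): g = h1 AND h2

f = np.array([[1, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
print(control_image(f))                  # the one-pixel hole is filled with "1"
```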
(3) Method of Analyzing Images
In a case where a binary image obtained by the raster scan method is analyzed, it is a very important problem how the continuity between individual clumps on a plurality of consecutive scanning lines is treated. This problem can be solved by observing two consecutive scanning lines simultaneously. FIG. 4 shows typical shapes of clumps by using images on the j-th and (j-1)th scanning lines. Now, let us express the logical value of a pixel on the j-th scanning line by the position of 2^0 (that is, "0" or "1") and express the logical value of a pixel on the (j-1)th scanning line by the position of 2^1 (that is, "0" or "2"), to express a signal on the j-th and (j-1)th scanning lines by a string of numerals. It will be explained below what kind of processing is necessary for the numeral string thus obtained. When the above expression is used, a clump indicated by reference symbol (a) in FIG. 4 can be expressed by a numeral string 0111---10. This numeral string means that the clump (a) starts from the j-th scanning line, and hence it is required to open a feature value memory for the clump (a).
A clump indicated by reference symbol (b) in FIG. 4 can be expressed by a numeral string 022---20. This numeral string means that the clump (b) terminates at the (j-1)th scanning line. Accordingly, it is required to deliver a feature value of the clump (b) calculated up to the (j-1)th scanning line to the outside as the final feature value of the clump (b).
A clump indicated by reference symbol (c) can be expressed by a numeral string 0133320. In this case, the repetition of numeral "3" (that is, a string of 3's) occurs only once. This means that a clump extended up to the (j-1)th scanning line is connected with another clump on the j-th scanning line so as to show one-to-one correspondence. Accordingly, it is necessary to update a feature value calculated up to the (j-1)th scanning line so that a new feature value includes a feature value due to the clump on the j-th scanning line, and to store the new feature value in a memory.
A clump indicated by reference symbol (d) in FIG. 4 can be expressed by a numeral string 022333111333220. In this case, a string of 3's occurs twice, and a string of 1's is sandwiched between the first string of 3's and the second string of 3's. The above numeral string means that first and second clumps each extended up to the (j-1)th scanning line are connected with a third clump on the j-th scanning line, and that the position in the numeral string where a numeral is changed from "1" to "3" indicates the position where the second clump on the (j-1)th scanning line is first connected with the third clump. Accordingly, it is necessary to combine the two feature values with respect to the first and second clumps into a new feature value as soon as a numeral in the numeral string is changed from "1" to "3", and to update the new feature value so as to include a feature value due to the third clump on the j-th scanning line. Further, it is necessary to store the updated feature value in a memory. In a case where a string of 3's occurs three or more times, that is, a string portion "3"→"1" is repeated many times, a multiplicity of clumps can be combined with one another by repeating the above-mentioned processing. When the control image g(i, j) is used, the first and second clumps on the (j-1)th scanning line are never combined with each other at the zero-th to (j-2)th scanning lines, and hence it is unnecessary to consider the preceding combination of the first and second clumps. Further, the third clump on the j-th scanning line is never divided into a plurality of parts at the (j+1)th and following scanning lines, and hence it is unnecessary to consider how a feature value is divided and how divided feature values are stored. In the present invention, the concept of control image is introduced to evade such problems and to facilitate the construction of the image analyzer. As can be seen from the above explanation, when a sequential machine applied with the control image for carrying out the above processing is constructed, the image analysis can be made in accordance with the raster scan method.
Further, in order to construct the image analyzer, it is important to consider how the intermediate feature value of a clump calculated up to the (j-1)th scanning line is read out of a memory and how a new feature value of the clump calculated up to the j-th scanning line is stored in the memory. This problem can be solved by allotting serial numbers to clumps (that is, intersecting portions) on each of the (j-1)th and j-th scanning lines in the order of appearance, and by using the serial numbers as the inner addresses of each of a pair of memories. For example, the clump (c) in FIG. 4 is the second intersecting portion on each of the (j-1)th and j-th scanning lines. Accordingly, a feature value of the clump (c) calculated up to the (j-1)th scanning line is read out from the address "2" of a first memory, and an updated feature value including a feature value due to the intersecting portion on the j-th scanning line is stored in a second memory at an address "2" thereof. In this case, the first and second memories act as read-out and write-in memories, respectively. When the (j+1)th scanning line is inspected, the feature value calculated up to the j-th scanning line is read out from the address "2" of the second memory, and an updated feature value is stored in the first memory at the address "2" thereof. Thus, the calculation of a feature value of the clump (c) proceeds correctly. As mentioned above, each of the first and second memories is used as a read-out memory for a scanning line and used as a write-in memory for the next scanning line. When the above read/write operation is performed, the calculation of a feature value can be correctly carried out even when a newly generated clump and a vanishing clump such as the clumps (a) and (b) exist on the control image.
FIG. 5 shows the above method in the form of a state transition diagram. In FIG. 5, reference symbols S0 to S4 designate transition states of the sequential machine, F#1 a first memory for storing feature values calculated up to the (j-1)th scanning line, F#2 a second memory for storing feature values calculated up to the j-th scanning line, n1 an inner address of the first memory F#1, n2 an inner address of the second memory F#2, n1 + an operation for incrementing the address n1 by one, n2 + an operation for incrementing the address n2 by one, Q a feature value calculated for one intersecting portion on the j-th scanning line, W a register for storing intermediate feature values, and ψ a function for combining two feature values.
Referring to FIG. 5, at the beginning of each scanning line, the sequential machine is reset to the initial state S0, and the addresses n1 and n2 are reset to zero. The feature values of all clumps on the control image are calculated in accordance with the state transition of FIG. 5, and as soon as each clump terminates in the course of the raster scan, its feature values are delivered to the outside. Each of the first and second memories F#1 and F#2 is used for alternate ones of scanning lines, and stores feature values calculated up to a scanning line. Accordingly, the storage capacity of each of the memories F#1 and F#2 corresponds to the maximum number of intersecting portions on one scanning line, at most.
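A software analogue of this two-memory bookkeeping may make the idea easier to follow. The sketch below is not a reproduction of the FIG. 5 sequential machine: it is a line-by-line pass in which runs of "1" pixels stand in for intersecting portions, two small tables indexed by run serial numbers play the roles of the memories F#1 and F#2, and the only feature value computed is the area. It relies on the control-image property that a clump never splits into separate parts on later scanning lines; the function names are assumptions.

```python
# One-pass area measurement over a control image, as an illustration of the
# serial-number addressing of F#1 and F#2 described above (names assumed).
from typing import List, Tuple

def runs(line: List[int]) -> List[Tuple[int, int]]:
    """Intersecting portions of one scanning line as (start, end) pixel ranges."""
    out, start = [], None
    for i, v in enumerate(line + [0]):
        if v and start is None:
            start = i
        elif not v and start is not None:
            out.append((start, i - 1))
            start = None
    return out

def clump_areas(g: List[List[int]]) -> List[int]:
    """Area of every clump of a control image, accumulated in a single raster pass."""
    finished: List[int] = []
    prev_runs: List[Tuple[int, int]] = []
    f1: List[int] = []                        # memory F#1: feature values up to line j-1
    for line in g + [[0] * len(g[0])]:        # an extra blank line flushes the last clumps
        cur_runs = runs(line)
        f2: List[int] = []                    # memory F#2: feature values up to line j
        consumed = [False] * len(prev_runs)
        for (s, e) in cur_runs:
            w = e - s + 1                     # Q: area contributed by this intersecting portion
            for n1, (ps, pe) in enumerate(prev_runs):
                if ps <= e and pe >= s:       # a "3" column: both lines are "1" at the same i
                    w += f1[n1]               # psi: combine with the value read from F#1
                    consumed[n1] = True
            f2.append(w)                      # write at the serial number of the run on line j
        for n1, used in enumerate(consumed):
            if not used:
                finished.append(f1[n1])       # case (b) of FIG. 4: the clump ended on line j-1
        prev_runs, f1 = cur_runs, f2          # the two memories exchange roles every line
    return finished

# Example: a clump of area 5 and a clump of area 2 (a valid control image,
# since it has no holes and no recesses at the bottom of a clump).
image = [[0, 1, 1, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 1],
         [0, 0, 0, 0, 1]]
print(sorted(clump_areas(image)))  # -> [2, 5]
```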
(4) Method of Calculating Feature Values
The value Q and the function ψ shown in FIG. 5 depend upon the kind of feature value to be determined. Methods of calculating the value Q and examples of the function ψ for various feature values will be explained below.
The calculation of the feature value of one clump can be expressed by the state transition diagram of FIG. 6, provided that a value 2^0 ·g(i,j)+2^1 ·f(i,j) made by combining the input image f(i, j) and the control image g(i, j) is used as an input value (where the 2^0 term indicates 1 or 0, and the 2^1 term indicates 2 or 0). An area where the control image g(i, j) has a logical value "1" includes an area where the input image f(i, j) has a logical value "1". Accordingly, the input value is one of the numerals 0, 1 and 3. In a case where the input value is equal to zero, an area having no clump is indicated, and hence the value Q is set to an initial value Q0. In a case where the input value is equal to 3, the input image f(i, j) has a logical value "1", and hence the value Q is updated with the aid of a function φ. In a case where the input value is equal to 1, an area other than a true clump is indicated, and hence the value Q is kept unchanged. As mentioned above, the calculation of a feature value is controlled by the control image g(i, j), but is carried out only for the input image f(i, j). Accordingly, the result of calculation is not affected by the deformation of the input image. That is, the feature values of clumps on the input image can be correctly measured, even though the input image undergoes the deformation. Actual functions φ and ψ for calculating feature values will be described in the following table.
              TABLE I
______________________________________
Method of Calculating Feature Values (Part I)
______________________________________
                      1-line calculation                  inter-line calculation
feature value         initial value    Q = φ(Q, --)       initial value    W = ψ(W, Q)
______________________________________
area                  0                Q = Q + f(i, j)    0                W = W + Q
maximum X-coordinate  0                Q = max(Q, i)      0                W = max(W, Q)
minimum X-coordinate  maximum integer  Q = min(Q, i)      maximum integer  W = min(W, Q)
maximum Y-coordinate  0                Q = max(Q, j)      0                W = max(W, Q)
minimum Y-coordinate  maximum integer  Q = min(Q, j)      maximum integer  W = min(W, Q)
______________________________________
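For reference, the Table I entries can be written as ordinary functions. The dictionary layout and names below are illustrative only; sys.maxsize stands in for the "maximum integer" initial value.

import sys

# 1-line calculation: (initial value Q0, phi) per feature of Table I.
ONE_LINE = {
    "area":  (0,           lambda Q, i, j: Q + 1),        # f(i, j) = 1 on a clump pixel
    "max_x": (0,           lambda Q, i, j: max(Q, i)),
    "min_x": (sys.maxsize, lambda Q, i, j: min(Q, i)),
    "max_y": (0,           lambda Q, i, j: max(Q, j)),
    "min_y": (sys.maxsize, lambda Q, i, j: min(Q, j)),
}

# Inter-line calculation: (initial value W0, psi) per feature of Table I.
INTER_LINE = {
    "area":  (0,           lambda W, Q: W + Q),
    "max_x": (0,           lambda W, Q: max(W, Q)),
    "min_x": (sys.maxsize, lambda W, Q: min(W, Q)),
    "max_y": (0,           lambda W, Q: max(W, Q)),
    "min_y": (sys.maxsize, lambda W, Q: min(W, Q)),
}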
Further, the X- and Y-coordinates Xm and Ym of the center of a clump, the length Xp of the projection of the clump onto the X-direction and the length Yp of the projection of the clump onto the Y-direction can be calculated from the feature values of Table I in accordance with the following table.
              TABLE II
______________________________________
Method of Calculating Feature Values (Part II)
______________________________________
feature value                              calculation method
______________________________________
X-coordinate of center (Xm)                Xm = (minimum X-coordinate + maximum X-coordinate) × 1/2
Y-coordinate of center (Ym)                Ym = (minimum Y-coordinate + maximum Y-coordinate) × 1/2
length of projection onto X-direction (Xp) Xp = maximum X-coordinate - minimum X-coordinate
length of projection onto Y-direction (Yp) Yp = maximum Y-coordinate - minimum Y-coordinate
______________________________________
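In software these derived quantities are simple arithmetic on the accumulated extrema; a trivial sketch with illustrative names:

def derived_features(min_x, max_x, min_y, max_y):
    """Center coordinates and projection lengths per Table II."""
    Xm = (min_x + max_x) / 2
    Ym = (min_y + max_y) / 2
    Xp = max_x - min_x
    Yp = max_y - min_y
    return Xm, Ym, Xp, Yp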
The volume of a clump is defined as the summation of multi-level values over a region which contains the clump. When a grey-level image (namely, a multi-level image) f'(i, j) of this region is used in place of the binary image f(i, j), and the equation Q=Q+f(i, j) in Table I for calculating the area of a clump up to a scanning line is replaced by the equation Q=Q+f'(i, j), the volume of the clump can be calculated. The binary image f(i, j) can be readily obtained by carrying out threshold processing or other appropriate preprocessing, which depends upon the property of the clump, on the multi-level image f'(i, j). The multi-level image f'(i, j) can be used in various application fields.
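Expressed off-line rather than in scan order, the substitution amounts to summing grey levels over the clump pixels. A minimal NumPy sketch, with illustrative names:

import numpy as np

def clump_volume(grey, mask):
    """Volume of one clump: sum of the multi-level values f'(i, j).

    grey is the multi-level image f'(i, j); mask is the binary image
    f(i, j) restricted to the clump.  This is the batch equivalent of
    replacing Q = Q + f(i, j) with Q = Q + f'(i, j) in Table I.
    """
    return int(np.sum(grey[mask.astype(bool)]))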
The total length of the projection of a clump onto the X- or Y-direction is defined as the number of pixels at which the boundary of the clump appears in the X- or Y-direction. Accordingly, in a case where the total length of the projection onto the X-direction is calculated, an image f'(i, j) is obtained from the image f(i, j) by the following equation:
f'(i, j)=f(i, j)·{1-f(i-1, j)}                    . . . (5)
In a case where the total length of the projection onto the Y-direction is calculated, an image f'(i, j) is obtained from the image f(i, j) by the following equation:
f'(i, j)=f(i, j)·(1-f(i, j-1))                    . . . (6)
Then, the total length of the projection is calculated in the same manner as used for calculating the volume. In the above method, it is to be noted that the value of the image f'(i, j) is referred to only at positions where the image f(i, j) has a logical value "1", and hence it suffices to define the value of f'(i, j) contributing to the calculation at the pixels where f(i, j) has the logical value "1".
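A NumPy sketch of the pre-processing of equations (5) and (6); the resulting image f'(i, j) is then accumulated exactly like the volume. Array indexing f[j][i] (line j, pixel i) is an assumption of this sketch, not taken from the patent.

import numpy as np

def projection_preimage(f, axis):
    """f'(i, j) per equation (5) (axis='x') or equation (6) (axis='y')."""
    f = f.astype(np.uint8)
    shifted = np.zeros_like(f)
    if axis == "x":
        shifted[:, 1:] = f[:, :-1]   # f(i-1, j)
    else:
        shifted[1:, :] = f[:-1, :]   # f(i, j-1)
    return f * (1 - shifted)         # 1 only at the first pixel of each run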
The length of the periphery of a clump can be calculated in the following manner. That is, the length of the boundary line in each pixel contributing to the periphery is allotted, in the form of a density value, to a pixel where the image f(i, j) has a logical value "1", and the length of the periphery is calculated in the same manner as used for the volume calculation. Referring to FIGS. 7A to 7H, the length of a boundary line segment lying on the boundary of or within a "1" pixel is allotted to that pixel as its density value (refer to FIGS. 7A to 7C), and the length of a boundary line segment contained in a pixel having a logical value "0", as shown in FIGS. 7D and 7G, is allotted to a "1" pixel adjacent to the "0" pixel in the clockwise direction. When attention is paid to only the 2×2 pixels which exist in the lower right corner of an area containing 3×3 pixels, the boundary line segments allotted to the center pixel of the 3×3 pixels are shown in FIGS. 7A to 7D and FIG. 7G. In FIGS. 7E, 7F and 7H, no boundary line exists. In a case where the 2×2 pixels existing in the upper right, lower left or upper left corner of the 3×3 pixels are selected, some of the above boundary lines are allotted to the center pixel. The sum of the density values due to the above boundary lines is given to the center pixel. The above method can be expressed by an equation mentioned below. Now, let us express the local image which contains 3×3 pixels and has a pixel (i, j) of the binary input image f(i, j) as the center pixel of the 3×3 pixels, as follows:
______________________________________                                    
a7               a0    a1                                                 
a6               a8    a2                                                 
a5               a4    a3                                                 
______________________________________                                    
Then, a multi-level image f'(i, j) used for calculating the length of the periphery is given by the following equation: ##EQU3## where the sign "·" indicates AND processing, and "+" indicates arithmetic addition. In a case where the calculation error due to the value √2 in equation (7) raises a problem, an integral part corresponding to the number of longitudinal and transverse boundaries and a part corresponding to the number of oblique boundaries are calculated as different feature values, and when the feature values of all the clumps are collected, the two parts are combined into a sum by a computer. The above calculation of feature values is summarized in the following Table III. The results shown in Tables I, II and III are typical ones of the feature values determined by the present invention, and other feature values can be calculated in a similar manner.
              TABLE III
______________________________________
Method of Calculating Feature Values (Part III)
______________________________________
                            one-line calculation                    inter-line calculation
feature value               initial value  Q = φ(Q, --)             initial value  W = ψ(W, Q)
______________________________________
volume                      0              Q = Q + f'(i, j), where  0              W = W + Q
                                           f' is the multi-level
                                           input image
total length of projection  0              Q = Q + f'(i, j), where  0              W = W + Q
in X-direction                             f' is given by
                                           equation (5)
total length of projection  0              Q = Q + f'(i, j), where  0              W = W + Q
in Y-direction                             f' is given by
                                           equation (6)
length of periphery         0              Q = Q + f'(i, j), where  0              W = W + Q
                                           f' is given by
                                           equation (7)
______________________________________
FIG. 8 shows an embodiment of an image analyzer according to the present invention. In FIG. 8, reference numeral 11 designates a sequential machine which can be expressed by the state transition diagram of FIG. 5, and 12 to 18 designate a circuit part for carrying out the calculation necessary for image analysis. The sequential machine 11 is applied with the control image g(i, j), and has a function of generating a control signal for controlling the calculation. A calculation circuit 12 is applied with the input image f(i, j) and the control image g(i, j), and is used for calculating a feature value with respect to an intersecting portion on a scanning line by real-time processing. The calculation is carried out in the manner shown in the 1-line calculation column of Tables I and III, and hence varies with the feature value to be determined. The memories 15a and 15b are used for storing a feature value of a clump calculated up to a scanning line. Each of the memories 15a and 15b is changed from one of a read-out memory and a write-in memory to the other by selectors 13 and 14 at intervals of one scanning period corresponding to one scanning line. For example, the memory 15a is used as the read-out memory for even-numbered scanning lines, and is used as the write-in memory for odd-numbered scanning lines. In more detail, in a case where the j-th scanning line is an even-numbered one, a feature value of the clump calculated up to the (j-1)th scanning line is read out from the memory 15a, and the read-out feature value is updated by the calculation circuit 12 so that a new feature value includes a feature value due to the intersecting portion on the j-th scanning line. The new feature value calculated up to the j-th scanning line is written in the memory 15b. Further, when the (j+1)th scanning line is inspected, the feature value calculated up to the j-th scanning line is read out from the memory 15b, and an updated feature value is written in the memory 15a. By exchanging the functions of the memories 15a and 15b in the above manner, a feature value of a clump can be updated successively to reach a final value. A register 16 is used for temporarily storing the result of an arithmetic operation performed by an arithmetic unit 17. The arithmetic unit 17 is used for carrying out the inter-line calculation. The selection of inputs to the unit 17 and the timing of arithmetic operation are controlled by the sequential machine 11 in accordance with the state transition shown in FIG. 5.
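The alternating use of the two feature-value memories can be modelled in software as follows; psi and W0 are the inter-line function and initial value of Tables I and III, while the function itself, its signature and the portion list are only an illustrative sketch, not the patent's circuit.

def process_line(j, portions, mem_a, mem_b, psi, W0):
    """Update clump feature values for scanning line j.

    portions is a list of (read_address, Q) pairs, one per intersecting
    portion on line j in order of appearance; read_address is None when a
    new clump is generated on this line.  On even-numbered lines mem_a is
    the read-out memory and mem_b the write-in memory (as selected by
    selectors 13 and 14 in FIG. 8); on odd-numbered lines the roles swap.
    """
    read_mem, write_mem = (mem_a, mem_b) if j % 2 == 0 else (mem_b, mem_a)
    for n, (addr, Q) in enumerate(portions):
        W = W0 if addr is None else read_mem[addr]   # value up to line j-1
        write_mem[n] = psi(W, Q)                     # value up to line j
    return write_mem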
FIG. 9 shows the detailed circuit configuration of an example of the sequential machine 11. In FIG. 9, reference numeral 21 designates a 1-line delay circuit for delivering the value of a pixel g(i, j-1) which precedes an input pixel g(i, j) by one scanning line, 23 designates a register, and 22 designates a read only memory (namely, a ROM) for storing control data. The contents of the register 23 are updated by a clock pulse each time input data g(i, j) corresponding to one pixel is applied to the sequential machine. The ROM 22 is applied with two bits indicative of g(i, j) and g(i, j-1) and a 3-bit signal 25 indicative of that one of the transition states S0 to S4 which is held by the register 23. The output of the ROM 22 changes accordingly, so that the next one of the transition states S0 to S4 is delivered at the clock pulse, and the ROM 22 also delivers a control signal for performing the arithmetic operation necessary for the state transition. The required control information includes address signals for specifying the inner addresses n1 and n2 of the memories 15a and 15b. The address signals are readily obtained by additionally providing counters 24a and 24b for counting the control signal from the ROM 22. Further, the control information to be delivered from the ROM 22 includes the write-in timing for the memory 15a or 15b, the write-in timing for the register 16, the specification of input data to the arithmetic unit 17, and the write-in timing for the result memory 18. The above control information can be readily obtained by writing data into the ROM 22 so that the ROM 22 delivers a pulse having a level "1" simultaneously with the corresponding state transition. Means for resetting the counters 24a and 24b at the beginning of each scanning line and means for causing the selectors 13 and 14 to perform switching operations at the same time are omitted from FIG. 9. However, it is easy to add these means to the circuit configurations of FIGS. 8 and 9. The state transition and arithmetic operations shown in FIG. 5 can be completely carried out by the circuit configurations of FIGS. 8 and 9.
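In software, the ROM of FIG. 9 is simply a lookup table keyed by the current state and the two control-image bits. The actual next states and control pulses are those of FIG. 5 and are not reproduced here, so the table below is only a skeleton; the control-signal names are hypothetical.

# Skeleton of the ROM contents: (state, g(i,j), g(i,j-1)) -> (next_state, controls).
ROM = {
    # e.g. ("S0", 0, 0): ("S0", frozenset()),
    #      ("S0", 1, 0): ("S1", frozenset({"count_n2", "write_memory"})),
    # ... entries per the state transition diagram of FIG. 5 ...
}

def step(state, g_ij, g_ij_prev, counters):
    """One clock of the sequential machine: next state plus control pulses."""
    next_state, controls = ROM[(state, g_ij, g_ij_prev)]
    if "count_n1" in controls:
        counters["n1"] += 1          # inner address of the read-out memory
    if "count_n2" in controls:
        counters["n2"] += 1          # inner address of the write-in memory
    return next_state, controls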
FIG. 10 shows a circuit configuration for producing the control image g(i, j). In FIG. 10, reference symbols 31a and 31b designate scan direction converters for reversing the scanning order in the i-direction. Referring to FIG. 10, the scan direction converter 31a includes selectors 41a and 41b, 1-line memory circuits 42a and 42b, a selector 43, and address counters 44 and 45. The address counter 44 counts up addresses from the zero-th address to the (k-1)th address in the ascending order (where k is equal to the number of pixels on one scanning line), and the address counter 45 counts down addresses from the (k-1)th address to the zero-th address in the descending order. When the input image f(i, j) is applied to the converter 31a, one of the memory circuits 42a and 42b is selected by the selector 41a, and image data corresponding to one scanning line is stored in that memory circuit in the ascending order. Meanwhile, the other memory circuit is selected by the selector 41b, and image data is read out from it in the descending order. Each of the selectors 41a, 41b and 43 changes its connecting state each time a scanning operation for one scanning line is completed. Thus, read-out and write-in operations are performed alternately for each of the memory circuits 42a and 42b at intervals of one scanning period corresponding to one scanning line. Accordingly, input data are written into and read out from each of the memory circuits 42a and 42b in opposite scanning orders viewed in the i-direction. The same operation as in the scan direction converter 31a is performed in the scan direction converter 31b.
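A software model of the converter 31a: the two line buffers exchange read and write roles every line, so each line comes out with its pixel order reversed and one scanning line late. The generator below is an illustrative sketch with hypothetical names.

def scan_direction_converter(lines):
    """Yield each scanning line with its pixel order reversed, one line late."""
    buffers = [None, None]
    last = 0
    for j, line in enumerate(lines):
        write_buf = j % 2                      # buffer being written this line
        read_buf = 1 - write_buf               # buffer being read out this line
        if buffers[read_buf] is not None:
            yield list(reversed(buffers[read_buf]))
        buffers[write_buf] = list(line)
        last = write_buf
    if buffers[last] is not None:              # flush the final stored line
        yield list(reversed(buffers[last]))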
Reference symbols 32a and 32b in FIG. 10 designate circuits for generating the control images h1(i, j) and h2(i, j). The circuit 32a for generating the left-direction control image h2(i, j) includes a 1-line delay circuit 46, a register 47, an OR circuit 48 and an AND circuit 49. When the input image f(i, j) is applied to the circuit 32a in accordance with the raster scan method, the input image f(i, j) is converted into the left-direction control image h2(i, j), which is expressed by equation (3). The circuit 32b for generating the right-direction control image h1(i, j) has a circuit configuration similar to that of the circuit 32a, and converts the input image f(i, j) into the right-direction control image h1(i, j). One-line delay circuits 33a and 33b compensate for the image delay due to the scan direction converters 31a and 31b, respectively. Thus, the control images h1(i, j) and h2(i, j), synchronized with each other, are applied to an AND circuit 34, which delivers their logical product, that is, the control image g(i, j). Although the image data is delayed little by little in the course of the above processing, the final result of measurement is not affected by such delay.
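The overall construction of g(i, j) can be sketched as follows. The recursion used here is an assumption chosen to be consistent with the OR/AND/1-line-delay circuit just described; the authoritative form is equation (3) given earlier in the text, and the function names are illustrative.

import numpy as np

def directional_fill(f):
    """Extend each clump toward the lower right (assumed form of the recursion).

    Assumed per-pixel rule, not quoted from the patent:
        h(i, j) = f(i, j) OR ( h(i-1, j) AND h(i, j-1) )
    """
    h = f.astype(np.uint8).copy()      # indexed h[j, i]: line j, pixel i
    rows, cols = h.shape
    for j in range(1, rows):
        for i in range(1, cols):
            h[j, i] |= h[j, i - 1] & h[j - 1, i]
    return h

def control_image(f):
    """g(i, j): logical product of the two directionally filled images."""
    h1 = directional_fill(f)                       # lower-right extension
    h2 = directional_fill(f[:, ::-1])[:, ::-1]     # lower-left, via scan reversal
    return h1 & h2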
As mentioned above, the image analysis can be completely carried out by the circuit configurations shown in FIGS. 8, 9 and 10. That is, the image analysis can be carried out by one-pass processing at a video rate. The results of the image analysis are stored in the result memory, and can be fetched from the result memory into an external computer as needed.
According to the present invention, a plurality of kinds of feature values of a clump (that is, a particle) are determined at the same time. Hence, it is easy to provide a filter on the input side of the result memory so that only a clump having a specified combination of feature values is written in the result memory. Such processing is very effective for pattern inspection, since false alarms are eliminated and only true defects are extracted.
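Such a filter might look like the following sketch; the feature names and thresholds are illustrative only and would be chosen per inspection task.

def defect_filter(clumps, min_area=5, max_xp=20, max_yp=20):
    """Keep only clumps whose combination of feature values indicates a true defect."""
    return [c for c in clumps
            if c["area"] >= min_area
            and c["Xp"] <= max_xp
            and c["Yp"] <= max_yp]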
As has been explained in the foregoing, the present invention has the following advantages.
(1) Image processing is carried out in accordance with a raster scanning operation for obtaining an input image, and hence only a very small amount of image data is stored in a memory. As a result, it is not required to provide a memory circuit having a large capacity, and thus the manufacturing cost of an image analyzer is greatly reduced.
(2) All the arithmetic operations for one pixel are performed in one clock time, and thus image analysis is carried out at a video-rate. Accordingly, an ultrahigh-speed visual inspection system or pattern analysis system can be realized.
(3) Many general feature values can be calculated, and moreover the kinds of calculable feature values can be increased by carrying out appropriate pre-processing for special feature values. Further, a plurality of kinds of feature values of a clump can be obtained at the same time. Accordingly, it is easy to calculate a combined feature value and to select desired ones from detected clumps on the basis of a specified combination of feature values. Such selection greatly improves the pattern analyzing ability, and can enhance the reliability of an inspection system, in which false alarms have to be eliminated.

Claims (10)

We claim:
1. A digital image analysis system for determining a feature value of each clump on a digital input image formed by a raster scan method, the digital image analysis system comprising:
control image producing means for producing a control image by modifying the input image so as to fill up a hole in a clump and a recess at the bottom of a clump viewed in the sub-scanning direction of the raster scan method;
state detecting means successively applied with the value of adjacent pixels on two consecutive scanning lines of the control image, for detecting the state of a clump at a predetermined one of the scanning lines (that is, the generation and termination of the clump at one of the scanning lines and the continuity of the clump at the scanning lines), on the basis of the combination of the value of the adjacent pixels; and
feature value calculating means, including two memory means each for storing a feature value of a clump calculated up to one of the scanning lines, for calculating a feature value of a clump on the basis of the detected state of the clump, updating the feature value of a clump read out from one of said two memory means so that a new feature value is generated including a feature of a portion on the one scanning line, and for storing said new feature value in the other one of said two memory means.
2. A digital image analysis system according to claim 1, wherein the control image producing means produces a right-direction control image by modifying the input image so that each clump on the input image is extended in a lower right-direction and thus a recess at the bottom of a clump is filled up, produces a left-direction control image by modifying the input image so that each clump on the input image is extended in a lower left-direction and thus a recess at the bottom of the clump is filled up, and produces the control image by taking the logical product of the right-direction and left-direction control images.
3. A digital image analysis system according to claim 2, wherein said right-direction control image (h1(i, j)) and left-direction control image (h2(i, j)) are produced by subjecting a binary input image (f(i, j)) to recursive digital filtering given by the following equation ##EQU4##
4. A digital image analysis system for determining a feature value of each of clumps on a digital input image formed by a raster scan method, the digital image analysis system comprising:
means for producing a control image by modifying the input image so as to fill up a hole in a clump and a recess at the bottom of a clump viewed in the sub-scanning direction of the raster scan method;
feature value calculating means applied with the values of adjacent pixels on two consecutive scanning lines of the control image for calculating a feature value of each clump on the input image, said feature value calculating means being made up of a sequential machine having at least five internal states, a memory for storing a feature value of clump, a register, a circuit for 1-line calculation of feature value, and a circuit for inter-line calculation of feature value; and
means for controlling the initialization, update and delivery of a feature value of a clump and the synthesis of feature values of a plurality of clumps, at a time the internal state of the sequential machine is changed.
5. A digital image analysis system for determining a feature value of each of clumps on a digital input image formed by a raster scan method, the digital image analysis system comprising:
control image producing means for producing a control image by modifying the input image so as to fill up a hole in a clump and a recess at the bottom of a clump viewed in the sub-scanning direction of the raster scan method;
state detecting means successively applied with the values of adjacent pixels on two consecutive scanning lines of the control image, for detecting the state of a clump at a predetermined one of the scanning lines (that is, the generation and termination of the clump at one of the scanning lines and the continuity of the clump at the scanning lines), on the basis of the combination of the values of the adjacent pixels; and
feature value calculating means for calculating a feature value of a clump on the basis of the detected state of the clump;
wherein a feature value Q in the feature value calculating means, is used for determining one of the area, volume and length of periphery of a clump.
6. A digital image analysis system according to claim 5, wherein the feature value Q is calculated on the basis of the combination of the input image and the control image.
7. A digital image analysis system for determining a feature value of each of clumps on a digital input image formed by a raster scan method, the digital image analysis system comprising:
control image producing means for producing a control image by modifying the input image so as to fill up a hole in a clump and a recess at the bottom of a clump viewed in the sub-scanning direction of the raster scan method;
state detecting means successively applied with the values of adjacent pixels on two consecutive scanning lines of the control image, for detecting the state of a clump at a predetermined one of the scanning lines (that is, the generation and termination of the clump at one of the scanning lines and the continuity of the clump at the scanning lines), on the basis of the combination of the values of the adjacent pixels; and
feature value calculating means for calculating a feature value of a clump on the basis of the detected state of the clump;
wherein when the two consecutive scanning lines are indicated by (j-1)th and j-th scanning lines, the state detecting means expresses the value of each pixel on the j-th scanning line and the value of each pixel on the (j-1)th scanning line by the positions of 2⁰ and 2¹, respectively, to express a raster signal on the (j-1)th and j-th scanning lines by a numeral string, and detects the state of a clump at the j-th scanning line (that is, the generation and termination of the clump at one of the (j-1)th and the j-th scanning lines and the continuity of the clump at the (j-1)th and the j-th scanning lines), on the basis of the numeral string.
8. A digital image analysis system according to claim 7, wherein the feature value calculating means uses serial numbers which indicate the appearance order of a plurality of strings of "1's" at the (j-1)th and j-th scanning lines, as the inner addresses of each of a pair of memories.
9. A digital image analysis system according to claim 7, wherein when the state detecting means detects that a clump is generated at the j-th scanning line, the feature value calculating means prepares a memory area for a feature value of the clump, wherein when the state detecting means detects that a clump terminates at the (j-1)th scanning line, the feature value calculating means delivers a feature value of the clump calculated up to the (j-1)th scanning line, and wherein when the state detecting means detects the continuity of a clump at the (j-1)th and j-th scanning lines, the feature value calculating means updates a feature value of the clump calculated up to the (j-1)th scanning line so that a new feature value includes a feature value due to the j-th scanning line, to store the new feature value in a memory.
10. A digital image analysis system according to claim 9, wherein when it is detected that first and second clumps on the (j-1)th scanning line are connected with a third clump on the j-th scanning line, feature values of the first and second clumps calculated up to the (j-1)th scanning line are combined, and the combined feature value is updated so that a new feature value includes a feature value due to the third clump on the j-th scanning line, to store the new feature value in the memory.
US07/155,807 1987-03-06 1988-02-16 Digital image analysis system Expired - Lifetime US4897795A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP62050033A JP2810660B2 (en) 1987-03-06 1987-03-06 Particle image analyzer
JP62-50033 1987-03-06

Publications (1)

Publication Number Publication Date
US4897795A true US4897795A (en) 1990-01-30

Family

ID=12847685

Family Applications (1)

Application Number Title Priority Date Filing Date
US07/155,807 Expired - Lifetime US4897795A (en) 1987-03-06 1988-02-16 Digital image analysis system

Country Status (2)

Country Link
US (1) US4897795A (en)
JP (1) JP2810660B2 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6246472B1 (en) 1997-07-04 2001-06-12 Hitachi, Ltd. Pattern inspecting system and pattern inspecting method
JP5008572B2 (en) * 2004-12-21 2012-08-22 キヤノン株式会社 Image processing method, image processing apparatus, and computer-readable medium
JP6035375B1 (en) * 2015-06-02 2016-11-30 株式会社メック Defect inspection apparatus and defect inspection method


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS56103773A (en) * 1980-01-21 1981-08-19 Agency Of Ind Science & Technol Feature extracing system of binary pattern
JPS5723295A (en) * 1980-07-17 1982-02-06 Toray Industries Flexible substrate

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3619494A (en) * 1968-05-01 1971-11-09 Metals Research Ltd Counting systems in image analysis employing line scanning techniques
US3908118A (en) * 1973-09-27 1975-09-23 California Inst Of Techn Cross correlation anomaly detection system
US4238780A (en) * 1978-04-14 1980-12-09 Siemens Aktiengesellschaft Process and an apparatus for automatically recognizing the position of semiconductor elements
US4298944A (en) * 1979-06-22 1981-11-03 Siemens Gammasonics, Inc. Distortion correction method and apparatus for scintillation cameras
US4360799A (en) * 1980-05-22 1982-11-23 Leighty Robert D Hybrid optical-digital pattern recognition apparatus and method
US4528634A (en) * 1981-10-09 1985-07-09 Hitachi, Ltd. Bit pattern generator
US4499598A (en) * 1982-07-02 1985-02-12 Conoco Inc. Edge and line detection in multidimensional noisey, imagery data
US4648120A (en) * 1982-07-02 1987-03-03 Conoco Inc. Edge and line detection in multidimensional noisy, imagery data
US4724543A (en) * 1985-09-10 1988-02-09 Beckman Research Institute, City Of Hope Method and apparatus for automatic digital image analysis
US4764974A (en) * 1986-09-22 1988-08-16 Perceptics Corporation Apparatus and method for processing an image

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5034985A (en) * 1989-11-13 1991-07-23 Pitney Bowes Inc. Matched mailing system employing address print array recognition
US5381344A (en) * 1992-04-06 1995-01-10 Hewlett-Packard Company Apparatus and method for obtaining a list of numbers of wafers for integrated circuit testing
US6011566A (en) * 1994-09-01 2000-01-04 Unisys Corporation System and method to display raster images with negligible delay time and reduced memory requirements
US5751834A (en) * 1996-02-07 1998-05-12 Basf Corporation Image analysis method for determining pigment levels in fabric
WO2004049258A1 (en) * 2002-11-25 2004-06-10 Sensovation Ag Method for recording a characteristic of at least one object
US20060039583A1 (en) * 2002-11-25 2006-02-23 Stefan Bickert Method for recording a charcteristic of at least one object
US20120155741A1 (en) * 2007-06-20 2012-06-21 Hisae Shibuya Visual Inspection Method And Apparatus And Image Analysis System
US8620061B2 (en) * 2007-06-20 2013-12-31 Hitachi High-Technologies Corporation Visual inspection method and apparatus and image analysis system

Also Published As

Publication number Publication date
JPS63217479A (en) 1988-09-09
JP2810660B2 (en) 1998-10-15

Similar Documents

Publication Publication Date Title
US4626838A (en) Filled shaped generating apparatus
US4280143A (en) Method and means for scale-changing an array of boolean coded points
US4897795A (en) Digital image analysis system
US4528692A (en) Character segmenting apparatus for optical character recognition
JPS6232476B2 (en)
US4855933A (en) Line figure connecting apparatus
JPH055142B2 (en)
JP3676948B2 (en) Pixel number conversion circuit and image display apparatus using the same
US4710764A (en) Device for obtaining continuous plots on the screen of a display console controlled by a graphic processor
US4656468A (en) Pattern data processing apparatus
KR20050072070A (en) Data storing apparatus, data storing controlling apparatus, data storing controlling method and data storing controlling program
JPH0130180B2 (en)
JP2000242798A (en) Extraction of feature quantity of binarty image
JPH06189135A (en) Device for detection and correction of flaw of digitization picture
JPH0332723B2 (en)
JPH09185718A (en) Automatic pattern defect inspection device
JPS646508B2 (en)
JPH01205287A (en) Character line inclination detector
JP2705052B2 (en) Pattern inspection equipment
JPH0145667B2 (en)
JP2857239B2 (en) Method and apparatus for measuring flow velocity distribution
JP2690936B2 (en) Digital image analyzer
JPH08272950A (en) Image processor
JPS6312311B2 (en)
JPH0454263B2 (en)

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., 6, KANDA SURUGADAI 4-CHOME, CHIYODA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:YODA, HARUO;INOUCHI, HIDENORI;SAKOU, HIROSHI;AND OTHERS;REEL/FRAME:004861/0601

Effective date: 19880202

Owner name: HITACHI, LTD., A CORP. OF JAPAN, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YODA, HARUO;INOUCHI, HIDENORI;SAKOU, HIROSHI;AND OTHERS;REEL/FRAME:004861/0601

Effective date: 19880202

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12