US5185811A - Automated visual inspection of electronic component leads prior to placement - Google Patents


Info

Publication number
US5185811A
US5185811A (Application US07/825,434)
Authority
US
United States
Prior art keywords
leads
peaks
component
group
lead
Prior art date
Legal status
Expired - Fee Related
Application number
US07/825,434
Inventor
Gregory E. Beers
Myron D. Flickner
William L. Kelly-Mahaffey
Darryl R. Polk
James M. Stafford
Henry E. Wattenbarger
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority: US07/825,434
Application granted; publication of US5185811A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/0008: Industrial image inspection checking presence/absence
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30141: Printed circuit board [PCB]


Abstract

A method and apparatus utilizing one or two cameras are described for visually inspecting a polygonal component located at the end of a robotic end effector to determine presence, position and orientation of component leads prior to placement. Image processing improvements are provided for decreasing computational complexity by representing two-dimensional image areas of interest, each containing leads along one side of a component, by one-dimensional summation profiles.

Description

This is a continuation of application Ser. No. 07/634,675 filed Dec. 27, 1990, now abandoned.
CROSS REFERENCE TO RELATED APPLICATIONS
This application is related to commonly assigned, concurrently filed copending application Ser. No. 07/634,642, now U.S. Pat. No. 5,086,478.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to the visual inspection of electronic component leads as a step in a total placement system. More particularly, it relates to determining presence/absence, position and orientation of fine pitch leads on a surface mount component such as a TAB device located on a robotic end effector.
2. Description of the Prior Art
For high-speed throughput in an automated component placement system, great accuracy is required. First, the presence/absence, position and orientation of component leads must be determined. It is conventional to use computer vision techniques to aid in this determination.
The prior art, as illustrated in IBM Technical Disclosure Bulletin Vol. 30, No. 1, 1987, p. 228, "Surface Mounted Device Placement," discloses a technique for inspecting a component and its leads prior to placement with a high degree of accuracy. However, it differs from the present invention in that it does not provide the accuracy required for fine-pitch components.
The principal differences resulting in less accuracy are that this reference uses only one camera and determines lead position on the basis of binary pixels rather than the additional information contained in gray-level pixels.
Similarly, IBM Technical Disclosure Bulletin Vol. 31, No. 10, March 1989, p. 222, "Assembly Technique Replacing the Electronic Components on Printed Circuit Wiring Patterns," discloses the use of computer vision processing, without detail, for inspecting component lead presence, condition and orientation as a step in a total placement system.
IBM Technical Disclosure Bulletin Vol. 31, No. 9, February 1989, p. 186, "Robotic Scanning Laser Placement, Solder & Desolder Devices," discloses the use of a CCD camera and a vision system to determine X, Y and theta offsets. Conventional CCD video cameras used in machine vision systems typically have resolution of 492 (vertical) by 512 (horizontal) pixels. Working with such a large pixel array requires substantial time and computational resources. It is desirable to optimize time and resource usage without sacrificing accuracy in vision processors.
SUMMARY OF THE INVENTION
The present invention improves on prior art techniques for detecting presence/absence, position and orientation of fine pitch component leads by reducing the number and complexity of vision processing operations.
The positions of leads within the image are determined by finding peaks within summation profiles taken from regions of interest, i.e. where leads are expected. A summation profile is a digital integral taken along rows or columns of pixels. The sum of a brightly lit row or column of pixels onto which leads project is higher than that of the relatively dark background between leads. The summation profile technique reduces the number and order of calculations required, so high-speed throughput is obtained as a result of performing calculations on N×1 summation profiles rather than on N×N pixel arrays.
The inventive technique uses the coordinates of four regions of interest in the image within which leads along the top, left, bottom, and right of a rectangular component are expected to fall. Iteratively, each of the regions of interest is treated to find the position of leads on each side of the component. After leads are found on each side, the average of the positions is taken. This average, called the average centroid, represents the center position of that side of the component being inspected to sub-pixel accuracy.
The four average centroids are used to determine the center and orientation of the component. If no errors which cause processing of a component image to stop are encountered, the orientation and center of the component are used in the subsequent control of automated placement apparatus for assembling of the component on a printed circuit substrate.
In a preferred embodiment two cameras are used to produce two overlapping gray-level images of a component. Two images are used to enable higher resolution. Selected portions of the two images, wherein leads are expected to be, are mapped to a common coordinate system to facilitate subsequent determination of the orientation of each selected image portion and the amount of overlap between a given selected portion and all other selected portions. Once regions corresponding to sides of the component being inspected are determined, positions of the leads are found as just indicated. In an alternative embodiment a single camera is used for initial image capture.
BRIEF DESCRIPTION OF THE DRAWING
The above and other features and advantages of the present invention will be described in connection with the following drawing wherein:
FIG. 1 is the schematic illustration of the system in which the present invention is embodied.
FIG. 2 is a schematic illustration of a rectangular, fine-pitch electronic component.
FIGS. 3A and 3B illustrate the images seen by cameras in FIG. 1.
FIG. 4 illustrates combining the images of FIG. 3.
FIG. 5 graphically illustrates component lead projection.
FIG. 6 illustrates the determination of component center and orientation.
FIG. 7 illustrates a modification to the system of FIG. 1.
FIGS. 8A and 8B are flow charts illustrating the logic executed in vision system 20.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Refer now to FIG. 1, which schematically shows a system in which the present invention may be embodied. Robotic end effector 2 carries component 4 and positions it at the inspection location. Component 4 is illuminated by light sources 6. Beam splitter 8 is interposed between component 4 and cameras 10 and 12. Cameras 10 and 12 may be CCD array sensors for detecting gray-level images. Cameras 10 and 12 are in a fixed location with respect to the presentation position of the component. Image data from cameras 10 and 12 is fed into vision system 20, which includes frame grabber 22 for converting analog video signals to digital form and storing them.
Vision system 20 also includes image processing logic for first retrieving and then summing subarray data.
FIG. 2 shows component 4 as carried by robotic end effector 2, with its leads 30 viewed from beneath the end effector.
FIG. 3a shows the view of component 4 seen by camera 10, while FIG. 3b shows the view of component 4 seen by camera 12. Advantages in throughput time with the use of our invention arise primarily because less than the entire digitized camera image needs to be examined in order to complete inspection of component 4.
In FIG. 3, the view seen by camera 10 of FIG. 1 is shown at 34 as including a portion of component 4 and its leads 30. Image 34 corresponds to the two-dimensional 480×512 array of pixels detected by camera 10. Similarly, image 36 corresponds to the 480×512 array of pixels detected by camera 12 and includes a portion of component 4 and its leads 30. A window is a rectangular portion of an image, predefined by diagonal coordinate pairs, within which the search for component leads is conducted. Windows are predefined for each component type during system start-up. Windows 40, 42, 44 in image 34 correspond to the predetermined subarrays of pixels where leads 30 are expected to be. Similarly, windows 46, 48, 50 refer to subarrays of pixels in image 36 where leads are expected to appear.
All windows in the two images 34, 36 are mapped to a common coordinate system. By way of example, an Euler transform may be suitable. Subsequently, all computation on window coordinates is made on mapped windows. The orientation of each mapped window is determined and the amount of overlap between each mapped window and every other mapped window is calculated.
Refer now to FIG. 4. In order for two windows to overlap they must be oriented in the same direction, e.g. 44 and 50. Within a set of overlapping windows, each window is paired with the window with which it has the best measure of overlap. The measure of overlap is the distance 52 the windows have in common along the axis of orientation.
Mapped windows which are longer in the x dimension than in the y dimension are designated as being oriented from left to right and are found on the top and bottom of a rectangular component. Mapped windows which are longer in the y dimension than in the x dimension are designated as being oriented from top to bottom and are found on the left and right sides of a component.
Groups of windows are classified as falling into the classes top, left, bottom, or right of a component. A group of windows comprises one non-overlapping mapped window, such as 42, or a set of two overlapping mapped windows, such as 44 and 50. Groups within each class are then ordered canonically, in our preferred embodiment counterclockwise starting at top right. It is clear that other orderings may be used. This ordering of the windows in each group is used in subsequent calculations for determining the centroids of each side and the amount of overlap of windows.
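The orientation and overlap tests above can be sketched in Python. This is an illustrative reading of the patent, not code from it: the corner-pair window representation and the function names are assumptions.

```python
def classify_window(w):
    """Orient a mapped window by its longer dimension.

    w = (x0, y0, x1, y1), the diagonal corner pair in the common
    coordinate system. Left-to-right windows lie on the top/bottom of a
    rectangular component; top-to-bottom windows lie on its left/right.
    """
    x0, y0, x1, y1 = w
    return "left-to-right" if abs(x1 - x0) > abs(y1 - y0) else "top-to-bottom"


def overlap_measure(a, b):
    """Distance two same-oriented windows share along the axis of orientation."""
    if classify_window(a) != classify_window(b):
        return 0.0  # differently oriented windows cannot overlap
    if classify_window(a) == "left-to-right":
        lo = max(min(a[0], a[2]), min(b[0], b[2]))
        hi = min(max(a[0], a[2]), max(b[0], b[2]))
    else:
        lo = max(min(a[1], a[3]), min(b[1], b[3]))
        hi = min(max(a[1], a[3]), max(b[1], b[3]))
    return max(0.0, hi - lo)
```

Each window would then be paired with the candidate for which `overlap_measure` is largest, giving the two-window groups described above.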
Once the positions of the lead centroids in a common coordinate system are determined and overlapping leads are compensated for, lead centroid positions are used to compute the centroid of the component and the position and orientation of the component.
In this illustrative embodiment, component location is known to within 5 percent of its linear dimension. This knowledge simplifies vision system computations since only very small windows, regions of interest, in the image of a component need be examined.
In this particular embodiment, TAB components are to be visually inspected after they have been excised from a tape and their leads formed. A robotic end-effector picks up the excised and formed component and after the visual inspection is completed and coordinate corrections made, places the component on the printed circuit substrate to which it is subsequently bonded.
Coordinates of windows are fixed within the camera frame of reference.
Now the technique for finding lead centroids in unmapped windows will be described. A summation profile for each set of leads 30 in each unmapped window, such as 42 of FIG. 3, is found as follows.
Refer now to FIG. 5, which is a graphic representation of a summation profile for a given narrowly constrained region of interest in an image, e.g. window 42 of FIG. 3. The horizontal axis represents position along the window's long axis. The vertical axis represents the digital integration, or summation, of the pixels oriented along the window's short axis in line with the leads.
Pixel values are summed for a given row or column. For summation profiles to accurately reflect the position of leads, the axis of each lead must be approximately parallel to the line of summation. Thus summation profiles are sums of gray-level pixel values in rows or columns of two-dimensional arrays, as a function of the expected orientation of leads on a side of the component. Because the width of summation windows is small, the effects of non-orthogonality of the image axis and lead axis are diminished. If the orientation of a component does not vary by more than 2.5 degrees, it is safe to assume that any error introduced by non-orthogonality is insignificant.
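The profile computation can be sketched as follows; this is a minimal Python illustration of the row/column summation described above, with hypothetical function and variable names.

```python
def summation_profile(window, axis="columns"):
    """Collapse a 2-D region of interest to an N x 1 summation profile.

    `window` is a list of rows of gray-level pixel values. Summing each
    column (the window's short axis, in line with the leads) yields one
    sum per position along the window's long axis.
    """
    if axis == "columns":
        return [sum(col) for col in zip(*window)]
    return [sum(row) for row in window]


# Bright vertical leads against a dark background: columns 1 and 3 are leads.
window = [
    [10, 200, 12, 210, 11],
    [12, 205, 10, 208, 13],
    [11, 198, 14, 212, 10],
]
profile = summation_profile(window)  # 5 sums instead of a 3x5 pixel array
```

Subsequent peak finding then operates on `profile`, an N×1 vector, rather than on the full N×N pixel array.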
The material of end-effector 2 (FIG. 1) is such that the leads appear as brightly reflected features against a darker background. This aspect of the illustrative system further reduces required computation time since the background is not specular.
The centers of the leads correspond to peaks 90, 92 in the summation profile shown in FIG. 5. First differences, i.e. derivatives, of the sums are found. Peaks in the summation profile occur at a first-difference transition from plus to minus. Valleys 94, 96 occur at transitions from minus to plus. Some of the peaks correspond to lead centers. The centers of the leads cannot be found by looking for the edges of leads because there is insufficient data to clearly distinguish the edges of the individual leads. The leads are so close together that the summation profiles form Gaussian distributions with peaks at the centers of the leads.
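A sketch of the first-difference peak and valley detection follows; it assumes strictly rising/falling profiles (flat-topped peaks would need a small extension) and is illustrative rather than the patent's implementation.

```python
def peaks_and_valleys(profile):
    """Locate peaks (+ to - first-difference transitions) and valleys (- to +).

    Returns index lists into `profile`. Plateaus (zero differences) are
    not handled in this simplified sketch.
    """
    diffs = [b - a for a, b in zip(profile, profile[1:])]
    peaks, valleys = [], []
    for i in range(len(diffs) - 1):
        if diffs[i] > 0 and diffs[i + 1] < 0:
            peaks.append(i + 1)    # summation peak: candidate lead center
        elif diffs[i] < 0 and diffs[i + 1] > 0:
            valleys.append(i + 1)  # dark gap between leads
    return peaks, valleys
```

On a profile such as `[33, 603, 36, 630, 34]`, the two bright columns appear as peaks with one valley between them.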
Light gradients are corrected by subtracting summation valleys from adjoining peaks. Corrected peak values obtained in this manner are less variable since surfaces corresponding to peaks receive approximately the same amount of incident light from illumination sources 6, FIG. 1 as the surfaces corresponding to adjoining valleys.
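The gradient correction might be sketched as below. The choice of the nearest adjoining valley is an assumption; the patent says only that valley sums are subtracted from adjoining peak sums.

```python
def correct_for_gradient(profile, peaks, valleys):
    """Subtract the nearest adjoining valley sum from each peak sum.

    Peak and valley surfaces receive roughly the same incident light, so
    the difference cancels slowly varying illumination across the window.
    Assumes at least one valley was found.
    """
    corrected = []
    for p in peaks:
        v = min(valleys, key=lambda i: abs(i - p))  # nearest valley index
        corrected.append(profile[p] - profile[v])
    return corrected
```

Corrected peak values from a group of true leads vary little from one another, which the later group-selection step relies on.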
The lead positions in the two images from cameras 10 and 12 in FIG. 3 are then mapped to a common coordinate space using the same Euler transform as above described with respect to windows 40, 42, 44 46, 48, and 50 in FIG. 3. Mapping only lead centroids rather than the entire image avoids a huge computational expense.
It is necessary to determine overlap between leads in overlapping windows for each group. Two leads are deemed overlapping if their proximity is less than that which can be attributed to errors in calibration and initial estimates of position in image space. If the number of leads found after correction for overlap is not the expected number and no leads are determined to be missing, then a check of calibration accuracy is required. Further, the absence of overlapping leads within overlapping windows indicates a need for a calibration accuracy check.
The exclusion of false leads is enhanced as noise peaks among the corrected peak values are excluded by ignoring all peaks below an empirically determined minimum. Thus, false leads are prevented from appearing where a missing lead might be. From the peaks remaining in a region of interest after noise suppression, a group is selected for which all of the peaks are 0.75 to 1.25 pitch distance from their nearest neighboring peaks. For any given component placement operation, nominal pitch and lead size are known. In addition, the number of peaks in the group must be at least the number of expected leads. Hence, a missing or bent lead is indicated by the absence of an acceptable group of peaks. False leads at either end of a selected group are eliminated by discarding corrected peaks until the two peaks at either end are within a normal range of the corrected summation values of the other peaks in the group.
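The pitch-based group selection can be sketched as follows; splitting into maximal runs of pitch-consistent peaks and taking the first sufficiently large run is one plausible reading of the rule above, not the patent's stated algorithm.

```python
def select_pitch_group(peak_positions, pitch, expected_leads):
    """Select a run of peaks spaced 0.75-1.25 pitch apart.

    `peak_positions` are sorted peak coordinates along the profile axis.
    Returns the first run with at least `expected_leads` peaks, or None,
    which signals a missing or bent lead. Assumes a nonempty input.
    """
    groups, current = [], [peak_positions[0]]
    for prev, cur in zip(peak_positions, peak_positions[1:]):
        if 0.75 * pitch <= cur - prev <= 1.25 * pitch:
            current.append(cur)          # spacing consistent with the pitch
        else:
            groups.append(current)       # break the run at an outlier gap
            current = [cur]
    groups.append(current)
    acceptable = [g for g in groups if len(g) >= expected_leads]
    return acceptable[0] if acceptable else None
```

For example, with a nominal pitch of 10, an isolated noise peak far from the lead row falls into its own run and is never selected.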
This method is successful since a region of interest, i.e. a window, is only 5 percent larger than the length of a group of leads. Thus, there is little room in which false peaks can appear. Even fewer false peaks occur in a selected group since "good", true, peaks are required to occur at the distance of one pitch from their nearest neighboring corrected peak. The ratio of true leads to false leads in a selected group is therefore high. A group of true leads produces corrected summation peaks varying little from each other in their values, while the corrected summation peak for a false lead noticeably differs from those of true peaks.
Since the ratio of true peaks to false peaks is high, measures of the first and second moments of a selected group of leads will be most reflective of true leads, and so false leads may be excluded using these measures.
The determination of component center to sub-pixel accuracy will now be described having reference to FIG. 6, which schematically shows, against reference outline 80, an outline 82 representing the image frame of reference. The average lead centroids are indicated at T, R, B, L on component 4, outline 82. Coordinates of the average lead centroids are (TX, TY), (RX, RY), (BX, BY) and (LX, LY). The coordinates of the center (CX, CY) of component 4 are found by averaging the centroids of the sides.
CX = (TX + BX) / 2
CY = (LY + RY) / 2
The orientation of component 4 is the angle theta between image axes 84, 86 and the T-B and L-R axes of component 4. The angle theta is calculated in accordance with the following:
Theta = arcsin((CX - TX)/(TY - CY)) = arcsin((LY - CY)/(LX - CX))
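A sketch of the pose computation in Python follows, reading the center as the average of opposite side centroids and theta from the arcsin expression above. The sign convention and the exactness of the small-angle formula are assumptions, not claims of the patent.

```python
import math


def component_pose(T, R, B, L):
    """Center (CX, CY) and rotation theta from the four side centroids.

    T, R, B, L are (x, y) average lead centroids of the top, right,
    bottom and left sides. arcsin of the sideways displacement of the
    top centroid recovers theta for small rotations.
    """
    cx = (T[0] + B[0]) / 2.0
    cy = (L[1] + R[1]) / 2.0
    theta = math.asin((cx - T[0]) / (T[1] - cy))
    return cx, cy, theta
```

For a component rotated by a couple of degrees, the recovered theta agrees with the true rotation to well within the 2.5-degree tolerance discussed earlier.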
There follows below a Pseudo code description of the logic comprising the inspection technique of the present invention.
Block Subroutine Hierarchy:
  Block 0: Examine a component
    Block 1: Find centers of leads
      Block 1A: Correct for light gradients
      Block 1B: Select a group of corrected peaks
        Block 1B1: Find all the groups whose peaks are pitch distance apart
        Block 1B2: Choose one of the groups
      Block 1C: Iteratively exclude peaks from either end of the group
        Block 1C1: Remove peaks
    Block 2: Find average centroid
    Block 3: Find position and orientation

Block 0: Examine a Component:
  Iteratively examine each region of interest on each of the four sides of a component:
    Block 1: Find centers of leads in the region of interest.
    if no errors encountered in Block 1:
    then
      Block 2: Find average centroid of leads in the region of interest.
  if no errors have been found in any of the four sides of a component
  then
    Block 3: Find position and orientation of the component
    if no errors found
then                                                                      
Use the position and orientation of the                                   
component to place it.                                                    
else                                                                      
Reject the component.                                                     
Block 1: Find Centers of Leads:                                           
If the region of interest is oriented vertically:                         
then                                                                      
form the summation profile by summing rows                                
of pixels from the top of the region of                                   
interest to the bottom.                                                   
else the region of interest is oriented horizon-                          
tally:                                                                    
form the summation profile by summing                                     
columns of pixels from the left side of the                               
region of interest to the right side.                                     
Find first differences of the sums in the summa-                          
tion profile.                                                             
Using the first differences, find the peaks and
valleys in the summation profile:                                         
Some of the peaks correspond to the centers                               
of the leads.                                                             
Block 1A: Correct for light gradients by                                  
subtracting adjacent valleys from                                         
peaks, the result of which are correct-                                   
ed peaks.                                                                 
Exclude corrected peaks whose average per                                 
pixel value is < an empirically derived                                   
minimum.                                                                  
if number of corrected peaks left in the                                  
region of interest < expected number of                                   
leads                                                                     
then                                                                      
return Error: WRONG NUMBER OF LEADS                                       
else the number of corrected peaks is at least
the number of expected leads:                                             
Block 1B: Select a group of corrected                                     
peaks for which:                                                          
* all of the peaks are 0.75 to 1.25                                       
pitch from their nearest neighbor-                                        
ing peaks.                                                                
* The number of peaks in the group                                        
is at least the number of expected                                        
peaks.                                                                    
if errors encountered in Block 1B:                                        
then                                                                      
return errors                                                             
else a group of corrected peaks has                                       
been selected by Block 1B:                                                
Block 1C:                                                                 
Iteratively exclude peaks                                                 
from either end of the group                                              
until:                                                                    
The two end peaks are                                                     
within a normal range of                                                  
values of the other                                                       
peaks in the group                                                        
or                                                                        
The number of peaks                                                       
remaining in the group
is = the expected number                                                  
of leads.                                                                 
return any errors encountered in Block                                    
1C.                                                                       
if no errors encountered:                                                 
then                                                                      
the remaining peaks in the selected group                                 
correspond to the centers of leads in this                                
region of interest.                                                       
Block 1A: Correct for light Gradients:                                    
There is only one valley next to the last peak:                           
Reset the last peak summation value to:
peak summation value - valley summation                                   
value                                                                     
Iteratively treat all peaks except the last:
There are two valleys, one on either side of a
peak:
val1 = peak summation value - first
valley summation value
val2 = peak summation value - second
valley summation value
The least non-negative of val1 and val2
is the best correction for a potential light
gradient:
if val1 < val2
then
reset the peak summation value to
val1
else
if val2 is non-negative
then
reset the peak summation
value to val2
else
reset the peak summation
value to zero
else val2 < val1:
if val2 is non-negative
then
reset the peak summation
value to val2
else
if val1 is non-negative
then
reset the peak summation
value to val1
else
reset the peak summation
value to zero
Block 1B: Select a Group of Corrected Peaks:                              
In what follows:                                                          
group number is the index to groups                                       
group (group number) is the group whose                                   
index is group number                                                     
number of peaks (group number) is the number                              
of peaks in the set group (group number)                                  
Block 1B1: Find all the groups in the region                              
of interest for which:                                                    
* all of the peaks in the group are                                       
0.75 to 1.25 of the pitch from                                            
their nearest neighboring peaks.                                          
if there are too many groups:                                             
then                                                                      
return Error: TOO MANY GROUPS.                                            
(probably the expected pitch                                              
is incorrect.)                                                            
else                                                                      
Block 1B2: Choose one group from the groups                               
found.                                                                    
return errors found in Block 1B2.                                         
Block 1B1: Find all the Groups Whose Peaks are Pitch                      
Distance Apart:                                                           
Group number = 0                                                          
Assign the first peak to group [0]                                        
Iteratively examine all peaks in the region of                            
interest except the last:                                                 
At the ith iteration:                                                     
gap = absolute value of:                                                  
position of peak at ith position -                                        
position of peak at ith + 1 position                                      
if (gap < 1.25*pitch) and (gap >                                          
0.75*pitch):
assign the peak at position i + 1 to                                      
group [group number]                                                      
else gap separates the ith + 1 peak from the                              
last group:                                                               
increment group number by 1                                               
if group number < maximum number of                                       
groups:                                                                   
then                                                                      
assign the peak at position i + 1 to                                      
group [group number]                                                      
else                                                                      
return Error: TOO MANY GROUPS.                                            
(probably the                                                             
expected pitch is                                                         
incorrect.)                                                               
Block 1B2: Choose One Group:                                              
Iteratively examine groups:                                               
(i.e. for all group numbers, 0 <= group number
<=n, where n is the number of groups)                                     
Select all groups for which:                                              
number of peaks [group number] >=                                         
expected number of leads.                                                 
if there exists more than one group for which:                            
for all group numbers 0 to n:                                             
number of peaks [group number] >= expected                                
number of leads.                                                          
then                                                                      
return Error: The REGION of INTEREST is                                   
MUCH LARGER THAN it NEED be.                                              
else                                                                      
if there exist no groups for which:                                       
for all group numbers 0 to n:
number of peaks [group number] >=                                         
expected number of leads.                                                 
then                                                                      
return Error: WRONG NUMBER of LEADS.                                      
else only one group selected:                                             
Remove all peaks from the region of                                       
interest which are not in the selected                                    
group.                                                                    
Block 1C: Iteratively Exclude Peaks From Either End                       
of the Group:                                                             
In what follows:                                                          
group [i] is the ith peak in array of peaks                               
which form the group.                                                     
group [0] is the first peak and group [n] is                              
the last peak in the array of peaks.                                      
pos(group[i]) is the position within the                                  
summation profile of group[i]                                             
sum(pos(group[i])) is the corrected summa-                                
tion value at pos(group[i])                                               
Measure average and standard deviation of cor-                            
rected summation values at positions of peaks in                          
the group                                                                 
while (number of peaks in group > expected number                         
of leads)                                                                 
and                                                                       
(sum(pos(group[0])) or sum(pos(group[n]))                                 
are outside the normal range of values of                                 
the other peaks in group):                                                
Set maximum difference = average + fac-                                   
tor*standard deviation                                                    
Block 1C1: Remove peaks which are outside                                 
the maximum difference.                                                   
if (number peaks in group > expected number                               
of leads)                                                                 
and                                                                       
(peaks have been removed from the group                                   
since the average and standard deviation                                  
have been measured last):                                                 
then                                                                      
Measure average and standard deviation                                    
of corrected summation values at                                          
positions of peaks in the group                                           
if (number peaks in group > expected number of                            
leads)                                                                    
return Error: WRONG NUMBER of LEADS                                       
Block 1C1: Remove Peaks:                                                  
remove peaks at the beginning of the array group:                         
while (absolute value(sum(pos(group[0])) -                                
average) > maximum difference)
and                                                                       
(number peaks in group > expected number of                               
leads):                                                                   
Iteratively for all i, 1<=i<=n:                                           
group[i-1] = group[i]                                                     
n = n - 1                                                                 
remove peaks at the end of the array group:
while (absolute value(sum(pos(group[n])) -                                
average) > maximum difference)                                            
and                                                                       
(number peaks in group > expected number of                               
leads):                                                                   
n = n - 1                                                                 
Block 2: Find Average Centroid:                                           
The average centroid of this region of interest                           
to sub-pixel accuracy is the average of the                               
centers of all the leads in the region of inter-                          
est.                                                                      
if distance between the middle lead position and                          
the average centroid position is not within an                            
acceptable range                                                          
then                                                                      
return Error: in the REGION of INTEREST                                   
MEASUREMENTS:                                                             
Probably a missing or bent lead                                           
if the number of leads found is not = the number                          
of leads expected                                                         
then                                                                      
return Error: WRONG NUMBER of LEADS:                                      
Leads overlooked or leads                                                 
missing or bent                                                           
Block 3: Find Position and Orientation:                                   
Use the four average centroids found to:                                  
* find the center of the component                                        
* find the horizontal and vertical angles                                 
of orientation
if the absolute value of:                                                 
the difference between the horizontal and                                 
vertical angles > a maximum                                               
then                                                                      
return Error: in ANGLE MEASUREMENTS:                                      
One or more of the average                                                
centroid measurements are in                                              
error probably due to inclu-
sion of (a) false lead(s).
______________________________________                                    
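Blocks 1A and 1B1 of the pseudocode above can be sketched in Python. This is one illustrative reading of the logic, not the patented implementation; the 0.75 to 1.25 pitch window follows the text, while the list-based data shapes are assumptions.

```python
def correct_peak(peak_sum, valley_sums):
    """Block 1A sketch: subtract the adjacent valley giving the least
    non-negative difference, cancelling a local light gradient.
    Returns zero when every difference is negative."""
    diffs = [peak_sum - v for v in valley_sums]
    nonneg = [d for d in diffs if d >= 0]
    return min(nonneg) if nonneg else 0

def group_by_pitch(peak_positions, pitch):
    """Block 1B1 sketch: split sorted peak positions into groups whose
    neighbor-to-neighbor gaps fall within 0.75 to 1.25 of the expected
    pitch; a gap outside that window starts a new group."""
    groups = [[peak_positions[0]]]
    for prev, cur in zip(peak_positions, peak_positions[1:]):
        gap = abs(cur - prev)
        if 0.75 * pitch <= gap <= 1.25 * pitch:
            groups[-1].append(cur)   # still within the current group
        else:
            groups.append([cur])     # gap off-pitch: start a new group
    return groups
```

A run of leads on a 10-pixel pitch with a stray reflection far to one side splits into two groups, from which Block 1B2 would then choose.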
Those having skill in the art will understand that the system of FIG. 1 may be modified as shown in FIG. 7 to have a single camera. Such a modification is useful when less than the high resolution provided by the system of FIG. 1 is required or when a single camera can provide the high resolution required.
In this embodiment, camera 10 snaps a picture of component 4 and frame grabber 22 reads into memory the array of pixels detected by camera 10 such that pixel values are stored. Subarrays corresponding to the areas where leads are expected to appear are predefined as earlier described having reference to the system of FIG. 1. Summation profiles are formed as discussed having reference to FIG. 5. Corrections for light gradients are performed in the same manner and noise peaks are excluded as earlier described.
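Forming a summation profile amounts to reducing each predefined subarray to one dimension. A minimal sketch, assuming subarrays are plain lists of rows of integer grey levels (a real frame grabber would supply an 8-bit image):

```python
def summation_profile(subarray, vertical=False):
    """Reduce a grey-level subarray to a 1-D summation profile.

    A horizontally oriented region of interest is reduced to column
    sums taken left to right; a vertically oriented one to row sums
    taken top to bottom, as in Block 1 of the pseudocode. Each bright
    lead then appears as a peak in the resulting profile.
    """
    if vertical:
        return [sum(row) for row in subarray]      # one sum per row
    return [sum(col) for col in zip(*subarray)]    # one sum per column
```

The peaks of the profile are then located from its first differences, as described above.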
Further, while we have described the use of a certain type of camera, it should be understood that any kind of image acquisition and digitizing apparatus is suitable for use with our invention.
Refer now to FIG. 8 which is a flow chart of the logic followed in vision processor 20. This flow chart summarizes the process of our invention as above described having reference to FIGS. 1 through 7.
While the present invention has been described having reference to a particular preferred embodiment and a modification thereto, those having skill in the art will appreciate that the above and other modifications and changes in form and detail may be made without departing from the spirit and scope of the invention as claimed.

Claims (4)

What is claimed is:
1. Apparatus for inspecting leaded electronic components comprising:
means for acquiring a two dimensional array of grey level image data including two cameras and means for combining images therefrom, said means for combining including:
means for mapping images from said two cameras to a common coordinate system;
means for determining overlap between said images;
means for computing to sub-pixel accuracy a centroid of a component,
means for operating on preselected subarrays of said array of image data for forming a plurality of one dimensional summation profiles; and
vision logic means for operating on said summation profiles.
2. A method for visually inspecting a leaded component comprising the steps of:
capturing images of the component with a plurality of cameras;
digitizing images so captured;
mapping portions of said digitized images corresponding to locations where leads are expected to a single coordinate system;
determining overlap between mapped image portions;
finding lead centroid position to sub-pixel accuracy in each mapped image portion;
mapping lead positions from each image to a common coordinate system;
identifying overlapping leads; and
calculating component position and orientation as a function of lead centroids found.
3. The method of claim 2 wherein the finding step comprises:
developing one dimensional summation profiles corresponding to each mapped image portion;
correlating profile peaks to individual lead centers;
correcting profile peaks for light gradients; and
excluding false peaks.
4. The method of claim 3 wherein said developing step comprises:
summing gray level pixel values in rows or columns as a function of which is substantially parallel to the expected lead axis.
US07/825,434 1990-12-27 1992-01-21 Automated visual inspection of electronic component leads prior to placement Expired - Fee Related US5185811A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US07/825,434 US5185811A (en) 1990-12-27 1992-01-21 Automated visual inspection of electronic component leads prior to placement

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US63467590A 1990-12-27 1990-12-27
US07/825,434 US5185811A (en) 1990-12-27 1992-01-21 Automated visual inspection of electronic component leads prior to placement

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US63467590A Continuation 1990-12-27 1990-12-27

Publications (1)

Publication Number Publication Date
US5185811A true US5185811A (en) 1993-02-09

Family

ID=27092209

Family Applications (1)

Application Number Title Priority Date Filing Date
US07/825,434 Expired - Fee Related US5185811A (en) 1990-12-27 1992-01-21 Automated visual inspection of electronic component leads prior to placement

Country Status (1)

Country Link
US (1) US5185811A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5408537A (en) * 1993-11-22 1995-04-18 At&T Corp. Mounted connector pin test using image processing
US5509597A (en) * 1994-10-17 1996-04-23 Panasonic Technologies, Inc. Apparatus and method for automatic monitoring and control of a soldering process
US5600733A (en) * 1993-11-01 1997-02-04 Kulicke And Soffa Investments, Inc Method for locating eye points on objects subject to size variations
US5608816A (en) * 1993-12-24 1997-03-04 Matsushita Electric Industrial Co., Ltd. Apparatus for inspecting a wiring pattern according to a micro-inspection and a macro-inspection performed in parallel
US5642442A (en) * 1995-04-10 1997-06-24 United Parcel Services Of America, Inc. Method for locating the position and orientation of a fiduciary mark
US5719953A (en) * 1993-09-20 1998-02-17 Fujitsu Limited Image processing apparatus for determining positions of objects based on a projection histogram
US5777886A (en) * 1994-07-14 1998-07-07 Semiconductor Technologies & Instruments, Inc. Programmable lead conditioner
US5825914A (en) * 1995-07-12 1998-10-20 Matsushita Electric Industrial Co., Ltd. Component detection method
WO1999001014A1 (en) * 1997-06-27 1999-01-07 Siemens Aktiengesellschaft Method for automatically controlling the intensity of a lighting used in units for detecting a position and/or for quality control
EP0919804A1 (en) * 1997-12-01 1999-06-02 Hewlett-Packard Company Inspection system for planar object
US5912985A (en) * 1994-11-08 1999-06-15 Matsushita Electric Industrial Co., Ltd. Pattern detection method
US5926557A (en) * 1997-02-26 1999-07-20 Acuity Imaging, Llc Inspection method
US6139078A (en) * 1998-11-13 2000-10-31 International Business Machines Corporation Self-aligning end effector for small components
WO2001084499A2 (en) * 2000-04-28 2001-11-08 Mydata Automation Ab Method and device for determining nominal data for electronic circuits by capturing a digital image and compare with stored nonimal data
WO2003023846A1 (en) * 2001-09-13 2003-03-20 Optillion Ab Method and apparatus for high-accuracy placing of an optical component on a component carrier
US6571006B1 (en) * 1998-11-30 2003-05-27 Cognex Corporation Methods and apparatuses for measuring an extent of a group of objects within an image
US20030133420A1 (en) * 2002-01-09 2003-07-17 Wassim Haddad Load balancing in data transfer networks
US20030165264A1 (en) * 2000-07-07 2003-09-04 Atsushi Tanabe Part recognition data creation method and apparatus, electronic part mounting apparatus, and recorded medium
US20050196036A1 (en) * 2004-03-05 2005-09-08 Leonard Patrick F. Method and apparatus for determining angular pose of an object
US20060050267A1 (en) * 2004-09-06 2006-03-09 Kiyoshi Murakami Substrate inspection method and apparatus
CN111161446A (en) * 2020-01-10 2020-05-15 浙江大学 Image acquisition method of inspection robot
US11538148B2 (en) * 2020-05-22 2022-12-27 Future Dial, Inc. Defect detection of a component in an assembly
US11551349B2 (en) * 2020-05-22 2023-01-10 Future Dial, Inc. Defect detection and image comparison of components in an assembly

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4696047A (en) * 1985-02-28 1987-09-22 Texas Instruments Incorporated Apparatus for automatically inspecting electrical connecting pins
US4805123A (en) * 1986-07-14 1989-02-14 Kla Instruments Corporation Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US4847911A (en) * 1986-11-12 1989-07-11 Matsushita Electric Industrial Co., Ltd. Electronic parts recognition method and apparatus therefore
US4926442A (en) * 1988-06-17 1990-05-15 International Business Machines Corporation CMOS signal threshold detector
US4969199A (en) * 1986-02-28 1990-11-06 Kabushiki Kaisha Toshiba Apparatus for inspecting the molded case of an IC device
US5023916A (en) * 1989-08-28 1991-06-11 Hewlett-Packard Company Method for inspecting the leads of electrical components on surface mount printed circuit boards

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4696047A (en) * 1985-02-28 1987-09-22 Texas Instruments Incorporated Apparatus for automatically inspecting electrical connecting pins
US4969199A (en) * 1986-02-28 1990-11-06 Kabushiki Kaisha Toshiba Apparatus for inspecting the molded case of an IC device
US4805123A (en) * 1986-07-14 1989-02-14 Kla Instruments Corporation Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US4805123B1 (en) * 1986-07-14 1998-10-13 Kla Instr Corp Automatic photomask and reticle inspection method and apparatus including improved defect detector and alignment sub-systems
US4847911A (en) * 1986-11-12 1989-07-11 Matsushita Electric Industrial Co., Ltd. Electronic parts recognition method and apparatus therefore
US4926442A (en) * 1988-06-17 1990-05-15 International Business Machines Corporation CMOS signal threshold detector
US5023916A (en) * 1989-08-28 1991-06-11 Hewlett-Packard Company Method for inspecting the leads of electrical components on surface mount printed circuit boards

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IBM Technical Disclosure Bulletin, vol. 30, No. 1, p. 228, "Surface Mounted Device Placement", Jun. 1987, USA. *
IBM Technical Disclosure Bulletin, vol. 31, No. 10, p. 222, "Assembly Technique for Placing Electronic Components on Printed Circuit Wiring Patterns", Mar. 1989, USA. *
IBM Technical Disclosure Bulletin, vol. 31, No. 9, p. 186, "Robotic Scanning Laser Placement, Solder and Desolder Device", Feb. 1989, USA. *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5719953A (en) * 1993-09-20 1998-02-17 Fujitsu Limited Image processing apparatus for determining positions of objects based on a projection histogram
US5600733A (en) * 1993-11-01 1997-02-04 Kulicke And Soffa Investments, Inc Method for locating eye points on objects subject to size variations
US5408537A (en) * 1993-11-22 1995-04-18 At&T Corp. Mounted connector pin test using image processing
US5608816A (en) * 1993-12-24 1997-03-04 Matsushita Electric Industrial Co., Ltd. Apparatus for inspecting a wiring pattern according to a micro-inspection and a macro-inspection performed in parallel
US5777886A (en) * 1994-07-14 1998-07-07 Semiconductor Technologies & Instruments, Inc. Programmable lead conditioner
US5509597A (en) * 1994-10-17 1996-04-23 Panasonic Technologies, Inc. Apparatus and method for automatic monitoring and control of a soldering process
US5912985A (en) * 1994-11-08 1999-06-15 Matsushita Electric Industrial Co., Ltd. Pattern detection method
US5642442A (en) * 1995-04-10 1997-06-24 United Parcel Services Of America, Inc. Method for locating the position and orientation of a fiduciary mark
US5825914A (en) * 1995-07-12 1998-10-20 Matsushita Electric Industrial Co., Ltd. Component detection method
US5872863A (en) * 1995-07-12 1999-02-16 Matsushita Electric Industrial Co., Ltd. Component detection method
CN1091270C (en) * 1995-07-12 2002-09-18 松下电器产业株式会社 Parts testing method
US5926557A (en) * 1997-02-26 1999-07-20 Acuity Imaging, Llc Inspection method
WO1999001014A1 (en) * 1997-06-27 1999-01-07 Siemens Aktiengesellschaft Method for automatically controlling the intensity of a lighting used in units for detecting a position and/or for quality control
US6546126B1 (en) 1997-06-27 2003-04-08 Siemens Aktiengesellschaft Method for automatically setting intensity of illumination for position recognition and quality control
EP0919804A1 (en) * 1997-12-01 1999-06-02 Hewlett-Packard Company Inspection system for planar object
US6139078A (en) * 1998-11-13 2000-10-31 International Business Machines Corporation Self-aligning end effector for small components
US6571006B1 (en) * 1998-11-30 2003-05-27 Cognex Corporation Methods and apparatuses for measuring an extent of a group of objects within an image
US7324710B2 (en) 2000-04-28 2008-01-29 Mydata Automation Ab Method and device for determining nominal data for electronic circuits by capturing a digital image and compare with stored nominal data
WO2001084499A2 (en) * 2000-04-28 2001-11-08 Mydata Automation Ab Method and device for determining nominal data for electronic circuits by capturing a digital image and compare with stored nominal data
WO2001084499A3 (en) * 2000-04-28 2002-01-24 Mydata Automation Ab Method and device for determining nominal data for electronic circuits by capturing a digital image and compare with stored nominal data
US20030113039A1 (en) * 2000-04-28 2003-06-19 Niklas Andersson Method and device for processing images
US7539339B2 (en) * 2000-07-07 2009-05-26 Panasonic Corporation Part recognition data creation method and apparatus, electronic part mounting apparatus, and recorded medium
US20030165264A1 (en) * 2000-07-07 2003-09-04 Atsushi Tanabe Part recognition data creation method and apparatus, electronic part mounting apparatus, and recorded medium
WO2003023846A1 (en) * 2001-09-13 2003-03-20 Optillion Ab Method and apparatus for high-accuracy placing of an optical component on a component carrier
US20030133420A1 (en) * 2002-01-09 2003-07-17 Wassim Haddad Load balancing in data transfer networks
US20050196036A1 (en) * 2004-03-05 2005-09-08 Leonard Patrick F. Method and apparatus for determining angular pose of an object
US7349567B2 (en) * 2004-03-05 2008-03-25 Electro Scientific Industries, Inc. Method and apparatus for determining angular pose of an object
US20060050267A1 (en) * 2004-09-06 2006-03-09 Kiyoshi Murakami Substrate inspection method and apparatus
US7512260B2 (en) * 2004-09-06 2009-03-31 Omron Corporation Substrate inspection method and apparatus
CN111161446A (en) * 2020-01-10 2020-05-15 浙江大学 Image acquisition method of inspection robot
US11538148B2 (en) * 2020-05-22 2022-12-27 Future Dial, Inc. Defect detection of a component in an assembly
US11551349B2 (en) * 2020-05-22 2023-01-10 Future Dial, Inc. Defect detection and image comparison of components in an assembly
US11842482B2 (en) 2020-05-22 2023-12-12 Future Dial, Inc. Defect detection of a component in an assembly
US11900666B2 (en) 2020-05-22 2024-02-13 Future Dial, Inc. Defect detection and image comparison of components in an assembly

Similar Documents

Publication Publication Date Title
US5185811A (en) Automated visual inspection of electronic component leads prior to placement
US6064757A (en) Process for three dimensional inspection of electronic components
EP0186874B1 (en) Method of and apparatus for checking geometry of multi-layer patterns for IC structures
US5371690A (en) Method and apparatus for inspection of surface mounted devices
EP0563829B1 (en) Device for inspecting printed cream solder
US5774572A (en) Automatic visual inspection system
US4653104A (en) Optical three-dimensional digital data acquisition system
US7239399B2 (en) Pick and place machine with component placement inspection
EP0493106A2 (en) Method of locating objects
US5774573A (en) Automatic visual inspection system
US6075881A (en) Machine vision methods for identifying collinear sets of points from an image
KR0129169B1 (en) Method and apparatus for inspecting apertured mask sheets
KR940006220B1 (en) Method of and apparatus for inspecting pattern on printed circuit board
JP3028016B2 (en) 3D image measurement method for cargo
USRE38716E1 (en) Automatic visual inspection system
US5912985A (en) Pattern detection method
EP0493105A2 (en) Data processing method and apparatus
WO1991020054A1 (en) Patterned part inspection
US4641256A (en) System and method for measuring energy transmission through a moving aperture pattern
JPH01236700A (en) Inspection and orientation recognition method of component lead
CN116008177A (en) SMT component high defect identification method, system and readable medium thereof
JP3260425B2 (en) Pattern edge line estimation method and pattern inspection device
WO2001004567A2 (en) Method and apparatus for three dimensional inspection of electronic components
JPH0794971B2 (en) Cross-section shape detection method
JP2601232B2 (en) IC lead displacement inspection equipment

Legal Events

Date Code Title Description
CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
FP Lapsed due to failure to pay maintenance fee

Effective date: 20010209

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362